</> COMMUNITY FEED

[DISCOVER] [CONNECT] [INSPIRE]
See what the community is building and sharing.

$ Community_Feed.sh

10 posts loaded
Adam Chan

3d ago

Project Description

A project.

$ tech_stack --list
no tech
GitHub
Demo Available
Hack Night at GitHub
$ event_source --show
From: Hack Night at GitHub

VoiceOps

# VoiceOps

**Amir Mirmehrkar**
**VoiceOps AI**

**Voice-First Incident Ingestion & Compliance Review for Production Services**

---

## Overview

VoiceOps is an enterprise-ready FastAPI service that transforms voice incident reports into structured, production-grade JSON incidents. It enables teams to report incidents via voice (phone, web, mobile) and automatically converts them into validated, schema-compliant incident records ready for integration with Jira, PagerDuty, and other incident management tools.

### Core Capabilities

- **Voice-First Input**: Accepts voice reports via VAPI (phone, web, mobile)
- **AI-Powered Structuring**: LLM converts voice transcripts into structured JSON incidents
- **Strict Schema Validation**: JSON Schema 2020-12 with `additionalProperties: false` for production safety
- **Deterministic Severity**: Rule-based severity classification (sev1-sev4); auditable and testable
- **PII Safety**: Real-time PII (Personally Identifiable Information) detection and redaction for GDPR/HIPAA compliance
- **Secure Development**: Integration with CodeRabbit for AI-assisted code and schema reviews

---

## The Problem

Modern engineering teams struggle with:

1. **Time Pressure**: When incidents occur, operators don't have time to fill out forms
2. **Inconsistency**: Subjective severity scoring during high-stress moments leads to misclassification
3. **Compliance Risks**: Unintentional PII exposure in logs and error messages violates GDPR/HIPAA
4. **Slow Triage**: Manual review cycles delay critical fixes and increase MTTR (Mean Time To Repair)
5. **Unstructured Data**: Voice reports are lost or poorly documented, making post-incident analysis difficult

---

## The Solution

VoiceOps introduces a voice-first incident ingestion layer that acts as a bridge between voice reports and incident management systems:

1. **Voice Ingestion**: Accepts voice input via VAPI (phone calls, web interface, mobile app)
2. **AI Structuring**: VAPI agent asks exactly 4 questions and outputs schema-valid JSON directly
3. **Validation**: Strict JSON Schema validation ensures production-ready output
4. **Severity Scoring**: Deterministic, rule-based severity classification (not AI guessing)
5. **PII Redaction**: Automatic detection and redaction of PII before storage
6. **Integration Ready**: Webhook delivery to Jira, PagerDuty, SIEM systems
7. **Feedback Loop**: Uses CodeRabbit to validate schema changes and generate test suggestions directly on Pull Requests

---

## Tech Stack

- **Backend**: Python 3.11+, FastAPI, Pydantic
- **Voice Processing**: VAPI (Voice API) for voice-to-transcript conversion
- **AI/LLM**: OpenAI / Anthropic for transcript-to-JSON structuring
- **Validation**: JSON Schema 2020-12 with jsonschema library
- **AI/LLM Tools**: CodeRabbit (Security & PR Review), Windsurf/Cursor (Agentic Development)
- **Testing**: Pytest (unit testing & schema validation with table-driven tests)

---

## Data Interface

### Example Input (POST /api/v1/incidents)

Voice transcript from VAPI webhook:

```json
{
  "type": "end-of-call",
  "call": {
    "id": "call_abc123",
    "transcript": "Production API is completely down. All services offline. Started at 6 PM. About 1200 users affected. This is critical."
  }
}
```

### Example Output

```json
{
  "schema_version": "1.0.0",
  "incident_id": "f9847182-1827-40c1-988f-088f329c395b",
  "title": "Production API is completely down. All services offline.",
  "summary": "Production API is completely down. All services offline. Started at 6 PM. About 1200 users affected. This is critical.",
  "category": "service_outage",
  "severity": "sev1",
  "confidence": 0.85,
  "status": "new",
  "impact": {
    "services_down": true,
    "users_affected_estimate": 1200
  },
  "systems": [
    { "name": "production-api", "environment": "production" }
  ],
  "pii": {
    "contains_pii": false,
    "redaction_applied": false
  },
  "source": {
    "channel": "voice",
    "vendor": "vapi",
    "call_id": "call_abc123"
  },
  "detected_at": "2025-01-12T18:00:00Z",
  "reported_at": "2025-01-12T18:05:00Z"
}
```

---

## AI Usage (Beyond the Hype)

This project focuses on **Agentic Workflows** rather than simple API calls:

### CodeRabbit Integration

We utilize CodeRabbit to perform deep-context reviews of incident-related PRs, identifying:

- Security vulnerabilities (PII exposure, injection risks)
- Edge cases in severity classification
- Schema validation gaps
- Test coverage improvements

### Windsurf / Cursor

The core logic of this service was built using prompt-driven development, allowing for:

- Rapid iteration during the hackathon
- Refactoring with AI assistance
- Schema-first design validation

### Human-in-the-Loop

All AI suggestions are treated as "proposals": they require explicit engineer review before being committed. This ensures:

- Production safety
- Compliance adherence
- Auditability

---

## Key Concepts

### Severity Levels

- **sev1 (Critical)**: Services down, security breach, patient safety risk
- **sev2 (High)**: Severe degradation, high user impact (>100 users)
- **sev3 (Medium)**: Moderate impact, limited scope
- **sev4 (Low)**: Minor issues, cosmetic problems

### Category Types

- `service_outage`: Complete service failure
- `security_incident`: Security breach, unauthorized access
- `performance_degradation`: Slow response, timeouts
- `data_issue`: Data corruption, loss, or inconsistency
- `patient_safety`: Healthcare-specific safety incidents
- `other`: Unclassified incidents

### PII Detection

Scans for and redacts:

- Email addresses
- Phone numbers (US, international, Iranian formats)
- Credit card numbers
- Social Security Numbers (SSN)
- IP addresses (IPv4, IPv6)
- Patient IDs
- Names (heuristic-based)

### Confidence Score

0.0 to 1.0, based on:

- Completeness of extracted data
- Presence of required fields
- PII detection (reduces confidence)
- System identification accuracy

---

## Hackathon Goals & Future

### Completed ✅

- [x] Voice-first incident ingestion with VAPI
- [x] LLM-powered transcript-to-JSON conversion
- [x] Strict JSON Schema validation
- [x] Deterministic severity classification
- [x] PII detection and redaction
- [x] CodeRabbit integration for PR reviews
- [x] Table-driven tests for validation
- [x] Production-ready error handling

### Future Roadmap 🚀

- [ ] Real-time VAPI webhook integration
- [ ] Jira/PagerDuty webhook delivery
- [ ] Historical trend analysis using Vector Databases
- [ ] Multi-language support
- [ ] Voice quality scoring
- [ ] Compliance reporting dashboard
- [ ] Slack integration for notifications

---

## How to Run

### Prerequisites

- Python 3.11+
- VAPI API key (for voice processing)
- OpenAI or Anthropic API key (for LLM structuring)

### Installation

1. **Install dependencies**:

   ```bash
   pip install -r requirements.txt
   ```

2. **Set up environment variables**: create `.env.local`:

   ```bash
   VAPI_API_KEY=your_vapi_key
   VAPI_PUBLIC_KEY=your_vapi_public_key
   OPENAI_API_KEY=your_openai_key        # Optional
   ANTHROPIC_API_KEY=your_anthropic_key  # Optional
   ```

3. **Launch service**:

   ```bash
   uvicorn main:app --reload
   ```

4. **Explore API**: Visit http://localhost:8000/docs for interactive API documentation

### Testing

Run table-driven tests:

```bash
python -m pytest tests/test_incident_table.py -v
```

### Demo

See the `/demo` folder for:

- Example incident JSON outputs
- Sample voice transcripts
- Demo flow walkthrough

---

## Project Structure

```
VoiceOps/
├── api/                         # Core API code
│   ├── incident.py              # Incident creation & processing
│   ├── scoring.py               # Confidence & severity calculation
│   ├── schema.py                # Schema validation
│   ├── llm.py                   # LLM integration
│   └── vapi_webhook.py          # VAPI webhook handler
├── tests/                       # Test files
│   └── test_incident_table.py   # Table-driven tests
├── schemas/                     # JSON schemas
│   └── incident.v1.json         # Strict incident schema
├── prompts/                     # LLM prompts
│   ├── incident_prompt.txt
│   └── repair_prompt.txt
├── demo/                        # Demo materials
├── engineering/                 # Technical documentation
├── coderabbit/                  # CodeRabbit integration docs
└── main.py                      # FastAPI application entry point
```

---

**Built for AI Hackathon SFxHamburg**

With ❤️ using AI-assisted development (CodeRabbit, Windsurf, Cursor, VAPI)
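The "deterministic severity" idea described above can be sketched as a small rule table rather than an LLM call. This is a minimal illustration only: the keywords, thresholds, and function name are assumptions for the sketch, not the project's actual rules (which would live in `api/scoring.py`).

```python
# Sketch of rule-based severity classification (sev1-sev4).
# Keywords and thresholds are illustrative assumptions, not VoiceOps' real rules.
def classify_severity(transcript: str, users_affected: int = 0) -> str:
    text = transcript.lower()
    # sev1: services down, security breach, or patient safety risk
    if any(k in text for k in ("completely down", "all services offline",
                               "security breach", "patient safety")):
        return "sev1"
    # sev2: severe degradation or high user impact (>100 users)
    if users_affected > 100 or "severe" in text:
        return "sev2"
    # sev3: moderate impact, limited scope
    if any(k in text for k in ("slow", "timeout", "degraded")):
        return "sev3"
    # sev4: minor or cosmetic issues
    return "sev4"

print(classify_severity("Production API is completely down.", 1200))  # sev1
```

Because the rules are plain conditionals, each severity decision is reproducible and can be covered by the table-driven tests the README mentions.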

$ tech_stack --list
AI Hackathon SFxHamburg Coderabbit x Windsurf Christmas edition
$ event_source --show
From: AI Hackathon SFxHamburg Coderabbit x Windsurf Christmas edition

Neural Draft

A cover letter generator based on your personal work history, the job description of the role you are applying for, the company's work as a whole, and your past experience and motivation for that particular job.

$ tech_stack --list
GitHub
Demo Available
AI Hackathon SFxHamburg Coderabbit x Windsurf Christmas edition
$ event_source --show
From: AI Hackathon SFxHamburg Coderabbit x Windsurf Christmas edition

Jibigame

Steam for educational games

$ tech_stack --list
GitHub
Demo Available
AI Hackathon SFxHamburg Coderabbit x Windsurf Christmas edition
$ event_source --show
From: AI Hackathon SFxHamburg Coderabbit x Windsurf Christmas edition

Deepstory Voice Chat

A web-based interactive voice chat app that lets users have a casual conversation with a Witcher character (in-character by default), while grounding every answer in the text of the Witcher books via retrieval-augmented generation (RAG). The app also visualizes what the character is saying as an expandable "relationship explorer" (people, factions, kingdoms, alliances) with source-backed context from the books.
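The RAG flow described above (retrieve book passages, then answer grounded in them) can be sketched in a few lines. Everything here is an illustrative stand-in: the passage data, `search_books`, and `ask_character` are invented for the sketch, and a real build would use embeddings plus an LLM rather than keyword overlap.

```python
# Toy RAG sketch: retrieve source passages, then answer grounded in them.
# Names and data are hypothetical, not the app's actual API.
BOOK_PASSAGES = [
    "Geralt of Rivia is a witcher, a professional monster hunter for hire.",
    "Yennefer of Vengerberg is a sorceress of great power.",
]

def search_books(question: str, k: int = 1) -> list[str]:
    # Naive keyword retrieval standing in for vector search over the books.
    words = question.lower().strip("?").split()
    ranked = sorted(BOOK_PASSAGES,
                    key=lambda p: -sum(w in p.lower() for w in words))
    return ranked[:k]

def ask_character(question: str) -> dict:
    sources = search_books(question)
    # The real app would prompt an LLM to reply in-character using `sources`.
    return {"answer": f"Hmm. {sources[0]}", "sources": sources}

print(ask_character("Who is Geralt?")["sources"][0])
```

Returning the retrieved passages alongside the answer is what makes the "source-backed" relationship explorer possible.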

$ tech_stack --list
GitHub
Demo Available
AI Hackathon SFxHamburg Coderabbit x Windsurf Christmas edition
$ event_source --show
From: AI Hackathon SFxHamburg Coderabbit x Windsurf Christmas edition

ChunkBench

ChunkBench is a systematic evaluation framework that finds the optimal chunking strategy for your codebase.
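"Finding the optimal chunking strategy" implies an evaluation loop: chunk the same code several ways, run the same queries, and score which strategy retrieves the right chunk. A toy sketch of that loop follows; the fixed-size chunker, the sample code, and the recall metric are all illustrative assumptions, not ChunkBench's actual pipeline.

```python
# Toy chunking benchmark: score several chunk sizes on a retrieval query.
# Chunker, data, and metric are illustrative, not ChunkBench's real pipeline.
def chunk(text: str, size: int) -> list[str]:
    return [text[i:i + size] for i in range(0, len(text), size)]

def recall_at_1(chunks: list[str], query: str, answer: str) -> float:
    # Rank chunks by naive keyword overlap; check the top chunk has the answer.
    best = max(chunks, key=lambda c: sum(w in c for w in query.split()))
    return 1.0 if answer in best else 0.0

code = "def add(a, b):\n    return a + b\n\ndef mul(a, b):\n    return a * b\n"
scores = {size: recall_at_1(chunk(code, size), "def add", "return a + b")
          for size in (16, 32, 64)}
print(max(scores, key=scores.get))  # prints 32
```

Here 16-character chunks split the function away from its body, so retrieval fails; a real framework would sweep real strategies (AST-aware, sliding window, etc.) against a real embedding index.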

$ tech_stack --list
CodeRabbit, Daytona, Windsurf, Qdrant, FastEmbed (ONNX), LlamaIndex (chunking), Hono, React, Turborepo
GitHub
Demo Available
AI Hackathon SFxHamburg Coderabbit x Windsurf Christmas edition
$ event_source --show
From: AI Hackathon SFxHamburg Coderabbit x Windsurf Christmas edition

ProofFoundery

# ProofFoundery MVP (Hackathon Build)

ProofFoundery ("Build Your Own Product") is a social product discovery platform for **physical products** where ideas are tested publicly, contributors take **stakes**, creators get analytics, and only market-validated products qualify for a **capital trigger** (status + admin action only; no real money flows).

This repository contains a running end-to-end demo built with **Next.js**, **Supabase (Postgres)**, **Qdrant**, and **HuggingFace**.

## Primary goal

Deliver running software (not slides) with an end-to-end demo:

- Create
- Publish
- Feed interactions (including stake rules)
- Threshold gate + Market Decision Card
- Creator analytics dashboard
- Profile stake history

## Non-goals (MVP)

- No blockchain
- No external social integrations
- No stake marketplace
- No complex manufacturing workflows
- No real payments

## Tech stack

- **Frontend + Backend**: Next.js (App Router)
- **Relational DB**: Supabase Postgres
- **Vector DB**: Qdrant
- **Embeddings/AI**: HuggingFace Inference API (feature extraction)

## Core concepts

### Stake rules (MVP)

- Users can place stakes on an idea as either:
  - `SUPPORT`
  - `SKEPTIC`
- A product qualifies when: `support_total - skeptic_total >= NEXT_PUBLIC_STAKE_THRESHOLD`

### Capital trigger (MVP)

There is **no real money flow**. "Capital trigger" is represented as a **status** and an **admin override** action.

## Data model (Supabase)

Defined in `supabase/schema.sql`.

- `users`
  - `id`, `handle`, `created_at`
- `products`
  - `id`, `creator_id`, `title`, `description`, `feedback_prompt`, `status`, `created_at`, `published_at`, `qualified_at`, ...
- `comments`
  - `id`, `product_id`, `user_id`, `body`, `created_at`
- `stakes`
  - `id`, `product_id`, `user_id`, `type`, `amount`, `created_at`
- `market_decisions`
  - `product_id`, `decision_card` (json), `admin_override`, `updated_at`
- `product_stake_totals` (view)
  - `support_total`, `skeptic_total` aggregated per product

## Qdrant usage (non-decorative)

Qdrant is used for two concrete platform features:

1) **Semantic discovery** in search/feed
2) **Grounded evidence retrieval** for analytics/insights (top evidence snippets)

### Collections

- `product_ideas`
  - Points represent product-level text: title + description + feedbackPrompt + tags + targetUser + useContext
  - Payload includes: `productId`, `creatorId`, `status`, `createdAt`, `category`
- `product_evidence`
  - Points represent comment-level evidence
  - Payload includes: `productId`, `commentId`, `userId`, `createdAt`

### Ingestion rules

- On publish: upsert into `product_ideas`
- On comment create: upsert into `product_evidence`
- On edit: re-embed relevant product fields and upsert `product_ideas`

## Demo pages

- `/` – Feed
- `/create` – Create a draft idea
- `/products/[id]` – Product detail (publish, stake, comment, decision card)
- `/dashboard` – Creator analytics (by handle)
- `/profile/[handle]` – Stake history
- `/admin` – Admin override for Market Decision Card

## API endpoints (high level)

### Core

- `POST /api/users/ensure`
- `GET /api/feed`
- `POST /api/products`
- `GET /api/products/:id`
- `POST /api/products/:id/publish`
- `POST /api/products/:id/stakes`
- `POST /api/products/:id/comments`
- `GET /api/dashboard?handle=...`
- `GET /api/profile/:handle`
- `POST /api/admin/products/:id/override` (requires `x-admin-secret`)

### Qdrant-required (task)

- `GET /api/search/products?q=...` (semantic search via Qdrant)
- `GET /api/products/:id/similar` (nearest neighbors on `product_ideas`)
- `GET /api/products/:id/evidence?q=...` (top-k evidence snippets from `product_evidence`, filtered by productId)

## Local setup

1) Install dependencies:

   ```bash
   npm install
   ```

2) Create `.env.local` from `.env.local.example` and fill in values.
3) Create Supabase tables by running `supabase/schema.sql` in the Supabase SQL editor.
4) Start dev server:

   ```bash
   npm run dev
   ```

## Demo walkthrough

1) Open `/` and set a **demo handle** (no auth)
2) Go to `/create` and create a draft
3) Open the product and **publish** it
4) Return to `/` to see it in the feed
5) Place stakes and add comments
6) Once qualified, the Market Decision Card becomes visible
7) View creator analytics in `/dashboard`
8) View stake history in `/profile/[handle]`

## Security notes

- Do **not** commit secrets.
- Use `.env.local` for local development.
- The Supabase **service role key** must remain server-side only.
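The stake qualification rule quoted in the README (`support_total - skeptic_total >= NEXT_PUBLIC_STAKE_THRESHOLD`) can be sketched directly. The threshold value and record shape below are illustrative; the repo computes the totals in the `product_stake_totals` view and reads the threshold from an environment variable.

```python
# Sketch of the MVP stake rule: a product qualifies when
# support_total - skeptic_total >= threshold. Threshold value is illustrative.
def qualifies(stakes: list[dict], threshold: int = 10) -> bool:
    support = sum(s["amount"] for s in stakes if s["type"] == "SUPPORT")
    skeptic = sum(s["amount"] for s in stakes if s["type"] == "SKEPTIC")
    return support - skeptic >= threshold

stakes = [
    {"type": "SUPPORT", "amount": 8},
    {"type": "SUPPORT", "amount": 5},
    {"type": "SKEPTIC", "amount": 2},
]
print(qualifies(stakes))  # True: 13 - 2 = 11 >= 10
```

Subtracting skeptic stakes (rather than counting supporters alone) means a product must win over doubters on balance before the capital-trigger status unlocks.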

$ tech_stack --list
GitHub
Demo Available
AI Hackathon SFxHamburg Coderabbit x Windsurf Christmas edition
$ event_source --show
From: AI Hackathon SFxHamburg Coderabbit x Windsurf Christmas edition

REELLAB

Generate video ads for your products

$ tech_stack --list
Vapi, Windsurf, Qdrant, RunPod, OpenAI
GitHub
Demo Available
AI Hackathon SFxHamburg Coderabbit x Windsurf Christmas edition
$ event_source --show
From: AI Hackathon SFxHamburg Coderabbit x Windsurf Christmas edition

Atomic Task

Atomic Task Management for People with ADHD

$ tech_stack --list
CodeRabbit, Windsurf, Supabase, Xcode
GitHub
Demo Available
AI Hackathon SFxHamburg Coderabbit x Windsurf Christmas edition
$ event_source --show
From: AI Hackathon SFxHamburg Coderabbit x Windsurf Christmas edition

TaskPilot

TaskPilot is an AI execution assistant that turns goals into adaptive daily plans and continuously updates them based on real user behavior. It closes the gap between intention and action by dynamically steering execution as behavior unfolds.

$ tech_stack --list
Vapi, Qdrant, Figma, ChatGPT, Gemini
GitHub
Demo Available
AI Hackathon SFxHamburg Coderabbit x Windsurf Christmas edition
$ event_source --show
From: AI Hackathon SFxHamburg Coderabbit x Windsurf Christmas edition