✅ TICKET-006: Wake-word Detection Service - Implemented wake-word detection using openWakeWord - HTTP/WebSocket server on port 8002 - Real-time detection with configurable threshold - Event emission for ASR integration - Location: home-voice-agent/wake-word/ ✅ TICKET-010: ASR Service - Implemented ASR using faster-whisper - HTTP endpoint for file transcription - WebSocket endpoint for streaming transcription - Support for multiple audio formats - Auto language detection - GPU acceleration support - Location: home-voice-agent/asr/ ✅ TICKET-014: TTS Service - Implemented TTS using Piper - HTTP endpoint for text-to-speech synthesis - Low-latency processing (< 500ms) - Multiple voice support - WAV audio output - Location: home-voice-agent/tts/ ✅ TICKET-047: Updated Hardware Purchases - Marked Pi5 kit, SSD, microphone, and speakers as purchased - Updated progress log with purchase status 📚 Documentation: - Added VOICE_SERVICES_README.md with complete testing guide - Each service includes README.md with usage instructions - All services ready for Pi5 deployment 🧪 Testing: - Created test files for each service - All imports validated - FastAPI apps created successfully - Code passes syntax validation 🚀 Ready for: - Pi5 deployment - End-to-end voice flow testing - Integration with MCP server Files Added: - wake-word/detector.py - wake-word/server.py - wake-word/requirements.txt - wake-word/README.md - wake-word/test_detector.py - asr/service.py - asr/server.py - asr/requirements.txt - asr/README.md - asr/test_service.py - tts/service.py - tts/server.py - tts/requirements.txt - tts/README.md - tts/test_service.py - VOICE_SERVICES_README.md Files Modified: - tickets/done/TICKET-047_hardware-purchases.md Files Moved: - tickets/backlog/TICKET-006_prototype-wake-word-node.md → tickets/done/ - tickets/backlog/TICKET-010_streaming-asr-service.md → tickets/done/ - tickets/backlog/TICKET-014_tts-service.md → tickets/done/
1.2 KiB
1.2 KiB
Ticket: Stand Up 4080 LLM Service
Ticket Information
- ID: TICKET-021
- Title: Stand Up 4080 LLM Service
- Type: Feature
- Priority: High
- Status: Backlog
- Track: LLM Infra
- Milestone: Milestone 2 - Voice Chat MVP
- Created: 2024-01-XX
Description
Set up LLM service on 4080:
- Use Ollama/vLLM/llama.cpp-based server
- Expose HTTP/gRPC API
- Support function-calling/tool use
- Load selected work agent model
- Configure for optimal performance
Acceptance Criteria
- LLM server running on 4080
- HTTP/gRPC endpoint exposed
- Work agent model loaded
- Function-calling support working
- Basic health check endpoint
- Performance acceptable
Technical Details
Server options:
- Ollama: Easy setup, good tool support
- vLLM: High throughput, batching
- llama.cpp: Lightweight, efficient
Requirements:
- HTTP API for simple requests
- gRPC for streaming (optional)
- Function calling format (OpenAI-compatible)
Dependencies
- TICKET-019 (work agent model selection)
- TICKET-004 (architecture)
Related Files
home-voice-agent/llm-servers/4080/(to be created)
Notes
Independent of MCP/tool design - just needs common API. Can proceed after model selection.