atlas/home-voice-agent/llm-servers/4080/test_connection.py
ilia bdbf09a9ac feat: Implement voice I/O services (TICKET-006, TICKET-010, TICKET-014)
 TICKET-006: Wake-word Detection Service
- Implemented wake-word detection using openWakeWord
- HTTP/WebSocket server on port 8002
- Real-time detection with configurable threshold
- Event emission for ASR integration
- Location: home-voice-agent/wake-word/
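The detection-event handling described above could be consumed by the ASR side along these lines — a minimal sketch, assuming a JSON event schema with `keyword` and `score` fields and a default threshold of 0.5 (both are assumptions; the actual schema and configurable threshold live in the wake-word service):

```python
import json

DETECTION_THRESHOLD = 0.5  # assumed default; the service's threshold is configurable


def parse_detection_event(raw: str, threshold: float = DETECTION_THRESHOLD):
    """Parse a hypothetical wake-word event and decide whether to trigger ASR.

    Returns (keyword, score, should_trigger).
    """
    event = json.loads(raw)
    score = float(event.get("score", 0.0))
    return event.get("keyword"), score, score >= threshold
```

A subscriber on the WebSocket (port 8002) would call this on each incoming message and start streaming audio to the ASR service when `should_trigger` is true.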

 TICKET-010: ASR Service
- Implemented ASR using faster-whisper
- HTTP endpoint for file transcription
- WebSocket endpoint for streaming transcription
- Support for multiple audio formats
- Auto language detection
- GPU acceleration support
- Location: home-voice-agent/asr/
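A client for the file-transcription endpoint might look like the sketch below. The URL, multipart field name (`file`), and response shape (`{"text": ...}`) are assumptions for illustration, not the service's confirmed API; omitting `language` is meant to exercise the auto-detection path:

```python
import os
import requests

ASR_URL = "http://localhost:8003/transcribe"  # hypothetical host, port, and path


def build_transcribe_form(audio_path, language=None):
    """Build the pieces of a multipart transcription request.

    Leaving language as None lets the service auto-detect it.
    """
    data = {} if language is None else {"language": language}
    return "file", os.path.basename(audio_path), data


def transcribe(audio_path, language=None):
    """POST an audio file and return the transcribed text."""
    field, name, data = build_transcribe_form(audio_path, language)
    with open(audio_path, "rb") as f:
        resp = requests.post(ASR_URL, files={field: (name, f)}, data=data, timeout=60)
    resp.raise_for_status()
    return resp.json().get("text", "")
```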

 TICKET-014: TTS Service
- Implemented TTS using Piper
- HTTP endpoint for text-to-speech synthesis
- Low-latency processing (< 500ms)
- Multiple voice support
- WAV audio output
- Location: home-voice-agent/tts/
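Since the service returns WAV audio, a caller can sanity-check the RIFF/WAVE header before playback. The sketch below assumes a JSON request body with `text` and optional `voice` fields and a hypothetical URL; only the header check is a firm WAV-format fact:

```python
import requests

TTS_URL = "http://localhost:8004/synthesize"  # hypothetical host, port, and path


def looks_like_wav(data: bytes) -> bool:
    """Check the 12-byte RIFF/WAVE header that WAV output should carry."""
    return len(data) >= 12 and data[:4] == b"RIFF" and data[8:12] == b"WAVE"


def synthesize(text, voice=None):
    """POST text to the TTS service and return the raw WAV bytes."""
    payload = {"text": text}
    if voice:
        payload["voice"] = voice  # multiple-voice support, field name assumed
    resp = requests.post(TTS_URL, json=payload, timeout=10)
    resp.raise_for_status()
    audio = resp.content
    if not looks_like_wav(audio):
        raise ValueError("response is not WAV audio")
    return audio
```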

 TICKET-047: Updated Hardware Purchases
- Marked Pi5 kit, SSD, microphone, and speakers as purchased
- Updated progress log with purchase status

📚 Documentation:
- Added VOICE_SERVICES_README.md with complete testing guide
- Each service includes README.md with usage instructions
- All services ready for Pi5 deployment

🧪 Testing:
- Created test files for each service
- All imports validated
- FastAPI apps created successfully
- Code passes syntax validation

🚀 Ready for:
- Pi5 deployment
- End-to-end voice flow testing
- Integration with MCP server

Files Added:
- wake-word/detector.py
- wake-word/server.py
- wake-word/requirements.txt
- wake-word/README.md
- wake-word/test_detector.py
- asr/service.py
- asr/server.py
- asr/requirements.txt
- asr/README.md
- asr/test_service.py
- tts/service.py
- tts/server.py
- tts/requirements.txt
- tts/README.md
- tts/test_service.py
- VOICE_SERVICES_README.md

Files Modified:
- tickets/done/TICKET-047_hardware-purchases.md

Files Moved:
- tickets/backlog/TICKET-006_prototype-wake-word-node.md → tickets/done/
- tickets/backlog/TICKET-010_streaming-asr-service.md → tickets/done/
- tickets/backlog/TICKET-014_tts-service.md → tickets/done/
2026-01-12 22:22:38 -05:00


#!/usr/bin/env python3
"""
Test connection to 4080 LLM Server.
"""
import requests

from config import OLLAMA_BASE_URL, API_TAGS, API_CHAT, MODEL_NAME

def test_server_connection():
    """Test if the Ollama server is reachable."""
    print(f"Testing connection to {OLLAMA_BASE_URL}...")
    try:
        # Query the tags endpoint to list installed models
        response = requests.get(API_TAGS, timeout=5)
        if response.status_code == 200:
            data = response.json()
            print("✅ Server is reachable!")
            print(f"Available models: {len(data.get('models', []))}")
            for model in data.get('models', []):
                print(f"  - {model.get('name', 'unknown')}")
            return True
        else:
            print(f"❌ Server returned status {response.status_code}")
            return False
    except requests.exceptions.ConnectionError:
        print(f"❌ Cannot connect to {OLLAMA_BASE_URL}")
        print("   Make sure the server is running and accessible")
        return False
    except Exception as e:
        print(f"❌ Error: {e}")
        return False


def test_chat():
    """Test the chat endpoint with a simple prompt."""
    print(f"\nTesting chat endpoint with model: {MODEL_NAME}...")
    payload = {
        "model": MODEL_NAME,
        "messages": [
            {"role": "user", "content": "Say 'Hello from 4080!' in one sentence."}
        ],
        "stream": False,
    }
    try:
        response = requests.post(API_CHAT, json=payload, timeout=60)
        if response.status_code == 200:
            data = response.json()
            content = data.get('message', {}).get('content', '')
            print("✅ Chat test successful!")
            print(f"Response: {content}")
            return True
        else:
            print(f"❌ Chat test failed: {response.status_code}")
            print(f"Response: {response.text}")
            return False
    except Exception as e:
        print(f"❌ Chat test error: {e}")
        return False


if __name__ == "__main__":
    print("=" * 60)
    print("4080 LLM Server Connection Test")
    print("=" * 60)
    if test_server_connection():
        test_chat()
    else:
        print("\n⚠️  Server connection failed. Check:")
        print("   1. Server is running on the GPU VM")
        print("   2. Network connectivity to 10.0.30.63:11434")
        print("   3. Firewall allows connections")