# Quick Start Guide

Get the Atlas voice agent system up and running quickly.

## Prerequisites

1. **Python 3.8+** installed
2. **Ollama** installed and running (for local testing)
3. **pip** for installing dependencies

## Setup (5 minutes)

### 1. Install Dependencies

```bash
cd /home/beast/Code/atlas/home-voice-agent/mcp-server
pip install -r requirements.txt
```

### 2. Configure Environment

```bash
cd /home/beast/Code/atlas/home-voice-agent

# Check current config
cat .env | grep OLLAMA

# Toggle between local/remote
./toggle_env.sh
```

**Default**: Local testing (localhost:11434, llama3:latest)

### 3. Start Ollama (if testing locally)

```bash
# Check if running
curl http://localhost:11434/api/tags

# If not running, start it:
ollama serve

# Pull a model (if needed)
ollama pull llama3:latest
```

### 4. Start MCP Server

```bash
cd /home/beast/Code/atlas/home-voice-agent/mcp-server
./run.sh
```

The server starts on http://localhost:8000.

## Quick Test

### Test 1: Verify the Server is Running

```bash
curl http://localhost:8000/health
```

Should return: `{"status": "healthy", "tools": 22}`

### Test 2: Test a Tool

```bash
curl -X POST http://localhost:8000/mcp \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
      "name": "get_current_time",
      "arguments": {}
    }
  }'
```

### Test 3: Test the LLM Connection

```bash
cd /home/beast/Code/atlas/home-voice-agent/llm-servers/4080
python3 test_connection.py
```

### Test 4: Run All Tests

```bash
cd /home/beast/Code/atlas/home-voice-agent
./test_all.sh
```

## Access the Dashboard

1. Start the MCP server (see above)
2. Open a browser at http://localhost:8000
3. Explore:
   - Status overview
   - Recent conversations
   - Active timers
   - Tasks
   - Admin panel

## Common Tasks

### Switch Between Local/Remote

```bash
cd /home/beast/Code/atlas/home-voice-agent
./toggle_env.sh  # Toggles between local ↔ remote
```

### View Current Configuration

```bash
cat .env | grep OLLAMA
```

### Test Individual Components

```bash
# MCP Server tools
cd mcp-server && python3 test_mcp.py

# LLM Connection
cd llm-servers/4080 && python3 test_connection.py

# Router
cd routing && python3 test_router.py

# Memory
cd memory && python3 test_memory.py
```

### View Logs

```bash
# LLM logs
tail -f data/logs/llm_*.log

# Or use the dashboard:
# http://localhost:8000 → Admin Panel → Log Browser
```

## Troubleshooting

### Port 8000 Already in Use

```bash
# Find and kill the process
lsof -i:8000
pkill -f "uvicorn|mcp_server"

# Restart
cd mcp-server && ./run.sh
```

### Ollama Not Connecting

```bash
# Check if running
curl http://localhost:11434/api/tags

# Check the .env config
cat .env | grep OLLAMA_HOST

# Test the connection
cd llm-servers/4080 && python3 test_connection.py
```

### Tools Not Working

```bash
# Check the tool registry
cd mcp-server
python3 -c "from tools.registry import ToolRegistry; r = ToolRegistry(); print(f'Tools: {len(r.list_tools())}')"
```

### Import Errors

```bash
# Install missing dependencies
cd mcp-server
pip install -r requirements.txt

# Or install python-dotenv
pip install python-dotenv
```

## Next Steps

1. **Test the system**: Run `./test_all.sh`
2. **Explore the dashboard**: http://localhost:8000
3. **Try the tools**: Use the MCP API or the dashboard
4. **Read the docs**: See `TESTING.md` for the detailed testing guide
5. **Continue development**: Check `tickets/NEXT_STEPS.md` for recommended tickets

## Configuration Files

- `.env` - Main configuration (local/remote toggle)
- `.env.example` - Template file
- `toggle_env.sh` - Quick toggle script

## Documentation

- `TESTING.md` - Complete testing guide
- `ENV_CONFIG.md` - Environment configuration details
- `README.md` - Project overview
- `tickets/NEXT_STEPS.md` - Recommended next tickets

## Support

If you encounter issues:

1. Check the troubleshooting section above
2. Review logs in `data/logs/`
3. Check the dashboard admin panel
4. See `TESTING.md` for detailed test procedures
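## Appendix: Scripting the Quick Tests

The `tools/call` request from Test 2 can also be built and sent from Python instead of curl. A minimal stdlib-only sketch, assuming the server exposes the same `/mcp` JSON-RPC endpoint shown above (the `call_tool` helper name is made up for illustration, not part of the project):

```python
import json
import urllib.request

# The same JSON-RPC 2.0 payload used in Test 2.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "get_current_time", "arguments": {}},
}


def call_tool(url="http://localhost:8000/mcp"):
    """POST the payload to the MCP endpoint and return the decoded reply."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


# Print the request body; call call_tool() once the server is running.
print(json.dumps(payload, indent=2))
```

This is handy as a starting point for scripting against the MCP API rather than typing curl commands by hand.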
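For the "Port 8000 Already in Use" case, a plain TCP probe can confirm whether something is still listening before or after running `pkill`. A sketch (the `port_in_use` helper is hypothetical, not part of the project):

```python
import socket


def port_in_use(port, host="127.0.0.1"):
    """Return True if something is accepting TCP connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        # connect_ex returns 0 on success instead of raising.
        return s.connect_ex((host, port)) == 0


if __name__ == "__main__":
    print("port 8000 in use:", port_in_use(8000))
```

If this prints `True` after you killed the server, something else owns the port; `lsof -i:8000` will tell you what.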
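Similarly, the `/health` reply from Test 1 is plain JSON, so a scripted check can parse it instead of eyeballing curl output. A sketch against the sample reply shown above (the tool count of 22 comes from that sample and may drift as tools are added):

```python
import json


def check_health(body: str) -> bool:
    """Return True if a /health reply reports a healthy server."""
    return json.loads(body).get("status") == "healthy"


# Sample reply from Test 1 above.
sample = '{"status": "healthy", "tools": 22}'
reply = json.loads(sample)
print(f"healthy={check_health(sample)}, tools={reply['tools']}")
```

In a real script you would feed `check_health` the body fetched from http://localhost:8000/health rather than the hardcoded sample.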