# MCP-LLM Adapter
An adapter that connects LLM function calls to an MCP tool server.
## Overview
This adapter:
- Converts LLM function calls (OpenAI format) to MCP JSON-RPC calls (sketched below)
- Converts MCP responses back to LLM format
- Handles tool discovery and registration
- Manages errors and retries
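The two translations might look like the following. This is a minimal sketch, assuming MCP's standard `tools/call` JSON-RPC method and the OpenAI function-call shape; the helper names are illustrative, not the adapter's actual internals.

```python
# Sketch of the two translations (illustrative, not the adapter's internals).
import json

def llm_call_to_mcp_request(function_call: dict, request_id: int = 1) -> dict:
    """Wrap an OpenAI-style function call in an MCP tools/call request."""
    args = function_call["arguments"]
    # OpenAI serializes arguments as a JSON string; MCP expects an object.
    if isinstance(args, str):
        args = json.loads(args)
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": function_call["name"], "arguments": args},
    }

def mcp_response_to_llm_result(response: dict) -> str:
    """Flatten MCP content blocks into the plain string an LLM expects."""
    content = response["result"]["content"]
    return "\n".join(block["text"] for block in content if block.get("type") == "text")
```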
## Architecture

```
LLM Server (Ollama/vLLM)
    ↓ (function call)
MCP Adapter
    ↓ (JSON-RPC)
MCP Server
    ↓ (tool result)
MCP Adapter
    ↓ (function result)
LLM Server
```
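The JSON-RPC leg between the adapter and the MCP server is where errors and retries are managed. A hedged sketch of that leg follows; the retry policy shown is illustrative, not the adapter's exact behavior.

```python
# Sketch of the adapter's JSON-RPC leg with simple retries (illustrative).
import time
import requests

def post_jsonrpc(url: str, request: dict, retries: int = 3, backoff: float = 0.5) -> dict:
    """POST a JSON-RPC request, retrying transient failures with backoff."""
    for attempt in range(retries):
        try:
            resp = requests.post(url, json=request, timeout=10)
            resp.raise_for_status()
            body = resp.json()
            if "error" in body:
                # JSON-RPC-level error: surface it rather than retrying.
                raise RuntimeError(
                    f"MCP error {body['error']['code']}: {body['error']['message']}"
                )
            return body
        except requests.RequestException:
            if attempt == retries - 1:
                raise
            time.sleep(backoff * 2 ** attempt)
```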
## Quick Start

```bash
# Run tests
./run_test.sh

# Or manually
python test_adapter.py
```
## Usage

```python
from adapter import MCPAdapter

# Initialize adapter
adapter = MCPAdapter(mcp_server_url="http://localhost:8000/mcp")

# Discover tools
tools = adapter.discover_tools()

# Convert LLM function call to MCP call
llm_function_call = {
    "name": "weather",
    "arguments": {"location": "San Francisco"}
}
result = adapter.call_tool(llm_function_call)

# Result is in LLM format
print(result)  # "Weather in San Francisco: 72°F, sunny..."
```
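For context, the tool list handed to the LLM is expected to be in the OpenAI tools format. An illustrative (not actual) `discover_tools()` result for the `weather` tool above might look like:

```python
# Illustrative shape of a discovered tool in OpenAI format (assumed output).
tools = [
    {
        "type": "function",
        "function": {
            "name": "weather",
            "description": "Get current weather for a location",
            "parameters": {
                "type": "object",
                "properties": {"location": {"type": "string"}},
                "required": ["location"],
            },
        },
    }
]
```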
## Integration
The adapter can be integrated into:
- LLM routing layer
- Direct LLM server integration (sketched below)
- Standalone service
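As an example of the second option, here is a minimal sketch of one tool-call round trip against Ollama's OpenAI-compatible chat endpoint. It assumes `discover_tools()` returns OpenAI-format tool schemas and that Ollama is serving at its default port; the model name and message flow are illustrative.

```python
# Minimal sketch of direct LLM server integration (assumptions: Ollama's
# OpenAI-compatible endpoint at localhost:11434, illustrative model name).
import json
import requests
from adapter import MCPAdapter

adapter = MCPAdapter(mcp_server_url="http://localhost:8000/mcp")
messages = [{"role": "user", "content": "What's the weather in San Francisco?"}]

resp = requests.post(
    "http://localhost:11434/v1/chat/completions",
    json={"model": "llama3.1", "messages": messages, "tools": adapter.discover_tools()},
    timeout=60,
).json()

msg = resp["choices"][0]["message"]
if msg.get("tool_calls"):
    messages.append(msg)
    for tc in msg["tool_calls"]:
        # OpenAI-format arguments arrive as a JSON string; normalize to a dict
        # to match the call_tool() shape shown in Usage above.
        call = {"name": tc["function"]["name"],
                "arguments": json.loads(tc["function"]["arguments"])}
        messages.append({"role": "tool", "tool_call_id": tc["id"],
                         "content": adapter.call_tool(call)})
    # A second chat request with these messages would yield the final answer.
```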