feat: Major UI/UX improvements and production readiness
Some checks failed
CI / backend-test (push) Successful in 4m9s
CI / frontend-test (push) Failing after 3m48s
CI / lint-python (push) Successful in 1m41s
CI / secret-scanning (push) Successful in 1m20s
CI / dependency-scan (push) Successful in 10m50s
CI / workflow-summary (push) Successful in 1m11s
## Features Added
### Document Reference System
- Implemented numbered document references (@1, @2, etc.) with autocomplete dropdown
- Added fuzzy filename matching for @filename references
- Document filtering now prioritizes numeric refs > filename refs > all documents
- Autocomplete dropdown appears when typing @ with keyboard navigation (Up/Down, Enter/Tab, Escape)
- Document numbers displayed in UI for easy reference
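The filtering priority above can be sketched as follows. This is a hypothetical simplification, not the actual `docs_context.py` implementation: the function name, the matching rules, and the document-list shape are all assumptions.

```python
import re

def parse_doc_refs(text, documents):
    """Resolve @N and @filename references against a conversation's documents.

    `documents` is an ordered list of filenames (upload order). Numeric refs
    (@1, @2, ...) take priority over filename refs; with no refs at all,
    every document is returned.
    """
    tokens = re.findall(r"@(\w[\w.-]*)", text)
    numeric = [int(t) for t in tokens if t.isdigit()]
    names = [t for t in tokens if not t.isdigit()]

    if numeric:
        # @N is 1-based by upload order
        return [documents[n - 1] for n in numeric if 0 < n <= len(documents)]
    if names:
        # Fuzzy match: case-insensitive substring match against filenames
        return [d for d in documents if any(n.lower() in d.lower() for n in names)]
    return list(documents)
```

Numeric references win over filename references, mirroring the priority order listed above.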
### Conversation Management
- Added conversation rename functionality with inline editing
- Implemented conversation search (by title and content)
- Search box always visible, even when no conversations exist
- Export reports now replace @N references with actual filenames
### UI/UX Improvements
- Removed debug toggle button
- Improved text contrast in dark mode (better visibility)
- Made input textarea expand to full available width
- Fixed file text color for better readability
- Enhanced document display with numbered badges
### Configuration & Timeouts
- Made HTTP client timeouts configurable (connect, write, pool)
- Added .env.example with all configuration options
- Updated timeout documentation
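The three HTTP client knobs are exposed as environment variables (the names come from ARCHITECTURE.md; the values below are illustrative, not the shipped defaults):

```bash
# HTTP client timeouts for the OpenAI-compatible server (seconds)
OPENAI_COMPAT_CONNECT_TIMEOUT_SECONDS=10
OPENAI_COMPAT_WRITE_TIMEOUT_SECONDS=60
OPENAI_COMPAT_POOL_TIMEOUT_SECONDS=10
```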
### Developer Experience
- Added `make test-setup` target for automated test conversation creation
- Test setup script supports TEST_MESSAGE and TEST_DOCS env vars
- Improved Makefile with dev and test-setup targets
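Per the Makefile help text, a typical invocation looks like:

```bash
# Create a test conversation, upload two docs, prefill a message in the UI
TEST_DOCS=path1.md,path2.md TEST_MESSAGE='your msg' make test-setup
```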
### Documentation
- Updated ARCHITECTURE.md with all new features
- Created comprehensive deployment documentation
- Added GPU VM setup guides
- Removed unnecessary files (CLAUDE.md, CONTRIBUTING.md, header.jpg)
- Organized documentation in docs/ directory
### GPU VM / Ollama (Stability + GPU Offload)
- Updated GPU VM docs to reflect the working systemd environment for remote Ollama
- Standardized remote Ollama port to 11434 (and added /v1/models verification)
- Documented required env for GPU offload on this VM:
- `OLLAMA_MODELS=/mnt/data/ollama`, `HOME=/mnt/data/ollama/home`
- `OLLAMA_LLM_LIBRARY=cuda_v12` (not `cuda`)
- `LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12`
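As a systemd drop-in, the documented environment looks like the following. The override path is an assumption; the values are the ones listed above.

```ini
# /etc/systemd/system/ollama.service.d/override.conf  (path assumed)
[Service]
Environment="OLLAMA_MODELS=/mnt/data/ollama"
Environment="HOME=/mnt/data/ollama/home"
Environment="OLLAMA_LLM_LIBRARY=cuda_v12"
Environment="LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12"
```

After editing, run `systemctl daemon-reload && systemctl restart ollama`, then verify with `curl http://<vm-ip>:11434/v1/models`.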
## Technical Changes
### Backend
- Enhanced `docs_context.py` with reference parsing (numeric and filename)
- Added `update_conversation_title` to storage.py
- New endpoints: PATCH /api/conversations/{id}/title, GET /api/conversations/search
- Improved report generation with filename substitution
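The two new endpoints can be exercised with plain `urllib`. The paths and the `q` query parameter are from the docs; the JSON body shape `{"title": ...}` for the rename endpoint is an assumption.

```python
import json
from urllib import parse, request

BASE = "http://localhost:8001"  # backend port per the architecture docs

def rename_request(conversation_id: str, title: str) -> request.Request:
    # PATCH /api/conversations/{id}/title -- body shape is assumed
    url = f"{BASE}/api/conversations/{conversation_id}/title"
    return request.Request(
        url,
        data=json.dumps({"title": title}).encode(),
        method="PATCH",
        headers={"Content-Type": "application/json"},
    )

def search_request(query: str) -> request.Request:
    # GET /api/conversations/search?q=...
    return request.Request(f"{BASE}/api/conversations/search?q={parse.quote(query)}")
```

These only build the requests; send them with `request.urlopen(...)` against a running backend.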
### Frontend
- Removed debugMode state and related code
- Added autocomplete dropdown component
- Implemented search functionality in Sidebar
- Enhanced ChatInterface with autocomplete and improved textarea sizing
- Updated CSS for better contrast and responsive design
## Files Changed
- Backend: config.py, council.py, docs_context.py, main.py, storage.py
- Frontend: App.jsx, ChatInterface.jsx, Sidebar.jsx, and related CSS files
- Documentation: README.md, ARCHITECTURE.md, new docs/ directory
- Configuration: .env.example, Makefile
- Scripts: scripts/test_setup.py
## Breaking Changes
None - all changes are backward compatible
## Testing
- All existing tests pass
- New test-setup script validates conversation creation workflow
- Manual testing of autocomplete, search, and rename features
This commit is contained in: 3546c04348

**.cursor/rules.md** (new file, +27):

```markdown
## Cursor Rules (Project Standards)

### Non-negotiables
- Never commit secrets. Never paste credentials into code, docs, or logs.
- Prefer env vars over hardcoding (see `.env.example`).
- Add/extend tests for any behavior change.

### Backend standards
- Use the provider abstraction (`backend/llm_client.py`) for all model calls.
- Keep request/response payloads explicit and logged safely (no secrets).
- Treat uploaded documents as untrusted input:
  - enforce `.md` extension
  - enforce size limits
  - never execute content

### Frontend standards
- Keep API calls in `frontend/src/api.js`.
- Keep app state + orchestration in `frontend/src/App.jsx`.
- Components should be readable and small.

### Testing
- Backend: `make test-backend`
- Frontend: `make test-frontend`

### Architecture
- The app runs locally; the GPU VM runs an OpenAI-compatible inference server.
- The app calls the VM over HTTP (or via SSH tunnel) and does not "use the GPU" directly.
```
**.editorconfig** (new file, +15):

```ini
root = true

[*]
charset = utf-8
end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true

[*.py]
indent_style = space
indent_size = 4

[*.{js,jsx,ts,tsx,json,css,md}]
indent_style = space
indent_size = 2
```
**.env.example** (new file, +80):

```bash
# ============================================================================
# LLM Council Configuration
# ============================================================================
# Copy this file to .env and customize as needed
# ============================================================================

# ----------------------------------------------------------------------------
# Provider & Server Configuration
# ----------------------------------------------------------------------------

# Use local Ollama (automatically sets base URL to http://localhost:11434)
USE_LOCAL_OLLAMA=true

# For remote Ollama or other OpenAI-compatible servers, comment USE_LOCAL_OLLAMA
# and set the base URL instead:
# OPENAI_COMPAT_BASE_URL=http://your-server:11434
# OPENAI_COMPAT_BASE_URL=http://10.0.30.63:8000

# Optional API key if your server requires authentication
# OPENAI_COMPAT_API_KEY=your_api_key_here

# ----------------------------------------------------------------------------
# Model Configuration
# ----------------------------------------------------------------------------

# Models for the council (comma or newline separated)
# Tip: Check available models with: curl -s 'http://localhost:8001/api/llm/status?probe=true'
COUNCIL_MODELS=llama3.2:3b,qwen2.5:3b,gemma2:2b

# Chairman model (synthesizes final response from council)
CHAIRMAN_MODEL=llama3.2:3b

# Maximum tokens per request
MAX_TOKENS=1024

# ----------------------------------------------------------------------------
# Timeout Configuration
# ----------------------------------------------------------------------------

# Default timeout for general LLM queries (Stage 1: council responses)
LLM_TIMEOUT_SECONDS=300.0

# Timeout for chairman synthesis (may need longer for complex responses)
CHAIRMAN_TIMEOUT_SECONDS=180.0

# Timeout for title generation (short responses)
TITLE_GENERATION_TIMEOUT_SECONDS=120.0

# HTTP client timeout for OpenAI-compatible server (fallback, rarely used)
OPENAI_COMPAT_TIMEOUT_SECONDS=300

# ----------------------------------------------------------------------------
# Retry Configuration
# ----------------------------------------------------------------------------

# Number of retries for failed requests (retryable HTTP errors)
OPENAI_COMPAT_RETRIES=2

# Exponential backoff base delay between retries (seconds)
OPENAI_COMPAT_RETRY_BACKOFF_SECONDS=0.5

# ----------------------------------------------------------------------------
# Concurrency Configuration
# ----------------------------------------------------------------------------

# Maximum concurrent model requests (0 = unlimited, 1 = sequential)
LLM_MAX_CONCURRENCY=1

# ----------------------------------------------------------------------------
# Document Upload Configuration (Optional)
# ----------------------------------------------------------------------------

# Directory for uploaded markdown documents (per-conversation)
# DOCS_DIR=data/docs

# Maximum document size in bytes (default: 1MB)
# MAX_DOC_BYTES=1000000

# Maximum characters to preview when fetching documents (default: 20000)
# MAX_DOC_PREVIEW_CHARS=20000
```
**.github/workflows/ci.yml** (new file, +142):

```yaml
name: CI

on:
  push:
    branches: [main, master]
  pull_request:
    branches: [main, master]

jobs:
  backend-test:
    runs-on: ubuntu-latest
    steps:
      - name: Check out code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.10'

      - name: Install uv
        run: |
          curl -LsSf https://astral.sh/uv/install.sh | sh
          echo "$HOME/.cargo/bin" >> $GITHUB_PATH

      - name: Install dependencies
        run: uv sync

      - name: Run backend tests
        run: uv run python -m unittest discover -s backend/tests -p "test_*.py" -v

  frontend-test:
    runs-on: ubuntu-latest
    steps:
      - name: Check out code
        uses: actions/checkout@v4

      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Install dependencies
        run: |
          cd frontend
          npm ci

      - name: Run frontend tests
        run: |
          cd frontend
          npm test

      - name: Lint frontend
        run: |
          cd frontend
          npm run lint

  lint-python:
    runs-on: ubuntu-latest
    steps:
      - name: Check out code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.10'

      - name: Install uv
        run: |
          curl -LsSf https://astral.sh/uv/install.sh | sh
          echo "$HOME/.cargo/bin" >> $GITHUB_PATH

      - name: Install dependencies
        run: uv sync

      - name: Check Python syntax
        run: |
          python3 -m py_compile backend/**/*.py || true
          echo "Python syntax check complete"

  secret-scanning:
    runs-on: ubuntu-latest
    steps:
      - name: Check out code
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Run Gitleaks
        uses: gitleaks/gitleaks-action@v2
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        continue-on-error: true

  dependency-scan:
    runs-on: ubuntu-latest
    steps:
      - name: Check out code
        uses: actions/checkout@v4

      - name: Run Trivy vulnerability scanner
        uses: aquasecurity/trivy-action@master
        with:
          scan-type: 'fs'
          scan-ref: '.'
          format: 'sarif'
          output: 'trivy-results.sarif'
          severity: 'CRITICAL,HIGH'
        continue-on-error: true

      - name: Upload Trivy results to GitHub Security
        uses: github/codeql-action/upload-sarif@v3
        if: always()
        with:
          sarif_file: 'trivy-results.sarif'
        continue-on-error: true

  workflow-summary:
    runs-on: ubuntu-latest
    needs: [backend-test, frontend-test, lint-python, secret-scanning, dependency-scan]
    if: always()
    steps:
      - name: Generate workflow summary
        run: |
          echo "## 🔍 CI Workflow Summary" >> $GITHUB_STEP_SUMMARY || true
          echo "" >> $GITHUB_STEP_SUMMARY || true
          echo "### Job Results" >> $GITHUB_STEP_SUMMARY || true
          echo "" >> $GITHUB_STEP_SUMMARY || true
          echo "| Job | Status |" >> $GITHUB_STEP_SUMMARY || true
          echo "|-----|--------|" >> $GITHUB_STEP_SUMMARY || true
          echo "| 🐍 Backend Tests | ${{ needs.backend-test.result }} |" >> $GITHUB_STEP_SUMMARY || true
          echo "| ⚛️ Frontend Tests | ${{ needs.frontend-test.result }} |" >> $GITHUB_STEP_SUMMARY || true
          echo "| 📝 Python Lint | ${{ needs.lint-python.result }} |" >> $GITHUB_STEP_SUMMARY || true
          echo "| 🔐 Secret Scanning | ${{ needs.secret-scanning.result }} |" >> $GITHUB_STEP_SUMMARY || true
          echo "| 📦 Dependency Scan | ${{ needs.dependency-scan.result }} |" >> $GITHUB_STEP_SUMMARY || true
          echo "" >> $GITHUB_STEP_SUMMARY || true
          echo "### 📊 Summary" >> $GITHUB_STEP_SUMMARY || true
          echo "" >> $GITHUB_STEP_SUMMARY || true
          echo "All checks have completed. Review individual job logs for details." >> $GITHUB_STEP_SUMMARY || true
        continue-on-error: true
```
**.gitignore** (new file, +52):

```gitignore
# Python-generated files
__pycache__/
*.py[oc]
*.pyc
build/
dist/
wheels/
*.egg-info
.pytest_cache/
.coverage
htmlcov/

# Virtual environments
.venv
venv/
ENV/
env/

# Keys and secrets
.env
.env.local
.env.*.local

# Data files
data/

# Frontend
frontend/node_modules/
frontend/dist/
frontend/.vite/
frontend/build/

# IDE
.vscode/
.idea/
*.swp
*.swo
*~

# OS
.DS_Store
Thumbs.db

# Logs
*.log
logs/

# Temporary files
*.tmp
*.temp
.cache/
```
**.python-version** (new file, +1):

```
3.10
```
**ARCHITECTURE.md** (new file, +80):

```markdown
## Architecture

### Overview
LLM Council is a local web app with:
- **Frontend**: React + Vite (`frontend/`) on `:5173`
- **Backend**: FastAPI (`backend/`) on `:8001`
- **Storage**: JSON conversations + uploaded markdown docs on disk (`data/`)
- **LLM Provider**: pluggable backend client

### Runtime data flow
1. UI sends a message to backend (`/api/conversations/{id}/message/stream`).
2. Backend loads any uploaded markdown docs for the conversation and injects them as additional context.
3. Backend runs a 3-stage pipeline:
   - **Stage 1**: query each council model in parallel
   - **Stage 2**: anonymized peer review + ranking
   - **Stage 3**: chairman synthesis

### LLM provider layer
The backend uses OpenAI-compatible API servers (Ollama, vLLM, TGI, etc.).

Configuration:
- `USE_LOCAL_OLLAMA=true` - automatically sets base URL to `http://localhost:11434`
- `OPENAI_COMPAT_BASE_URL` - set to your server URL (e.g., `http://remote-server:11434`)

The provider (`backend/openai_compat.py`) targets servers that expose:
- `POST /v1/chat/completions`
- `GET /v1/models`

The council orchestration uses the unified interface in `backend/llm_client.py`.

### Document uploads and references
- Per-conversation markdown documents are stored under: `data/docs/<conversation_id>/`
- Documents are automatically numbered (1, 2, 3, etc.) based on upload order
- Documents can be referenced in prompts using:
  - Numeric references: `@1`, `@2`, `@3` (by upload order)
  - Filename references: `@filename` (fuzzy matching)
- Backend endpoints:
  - `GET /api/conversations/{id}/documents`
  - `POST /api/conversations/{id}/documents` (multipart file)
  - `GET /api/conversations/{id}/documents/{doc_id}` (preview/truncated)
  - `DELETE /api/conversations/{id}/documents/{doc_id}`
- Document context is automatically injected when referenced in user queries
- Export reports replace `@1`, `@2` references with actual filenames

### Conversation management
- Conversations stored as JSON files in `data/conversations/`
- Features:
  - Create, list, get, delete conversations
  - Rename conversations (inline editing)
  - Search conversations by title and message content
  - Export conversations as markdown reports
  - Auto-generate titles from first message

### Frontend features
- **Document autocomplete**: Type `@` to see numbered document list with autocomplete
- **Conversation search**: Search box filters conversations by title/content
- **Theme toggle**: Light/dark mode support
- **Streaming responses**: Real-time updates as models respond
- **Document preview**: View uploaded documents inline
- **Export reports**: Download conversations as markdown files

### Configuration
Primary runtime config is via `.env` (gitignored). Key settings:
- Model configuration: `COUNCIL_MODELS`, `CHAIRMAN_MODEL`
- Timeouts: `LLM_TIMEOUT_SECONDS`, `CHAIRMAN_TIMEOUT_SECONDS`, `OPENAI_COMPAT_TIMEOUT_SECONDS`
- HTTP client timeouts: `OPENAI_COMPAT_CONNECT_TIMEOUT_SECONDS`, `OPENAI_COMPAT_WRITE_TIMEOUT_SECONDS`, `OPENAI_COMPAT_POOL_TIMEOUT_SECONDS`
- Document limits: `MAX_DOC_BYTES`, `MAX_DOC_PREVIEW_CHARS`

Useful endpoints:
- `GET /api/llm/status` and `GET /api/llm/status?probe=true`
- `GET /api/conversations/search?q=...` - Search conversations
- `PATCH /api/conversations/{id}/title` - Rename conversation

### Security model (local dev)
This is currently built for local/private network usage.
If you deploy beyond localhost, add:
- auth (session/token)
- rate limits
- upload limits
- network restrictions / TLS
```
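The 3-stage pipeline described in the architecture notes can be sketched with `asyncio`. This is a minimal illustration, not the real `backend/council.py`: `ask(model, prompt)` stands in for the actual `llm_client` call, and the prompt wording is invented.

```python
import asyncio

async def run_council(prompt, council, chairman, ask):
    """Run the 3-stage council pipeline with a caller-supplied `ask` coroutine."""
    # Stage 1: query each council model in parallel
    answers = await asyncio.gather(*(ask(m, prompt) for m in council))

    # Stage 2: anonymized peer review -- models see numbered responses, not names
    anonymized = "\n\n".join(f"Response {i + 1}: {a}" for i, a in enumerate(answers))
    reviews = await asyncio.gather(
        *(ask(m, f"Rank these responses:\n{anonymized}") for m in council))

    # Stage 3: chairman synthesizes a final answer from answers + reviews
    return await ask(
        chairman,
        f"Question: {prompt}\nAnswers:\n{anonymized}\nReviews:\n" + "\n".join(reviews))
```

A real implementation would also handle per-model timeouts and concurrency limits (`LLM_MAX_CONCURRENCY`), which this sketch omits.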
**CHANGELOG.md** (new file, +48):

```markdown
# Changelog

All notable changes to this project will be documented in this file.

## [Unreleased] - Production Ready Release

### Added
- **Document Reference System**: Numbered references (@1, @2, etc.) with autocomplete dropdown
- **Fuzzy Filename Matching**: Support for @filename references with partial matching
- **Conversation Rename**: Inline editing to rename conversations
- **Conversation Search**: Search by title and content, always-visible search box
- **Autocomplete Dropdown**: Keyboard-navigable dropdown when typing @
- **Test Setup Script**: `make test-setup` for automated test conversation creation
- **Configuration Template**: `.env.example` with all available options
- **HTTP Client Timeouts**: Configurable connect, write, and pool timeouts
- **Comprehensive Documentation**: Deployment guides, GPU VM setup, architecture docs

### Changed
- **UI Improvements**: Better text contrast, larger input textarea, improved document display
- **Report Generation**: @N references replaced with actual filenames in exports
- **Document Filtering**: Prioritizes numeric refs > filename refs > all documents
- **Removed Debug UI**: Cleaned up debug toggle button
- **Documentation Organization**: Moved deployment docs to `docs/` directory

### Fixed
- **Text Visibility**: Improved contrast in dark mode
- **Input Sizing**: Textarea now expands to full available width
- **File Text Color**: Better visibility for document names
- **Search Visibility**: Search box remains visible even with no conversations
- **ReferenceError**: Fixed `debugMode is not defined` error

### Technical
- Enhanced `docs_context.py` with reference parsing
- New API endpoints: `PATCH /api/conversations/{id}/title`, `GET /api/conversations/search`
- Improved error handling and user feedback
- Better state management in React components

## [0.1.0] - Initial Release

### Added
- Multi-LLM council system with Stage 1 (individual responses), Stage 2 (peer review), Stage 3 (synthesis)
- OpenAI-compatible API support (Ollama, vLLM, TGI)
- Document upload and management
- Conversation management
- Streaming responses
- Light/dark theme toggle
- Basic UI with React + Vite
```
**COMMIT_MESSAGE.md** (new file, +79): same content as the commit message at the top of this page.
298
Makefile
Normal file
298
Makefile
Normal file
@ -0,0 +1,298 @@
|
||||
.PHONY: dev backend frontend test test-backend test-frontend test-setup \
|
||||
stop-backend stop-frontend stop-ollama reset \
|
||||
start-ollama start-backend start-frontend up restart \
|
||||
status logs help
|
||||
|
||||
help:
|
||||
@# Use color only when output is a TTY. Keep plain output for logs/CI.
|
||||
@is_tty=0; if [ -t 1 ]; then is_tty=1; fi; \
|
||||
if [ $$is_tty -eq 1 ]; then \
|
||||
BOLD=$$(printf '\033[1m'); DIM=$$(printf '\033[2m'); RESET=$$(printf '\033[0m'); \
|
||||
CYAN=$$(printf '\033[36m'); GREEN=$$(printf '\033[32m'); YELLOW=$$(printf '\033[33m'); \
|
||||
else \
|
||||
BOLD=""; DIM=""; RESET=""; CYAN=""; GREEN=""; YELLOW=""; \
|
||||
fi; \
|
||||
printf "%sLLM Council%s %sMake targets%s\n" "$$BOLD" "$$RESET" "$$DIM" "$$RESET"; \
|
||||
printf "\n"; \
|
||||
printf "%sCore%s\n" "$$CYAN" "$$RESET"; \
|
||||
printf " %smake dev%s Start backend + frontend (foreground, Ctrl+C to stop)\n" "$$GREEN" "$$RESET"; \
|
||||
printf " %smake up%s Start ollama + backend + frontend (background via nohup)\n" "$$GREEN" "$$RESET"; \
|
||||
printf " %smake restart%s Reset (stop) then start everything (up)\n" "$$GREEN" "$$RESET"; \
|
||||
printf "\n"; \
|
||||
printf "%sStart/Stop%s\n" "$$CYAN" "$$RESET"; \
|
||||
printf " %smake start-ollama%s Start Ollama (systemd if available, else 'ollama serve' nohup)\n" "$$GREEN" "$$RESET"; \
|
||||
printf " %smake start-backend%s Start backend (nohup, logs: %s/tmp/llm-council-backend.log%s)\n" "$$GREEN" "$$RESET" "$$DIM" "$$RESET"; \
|
||||
printf " %smake start-frontend%s Start frontend (nohup, logs: %s/tmp/llm-council-frontend.log%s)\n" "$$GREEN" "$$RESET" "$$DIM" "$$RESET"; \
|
||||
printf " %smake stop-backend%s Stop backend on port %s8001%s\n" "$$GREEN" "$$RESET" "$$YELLOW" "$$RESET"; \
|
||||
printf " %smake stop-frontend%s Stop frontend on port %s5173%s\n" "$$GREEN" "$$RESET" "$$YELLOW" "$$RESET"; \
|
||||
printf " %smake stop-ollama%s Attempt to stop Ollama (systemd if available)\n" "$$GREEN" "$$RESET"; \
|
||||
printf " %smake reset%s Stop frontend + backend + attempt Ollama stop\n" "$$GREEN" "$$RESET"; \
|
||||
printf "\n"; \
|
||||
printf "%sTesting / setup%s\n" "$$CYAN" "$$RESET"; \
|
||||
printf " %smake test%s Run backend + frontend tests\n" "$$GREEN" "$$RESET"; \
|
||||
printf " %smake test-backend%s Run backend unit tests\n" "$$GREEN" "$$RESET"; \
|
||||
printf " %smake test-frontend%s Run frontend tests\n" "$$GREEN" "$$RESET"; \
|
||||
printf " %smake test-setup%s Create a new conversation, upload TEST_DOCS, prefill TEST_MESSAGE in UI\n" "$$GREEN" "$$RESET"; \
|
||||
printf " %sEnv:%s TEST_DOCS=path1.md,path2.md TEST_MESSAGE='your msg'\n" "$$DIM" "$$RESET"; \
|
||||
printf "\n"; \
|
||||
printf "%sDiagnostics%s\n" "$$CYAN" "$$RESET"; \
|
||||
printf " %smake status%s Show whether backend/frontend/ollama are up + PIDs + basic info\n" "$$GREEN" "$$RESET"; \
|
||||
printf " %smake logs%s Tail recent backend/frontend logs (from /tmp)\n" "$$GREEN" "$$RESET"; \
|
||||
printf "\n"; \
|
||||
printf "%sHelp%s\n" "$$CYAN" "$$RESET"; \
|
||||
printf " %smake help%s Show this message\n" "$$GREEN" "$$RESET"
|
||||
|
||||
backend:
|
||||
uv run python -m backend.main
|
||||
|
||||
frontend:
|
||||
cd frontend && npm run dev
|
||||
|
||||
dev:
|
||||
./start.sh
|
||||
|
||||
test-backend:
|
||||
uv run python -m unittest discover -s backend/tests -p "test_*.py" -q
|
||||
|
||||
test-frontend:
|
||||
cd frontend && npm test
|
||||
|
||||
test: test-backend test-frontend
|
||||
|
||||
stop-backend:
|
||||
@echo "Stopping backend processes on port 8001..."
|
||||
@if lsof -ti:8001 > /dev/null 2>&1; then \
|
||||
lsof -ti:8001 | xargs kill -9 2>/dev/null || true; \
|
||||
echo "✓ Backend stopped"; \
|
||||
sleep 1; \
|
||||
else \
|
||||
echo "✓ No backend process found on port 8001"; \
|
||||
fi
|
||||
|
||||
stop-frontend:
|
||||
@echo "Stopping frontend processes on port 5173..."
|
||||
@if lsof -ti:5173 > /dev/null 2>&1; then \
|
||||
lsof -ti:5173 | xargs kill -9 2>/dev/null || true; \
|
||||
echo "✓ Frontend stopped"; \
|
||||
sleep 1; \
|
||||
else \
|
||||
echo "✓ No frontend process found on port 5173"; \
|
||||
fi
|
||||
|
||||
stop-ollama:
|
||||
@echo "Stopping Ollama..."
|
||||
@# Prefer systemd if available; do not force-kill processes (safer).
|
||||
@if command -v systemctl >/dev/null 2>&1; then \
|
||||
if systemctl is-active --quiet ollama 2>/dev/null; then \
|
||||
echo "Stopping systemd service: ollama"; \
|
||||
systemctl stop ollama 2>/dev/null || true; \
|
||||
sleep 2; \
|
||||
else \
|
||||
echo "Ollama systemd service is not active"; \
|
||||
fi; \
|
||||
else \
|
||||
echo "systemctl not found (not a systemd environment)"; \
|
||||
fi
|
||||
@PIDS=$$(pgrep -x ollama 2>/dev/null || true); \
|
||||
if [ -n "$$PIDS" ]; then \
|
||||
echo "⚠️ Ollama processes still running (PIDs: $$PIDS)"; \
|
||||
echo " Stop manually with one of:"; \
|
||||
echo " - systemctl stop ollama (if installed as a service)"; \
|
||||
echo " - pkill -x ollama"; \
|
||||
echo " - sudo pkill -x ollama"; \
|
||||
else \
|
||||
echo "✓ Ollama is stopped"; \
|
||||
fi
|
||||
|
||||
reset: stop-frontend stop-backend stop-ollama
|
||||
@echo ""
|
||||
@echo "✓ Reset complete - Frontend/Backend stopped (Ollama stop attempted)"
|
||||
@echo ""
|
||||
@echo "To start fresh (recommended):"
|
||||
@echo " make up"
|
||||
@echo ""
|
||||
@echo "Or manually:"
|
||||
@echo " 1) make start-ollama"
|
||||
@echo " 2) make start-backend"
|
||||
@echo " 3) make start-frontend"
|
||||
|
||||
start-ollama:
|
||||
@echo "Starting Ollama..."
|
||||
@if curl -s --max-time 2 http://localhost:11434/api/tags >/dev/null 2>&1; then \
|
||||
echo "✓ Ollama already responding on http://localhost:11434"; \
|
||||
exit 0; \
|
||||
fi
|
||||
@if command -v systemctl >/dev/null 2>&1 && systemctl list-unit-files 2>/dev/null | grep -q "^ollama\\.service"; then \
|
||||
echo "Starting systemd service: ollama"; \
|
||||
systemctl start ollama 2>/dev/null || true; \
|
||||
sleep 2; \
|
||||
else \
|
||||
echo "systemd service not available; starting 'ollama serve' in background"; \
|
||||
nohup ollama serve > /tmp/llm-council-ollama.log 2>&1 & \
|
||||
echo "ollama serve PID: $$!"; \
|
||||
sleep 2; \
|
||||
fi
|
||||
@if curl -s --max-time 2 http://localhost:11434/api/tags >/dev/null 2>&1; then \
|
||||
echo "✓ Ollama is up: http://localhost:11434"; \
|
||||
else \
|
||||
echo "⚠️ Ollama did not respond yet. Check: /tmp/llm-council-ollama.log"; \
|
||||
fi
|
||||
|
||||
start-backend:
	@echo "Starting backend..."
	@if curl -s --max-time 2 http://localhost:8001/ >/dev/null 2>&1; then \
		echo "✓ Backend already responding on http://localhost:8001"; \
		exit 0; \
	fi
	@nohup uv run python -m backend.main > /tmp/llm-council-backend.log 2>&1 & \
	echo "backend PID: $$!"; \
	sleep 2; \
	if curl -s --max-time 2 http://localhost:8001/ >/dev/null 2>&1; then \
		echo "✓ Backend is up: http://localhost:8001"; \
	else \
		echo "⚠️ Backend did not respond yet. Check: /tmp/llm-council-backend.log"; \
	fi

start-frontend:
	@echo "Starting frontend..."
	@if curl -s --max-time 2 http://localhost:5173/ >/dev/null 2>&1; then \
		echo "✓ Frontend already responding on http://localhost:5173"; \
		exit 0; \
	fi
	@nohup sh -c "cd frontend && npm run dev" > /tmp/llm-council-frontend.log 2>&1 & \
	echo "frontend PID: $$!"; \
	sleep 2; \
	if curl -s --max-time 2 http://localhost:5173/ >/dev/null 2>&1; then \
		echo "✓ Frontend is up: http://localhost:5173"; \
	else \
		echo "⚠️ Frontend did not respond yet. Check: /tmp/llm-council-frontend.log"; \
	fi

up: start-ollama start-backend start-frontend
	@echo ""
	@echo "✓ All services started (or already running)"
	@echo "Frontend: http://localhost:5173"
	@echo "Backend: http://localhost:8001"
	@echo "Ollama: http://localhost:11434"

restart: reset up

status:
	@echo "=== LLM Council status ==="
	@date
	@echo ""
	@echo "-- Backend (8001) --"
	@if curl -s --max-time 2 http://localhost:8001/ >/dev/null 2>&1; then \
		echo "✓ UP: http://localhost:8001"; \
	else \
		echo "✗ DOWN: http://localhost:8001"; \
	fi
	@echo "PIDs:"; lsof -ti:8001 2>/dev/null | tr '\n' ' ' || true; echo ""
	@echo ""
	@echo "-- Frontend (5173) --"
	@if curl -s --max-time 2 http://localhost:5173/ >/dev/null 2>&1; then \
		echo "✓ UP: http://localhost:5173"; \
	else \
		echo "✗ DOWN: http://localhost:5173"; \
	fi
	@echo "PIDs:"; lsof -ti:5173 2>/dev/null | tr '\n' ' ' || true; echo ""
	@echo ""
	@echo "-- Ollama (11434) --"
	@if curl -s --max-time 2 http://localhost:11434/api/tags >/dev/null 2>&1; then \
		echo "✓ UP: http://localhost:11434"; \
	else \
		echo "✗ DOWN/UNREACHABLE: http://localhost:11434"; \
	fi
	@O_PID=$$(pgrep -x ollama 2>/dev/null || true); \
	R_CNT=$$(pgrep -f "ollama runner" 2>/dev/null | wc -l | tr -d ' '); \
	if [ -n "$$O_PID" ]; then \
		echo "ollama PIDs: $$O_PID"; \
		echo "runner count: $$R_CNT"; \
		echo ""; \
		echo "top ollama processes:"; \
		ps -o pid,ppid,pcpu,pmem,etime,command -p $$O_PID 2>/dev/null || true; \
		pgrep -f "ollama runner" 2>/dev/null | head -5 | while read pid; do \
			ps -o pid,ppid,pcpu,pmem,etime,command -p $$pid 2>/dev/null || true; \
		done; \
	else \
		echo "ollama not running"; \
	fi
	@echo ""
	@echo "-- Quick health hints --"
	@echo "Backend log: /tmp/llm-council-backend.log"
	@echo "Frontend log: /tmp/llm-council-frontend.log"
	@echo "Run: make logs"

logs:
	@echo "=== Recent logs ==="
	@echo ""
	@echo "-- Backend log (/tmp/llm-council-backend.log) --"
	@tail -n 80 /tmp/llm-council-backend.log 2>/dev/null || echo "(no backend log found)"
	@echo ""
	@echo "-- Frontend log (/tmp/llm-council-frontend.log) --"
	@tail -n 120 /tmp/llm-council-frontend.log 2>/dev/null || echo "(no frontend log found)"

test-setup:
	@echo "Setting up test conversation..."
	@echo "Usage: TEST_MESSAGE='your message' TEST_DOCS='path1.md,path2.md' make test-setup"
	@echo ""
	@echo "⚠️ Safeguard check: Verifying backend is running..."
	@if ! curl -s http://localhost:8001/ > /dev/null 2>&1; then \
		echo "✗ Backend is NOT running on port 8001"; \
		echo " Starting backend in background..."; \
		uv run python -m backend.main > /tmp/llm-council-backend.log 2>&1 & \
		BACKEND_PID=$$!; \
		echo " Backend PID: $$BACKEND_PID"; \
		sleep 2; \
		if ! curl -s http://localhost:8001/ > /dev/null 2>&1; then \
			echo "✗ Backend failed to start"; \
			exit 1; \
		fi; \
		echo "✓ Backend started"; \
	else \
		echo "✓ Backend is already running"; \
	fi
	@echo ""
	@echo "Running test setup script..."
	@uv run python scripts/test_setup.py > /tmp/llm-council-test-setup.log 2>&1; \
	cat /tmp/llm-council-test-setup.log; \
	CONV_ID=$$(grep "CONVERSATION_ID=" /tmp/llm-council-test-setup.log | cut -d'=' -f2 | head -1); \
	OPEN_URL=$$(grep "OPEN_URL=" /tmp/llm-council-test-setup.log | cut -d'=' -f2- | head -1); \
	echo ""; \
	echo "Checking frontend status..."; \
	if pgrep -f "vite" > /dev/null 2>&1 || pgrep -f "npm.*dev" > /dev/null 2>&1; then \
		echo "✓ Frontend process detected (already running)"; \
	elif timeout 2 curl -s http://localhost:5173/ > /dev/null 2>&1; then \
		echo "✓ Frontend is responding"; \
	else \
		echo "Starting frontend in background..."; \
		cd frontend && npm run dev > /tmp/llm-council-frontend.log 2>&1 & \
		echo "Frontend starting (PID: $$!)"; \
		echo "Waiting up to 10 seconds for frontend..."; \
		for i in 1 2 3 4 5 6 7 8 9 10; do \
			if timeout 1 curl -s http://localhost:5173/ > /dev/null 2>&1; then \
				echo "✓ Frontend is ready!"; \
				break; \
			fi; \
			sleep 1; \
		done; \
	fi; \
	echo ""; \
	echo "✓ Test setup complete!"; \
	echo ""; \
	if [ -n "$$OPEN_URL" ]; then \
		echo "Opening browser: $$OPEN_URL"; \
		xdg-open "$$OPEN_URL" 2>/dev/null || \
		open "$$OPEN_URL" 2>/dev/null || \
		echo "Open manually: $$OPEN_URL"; \
	elif [ -n "$$CONV_ID" ]; then \
		echo "Opening browser with new conversation: $$CONV_ID"; \
		xdg-open "http://localhost:5173/?conversation=$$CONV_ID" 2>/dev/null || \
		open "http://localhost:5173/?conversation=$$CONV_ID" 2>/dev/null || \
		echo "Open manually: http://localhost:5173/?conversation=$$CONV_ID"; \
	else \
		echo "Open: http://localhost:5173"; \
		echo "The new conversation should appear in the sidebar"; \
	fi; \
	echo ""; \
	echo "Note: TEST_MESSAGE is NOT sent automatically - check the script output above for the message to paste"
README.md (new file, 187 lines)
@@ -0,0 +1,187 @@
# LLM Council

The idea of this repo is that instead of asking a question to a single LLM, you can group multiple LLMs into your "LLM Council". This repo is a simple, local web app that essentially looks like ChatGPT, except it sends your query to multiple LLMs via an OpenAI-compatible API (Ollama, vLLM, TGI, etc.), then asks them to review and rank each other's work, and finally has a Chairman LLM produce the final response.

In a bit more detail, here is what happens when you submit a query:

1. **Stage 1: First opinions**. The user query is given to all LLMs individually, and the responses are collected. The individual responses are shown in a "tab view", so that the user can inspect them all one by one.
2. **Stage 2: Review**. Each individual LLM is given the responses of the other LLMs. Under the hood, the LLM identities are anonymized so that an LLM can't play favorites when judging the others' outputs. Each LLM is asked to rank the responses on accuracy and insight.
3. **Stage 3: Final response**. The designated Chairman of the LLM Council takes all of the models' responses and compiles them into a single final answer that is presented to the user.

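The flow of the three stages can be sketched in a few lines of Python. This is a toy illustration, not the project's actual code: `query_model` is stubbed with canned strings, whereas the real backend makes asynchronous calls to an OpenAI-compatible API.

```python
COUNCIL = ["model-a", "model-b"]
CHAIRMAN = "model-a"

def query_model(model, prompt):
    # Stand-in for a real OpenAI-compatible chat-completion call.
    return f"[{model}] answer to: {prompt[:40]}"

def run_council(user_query):
    # Stage 1: first opinions from every council member.
    stage1 = {m: query_model(m, user_query) for m in COUNCIL}

    # Stage 2: each member reviews the anonymized answers of the others.
    labeled = {f"Response {chr(65 + i)}": ans for i, ans in enumerate(stage1.values())}
    review_block = "\n".join(f"{label}: {ans}" for label, ans in labeled.items())
    stage2 = {m: query_model(m, f"Rank these responses:\n{review_block}") for m in COUNCIL}

    # Stage 3: the Chairman synthesizes everything into one final answer.
    synthesis = f"Question: {user_query}\nAnswers: {stage1}\nRankings: {stage2}"
    return query_model(CHAIRMAN, synthesis)

print(run_council("Why is the sky blue?"))
```

The anonymized "Response A/B/…" labels in Stage 2 are what prevent a model from recognizing (and favoring) its own output.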
## Vibe Code Alert

This project was 99% vibe coded as a fun Saturday hack because I wanted to explore and evaluate a number of LLMs side by side in the process of [reading books together with LLMs](https://x.com/karpathy/status/1990577951671509438). It's nice and useful to see multiple responses side by side, along with the cross-opinions of all the LLMs on each other's outputs. I'm not going to support it in any way; it's provided here as is, for other people's inspiration, and I don't intend to improve it. Code is ephemeral now and libraries are over; ask your LLM to change it in whatever way you like.

## Setup

### 1. Install Dependencies

The project uses [uv](https://docs.astral.sh/uv/) for project management.

**Backend:**
```bash
uv sync
```

**Frontend:**
```bash
cd frontend
npm install
cd ..
```

### 2. Configure Ollama Server

LLM Council requires an OpenAI-compatible API server. The easiest way to get started is with Ollama running locally or on a remote server.

**For local Ollama:**
1. Install and start Ollama: https://ollama.ai
2. Pull some models:

   ```bash
   ollama pull llama3.2:3b
   ollama pull qwen2.5:3b
   ollama pull gemma2:2b
   ```

### 3. Configure Environment

Create a `.env` file in the project root with your configuration:

**For local Ollama:**
```bash
USE_LOCAL_OLLAMA=true
COUNCIL_MODELS=llama3.2:3b,qwen2.5:3b,gemma2:2b
CHAIRMAN_MODEL=llama3.2:3b
MAX_TOKENS=1024
LLM_MAX_CONCURRENCY=1
```

**For remote Ollama or other OpenAI-compatible server:**
```bash
OPENAI_COMPAT_BASE_URL=http://your-server:11434
COUNCIL_MODELS=llama3.2:3b,qwen2.5:3b,gemma2:2b
CHAIRMAN_MODEL=llama3.2:3b
MAX_TOKENS=2048
LLM_MAX_CONCURRENCY=1
```

**Optional timeout configuration:**
```bash
LLM_TIMEOUT_SECONDS=120.0                    # Default timeout for LLM queries
CHAIRMAN_TIMEOUT_SECONDS=180.0               # Timeout for chairman synthesis
TITLE_GENERATION_TIMEOUT_SECONDS=120.0       # Timeout for title generation
OPENAI_COMPAT_TIMEOUT_SECONDS=300.0          # Timeout for OpenAI-compatible server
OPENAI_COMPAT_CONNECT_TIMEOUT_SECONDS=10.0   # HTTP connection timeout
OPENAI_COMPAT_WRITE_TIMEOUT_SECONDS=10.0     # HTTP write timeout
OPENAI_COMPAT_POOL_TIMEOUT_SECONDS=10.0      # HTTP pool timeout
```

See `.env.example` for all available configuration options. Alternatively, you can edit `backend/config.py` directly to set defaults.
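Each of these values is read with a tolerant float parser (this mirrors `_parse_float_env` in `backend/config.py`): an unset, empty, or malformed variable silently falls back to the default instead of crashing at startup.

```python
import os

def parse_float_env(name: str, default: float) -> float:
    # Unset, empty, or non-numeric values fall back to the default.
    raw = os.getenv(name, "").strip()
    try:
        return float(raw) if raw else default
    except ValueError:
        return default

os.environ["OPENAI_COMPAT_CONNECT_TIMEOUT_SECONDS"] = "5"
os.environ["OPENAI_COMPAT_WRITE_TIMEOUT_SECONDS"] = "not-a-number"

print(parse_float_env("OPENAI_COMPAT_CONNECT_TIMEOUT_SECONDS", 10.0))  # 5.0
print(parse_float_env("OPENAI_COMPAT_WRITE_TIMEOUT_SECONDS", 10.0))    # 10.0 (fallback)
```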
## Running the Application

**Option 1: Use the start script**
```bash
./start.sh
```

**Option 2: Use Makefile**
```bash
make dev
```

**Option 3: Run manually**

Terminal 1 (Backend):
```bash
uv run python -m backend.main
```

Terminal 2 (Frontend):
```bash
cd frontend
npm run dev
```

Then open http://localhost:5173 in your browser.

**Option 4: Test setup with a pre-configured conversation**
```bash
# Set in .env:
# TEST_MESSAGE="Your message"
# TEST_DOCS="doc1.md,doc2.md"
make test-setup
```

This creates a new conversation with today's date/time, uploads the documents, and **pre-fills** the message in the UI (it does **not** auto-send).

### Frontend theme default (optional)

By default, the UI theme is persisted in `localStorage`. If there is no saved theme yet, you can set a default theme via a Vite env var:

```bash
# Example (starts in dark mode if there's no localStorage value yet)
VITE_DEFAULT_THEME=dark make dev
```

## Using Ollama on a Remote Server

If you have Ollama running on a remote server or VM:

1. In your project `.env`, set:

   ```bash
   OPENAI_COMPAT_BASE_URL=http://your-server-ip:11434
   COUNCIL_MODELS=llama3.2:3b,qwen2.5:3b,gemma2:2b
   CHAIRMAN_MODEL=llama3.2:3b
   MAX_TOKENS=2048
   LLM_MAX_CONCURRENCY=1
   ```

2. Verify connectivity from your machine:

   ```bash
   curl http://your-server-ip:11434/api/tags
   ```

## Using Other OpenAI-Compatible Servers (vLLM, TGI, etc.)

If you're running vLLM, TGI, or another OpenAI-compatible server:

1. Ensure your server exposes:
   - `POST /v1/chat/completions`
   - `GET /v1/models`

2. In your project `.env`, set:

   ```bash
   OPENAI_COMPAT_BASE_URL=http://your-server:port
   COUNCIL_MODELS=your-model-1,your-model-2,your-model-3
   CHAIRMAN_MODEL=your-model-1
   MAX_TOKENS=2048
   LLM_MAX_CONCURRENCY=1

   # (optional) if your server requires auth:
   # OPENAI_COMPAT_API_KEY=...
   ```

3. Verify connectivity:

   ```bash
   curl http://your-server:port/v1/models
   ```

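"OpenAI-compatible" means the server accepts the standard chat-completions payload. As a sketch, this is roughly the request the backend constructs; the base URL is Ollama's default from the local setup above, and `build_chat_request` is an illustrative helper, not a function from this repo.

```python
import json
import urllib.request

BASE_URL = "http://localhost:11434"  # illustrative; comes from OPENAI_COMPAT_BASE_URL

def build_chat_request(model: str, user_query: str, max_tokens: int = 2048):
    # Standard OpenAI chat-completions payload.
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_query}],
        "max_tokens": max_tokens,
    }
    return urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("your-model-1", "Hello, council!")
print(req.full_url)
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) requires the server to actually be running.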
## Documentation

- **[Architecture](ARCHITECTURE.md)** - System architecture and design
- **[Deployment Guide](docs/DEPLOYMENT.md)** - How to deploy with a remote GPU VM
- **[Deployment Recommendations](docs/DEPLOYMENT_RECOMMENDATIONS.md)** - Professional deployment options

## Tech Stack

- **Backend:** FastAPI (Python 3.10+), async httpx, OpenAI-compatible API
- **Frontend:** React + Vite, react-markdown for rendering
- **Storage:** JSON files in `data/conversations/`
- **Package Management:** uv for Python, npm for JavaScript
- **LLM Backend:** Ollama, vLLM, TGI, or any OpenAI-compatible server
backend/__init__.py (new file, 1 line)
@@ -0,0 +1 @@
"""LLM Council backend package."""
backend/config.py (new file, 109 lines)
@@ -0,0 +1,109 @@
"""Configuration for the LLM Council."""

import os
from dotenv import load_dotenv

load_dotenv()


# Helpers
def _parse_int_env(name: str, default: int) -> int:
    raw = os.getenv(name)
    if raw is None or raw.strip() == "":
        return default
    try:
        return int(raw.strip())
    except ValueError:
        return default


def _parse_float_env(name: str, default: float) -> float:
    raw = os.getenv(name)
    if raw is None or raw.strip() == "":
        return default
    try:
        return float(raw.strip())
    except ValueError:
        return default


def _parse_list_env(name: str) -> list[str] | None:
    """
    Parses a list from an env var.

    Supported formats:
    - Comma-separated: "a,b,c"
    - Newline-separated: "a\\nb\\nc"
    """
    raw = os.getenv(name)
    if raw is None:
        return None
    raw = raw.strip()
    if raw == "":
        return []

    # Allow either commas or newlines.
    parts = []
    for chunk in raw.replace("\r\n", "\n").split("\n"):
        parts.extend(chunk.split(","))
    return [p.strip() for p in parts if p.strip()]


# Council members - list of model identifiers (Ollama model names)
# Can be overridden via env var COUNCIL_MODELS (comma or newline separated).
_DEFAULT_COUNCIL_MODELS = [
    "llama3.2:3b",
    "qwen2.5:3b",
    "gemma2:2b",
]
COUNCIL_MODELS = _parse_list_env("COUNCIL_MODELS") or _DEFAULT_COUNCIL_MODELS

# Chairman model - synthesizes final response
CHAIRMAN_MODEL = os.getenv("CHAIRMAN_MODEL") or "llama3.2:3b"

# Maximum tokens per request
# Default: 2048 tokens (reasonable for most responses)
# Increase if you need longer responses
MAX_TOKENS = _parse_int_env("MAX_TOKENS", 2048)

# Request timeout configuration (in seconds)
# Default timeout for general LLM queries (Stage 1: council responses)
# Used by llm_client.py and passed to openai_compat.query_model()
LLM_TIMEOUT_SECONDS = _parse_float_env("LLM_TIMEOUT_SECONDS", 120.0)
# Timeout for chairman synthesis (may need longer for complex responses)
CHAIRMAN_TIMEOUT_SECONDS = _parse_float_env("CHAIRMAN_TIMEOUT_SECONDS", 180.0)
# Timeout for title generation (short responses)
TITLE_GENERATION_TIMEOUT_SECONDS = _parse_float_env("TITLE_GENERATION_TIMEOUT_SECONDS", 120.0)

# OpenAI-compatible provider tuning (Ollama / vLLM / TGI)
# If USE_LOCAL_OLLAMA=true, automatically set base URL to localhost:11434 (convenience flag)
if os.getenv("USE_LOCAL_OLLAMA", "").strip().lower() in ("true", "1", "yes"):
    _openai_compat_base_url = "http://localhost:11434"
else:
    _openai_compat_base_url = os.getenv("OPENAI_COMPAT_BASE_URL")
OPENAI_COMPAT_BASE_URL = _openai_compat_base_url

# HTTP client timeout (fallback when timeout not explicitly passed to openai_compat functions)
# Used by: list_models() and as fallback in query_model() if called directly without timeout
# Should be >= LLM_TIMEOUT_SECONDS for safety, but list_models() is fast so can be lower
OPENAI_COMPAT_TIMEOUT_SECONDS = _parse_float_env("OPENAI_COMPAT_TIMEOUT_SECONDS", 300.0)
# HTTP client connection timeout (time to establish connection)
OPENAI_COMPAT_CONNECT_TIMEOUT_SECONDS = _parse_float_env("OPENAI_COMPAT_CONNECT_TIMEOUT_SECONDS", 10.0)
# HTTP client write timeout (time to send request)
OPENAI_COMPAT_WRITE_TIMEOUT_SECONDS = _parse_float_env("OPENAI_COMPAT_WRITE_TIMEOUT_SECONDS", 10.0)
# HTTP client pool timeout (time to get connection from pool)
OPENAI_COMPAT_POOL_TIMEOUT_SECONDS = _parse_float_env("OPENAI_COMPAT_POOL_TIMEOUT_SECONDS", 10.0)
# Number of retries for failed requests (retryable HTTP errors: 408, 409, 425, 429, 500, 502, 503, 504)
OPENAI_COMPAT_RETRIES = _parse_int_env("OPENAI_COMPAT_RETRIES", 2)
# Exponential backoff base delay between retries (seconds) - actual delay is backoff * (2^attempt)
OPENAI_COMPAT_RETRY_BACKOFF_SECONDS = _parse_float_env("OPENAI_COMPAT_RETRY_BACKOFF_SECONDS", 0.5)

# Debug mode - show debug logs in console (set DEBUG=true in .env)
DEBUG = os.getenv("DEBUG", "").strip().lower() in ("true", "1", "yes")

# Markdown uploads (per-conversation)
DOCS_DIR = os.getenv("DOCS_DIR") or "data/docs"
MAX_DOC_BYTES = _parse_int_env("MAX_DOC_BYTES", 1_000_000)  # 1MB
MAX_DOC_PREVIEW_CHARS = _parse_int_env("MAX_DOC_PREVIEW_CHARS", 20_000)

# Data directory for conversation storage
DATA_DIR = "data/conversations"
backend/council.py (new file, 537 lines)
@@ -0,0 +1,537 @@
"""3-stage LLM Council orchestration."""

import time
from typing import List, Dict, Any, Tuple, Optional
from .llm_client import query_models_parallel, query_model
from .config import COUNCIL_MODELS, CHAIRMAN_MODEL, CHAIRMAN_TIMEOUT_SECONDS, TITLE_GENERATION_TIMEOUT_SECONDS


def _format_docs_context(docs_text: Optional[str]) -> str:
    if not docs_text or not docs_text.strip():
        return ""
    return (
        "\n\nREFERENCE DOCUMENTS (user-provided markdown):\n"
        "Use these as additional context if relevant. Quote sparingly and cite sections when helpful.\n"
        f"{docs_text.strip()}\n"
    )


async def stage1_collect_responses(user_query: str, docs_text: Optional[str] = None) -> Tuple[List[Dict[str, Any]], Dict[str, Any]]:
    """
    Stage 1: Collect individual responses from all council models.

    Args:
        user_query: The user's question
        docs_text: Optional document context appended to the prompt

    Returns:
        Tuple of (results list, metadata dict with timing info)
    """
    start_time = time.time()
    prompt = f"{user_query}{_format_docs_context(docs_text)}"
    messages = [{"role": "user", "content": prompt}]

    # Query all models in parallel
    responses = await query_models_parallel(COUNCIL_MODELS, messages)
    duration = time.time() - start_time

    # Format results
    stage1_results = []
    successful_models = []
    failed_models = []
    for model, response in responses.items():
        if response is not None:  # Only include successful responses
            stage1_results.append({
                "model": model,
                "response": response.get('content', '')
            })
            successful_models.append(model)
        else:
            failed_models.append(model)

    metadata = {
        "duration_seconds": round(duration, 2),
        "successful_models": successful_models,
        "failed_models": failed_models,
        "total_models": len(COUNCIL_MODELS)
    }

    return stage1_results, metadata

async def stage1_collect_responses_streaming(
    user_query: str,
    docs_text: Optional[str] = None,
    on_response = None
) -> Tuple[List[Dict[str, Any]], Dict[str, Any]]:
    """
    Stage 1: Collect individual responses from all council models, streaming as they complete.

    Args:
        user_query: The user's question
        docs_text: Optional document context
        on_response: Optional callback(model, response_dict) called as each response completes

    Returns:
        Tuple of (results list, metadata dict with timing info)
    """
    import asyncio
    from .llm_client import query_model, LLM_TIMEOUT_SECONDS, MAX_TOKENS

    start_time = time.time()
    prompt = f"{user_query}{_format_docs_context(docs_text)}"
    messages = [{"role": "user", "content": prompt}]

    # Query all models in parallel, but yield results as they complete
    async def query_and_notify(model: str):
        response = await query_model(model, messages, timeout=LLM_TIMEOUT_SECONDS, max_tokens_override=MAX_TOKENS)
        if on_response and response is not None:
            result = {"model": model, "response": response.get('content', '')}
            await on_response(model, result)
        return model, response

    tasks = [query_and_notify(model) for model in COUNCIL_MODELS]
    responses_dict = {}

    # Use as_completed to process results as they finish
    for coro in asyncio.as_completed(tasks):
        model, response = await coro
        responses_dict[model] = response

    duration = time.time() - start_time

    # Format results
    stage1_results = []
    successful_models = []
    failed_models = []
    for model, response in responses_dict.items():
        if response is not None:  # Only include successful responses
            stage1_results.append({
                "model": model,
                "response": response.get('content', '')
            })
            successful_models.append(model)
        else:
            failed_models.append(model)

    metadata = {
        "duration_seconds": round(duration, 2),
        "successful_models": successful_models,
        "failed_models": failed_models,
        "total_models": len(COUNCIL_MODELS)
    }

    return stage1_results, metadata

async def stage2_collect_rankings(
    user_query: str,
    stage1_results: List[Dict[str, Any]],
    docs_text: Optional[str] = None,
) -> Tuple[List[Dict[str, Any]], Dict[str, str], Dict[str, Any]]:
    """
    Stage 2: Each model ranks the anonymized responses.

    Args:
        user_query: The original user query
        stage1_results: Results from Stage 1
        docs_text: Optional document context

    Returns:
        Tuple of (rankings list, label_to_model mapping, metadata dict with timing)
    """
    start_time = time.time()
    # Handle empty stage1_results
    if not stage1_results:
        return [], {}, {"duration_seconds": 0.0, "successful_models": [], "failed_models": [], "total_models": len(COUNCIL_MODELS)}

    # Create anonymized labels for responses (Response A, Response B, etc.)
    labels = [chr(65 + i) for i in range(len(stage1_results))]  # A, B, C, ...

    # Create mapping from label to model name
    label_to_model = {
        f"Response {label}": result['model']
        for label, result in zip(labels, stage1_results)
    }

    # Build the ranking prompt
    responses_text = "\n\n".join([
        f"Response {label}:\n{result['response']}"
        for label, result in zip(labels, stage1_results)
    ])

    ranking_prompt = f"""You are evaluating different responses to the following question:

Question: {user_query}
{_format_docs_context(docs_text)}

Here are the responses from different models (anonymized):

{responses_text}

Your task:
1. First, evaluate each response individually. For each response, explain what it does well and what it does poorly.
2. Then, at the very end of your response, provide a final ranking.

IMPORTANT: Your final ranking MUST be formatted EXACTLY as follows:
- Start with the line "FINAL RANKING:" (all caps, with colon)
- Then list the responses from best to worst as a numbered list
- Each line should be: number, period, space, then ONLY the response label (e.g., "1. Response A")
- Do not add any other text or explanations in the ranking section

Example of the correct format for your ENTIRE response:

Response A provides good detail on X but misses Y...
Response B is accurate but lacks depth on Z...
Response C offers the most comprehensive answer...

FINAL RANKING:
1. Response C
2. Response A
3. Response B

Now provide your evaluation and ranking:"""

    messages = [{"role": "user", "content": ranking_prompt}]

    # Get rankings from all council models in parallel
    responses = await query_models_parallel(COUNCIL_MODELS, messages)
    duration = time.time() - start_time

    # Format results
    stage2_results = []
    successful_models = []
    failed_models = []
    for model, response in responses.items():
        if response is not None:
            full_text = response.get('content', '')
            parsed = parse_ranking_from_text(full_text)
            stage2_results.append({
                "model": model,
                "ranking": full_text,
                "parsed_ranking": parsed
            })
            successful_models.append(model)
        else:
            failed_models.append(model)

    metadata = {
        "duration_seconds": round(duration, 2),
        "successful_models": successful_models,
        "failed_models": failed_models,
        "total_models": len(COUNCIL_MODELS)
    }

    return stage2_results, label_to_model, metadata

async def stage3_synthesize_final(
|
||||
user_query: str,
|
||||
stage1_results: List[Dict[str, Any]],
|
||||
stage2_results: List[Dict[str, Any]],
|
||||
docs_text: Optional[str] = None,
|
||||
) -> Tuple[Dict[str, Any], Dict[str, Any]]:
|
||||
"""
|
||||
Stage 3: Chairman synthesizes final response.
|
||||
|
||||
Args:
|
||||
user_query: The original user query
|
||||
stage1_results: Individual model responses from Stage 1
|
||||
stage2_results: Rankings from Stage 2
|
||||
|
||||
Returns:
|
||||
Tuple of (result dict with 'model' and 'response' keys, metadata dict with timing)
|
||||
"""
|
||||
start_time = time.time()
|
||||
# Handle empty inputs
|
||||
if not stage1_results:
|
||||
duration = time.time() - start_time
|
||||
return {
|
||||
"model": CHAIRMAN_MODEL,
|
||||
"response": "Error: Cannot synthesize final answer - no responses from Stage 1."
|
||||
}, {
|
||||
"duration_seconds": round(duration, 2),
|
||||
"model": CHAIRMAN_MODEL,
|
||||
"success": False
|
||||
}
|
||||
|
||||
# Build comprehensive context for chairman
|
||||
# Truncate very long responses to avoid exceeding token/context limits
|
||||
# More aggressive truncation to keep total prompt under ~2000 tokens (~8000 chars)
|
||||
MAX_RESPONSE_LENGTH = 2000 # Characters per response
|
||||
MAX_RANKING_LENGTH = 1000 # Characters per ranking
|
||||
MAX_DOCS_LENGTH = 2000 # Max characters for docs context
|
||||
MAX_TOTAL_PROMPT_LENGTH = 8000 # Max total prompt length (safety limit)
|
||||
|
||||
def truncate_text(text: str, max_length: int) -> str:
|
||||
"""Truncate text to max_length, adding ellipsis if truncated."""
|
||||
if len(text) <= max_length:
|
||||
return text
|
||||
return text[:max_length-3] + "..."
|
||||
|
||||
# Truncate docs_text if provided
|
||||
truncated_docs_text = docs_text
|
||||
if docs_text and len(docs_text) > MAX_DOCS_LENGTH:
|
||||
truncated_docs_text = truncate_text(docs_text, MAX_DOCS_LENGTH)
|
||||
|
||||
stage1_text = "\n\n".join([
|
||||
f"Model: {result['model']}\nResponse: {truncate_text(result['response'], MAX_RESPONSE_LENGTH)}"
|
||||
for result in stage1_results
|
||||
])
|
||||
|
||||
stage2_text = "\n\n".join([
|
||||
f"Model: {result['model']}\nRanking: {truncate_text(result['ranking'], MAX_RANKING_LENGTH)}"
|
||||
for result in stage2_results
|
||||
]) if stage2_results else "No rankings available from Stage 2."
|
||||
|
||||
chairman_prompt = f"""You are the Chairman of an LLM Council. Multiple AI models have provided responses to a user's question, and then ranked each other's responses.
|
||||
|
||||
Original Question: {user_query}
|
||||
{_format_docs_context(truncated_docs_text)}
|
||||
|
||||
STAGE 1 - Individual Responses:
|
||||
{stage1_text}
|
||||
|
||||
STAGE 2 - Peer Rankings:
|
||||
{stage2_text}
|
||||
|
||||
Your task as Chairman is to synthesize all of this information into a single, comprehensive, accurate answer to the user's original question. Consider:
|
||||
- The individual responses and their insights
|
||||
- The peer rankings and what they reveal about response quality
- Any patterns of agreement or disagreement

Provide a clear, well-reasoned final answer that represents the council's collective wisdom:"""

    # Apply final safety truncation if prompt is still too long
    if len(chairman_prompt) > MAX_TOTAL_PROMPT_LENGTH:
        # Truncate the prompt itself if it exceeds the limit
        chairman_prompt = chairman_prompt[:MAX_TOTAL_PROMPT_LENGTH - 100] + "\n\n[Content truncated due to length limits...]\n\nProvide a clear, well-reasoned final answer:"

    messages = [{"role": "user", "content": chairman_prompt}]

    # Query the chairman model
    # Note: For very long prompts, we might need to truncate or summarize
    # For now, we'll try with the full prompt and handle errors gracefully
    # Use the default max_tokens (2048) to stay within credit limits
    # If you have more credits, you can increase MAX_TOKENS in config.py
    response = await query_model(CHAIRMAN_MODEL, messages, timeout=CHAIRMAN_TIMEOUT_SECONDS)
    duration = time.time() - start_time

    if response is None:
        # Try to get more specific error info - check if prompt might be too long
        prompt_length = len(chairman_prompt)
        estimated_tokens = prompt_length // 4  # Rough estimate: ~4 chars per token

        error_msg = (
            "Error: Unable to generate final synthesis.\n\n"
            "The chairman model failed to respond. Possible causes:\n"
            "- Model '{}' not available on the server\n"
            "- Server not running or unreachable\n"
            "- Network/API errors\n"
            "- Prompt too long (estimated ~{} tokens)\n"
            "- Server timeout or overloaded\n\n"
            "Check the backend terminal logs for the exact error message."
        ).format(CHAIRMAN_MODEL, estimated_tokens)
        return {
            "model": CHAIRMAN_MODEL,
            "response": error_msg
        }, {
            "duration_seconds": round(duration, 2),
            "model": CHAIRMAN_MODEL,
            "success": False
        }

    return {
        "model": CHAIRMAN_MODEL,
        "response": response.get('content', '')
    }, {
        "duration_seconds": round(duration, 2),
        "model": CHAIRMAN_MODEL,
        "success": True
    }


def parse_ranking_from_text(ranking_text: str) -> List[str]:
    """
    Parse the FINAL RANKING section from the model's response.

    Args:
        ranking_text: The full text response from the model

    Returns:
        List of response labels in ranked order
    """
    import re

    # Look for "FINAL RANKING:" section
    if "FINAL RANKING:" in ranking_text:
        # Extract everything after "FINAL RANKING:"
        parts = ranking_text.split("FINAL RANKING:")
        if len(parts) >= 2:
            ranking_section = parts[1]
            # Try to extract numbered list format (e.g., "1. Response A")
            # This pattern looks for: number, period, optional space, "Response X"
            numbered_matches = re.findall(r'\d+\.\s*Response [A-Z]', ranking_section)
            if numbered_matches:
                # Extract just the "Response X" part
                return [re.search(r'Response [A-Z]', m).group() for m in numbered_matches]

            # Fallback: Extract all "Response X" patterns in order
            matches = re.findall(r'Response [A-Z]', ranking_section)
            return matches

    # Fallback: try to find any "Response X" patterns in order
    matches = re.findall(r'Response [A-Z]', ranking_text)
    return matches
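A standalone sketch of the parsing logic above (the regexes are copied from `parse_ranking_from_text`; the shortened function name here is just for the demo):

```python
import re

def parse_ranking(ranking_text: str) -> list:
    # Prefer the numbered list after "FINAL RANKING:", fall back to any
    # "Response X" tokens in order of appearance.
    if "FINAL RANKING:" in ranking_text:
        section = ranking_text.split("FINAL RANKING:")[1]
        numbered = re.findall(r'\d+\.\s*Response [A-Z]', section)
        if numbered:
            return [re.search(r'Response [A-Z]', m).group() for m in numbered]
        return re.findall(r'Response [A-Z]', section)
    return re.findall(r'Response [A-Z]', ranking_text)

sample = "Some analysis...\nFINAL RANKING:\n1. Response B\n2. Response A"
print(parse_ranking(sample))  # ['Response B', 'Response A']
```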


def calculate_aggregate_rankings(
    stage2_results: List[Dict[str, Any]],
    label_to_model: Dict[str, str]
) -> List[Dict[str, Any]]:
    """
    Calculate aggregate rankings across all models.

    Args:
        stage2_results: Rankings from each model
        label_to_model: Mapping from anonymous labels to model names

    Returns:
        List of dicts with model name and average rank, sorted best to worst
    """
    from collections import defaultdict

    # Track positions for each model
    model_positions = defaultdict(list)

    for ranking in stage2_results:
        ranking_text = ranking['ranking']

        # Parse the ranking from the structured format
        parsed_ranking = parse_ranking_from_text(ranking_text)

        for position, label in enumerate(parsed_ranking, start=1):
            if label in label_to_model:
                model_name = label_to_model[label]
                model_positions[model_name].append(position)

    # Calculate average position for each model
    aggregate = []
    for model, positions in model_positions.items():
        if positions:
            avg_rank = sum(positions) / len(positions)
            aggregate.append({
                "model": model,
                "average_rank": round(avg_rank, 2),
                "rankings_count": len(positions)
            })

    # Sort by average rank (lower is better)
    aggregate.sort(key=lambda x: x['average_rank'])

    return aggregate
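The averaging can be seen end to end with a small worked example (the model names here are hypothetical):

```python
from collections import defaultdict

# Positions each label received from three peer rankings (1 = best).
parsed_rankings = [
    ["Response A", "Response B"],
    ["Response B", "Response A"],
    ["Response A", "Response B"],
]
label_to_model = {"Response A": "llama3", "Response B": "mistral"}

positions = defaultdict(list)
for ranking in parsed_rankings:
    for pos, label in enumerate(ranking, start=1):
        positions[label_to_model[label]].append(pos)

aggregate = sorted(
    ({"model": m, "average_rank": round(sum(p) / len(p), 2), "rankings_count": len(p)}
     for m, p in positions.items()),
    key=lambda x: x["average_rank"],
)
print(aggregate)  # llama3 (1.33) ranks ahead of mistral (1.67)
```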


async def generate_conversation_title(user_query: str) -> str:
    """
    Generate a short title for a conversation based on the first user message.

    Args:
        user_query: The first user message

    Returns:
        A short title (3-5 words)
    """
    title_prompt = f"""Generate a very short title (3-5 words maximum) that summarizes the following question.
The title should be concise and descriptive. Do not use quotes or punctuation in the title.

Question: {user_query}

Title:"""

    messages = [{"role": "user", "content": title_prompt}]

    # Use chairman model for title generation
    # Use configurable timeout (may need longer for local models which load on first request)
    response = await query_model(CHAIRMAN_MODEL, messages, timeout=TITLE_GENERATION_TIMEOUT_SECONDS, max_tokens_override=50)

    if response is None:
        # Fallback to a generic title
        return "New Conversation"

    title = response.get('content', 'New Conversation').strip()

    # Clean up the title - remove quotes, limit length
    title = title.strip('"\'')

    # Truncate if too long
    if len(title) > 50:
        title = title[:47] + "..."

    return title


async def run_full_council(user_query: str, docs_text: Optional[str] = None) -> Tuple[List, List, Dict, Dict]:
    """
    Run the complete 3-stage council process.

    Args:
        user_query: The user's question
        docs_text: Optional markdown document context to include in prompts

    Returns:
        Tuple of (stage1_results, stage2_results, stage3_result, metadata)
    """
    total_start_time = time.time()

    # Stage 1: Collect individual responses
    stage1_results, stage1_metadata = await stage1_collect_responses(user_query, docs_text=docs_text)

    # If no models responded successfully, return error with helpful message
    if not stage1_results:
        error_msg = (
            "All models failed to respond. This could be due to:\n"
            "- Server not running or unreachable\n"
            "- Model names not available on the server\n"
            "- Network/API errors\n"
            "- Server timeout or overloaded\n"
            "- Invalid OPENAI_COMPAT_BASE_URL configuration\n\n"
            "Check the backend logs for detailed error messages."
        )
        total_duration = time.time() - total_start_time
        return [], [], {
            "model": "error",
            "response": error_msg
        }, {
            "label_to_model": {},
            "aggregate_rankings": {},
            "stage1_metadata": stage1_metadata,
            "stage2_metadata": {},
            "stage3_metadata": {},
            "total_duration_seconds": round(total_duration, 2)
        }

    # Stage 2: Collect rankings
    stage2_results, label_to_model, stage2_metadata = await stage2_collect_rankings(user_query, stage1_results, docs_text=docs_text)

    # Calculate aggregate rankings
    aggregate_rankings = calculate_aggregate_rankings(stage2_results, label_to_model)

    # Stage 3: Synthesize final answer
    stage3_result, stage3_metadata = await stage3_synthesize_final(
        user_query,
        stage1_results,
        stage2_results,
        docs_text=docs_text,
    )

    total_duration = time.time() - total_start_time

    # Prepare metadata
    metadata = {
        "label_to_model": label_to_model,
        "aggregate_rankings": aggregate_rankings,
        "stage1_metadata": stage1_metadata,
        "stage2_metadata": stage2_metadata,
        "stage3_metadata": stage3_metadata,
        "total_duration_seconds": round(total_duration, 2)
    }

    return stage1_results, stage2_results, stage3_result, metadata

backend/docs_context.py (new file, 106 lines)
@@ -0,0 +1,106 @@
"""Helpers to load and format uploaded markdown docs as prompt context."""
|
||||
|
||||
from __future__ import annotations
|
||||
import re
|
||||
from typing import Optional, List
|
||||
|
||||
from . import documents
|
||||
|
||||
|
||||
def _normalize_filename_for_matching(filename: str) -> str:
|
||||
"""Normalize filename for matching @filename references."""
|
||||
# Convert to lowercase, replace spaces/underscores/hyphens with single underscore
|
||||
normalized = filename.lower()
|
||||
normalized = re.sub(r'[_\s\-]+', '_', normalized)
|
||||
# Remove .md extension for matching
|
||||
normalized = normalized.replace('.md', '')
|
||||
return normalized
|
||||
|
||||
|
||||
def _extract_filename_references(text: str) -> List[str]:
|
||||
"""Extract @filename references from text."""
|
||||
# Match @filename patterns (with or without .md extension)
|
||||
pattern = r'@([a-zA-Z0-9_\s\-\+\.]+)'
|
||||
matches = re.findall(pattern, text)
|
||||
# Normalize each match
|
||||
return [_normalize_filename_for_matching(m) for m in matches]
|
||||
|
||||
|
||||
def _extract_numeric_references(text: str) -> List[int]:
|
||||
"""Extract numeric document references like @1, @2, @3 from text."""
|
||||
# Match @ followed by digits
|
||||
pattern = r'@(\d+)'
|
||||
matches = re.findall(pattern, text)
|
||||
# Convert to integers (1-indexed, will be converted to 0-indexed when used)
|
||||
return [int(m) for m in matches]
|
||||
|
||||
|
||||
def build_docs_context(
|
||||
conversation_id: str,
|
||||
user_query: Optional[str] = None,
|
||||
*,
|
||||
max_chars: int = 8000,
|
||||
max_docs: int = 5
|
||||
) -> Optional[str]:
|
||||
"""
|
||||
Return a single markdown string containing (truncated) docs for a conversation.
|
||||
|
||||
If user_query is provided and contains references:
|
||||
- @1, @2, @3 etc. (numeric): Include documents by their numbered position (1-indexed)
|
||||
- @filename (text): Include documents whose filenames match (fuzzy matching)
|
||||
- If both are present, numeric references take precedence
|
||||
Otherwise, include all documents up to max_docs.
|
||||
"""
|
||||
all_metas = documents.list_documents(conversation_id)
|
||||
if not all_metas:
|
||||
return None
|
||||
|
||||
# Check for numeric references first (e.g., @1, @2, @3)
|
||||
if user_query:
|
||||
numeric_refs = _extract_numeric_references(user_query)
|
||||
if numeric_refs:
|
||||
# Convert 1-indexed to 0-indexed and filter
|
||||
filtered_metas = []
|
||||
for num in numeric_refs:
|
||||
idx = num - 1 # Convert to 0-indexed
|
||||
if 0 <= idx < len(all_metas):
|
||||
filtered_metas.append(all_metas[idx])
|
||||
if filtered_metas:
|
||||
all_metas = filtered_metas
|
||||
else:
|
||||
# If no numeric refs, check for filename references
|
||||
refs = _extract_filename_references(user_query)
|
||||
if refs:
|
||||
filtered_metas = []
|
||||
for meta in all_metas:
|
||||
normalized = _normalize_filename_for_matching(meta.filename)
|
||||
# Check if any reference matches this filename
|
||||
if any(ref in normalized or normalized in ref for ref in refs):
|
||||
filtered_metas.append(meta)
|
||||
if filtered_metas:
|
||||
all_metas = filtered_metas
|
||||
|
||||
# Limit to max_docs
|
||||
metas = all_metas[:max_docs]
|
||||
if not metas:
|
||||
return None
|
||||
|
||||
chunks = []
|
||||
remaining = max_chars
|
||||
for meta in metas:
|
||||
if remaining <= 0:
|
||||
break
|
||||
text = documents.read_document_text(conversation_id, meta.id)
|
||||
header = f"\n\n---\nDOC: {meta.filename} ({meta.bytes} bytes)\n---\n"
|
||||
body = text
|
||||
if len(header) >= remaining:
|
||||
break
|
||||
remaining -= len(header)
|
||||
if len(body) > remaining:
|
||||
body = body[: max(0, remaining - 3)] + "..."
|
||||
remaining -= len(body)
|
||||
chunks.append(header + body)
|
||||
|
||||
return "".join(chunks).strip() if chunks else None

backend/documents.py (new file, 103 lines)
@@ -0,0 +1,103 @@
"""Markdown document storage for conversations.
|
||||
|
||||
Stores uploaded .md files on disk under data/docs/<conversation_id>/.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import os
|
||||
import re
|
||||
import uuid
|
||||
from dataclasses import dataclass
|
||||
from pathlib import Path
|
||||
from typing import List
|
||||
|
||||
from .config import DOCS_DIR, MAX_DOC_BYTES
|
||||
|
||||
|
||||
_SAFE_NAME_RE = re.compile(r"[^a-zA-Z0-9._ -]+")
|
||||
|
||||
|
||||
def _safe_filename(name: str) -> str:
|
||||
name = name.strip().replace("\\", "/").split("/")[-1] # drop any path
|
||||
name = _SAFE_NAME_RE.sub("_", name)
|
||||
name = name.strip(" .")
|
||||
if not name:
|
||||
name = "document.md"
|
||||
if not name.lower().endswith(".md"):
|
||||
name = f"{name}.md"
|
||||
return name
|
||||
|
||||
|
||||
def _conversation_dir(conversation_id: str) -> Path:
|
||||
base = Path(DOCS_DIR)
|
||||
return base / conversation_id
|
||||
|
||||
|
||||
def ensure_docs_dir(conversation_id: str) -> Path:
|
||||
d = _conversation_dir(conversation_id)
|
||||
d.mkdir(parents=True, exist_ok=True)
|
||||
return d
|
||||
|
||||
|
||||
@dataclass(frozen=True)
|
||||
class DocumentMeta:
|
||||
id: str
|
||||
filename: str
|
||||
bytes: int
|
||||
|
||||
|
||||
def save_markdown_document(conversation_id: str, filename: str, content: bytes) -> DocumentMeta:
|
||||
if len(content) > MAX_DOC_BYTES:
|
||||
raise ValueError(f"Document too large. Max {MAX_DOC_BYTES} bytes.")
|
||||
|
||||
safe_name = _safe_filename(filename)
|
||||
doc_id = str(uuid.uuid4())
|
||||
|
||||
d = ensure_docs_dir(conversation_id)
|
||||
path = d / f"{doc_id}__{safe_name}"
|
||||
path.write_bytes(content)
|
||||
return DocumentMeta(id=doc_id, filename=safe_name, bytes=len(content))
|
||||
|
||||
|
||||
def list_documents(conversation_id: str) -> List[DocumentMeta]:
|
||||
d = _conversation_dir(conversation_id)
|
||||
if not d.exists():
|
||||
return []
|
||||
|
||||
out: List[DocumentMeta] = []
|
||||
for p in sorted(d.iterdir()):
|
||||
if not p.is_file():
|
||||
continue
|
||||
if "__" not in p.name:
|
||||
continue
|
||||
doc_id, fname = p.name.split("__", 1)
|
||||
out.append(DocumentMeta(id=doc_id, filename=fname, bytes=p.stat().st_size))
|
||||
return out
|
||||
|
||||
|
||||
def read_document_text(conversation_id: str, doc_id: str) -> str:
|
||||
d = _conversation_dir(conversation_id)
|
||||
if not d.exists():
|
||||
raise FileNotFoundError("Conversation docs not found")
|
||||
|
||||
matches = [p for p in d.iterdir() if p.is_file() and p.name.startswith(f"{doc_id}__")]
|
||||
if not matches:
|
||||
raise FileNotFoundError("Document not found")
|
||||
|
||||
raw = matches[0].read_bytes()
|
||||
# Best-effort UTF-8; replace invalid sequences
|
||||
return raw.decode("utf-8", errors="replace")
|
||||
|
||||
|
||||
def delete_document(conversation_id: str, doc_id: str) -> None:
|
||||
d = _conversation_dir(conversation_id)
|
||||
if not d.exists():
|
||||
raise FileNotFoundError("Conversation docs not found")
|
||||
|
||||
matches = [p for p in d.iterdir() if p.is_file() and p.name.startswith(f"{doc_id}__")]
|
||||
if not matches:
|
||||
raise FileNotFoundError("Document not found")
|
||||
matches[0].unlink()
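The sanitization in `_safe_filename` can be exercised standalone (the regex is copied from above):

```python
import re

_SAFE_NAME_RE = re.compile(r"[^a-zA-Z0-9._ -]+")

def safe_filename(name: str) -> str:
    # Mirrors documents._safe_filename: strip any path components,
    # replace unsafe characters, and force a .md extension.
    name = name.strip().replace("\\", "/").split("/")[-1]
    name = _SAFE_NAME_RE.sub("_", name)
    name = name.strip(" .")
    if not name:
        name = "document.md"
    if not name.lower().endswith(".md"):
        name = f"{name}.md"
    return name

print(safe_filename("../notes/My Report!.md"))  # My Report_.md
```

Note how the leading `../` is dropped entirely, so uploaded filenames cannot escape the conversation's docs directory.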

backend/llm_client.py (new file, 132 lines)
@@ -0,0 +1,132 @@
"""Unified LLM client.
|
||||
|
||||
This module routes LLM requests to OpenAI-compatible servers (Ollama, vLLM, TGI, etc.).
|
||||
|
||||
The base URL is determined by:
|
||||
- If USE_LOCAL_OLLAMA=true: uses http://localhost:11434
|
||||
- Else if OPENAI_COMPAT_BASE_URL is set: uses that URL
|
||||
- Else: raises an error (base URL must be configured)
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import os
|
||||
from typing import Any, Dict, List, Optional
|
||||
|
||||
from .config import MAX_TOKENS, OPENAI_COMPAT_BASE_URL, LLM_TIMEOUT_SECONDS, DEBUG
|
||||
|
||||
|
||||
def _get_provider_name() -> str:
|
||||
"""Returns the provider name (always 'openai_compat' now)."""
|
||||
return "openai_compat"
|
||||
|
||||
|
||||
def _get_max_concurrency() -> int:
|
||||
"""
|
||||
Maximum number of in-flight model requests when calling query_models_parallel.
|
||||
|
||||
- If LLM_MAX_CONCURRENCY is unset/empty/invalid: unlimited (0)
|
||||
- If set to 1: strictly sequential
|
||||
- If set to N>1: at most N in flight
|
||||
"""
|
||||
raw = (os.getenv("LLM_MAX_CONCURRENCY") or "").strip()
|
||||
if not raw:
|
||||
return 0
|
||||
try:
|
||||
v = int(raw)
|
||||
except ValueError:
|
||||
return 0
|
||||
return max(0, v)
|
||||
|
||||
|
||||
def get_provider_info() -> Dict[str, Any]:
|
||||
"""Get information about the configured provider."""
|
||||
from .config import OPENAI_COMPAT_BASE_URL
|
||||
return {
|
||||
"provider": "openai_compat",
|
||||
"base_url": OPENAI_COMPAT_BASE_URL
|
||||
}
|
||||
|
||||
|
||||
async def list_models() -> Optional[List[str]]:
|
||||
"""List available models from the OpenAI-compatible server."""
|
||||
from .openai_compat import list_models as _list
|
||||
return await _list()
|
||||
|
||||
|
||||
async def query_model(
|
||||
model: str,
|
||||
messages: List[Dict[str, str]],
|
||||
timeout: Optional[float] = None,
|
||||
max_tokens_override: Optional[int] = None,
|
||||
) -> Optional[Dict[str, Any]]:
|
||||
"""Query a model via OpenAI-compatible API."""
|
||||
from .openai_compat import query_model as _query
|
||||
|
||||
max_tokens = max_tokens_override if max_tokens_override is not None else MAX_TOKENS
|
||||
resolved_timeout = timeout if timeout is not None else LLM_TIMEOUT_SECONDS
|
||||
|
||||
return await _query(
|
||||
model,
|
||||
messages,
|
||||
max_tokens=max_tokens,
|
||||
timeout=resolved_timeout,
|
||||
)
|
||||
|
||||
|
||||
async def query_models_parallel(
|
||||
models: List[str],
|
||||
messages: List[Dict[str, str]],
|
||||
timeout: Optional[float] = None,
|
||||
max_tokens_override: Optional[int] = None,
|
||||
) -> Dict[str, Optional[Dict[str, Any]]]:
|
||||
import asyncio
|
||||
|
||||
resolved_timeout = timeout if timeout is not None else LLM_TIMEOUT_SECONDS
|
||||
limit = _get_max_concurrency()
|
||||
|
||||
# If limit is 1, run completely sequentially (one at a time, wait for each to finish)
|
||||
if limit == 1:
|
||||
results = {}
|
||||
for model in models:
|
||||
if DEBUG:
|
||||
print(f"[DEBUG] Running model '{model}' sequentially (concurrency=1)")
|
||||
results[model] = await query_model(
|
||||
model,
|
||||
messages,
|
||||
timeout=resolved_timeout,
|
||||
max_tokens_override=max_tokens_override,
|
||||
)
|
||||
return results
|
||||
|
||||
# If limit <= 0 or >= len(models), run all in parallel (no limit)
|
||||
if limit <= 0 or limit >= len(models):
|
||||
tasks = [
|
||||
query_model(
|
||||
model,
|
||||
messages,
|
||||
timeout=resolved_timeout,
|
||||
max_tokens_override=max_tokens_override,
|
||||
)
|
||||
for model in models
|
||||
]
|
||||
responses = await asyncio.gather(*tasks)
|
||||
return {model: response for model, response in zip(models, responses)}
|
||||
|
||||
# Otherwise, use semaphore to limit concurrency (2, 3, etc.)
|
||||
sem = asyncio.Semaphore(limit)
|
||||
|
||||
async def _run_one(model: str) -> Optional[Dict[str, Any]]:
|
||||
async with sem:
|
||||
return await query_model(
|
||||
model,
|
||||
messages,
|
||||
timeout=resolved_timeout,
|
||||
max_tokens_override=max_tokens_override,
|
||||
)
|
||||
|
||||
tasks = [_run_one(model) for model in models]
|
||||
responses = await asyncio.gather(*tasks)
|
||||
return {model: response for model, response in zip(models, responses)}
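The semaphore branch above can be demonstrated with a stub in place of the real `query_model` (everything here is illustrative):

```python
import asyncio

async def fake_query(model: str) -> str:
    # Stand-in for query_model: pretend each request takes a moment
    await asyncio.sleep(0.01)
    return f"reply from {model}"

async def query_parallel(models, limit):
    # Same shape as query_models_parallel: a semaphore caps in-flight requests
    sem = asyncio.Semaphore(limit)

    async def run_one(m):
        async with sem:
            return await fake_query(m)

    replies = await asyncio.gather(*(run_one(m) for m in models))
    return dict(zip(models, replies))

results = asyncio.run(query_parallel(["m1", "m2", "m3"], limit=2))
print(results)
```

With `limit=2`, at most two stub queries are awaited concurrently, matching the `LLM_MAX_CONCURRENCY=2` behavior described in the docstring.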

backend/main.py (new file, 579 lines)
@@ -0,0 +1,579 @@
"""FastAPI backend for LLM Council."""
|
||||
|
||||
from fastapi import FastAPI, HTTPException, UploadFile, File, Query
|
||||
from fastapi.middleware.cors import CORSMiddleware
|
||||
from fastapi.responses import StreamingResponse, Response
|
||||
from pydantic import BaseModel
|
||||
from typing import List, Dict, Any
|
||||
import uuid
|
||||
import json
|
||||
import asyncio
|
||||
import time
|
||||
from datetime import datetime
|
||||
|
||||
from . import storage
|
||||
from . import documents
|
||||
from .config import MAX_DOC_PREVIEW_CHARS, COUNCIL_MODELS
|
||||
from .docs_context import build_docs_context
|
||||
from .llm_client import get_provider_info, list_models as llm_list_models, query_model, LLM_TIMEOUT_SECONDS, MAX_TOKENS
|
||||
from .council import run_full_council, generate_conversation_title, stage1_collect_responses, stage2_collect_rankings, stage3_synthesize_final, calculate_aggregate_rankings, _format_docs_context
|
||||
|
||||
app = FastAPI(title="LLM Council API")
|
||||
|
||||
# Enable CORS for local development
|
||||
app.add_middleware(
|
||||
CORSMiddleware,
|
||||
allow_origins=["http://localhost:5173", "http://localhost:5174", "http://localhost:3000"],
|
||||
allow_credentials=True,
|
||||
allow_methods=["*"],
|
||||
allow_headers=["*"],
|
||||
)
|
||||
|
||||
|
||||
class CreateConversationRequest(BaseModel):
|
||||
"""Request to create a new conversation."""
|
||||
pass
|
||||
|
||||
|
||||
class SendMessageRequest(BaseModel):
|
||||
"""Request to send a message in a conversation."""
|
||||
content: str
|
||||
|
||||
|
||||
class ConversationMetadata(BaseModel):
|
||||
"""Conversation metadata for list view."""
|
||||
id: str
|
||||
created_at: str
|
||||
title: str
|
||||
message_count: int
|
||||
|
||||
|
||||
class Conversation(BaseModel):
|
||||
"""Full conversation with all messages."""
|
||||
id: str
|
||||
created_at: str
|
||||
title: str
|
||||
messages: List[Dict[str, Any]]
|
||||
|
||||
|
||||
@app.get("/")
|
||||
async def root():
|
||||
"""Health check endpoint."""
|
||||
return {"status": "ok", "service": "LLM Council API"}
|
||||
|
||||
|
||||
@app.get("/api/llm/status")
|
||||
async def llm_status(probe: bool = Query(False, description="If true, query the provider for available models")):
|
||||
"""
|
||||
Returns current LLM provider configuration and (optionally) probes the provider.
|
||||
"""
|
||||
info = get_provider_info()
|
||||
if probe:
|
||||
info["remote_models"] = await llm_list_models()
|
||||
return info
|
||||
|
||||
|
||||
@app.get("/api/conversations", response_model=List[ConversationMetadata])
|
||||
async def list_conversations():
|
||||
"""List all conversations (metadata only)."""
|
||||
return storage.list_conversations()
|
||||
|
||||
|
||||
@app.post("/api/conversations", response_model=Conversation)
|
||||
async def create_conversation(request: CreateConversationRequest):
|
||||
"""Create a new conversation."""
|
||||
conversation_id = str(uuid.uuid4())
|
||||
conversation = storage.create_conversation(conversation_id)
|
||||
return conversation
|
||||
|
||||
|
||||
@app.get("/api/conversations/{conversation_id}", response_model=Conversation)
|
||||
async def get_conversation(conversation_id: str):
|
||||
"""Get a specific conversation with all its messages."""
|
||||
conversation = storage.get_conversation(conversation_id)
|
||||
if conversation is None:
|
||||
raise HTTPException(status_code=404, detail="Conversation not found")
|
||||
return conversation
|
||||
|
||||
|
||||
@app.delete("/api/conversations/{conversation_id}")
|
||||
async def delete_conversation(conversation_id: str):
|
||||
"""Delete a conversation and its associated documents."""
|
||||
try:
|
||||
storage.delete_conversation(conversation_id)
|
||||
except ValueError as e:
|
||||
raise HTTPException(status_code=404, detail=str(e))
|
||||
return {"ok": True}
|
||||
|
||||
|
||||
@app.get("/api/conversations/{conversation_id}/documents")
|
||||
async def list_conversation_documents(conversation_id: str):
|
||||
"""List uploaded markdown documents for a conversation."""
|
||||
conversation = storage.get_conversation(conversation_id)
|
||||
if conversation is None:
|
||||
raise HTTPException(status_code=404, detail="Conversation not found")
|
||||
docs = documents.list_documents(conversation_id)
|
||||
return [{"id": d.id, "filename": d.filename, "bytes": d.bytes} for d in docs]
|
||||
|
||||
|
||||
@app.post("/api/conversations/{conversation_id}/documents")
|
||||
async def upload_conversation_document(
|
||||
conversation_id: str,
|
||||
files: List[UploadFile] = File(default=[]),
|
||||
file: UploadFile = File(default=None),
|
||||
):
|
||||
"""
|
||||
Upload one or more markdown documents (.md) for a conversation.
|
||||
|
||||
Backwards compatible:
|
||||
- old clients send a single "file" field
|
||||
- new clients can send multiple "files" fields
|
||||
"""
|
||||
conversation = storage.get_conversation(conversation_id)
|
||||
if conversation is None:
|
||||
raise HTTPException(status_code=404, detail="Conversation not found")
|
||||
|
||||
incoming: List[UploadFile] = []
|
||||
if file is not None:
|
||||
incoming.append(file)
|
||||
if files:
|
||||
incoming.extend(files)
|
||||
|
||||
if not incoming:
|
||||
raise HTTPException(status_code=400, detail="No files uploaded")
|
||||
|
||||
uploaded = []
|
||||
for f in incoming:
|
||||
filename = f.filename or "document.md"
|
||||
if not filename.lower().endswith(".md"):
|
||||
raise HTTPException(status_code=400, detail="Only .md files are supported")
|
||||
content = await f.read()
|
||||
try:
|
||||
meta = documents.save_markdown_document(conversation_id, filename, content)
|
||||
except ValueError as e:
|
||||
raise HTTPException(status_code=400, detail=str(e))
|
||||
uploaded.append({"id": meta.id, "filename": meta.filename, "bytes": meta.bytes})
|
||||
|
||||
# Back-compat: if single file uploaded, return the single object shape.
|
||||
if len(uploaded) == 1:
|
||||
return uploaded[0]
|
||||
|
||||
return {"uploaded": uploaded}
|
||||
|
||||
|
||||
@app.get("/api/conversations/{conversation_id}/documents/{doc_id}")
|
||||
async def get_conversation_document(conversation_id: str, doc_id: str):
|
||||
"""Fetch a markdown document's text (truncated for safety)."""
|
||||
conversation = storage.get_conversation(conversation_id)
|
||||
if conversation is None:
|
||||
raise HTTPException(status_code=404, detail="Conversation not found")
|
||||
|
||||
try:
|
||||
text = documents.read_document_text(conversation_id, doc_id)
|
||||
except FileNotFoundError:
|
||||
raise HTTPException(status_code=404, detail="Document not found")
|
||||
|
||||
if len(text) > MAX_DOC_PREVIEW_CHARS:
|
||||
text = text[: MAX_DOC_PREVIEW_CHARS - 3] + "..."
|
||||
return {"id": doc_id, "content": text}
|
||||
|
||||
|
||||
@app.delete("/api/conversations/{conversation_id}/documents/{doc_id}")
|
||||
async def delete_conversation_document(conversation_id: str, doc_id: str):
|
||||
"""Delete a previously uploaded document."""
|
||||
conversation = storage.get_conversation(conversation_id)
|
||||
if conversation is None:
|
||||
raise HTTPException(status_code=404, detail="Conversation not found")
|
||||
|
||||
try:
|
||||
documents.delete_document(conversation_id, doc_id)
|
||||
except FileNotFoundError:
|
||||
raise HTTPException(status_code=404, detail="Document not found")
|
||||
|
||||
return {"ok": True}
|
||||
|
||||
|
||||
@app.patch("/api/conversations/{conversation_id}/title")
|
||||
async def update_conversation_title_endpoint(conversation_id: str, request: dict):
|
||||
"""Update the title of a conversation."""
|
||||
conversation = storage.get_conversation(conversation_id)
|
||||
if conversation is None:
|
||||
raise HTTPException(status_code=404, detail="Conversation not found")
|
||||
|
||||
new_title = request.get('title', '').strip()
|
||||
if not new_title:
|
||||
raise HTTPException(status_code=400, detail="Title cannot be empty")
|
||||
|
||||
storage.update_conversation_title(conversation_id, new_title)
|
||||
return {"ok": True, "title": new_title}
|
||||
|
||||
|
||||
@app.get("/api/conversations/search")
|
||||
async def search_conversations(q: str = ""):
|
||||
"""Search conversations by title and content."""
|
||||
if not q or len(q.strip()) < 2:
|
||||
return []
|
||||
|
||||
query = q.strip().lower()
|
||||
all_conversations = storage.list_conversations()
|
||||
results = []
|
||||
|
||||
for conv_meta in all_conversations:
|
||||
# Search in title
|
||||
title_match = query in (conv_meta.get('title', '') or '').lower()
|
||||
|
||||
# Search in content
|
||||
conv = storage.get_conversation(conv_meta['id'])
|
||||
content_match = False
|
||||
if conv:
|
||||
for msg in conv.get('messages', []):
|
||||
if query in msg.get('content', '').lower():
|
||||
content_match = True
|
||||
break
|
||||
|
||||
if title_match or content_match:
|
||||
results.append(conv_meta)
|
||||
|
||||
return results
|
||||
|
||||
|
||||
@app.post("/api/conversations/{conversation_id}/message")
|
||||
async def send_message(conversation_id: str, request: SendMessageRequest):
|
||||
"""
|
||||
Send a message and run the 3-stage council process.
|
||||
Returns the complete response with all stages.
|
||||
"""
|
||||
# Check if conversation exists
|
||||
conversation = storage.get_conversation(conversation_id)
|
||||
if conversation is None:
|
||||
raise HTTPException(status_code=404, detail="Conversation not found")
|
||||
|
||||
# Check if this is the first message
|
||||
is_first_message = len(conversation["messages"]) == 0
|
||||
|
||||
# Add user message
|
||||
storage.add_user_message(conversation_id, request.content)
|
||||
|
||||
# If this is the first message, generate a title
|
||||
if is_first_message:
|
||||
title = await generate_conversation_title(request.content)
|
||||
storage.update_conversation_title(conversation_id, title)
|
||||
|
||||
# Run the 3-stage council process
|
||||
docs_text = build_docs_context(conversation_id, user_query=request.content)
|
||||
stage1_results, stage2_results, stage3_result, metadata = await run_full_council(
|
||||
request.content,
|
||||
docs_text=docs_text,
|
||||
)
|
||||
|
||||
# Add assistant message with all stages
|
||||
storage.add_assistant_message(
|
||||
conversation_id,
|
||||
stage1_results,
|
||||
stage2_results,
|
||||
stage3_result,
|
||||
metadata
|
||||
)
|
||||
|
||||
# Return the complete response with metadata
|
||||
return {
|
||||
"stage1": stage1_results,
|
||||
"stage2": stage2_results,
|
||||
"stage3": stage3_result,
|
||||
"metadata": metadata
|
||||
}
|
||||
|
||||
|
||||
@app.post("/api/conversations/{conversation_id}/message/stream")
|
||||
async def send_message_stream(conversation_id: str, request: SendMessageRequest):
|
||||
"""
|
||||
Send a message and stream the 3-stage council process.
|
||||
Returns Server-Sent Events as each stage completes.
|
||||
"""
|
||||
# Check if conversation exists
|
||||
conversation = storage.get_conversation(conversation_id)
|
||||
if conversation is None:
|
||||
raise HTTPException(status_code=404, detail="Conversation not found")
|
||||
|
||||
# Check if this is the first message
|
||||
is_first_message = len(conversation["messages"]) == 0
|
||||
|
||||
    async def event_generator():
        try:
            # Add user message
            storage.add_user_message(conversation_id, request.content)

            # Start title generation in parallel (don't await yet)
            title_task = None
            if is_first_message:
                title_task = asyncio.create_task(generate_conversation_title(request.content))

            # Load docs context once per request
            docs_text = build_docs_context(conversation_id, user_query=request.content)

            # Stage 1: Collect responses - stream individual responses as they complete
            yield f"data: {json.dumps({'type': 'stage1_start'})}\n\n"

            from .config import COUNCIL_MODELS, OPENAI_COMPAT_BASE_URL, DEBUG
            from .llm_client import query_model, LLM_TIMEOUT_SECONDS, MAX_TOKENS
            from .council import _format_docs_context

            if DEBUG:
                print(f"[DEBUG] Stage 1: Querying {len(COUNCIL_MODELS)} models: {COUNCIL_MODELS}")
                print(f"[DEBUG] Using base URL: {OPENAI_COMPAT_BASE_URL}")

            start_time = time.time()
            prompt = f"{request.content}{_format_docs_context(docs_text)}"
            messages = [{"role": "user", "content": prompt}]

            stage1_results = []
            successful_models = []
            failed_models = []
            response_queue = asyncio.Queue()

            async def process_model(model: str):
                try:
                    if DEBUG:
                        print(f"[DEBUG] Processing model: {model}")
                    response = await query_model(model, messages, timeout=LLM_TIMEOUT_SECONDS, max_tokens_override=MAX_TOKENS)
                    if response is not None:
                        result = {"model": model, "response": response.get('content', '')}
                        await response_queue.put(('success', model, result))
                        if DEBUG:
                            print(f"[DEBUG] Model {model} succeeded")
                    else:
                        await response_queue.put(('failed', model, None))
                        if DEBUG:
                            print(f"[DEBUG] Model {model} failed (returned None)")
                except Exception as e:
                    await response_queue.put(('failed', model, None))
                    if DEBUG:
                        print(f"[DEBUG] Model {model} exception: {e}")

            # Create tasks
            tasks = [asyncio.create_task(process_model(model)) for model in COUNCIL_MODELS]

            # Process responses as they arrive
            completed = 0
            while completed < len(COUNCIL_MODELS):
                status, model, result = await response_queue.get()
                if status == 'success':
                    stage1_results.append(result)
                    successful_models.append(model)
                    # Stream this response immediately
                    yield f"data: {json.dumps({'type': 'stage1_response', 'model': model, 'response': result})}\n\n"
                else:
                    failed_models.append(model)
                    # Stream failure notification
                    yield f"data: {json.dumps({'type': 'stage1_response_failed', 'model': model})}\n\n"
                completed += 1

            # Wait for all tasks to complete
            await asyncio.gather(*tasks, return_exceptions=True)

            duration = time.time() - start_time
            stage1_metadata = {
                "duration_seconds": round(duration, 2),
                "successful_models": successful_models,
                "failed_models": failed_models,
                "total_models": len(COUNCIL_MODELS)
            }

            yield f"data: {json.dumps({'type': 'stage1_complete', 'data': stage1_results, 'metadata': stage1_metadata})}\n\n"

            # Stage 2: Collect rankings
            yield f"data: {json.dumps({'type': 'stage2_start'})}\n\n"
            stage2_results, label_to_model, stage2_metadata = await stage2_collect_rankings(request.content, stage1_results, docs_text=docs_text)
            aggregate_rankings = calculate_aggregate_rankings(stage2_results, label_to_model)
            yield f"data: {json.dumps({'type': 'stage2_complete', 'data': stage2_results, 'metadata': {'label_to_model': label_to_model, 'aggregate_rankings': aggregate_rankings, 'stage2_metadata': stage2_metadata}})}\n\n"

            # Stage 3: Synthesize final answer
            yield f"data: {json.dumps({'type': 'stage3_start'})}\n\n"
            stage3_result, stage3_metadata = await stage3_synthesize_final(request.content, stage1_results, stage2_results, docs_text=docs_text)
            yield f"data: {json.dumps({'type': 'stage3_complete', 'data': stage3_result, 'metadata': stage3_metadata})}\n\n"

            # Wait for title generation if it was started
            if title_task:
                title = await title_task
                storage.update_conversation_title(conversation_id, title)
                yield f"data: {json.dumps({'type': 'title_complete', 'data': {'title': title}})}\n\n"

            # Prepare metadata
            metadata = {
                "label_to_model": label_to_model,
                "aggregate_rankings": aggregate_rankings,
                "stage1_metadata": stage1_metadata,
                "stage2_metadata": stage2_metadata,
                "stage3_metadata": stage3_metadata
            }

            # Save complete assistant message
            storage.add_assistant_message(
                conversation_id,
                stage1_results,
                stage2_results,
                stage3_result,
                metadata
            )

            # Send completion event
            yield f"data: {json.dumps({'type': 'complete'})}\n\n"

        except Exception as e:
            # Send error event with details
            import traceback
            from .config import DEBUG
            error_msg = str(e)
            if DEBUG:
                error_msg += f"\n{traceback.format_exc()}"
            print(f"[ERROR] Stream error: {error_msg}")
            yield f"data: {json.dumps({'type': 'error', 'message': error_msg})}\n\n"

    return StreamingResponse(
        event_generator(),
        media_type="text/event-stream",
        headers={
            "Cache-Control": "no-cache",
            "Connection": "keep-alive",
        }
    )

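The endpoint above emits one `data: {...}` line per JSON event over `text/event-stream`. A minimal client-side parser for that wire format could look like this (a sketch, not part of this repo):

```python
import json

def parse_sse_events(raw: str):
    """Parse 'data: {...}' lines from a text/event-stream body into dicts."""
    events = []
    for line in raw.splitlines():
        if line.startswith("data: "):
            events.append(json.loads(line[len("data: "):]))
    return events

# Example stream shaped like the events event_generator emits
stream = (
    'data: {"type": "stage1_start"}\n\n'
    'data: {"type": "stage1_response", "model": "m1", "response": {"model": "m1", "response": "hi"}}\n\n'
    'data: {"type": "complete"}\n\n'
)
events = parse_sse_events(stream)
```

A real client would read the response incrementally and dispatch on `event["type"]`, but the framing is just these `data:` lines separated by blank lines.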
@app.get("/api/conversations/{conversation_id}/export")
async def export_conversation_report(conversation_id: str):
    """Export conversation as a markdown report file."""
    conversation = storage.get_conversation(conversation_id)
    if conversation is None:
        raise HTTPException(status_code=404, detail="Conversation not found")

    # Get document list to map @1, @2, etc. to filenames
    from . import documents
    doc_list = documents.list_documents(conversation_id)
    doc_map = {}  # Maps "@1" -> filename, "@2" -> filename, etc.
    for idx, doc in enumerate(doc_list, 1):
        doc_map[f"@{idx}"] = doc.filename
        doc_map[f"@ {idx}"] = doc.filename  # Also handle "@ 1" (with a space)

    def replace_doc_references(text: str) -> str:
        """Replace @1, @2, etc. with actual filenames."""
        import re
        # Replace longer references first so "@10" is not clobbered by "@1"
        for ref in sorted(doc_map, key=len, reverse=True):
            text = re.sub(re.escape(ref), doc_map[ref], text)
        return text

    # Generate markdown report
    lines = []
    lines.append(f"# {conversation.get('title', 'Conversation')}\n")

    # Add metadata
    created_at = conversation.get('created_at', '')
    if created_at:
        try:
            dt = datetime.fromisoformat(created_at.replace('Z', '+00:00'))
            lines.append(f"**Created:** {dt.strftime('%Y-%m-%d %H:%M:%S UTC')}\n")
        except ValueError:
            lines.append(f"**Created:** {created_at}\n")
    lines.append(f"**Conversation ID:** {conversation_id}\n")
    lines.append("\n---\n\n")

    # Add messages
    for msg_idx, msg in enumerate(conversation.get('messages', []), 1):
        if msg['role'] == 'user':
            lines.append(f"## User Message {msg_idx}\n\n")
            content = replace_doc_references(msg['content'])
            lines.append(f"{content}\n\n")
            lines.append("---\n\n")
        elif msg['role'] == 'assistant':
            lines.append(f"## LLM Council Response {msg_idx}\n\n")

            metadata = msg.get('metadata', {})

            # Stage 1
            if msg.get('stage1'):
                stage1_meta = metadata.get('stage1_metadata', {})
                duration = stage1_meta.get('duration_seconds', 0)
                successful = len(stage1_meta.get('successful_models', []))
                total = stage1_meta.get('total_models', 0)

                lines.append("### Stage 1: Individual Responses\n\n")
                if duration:
                    lines.append(f"*Duration: {duration}s | Successful: {successful}/{total} models*\n\n")

                for response in msg['stage1']:
                    lines.append(f"#### {response['model']}\n\n")
                    content = replace_doc_references(response['response'])
                    lines.append(f"{content}\n\n")
                lines.append("\n---\n\n")

            # Stage 2
            if msg.get('stage2'):
                stage2_meta = metadata.get('stage2_metadata', {})
                duration = stage2_meta.get('duration_seconds', 0)
                successful = len(stage2_meta.get('successful_models', []))
                total = stage2_meta.get('total_models', 0)

                lines.append("### Stage 2: Peer Rankings\n\n")
                if duration:
                    lines.append(f"*Duration: {duration}s | Successful: {successful}/{total} models*\n\n")

                for ranking in msg['stage2']:
                    lines.append(f"#### {ranking['model']}\n\n")
                    content = replace_doc_references(ranking['ranking'])
                    lines.append(f"{content}\n\n")

                # Add aggregate rankings if available
                agg_rankings = metadata.get('aggregate_rankings', [])
                if agg_rankings:
                    lines.append("#### Aggregate Rankings\n\n")
                    for item in agg_rankings:
                        lines.append(f"- **{item['model']}**: Average rank {item['average_rank']:.2f}\n")
                    lines.append("\n")

                lines.append("\n---\n\n")

            # Stage 3
            if msg.get('stage3'):
                stage3_meta = metadata.get('stage3_metadata', {})
                duration = stage3_meta.get('duration_seconds', 0)
                model = stage3_meta.get('model', msg['stage3'].get('model', 'Unknown'))

                lines.append("### Stage 3: Final Synthesis\n\n")
                if duration:
                    lines.append(f"*Duration: {duration}s | Model: {model}*\n\n")

                content = replace_doc_references(msg['stage3'].get('response', ''))
                lines.append(f"{content}\n\n")

            # Total duration
            total_duration = metadata.get('total_duration_seconds')
            if total_duration:
                lines.append(f"**Total processing time:** {total_duration}s\n\n")

            lines.append("---\n\n")

    # Convert to string
    content = "".join(lines)

    # Generate filename
    title = conversation.get('title', 'conversation')
    # Sanitize filename
    safe_title = "".join(c if c.isalnum() or c in (' ', '-', '_') else '_' for c in title)
    safe_title = safe_title[:50].strip()  # Limit length
    filename = f"{safe_title}_{conversation_id[:8]}.md"

    return Response(
        content=content,
        media_type="text/markdown",
        headers={
            "Content-Disposition": f'attachment; filename="{filename}"'
        }
    )


if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8001)

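One subtlety in `replace_doc_references`: with naive iteration order, `@1` can match inside `@10` and partially rewrite it, which is why replacements run longest-reference-first. A condensed sketch of that behavior (`replace_refs` and the sample `doc_map` are illustrative, not repo code):

```python
import re

def replace_refs(text: str, doc_map: dict) -> str:
    # Longest references first, so "@10" is not partially rewritten by "@1"
    for ref in sorted(doc_map, key=len, reverse=True):
        text = re.sub(re.escape(ref), doc_map[ref], text)
    return text

# Hypothetical mapping with a single- and a double-digit reference
doc_map = {"@1": "alpha.md", "@10": "kappa.md"}
result = replace_refs("compare @1 with @10", doc_map)
```

Without the `sorted(..., reverse=True)`, the same input could come out as `compare alpha.md with alpha.md0`.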
backend/openai_compat.py (new file, 253 lines)
@@ -0,0 +1,253 @@
"""OpenAI-compatible API client (for Ollama / vLLM / TGI / OpenAI-style servers).
|
||||
|
||||
This lets LLM Council talk to any OpenAI-compatible server (local Ollama,
|
||||
remote Ollama, vLLM, TGI, etc.).
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import asyncio
|
||||
import os
|
||||
from typing import Any, Dict, List, Optional
|
||||
|
||||
import httpx
|
||||
|
||||
from .config import (
|
||||
OPENAI_COMPAT_BASE_URL,
|
||||
OPENAI_COMPAT_RETRIES,
|
||||
OPENAI_COMPAT_RETRY_BACKOFF_SECONDS,
|
||||
OPENAI_COMPAT_TIMEOUT_SECONDS,
|
||||
OPENAI_COMPAT_CONNECT_TIMEOUT_SECONDS,
|
||||
OPENAI_COMPAT_WRITE_TIMEOUT_SECONDS,
|
||||
OPENAI_COMPAT_POOL_TIMEOUT_SECONDS,
|
||||
DEBUG,
|
||||
)
|
||||
|
||||
|
||||
def _resolve_chat_completions_url(base_url: str) -> str:
|
||||
"""
|
||||
Accepts either:
|
||||
- http://host:8000 -> http://host:8000/v1/chat/completions
|
||||
- http://host:8000/v1 -> http://host:8000/v1/chat/completions
|
||||
- http://host:8000/v1/ -> http://host:8000/v1/chat/completions
|
||||
"""
|
||||
base = base_url.rstrip("/")
|
||||
if base.endswith("/v1"):
|
||||
return f"{base}/chat/completions"
|
||||
if "/v1/" in f"{base}/":
|
||||
# Already has /v1 somewhere; assume caller gave full root including /v1
|
||||
return f"{base}/chat/completions"
|
||||
return f"{base}/v1/chat/completions"
|
||||
|
||||
|
||||
def _resolve_models_url(base_url: str) -> str:
|
||||
base = base_url.rstrip("/")
|
||||
if base.endswith("/v1"):
|
||||
return f"{base}/models"
|
||||
if "/v1/" in f"{base}/":
|
||||
return f"{base}/models"
|
||||
return f"{base}/v1/models"
|
||||
|
||||
|
||||
def _resolve_ollama_tags_url(base_url: str) -> str:
|
||||
"""Resolve Ollama's native /api/tags endpoint URL."""
|
||||
base = base_url.rstrip("/")
|
||||
return f"{base}/api/tags"
|
||||
|
||||
|
||||
def _should_retry(status_code: int) -> bool:
|
||||
return status_code in {408, 409, 425, 429, 500, 502, 503, 504}
|
||||
|
||||
|
||||
async def query_model(
|
||||
model: str,
|
||||
messages: List[Dict[str, str]],
|
||||
*,
|
||||
base_url: Optional[str] = None,
|
||||
api_key: Optional[str] = None,
|
||||
max_tokens: int = 2048,
|
||||
timeout: Optional[float] = None,
|
||||
client: Optional[httpx.AsyncClient] = None,
|
||||
) -> Optional[Dict[str, Any]]:
|
||||
"""Query a model via an OpenAI-compatible chat completions endpoint."""
|
||||
resolved_base_url = base_url or OPENAI_COMPAT_BASE_URL
|
||||
if not resolved_base_url:
|
||||
print("Error querying OpenAI-compatible provider: OPENAI_COMPAT_BASE_URL not set")
|
||||
return None
|
||||
|
||||
resolved_api_key = api_key if api_key is not None else os.getenv("OPENAI_COMPAT_API_KEY")
|
||||
resolved_timeout = OPENAI_COMPAT_TIMEOUT_SECONDS if timeout is None else timeout
|
||||
retries = OPENAI_COMPAT_RETRIES
|
||||
backoff = OPENAI_COMPAT_RETRY_BACKOFF_SECONDS
|
||||
|
||||
url = _resolve_chat_completions_url(resolved_base_url)
|
||||
headers = {"Content-Type": "application/json"}
|
||||
if resolved_api_key:
|
||||
headers["Authorization"] = f"Bearer {resolved_api_key}"
|
||||
|
||||
payload: Dict[str, Any] = {
|
||||
"model": model,
|
||||
"messages": messages,
|
||||
"max_tokens": max_tokens,
|
||||
}
|
||||
|
||||
if DEBUG:
|
||||
print(f"[DEBUG] Querying model '{model}' at {url} (timeout={resolved_timeout}s, max_tokens={max_tokens})")
|
||||
|
||||
close_client = False
|
||||
try:
|
||||
if client is None:
|
||||
# Use explicit Timeout object to ensure read timeout is set correctly
|
||||
# For LLM requests, we need a long read timeout since generation can take time
|
||||
timeout_config = httpx.Timeout(
|
||||
connect=OPENAI_COMPAT_CONNECT_TIMEOUT_SECONDS,
|
||||
read=resolved_timeout, # Read timeout: use the configured timeout
|
||||
write=OPENAI_COMPAT_WRITE_TIMEOUT_SECONDS,
|
||||
pool=OPENAI_COMPAT_POOL_TIMEOUT_SECONDS
|
||||
)
|
||||
client = httpx.AsyncClient(timeout=timeout_config)
|
||||
close_client = True
|
||||
|
||||
attempt = 0
|
||||
while True:
|
||||
if DEBUG:
|
||||
print(f"[DEBUG] Attempt {attempt + 1}/{retries + 1}: POST {url}")
|
||||
resp = await client.post(url, headers=headers, json=payload)
|
||||
if resp.status_code != 200:
|
||||
# Preserve server-provided error text for debugging.
|
||||
try:
|
||||
err_json = resp.json()
|
||||
err_msg = err_json.get("error", {}).get("message", resp.text)
|
||||
except Exception:
|
||||
err_msg = resp.text
|
||||
|
||||
if attempt < retries and _should_retry(resp.status_code):
|
||||
await asyncio.sleep(backoff * (2**attempt))
|
||||
attempt += 1
|
||||
continue
|
||||
|
||||
print(f"Error querying model {model} (HTTP {resp.status_code}): {err_msg}")
|
||||
return None
|
||||
|
||||
data = resp.json()
|
||||
msg = data["choices"][0]["message"]
|
||||
if DEBUG:
|
||||
print(f"[DEBUG] Model '{model}' responded successfully")
|
||||
return {
|
||||
"content": msg.get("content"),
|
||||
"reasoning_details": msg.get("reasoning_details"),
|
||||
}
|
||||
except httpx.TimeoutException as e:
|
||||
print(f"[ERROR] Model '{model}' timeout after {resolved_timeout}s at {url}")
|
||||
print(
|
||||
f"[ERROR] This can mean the model is loading / slow, OR that the server/port is unreachable.\n"
|
||||
f"[ERROR] Check connectivity: curl {resolved_base_url}/api/tags"
|
||||
)
|
||||
return None
|
||||
except httpx.ConnectError as e:
|
||||
print(f"[ERROR] Cannot connect to {url}: {e}")
|
||||
print(f"[ERROR] Is Ollama running? Check: curl {resolved_base_url}/api/tags")
|
||||
return None
|
||||
except Exception as e:
|
||||
print(f"[ERROR] Unexpected error querying model '{model}' at {url}: {type(e).__name__}: {e}")
|
||||
import traceback
|
||||
traceback.print_exc()
|
||||
return None
|
||||
finally:
|
||||
if close_client and client is not None:
|
||||
await client.aclose()
|
||||
|
||||
|
||||
async def list_models(
|
||||
*,
|
||||
base_url: Optional[str] = None,
|
||||
api_key: Optional[str] = None,
|
||||
timeout: Optional[float] = None,
|
||||
client: Optional[httpx.AsyncClient] = None,
|
||||
) -> Optional[List[str]]:
|
||||
"""Return model IDs from an OpenAI-compatible server (/v1/models)."""
|
||||
resolved_base_url = base_url or OPENAI_COMPAT_BASE_URL
|
||||
if not resolved_base_url:
|
||||
return None
|
||||
|
||||
resolved_api_key = api_key if api_key is not None else os.getenv("OPENAI_COMPAT_API_KEY")
|
||||
resolved_timeout = OPENAI_COMPAT_TIMEOUT_SECONDS if timeout is None else timeout
|
||||
retries = OPENAI_COMPAT_RETRIES
|
||||
backoff = OPENAI_COMPAT_RETRY_BACKOFF_SECONDS
|
||||
|
||||
# Try OpenAI-compatible endpoint first
|
||||
url = _resolve_models_url(resolved_base_url)
|
||||
headers = {"Content-Type": "application/json"}
|
||||
if resolved_api_key:
|
||||
headers["Authorization"] = f"Bearer {resolved_api_key}"
|
||||
|
||||
close_client = False
|
||||
try:
|
||||
if client is None:
|
||||
# Use explicit Timeout object for list_models (faster operation)
|
||||
timeout_config = httpx.Timeout(
|
||||
connect=OPENAI_COMPAT_CONNECT_TIMEOUT_SECONDS,
|
||||
read=resolved_timeout,
|
||||
write=OPENAI_COMPAT_WRITE_TIMEOUT_SECONDS,
|
||||
pool=OPENAI_COMPAT_POOL_TIMEOUT_SECONDS
|
||||
)
|
||||
client = httpx.AsyncClient(timeout=timeout_config)
|
||||
close_client = True
|
||||
|
||||
attempt = 0
|
||||
while True:
|
||||
resp = await client.get(url, headers=headers)
|
||||
if resp.status_code == 200:
|
||||
data = resp.json()
|
||||
# Try OpenAI-compatible format first
|
||||
items = data.get("data", [])
|
||||
if items:
|
||||
ids: List[str] = []
|
||||
for it in items:
|
||||
mid = it.get("id")
|
||||
if mid:
|
||||
ids.append(mid)
|
||||
return ids
|
||||
# Fallback: check if it's already in Ollama format
|
||||
items = data.get("models", [])
|
||||
if items:
|
||||
ids: List[str] = []
|
||||
for it in items:
|
||||
mid = it.get("name") or it.get("model")
|
||||
if mid:
|
||||
ids.append(mid)
|
||||
return ids
|
||||
return []
|
||||
|
||||
# If /v1/models fails, try Ollama's native /api/tags endpoint
|
||||
if resp.status_code == 404 and attempt == 0:
|
||||
ollama_url = _resolve_ollama_tags_url(resolved_base_url)
|
||||
if DEBUG:
|
||||
print(f"[DEBUG] /v1/models not found, trying Ollama native API: {ollama_url}")
|
||||
resp = await client.get(ollama_url, headers=headers)
|
||||
if resp.status_code == 200:
|
||||
data = resp.json()
|
||||
items = data.get("models", [])
|
||||
if items:
|
||||
ids: List[str] = []
|
||||
for it in items:
|
||||
mid = it.get("name") or it.get("model")
|
||||
if mid:
|
||||
ids.append(mid)
|
||||
return ids
|
||||
|
||||
if attempt < retries and _should_retry(resp.status_code):
|
||||
await asyncio.sleep(backoff * (2**attempt))
|
||||
attempt += 1
|
||||
continue
|
||||
return None
|
||||
except Exception as e:
|
||||
if DEBUG:
|
||||
msg = str(e) if str(e) else "(no message)"
|
||||
print(f"[DEBUG] Error listing models: {type(e).__name__}: {msg}")
|
||||
return None
|
||||
finally:
|
||||
if close_client and client is not None:
|
||||
await client.aclose()
|
||||
|
||||
|
||||
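The `_resolve_chat_completions_url` helper normalizes all three accepted base-URL shapes to the same endpoint. A standalone restatement of that logic (reproduced here only as a quick check, not repo code):

```python
def resolve_chat_completions_url(base_url: str) -> str:
    # Same /v1 normalization as _resolve_chat_completions_url above
    base = base_url.rstrip("/")
    if base.endswith("/v1") or "/v1/" in f"{base}/":
        return f"{base}/chat/completions"
    return f"{base}/v1/chat/completions"

# All three documented input shapes resolve to one URL
urls = [resolve_chat_completions_url(b) for b in
        ("http://host:8000", "http://host:8000/v1", "http://host:8000/v1/")]
```

This is why config can accept either a bare server root or a root that already includes `/v1` without the caller needing to care.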
backend/storage.py (new file, 224 lines)
@@ -0,0 +1,224 @@
"""JSON-based storage for conversations."""
|
||||
|
||||
import json
|
||||
import os
|
||||
from datetime import datetime
|
||||
from typing import List, Dict, Any, Optional
|
||||
from pathlib import Path
|
||||
from .config import DATA_DIR
|
||||
|
||||
|
||||
def ensure_data_dir():
|
||||
"""Ensure the data directory exists."""
|
||||
Path(DATA_DIR).mkdir(parents=True, exist_ok=True)
|
||||
|
||||
|
||||
def get_conversation_path(conversation_id: str) -> str:
|
||||
"""Get the file path for a conversation."""
|
||||
return os.path.join(DATA_DIR, f"{conversation_id}.json")
|
||||
|
||||
|
||||
def create_conversation(conversation_id: str) -> Dict[str, Any]:
|
||||
"""
|
||||
Create a new conversation.
|
||||
|
||||
Args:
|
||||
conversation_id: Unique identifier for the conversation
|
||||
|
||||
Returns:
|
||||
New conversation dict
|
||||
"""
|
||||
ensure_data_dir()
|
||||
|
||||
conversation = {
|
||||
"id": conversation_id,
|
||||
"created_at": datetime.utcnow().isoformat(),
|
||||
"title": "New Conversation",
|
||||
"messages": []
|
||||
}
|
||||
|
||||
# Save to file
|
||||
path = get_conversation_path(conversation_id)
|
||||
with open(path, 'w') as f:
|
||||
json.dump(conversation, f, indent=2)
|
||||
|
||||
return conversation
|
||||
|
||||
|
||||
def get_conversation(conversation_id: str) -> Optional[Dict[str, Any]]:
|
||||
"""
|
||||
Load a conversation from storage.
|
||||
|
||||
Args:
|
||||
conversation_id: Unique identifier for the conversation
|
||||
|
||||
Returns:
|
||||
Conversation dict or None if not found
|
||||
"""
|
||||
path = get_conversation_path(conversation_id)
|
||||
|
||||
if not os.path.exists(path):
|
||||
return None
|
||||
|
||||
with open(path, 'r') as f:
|
||||
return json.load(f)
|
||||
|
||||
|
||||
def save_conversation(conversation: Dict[str, Any]):
|
||||
"""
|
||||
Save a conversation to storage.
|
||||
|
||||
Args:
|
||||
conversation: Conversation dict to save
|
||||
"""
|
||||
ensure_data_dir()
|
||||
|
||||
path = get_conversation_path(conversation['id'])
|
||||
with open(path, 'w') as f:
|
||||
json.dump(conversation, f, indent=2)
|
||||
|
||||
|
||||
def list_conversations(include_archived: bool = False) -> List[Dict[str, Any]]:
|
||||
"""
|
||||
List all conversations (metadata only).
|
||||
|
||||
Args:
|
||||
include_archived: If True, include archived conversations
|
||||
|
||||
Returns:
|
||||
List of conversation metadata dicts
|
||||
"""
|
||||
ensure_data_dir()
|
||||
|
||||
conversations = []
|
||||
for filename in os.listdir(DATA_DIR):
|
||||
if filename.endswith('.json'):
|
||||
path = os.path.join(DATA_DIR, filename)
|
||||
with open(path, 'r') as f:
|
||||
data = json.load(f)
|
||||
# Return metadata only
|
||||
conversations.append({
|
||||
"id": data["id"],
|
||||
"created_at": data["created_at"],
|
||||
"title": data.get("title", "New Conversation"),
|
||||
"message_count": len(data["messages"])
|
||||
})
|
||||
|
||||
# Sort by creation time, newest first
|
||||
conversations.sort(key=lambda x: x["created_at"], reverse=True)
|
||||
|
||||
return conversations
|
||||
|
||||
|
||||
def add_user_message(conversation_id: str, content: str):
|
||||
"""
|
||||
Add a user message to a conversation.
|
||||
|
||||
Args:
|
||||
conversation_id: Conversation identifier
|
||||
content: User message content
|
||||
"""
|
||||
conversation = get_conversation(conversation_id)
|
||||
if conversation is None:
|
||||
raise ValueError(f"Conversation {conversation_id} not found")
|
||||
|
||||
conversation["messages"].append({
|
||||
"role": "user",
|
||||
"content": content
|
||||
})
|
||||
|
||||
save_conversation(conversation)
|
||||
|
||||
|
||||
def add_assistant_message(
|
||||
conversation_id: str,
|
||||
stage1: List[Dict[str, Any]],
|
||||
stage2: List[Dict[str, Any]],
|
||||
stage3: Dict[str, Any],
|
||||
metadata: Optional[Dict[str, Any]] = None
|
||||
):
|
||||
"""
|
||||
Add an assistant message with all 3 stages to a conversation.
|
||||
|
||||
Args:
|
||||
conversation_id: Conversation identifier
|
||||
stage1: List of individual model responses
|
||||
stage2: List of model rankings
|
||||
stage3: Final synthesized response
|
||||
metadata: Optional metadata dict with timing and other info
|
||||
"""
|
||||
conversation = get_conversation(conversation_id)
|
||||
if conversation is None:
|
||||
raise ValueError(f"Conversation {conversation_id} not found")
|
||||
|
||||
message = {
|
||||
"role": "assistant",
|
||||
"stage1": stage1,
|
||||
"stage2": stage2,
|
||||
"stage3": stage3
|
||||
}
|
||||
if metadata:
|
||||
message["metadata"] = metadata
|
||||
|
||||
conversation["messages"].append(message)
|
||||
|
||||
save_conversation(conversation)
|
||||
|
||||
|
||||
def update_conversation_title(conversation_id: str, title: str):
|
||||
"""
|
||||
Update the title of a conversation.
|
||||
|
||||
Args:
|
||||
conversation_id: Conversation identifier
|
||||
title: New title for the conversation
|
||||
"""
|
||||
conversation = get_conversation(conversation_id)
|
||||
if conversation is None:
|
||||
raise ValueError(f"Conversation {conversation_id} not found")
|
||||
|
||||
conversation["title"] = title
|
||||
save_conversation(conversation)
|
||||
|
||||
|
||||
def delete_conversation(conversation_id: str):
|
||||
"""
|
||||
Delete a conversation (and its associated documents).
|
||||
|
||||
Args:
|
||||
conversation_id: Conversation identifier
|
||||
"""
|
||||
path = get_conversation_path(conversation_id)
|
||||
if not os.path.exists(path):
|
||||
raise ValueError(f"Conversation {conversation_id} not found")
|
||||
|
||||
# Delete the conversation file
|
||||
os.remove(path)
|
||||
|
||||
# Also delete associated documents directory
|
||||
from .documents import _conversation_dir
|
||||
docs_dir = _conversation_dir(conversation_id)
|
||||
if docs_dir.exists():
|
||||
import shutil
|
||||
shutil.rmtree(docs_dir, ignore_errors=True)
|
||||
|
||||
|
||||
def archive_conversation(conversation_id: str, archived: bool = True):
|
||||
"""
|
||||
Archive or unarchive a conversation.
|
||||
|
||||
Args:
|
||||
conversation_id: Conversation identifier
|
||||
archived: True to archive, False to unarchive
|
||||
"""
|
||||
conversation = get_conversation(conversation_id)
|
||||
if conversation is None:
|
||||
raise ValueError(f"Conversation {conversation_id} not found")
|
||||
|
||||
conversation["archived"] = archived
|
||||
if archived:
|
||||
conversation["archived_at"] = datetime.utcnow().isoformat()
|
||||
else:
|
||||
conversation.pop("archived_at", None)
|
||||
|
||||
save_conversation(conversation)
|
||||
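`list_conversations` sorts on the raw `created_at` string rather than parsing it. That works because `datetime.isoformat()` produces ISO-8601 strings, which sort lexicographically in chronological order. A minimal check of that property (the sample `metas` data is illustrative):

```python
metas = [
    {"id": "a", "created_at": "2024-01-01T10:00:00"},
    {"id": "b", "created_at": "2024-03-01T10:00:00"},
    {"id": "c", "created_at": "2024-02-01T10:00:00"},
]
# Newest first, exactly as list_conversations sorts
metas.sort(key=lambda x: x["created_at"], reverse=True)
order = [m["id"] for m in metas]
```

This shortcut holds as long as all timestamps share the same format and timezone convention, which `datetime.utcnow().isoformat()` guarantees here.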
backend/tests/test_config_env.py (new file, 35 lines)
@@ -0,0 +1,35 @@
import importlib
import os
import unittest


class TestConfigEnvOverrides(unittest.TestCase):
    def setUp(self):
        self._old_env = dict(os.environ)

    def tearDown(self):
        os.environ.clear()
        os.environ.update(self._old_env)

    def test_council_models_override_from_env_csv(self):
        os.environ["COUNCIL_MODELS"] = "a,b, c"
        import backend.config as config

        importlib.reload(config)
        self.assertEqual(config.COUNCIL_MODELS, ["a", "b", "c"])

    def test_chairman_model_override(self):
        os.environ["CHAIRMAN_MODEL"] = "chair"
        import backend.config as config

        importlib.reload(config)
        self.assertEqual(config.CHAIRMAN_MODEL, "chair")

    def test_max_tokens_override(self):
        os.environ["MAX_TOKENS"] = "1234"
        import backend.config as config

        importlib.reload(config)
        self.assertEqual(config.MAX_TOKENS, 1234)

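These tests rely on a specific pattern: the config module reads `os.environ` at import time, so a test can mutate the environment and `importlib.reload` the module to pick up the override. A self-contained demo of the same mechanism (the `demo_config` module is hypothetical, written to a temp file purely for illustration):

```python
import importlib.util
import os
import tempfile
import textwrap

# A tiny config module that, like backend/config.py, reads env at import time
src = textwrap.dedent("""
    import os
    MAX_TOKENS = int(os.getenv("MAX_TOKENS", "2048"))
""")
path = os.path.join(tempfile.mkdtemp(), "demo_config.py")
with open(path, "w") as f:
    f.write(src)

def load():
    spec = importlib.util.spec_from_file_location("demo_config", path)
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)  # re-executes the module body
    return mod

os.environ.pop("MAX_TOKENS", None)
default = load().MAX_TOKENS        # falls back to 2048
os.environ["MAX_TOKENS"] = "1234"
overridden = load().MAX_TOKENS     # re-import picks up the override
del os.environ["MAX_TOKENS"]       # restore, as the tests' tearDown does
```

The snapshot/restore in `setUp`/`tearDown` above serves the same role as the `pop`/`del` here: no test leaks environment state into the next.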
backend/tests/test_doc_preview_truncation.py (new file, 60 lines)
@@ -0,0 +1,60 @@
import importlib
import os
import shutil
import tempfile
import unittest

import httpx


class TestDocPreviewTruncation(unittest.IsolatedAsyncioTestCase):
    async def asyncSetUp(self):
        self._old_env = dict(os.environ)
        self.tmp = tempfile.mkdtemp(prefix="llm-council-docprev-")
        os.environ["DOCS_DIR"] = self.tmp
        os.environ["MAX_DOC_BYTES"] = "1000000"
        os.environ["MAX_DOC_PREVIEW_CHARS"] = "10"

        import backend.config as config
        import backend.documents as documents
        import backend.main as main

        importlib.reload(config)
        importlib.reload(documents)
        self.main = importlib.reload(main)

        self.client = httpx.AsyncClient(
            transport=httpx.ASGITransport(app=self.main.app),
            base_url="http://test",
        )

        # Create a conversation
        resp = await self.client.post("/api/conversations", json={})
        resp.raise_for_status()
        self.conversation_id = resp.json()["id"]

        # Upload a long doc
        files = {"file": ("long.md", b"0123456789ABCDEFGHIJ", "text/markdown")}
        up = await self.client.post(
            f"/api/conversations/{self.conversation_id}/documents",
            files=files,
        )
        up.raise_for_status()
        self.doc_id = up.json()["id"]

    async def asyncTearDown(self):
        await self.client.aclose()
        os.environ.clear()
        os.environ.update(self._old_env)
        shutil.rmtree(self.tmp, ignore_errors=True)

    async def test_preview_truncates(self):
        resp = await self.client.get(
            f"/api/conversations/{self.conversation_id}/documents/{self.doc_id}"
        )
        self.assertEqual(resp.status_code, 200)
        content = resp.json()["content"]
        self.assertEqual(len(content), 10)
        self.assertTrue(content.endswith("..."))

backend/tests/test_docs_api.py (new file, 110 lines)
@@ -0,0 +1,110 @@
import os
import shutil
import tempfile
import unittest
import importlib

import httpx


class TestDocumentsApi(unittest.IsolatedAsyncioTestCase):
    async def asyncSetUp(self):
        self._old_env = dict(os.environ)
        self.tmp = tempfile.mkdtemp(prefix="llm-council-docsapi-")
        os.environ["DOCS_DIR"] = self.tmp
        os.environ["MAX_DOC_BYTES"] = "1000000"

        # Reload config/documents so they see the DOCS_DIR override
        import backend.config as config
        import backend.documents as documents
        import backend.main as main

        importlib.reload(config)
        importlib.reload(documents)
        self.main = importlib.reload(main)

        self.transport = httpx.ASGITransport(app=self.main.app)
        self.client = httpx.AsyncClient(transport=self.transport, base_url="http://test")

        # Create a conversation
        resp = await self.client.post("/api/conversations", json={})
        resp.raise_for_status()
        self.conversation_id = resp.json()["id"]

    async def asyncTearDown(self):
        await self.client.aclose()
        os.environ.clear()
        os.environ.update(self._old_env)
        shutil.rmtree(self.tmp, ignore_errors=True)

    async def test_upload_and_list_documents(self):
        # Upload
        files = {"file": ("notes.md", b"# Hi\n", "text/markdown")}
        resp = await self.client.post(
            f"/api/conversations/{self.conversation_id}/documents",
            files=files,
        )
        self.assertEqual(resp.status_code, 200, resp.text)
        meta = resp.json()
        self.assertIn("id", meta)
        self.assertEqual(meta["filename"], "notes.md")

        # List
        resp2 = await self.client.get(
            f"/api/conversations/{self.conversation_id}/documents"
        )
        self.assertEqual(resp2.status_code, 200, resp2.text)
        items = resp2.json()
        self.assertEqual(len(items), 1)
        self.assertEqual(items[0]["id"], meta["id"])

    async def test_upload_multiple_documents(self):
        files = [
            ("files", ("a.md", b"one", "text/markdown")),
            ("files", ("b.md", b"two", "text/markdown")),
        ]
        resp = await self.client.post(
            f"/api/conversations/{self.conversation_id}/documents",
            files=files,
        )
        self.assertEqual(resp.status_code, 200, resp.text)
        payload = resp.json()
        self.assertIn("uploaded", payload)
        self.assertEqual(len(payload["uploaded"]), 2)
        self.assertEqual({d["filename"] for d in payload["uploaded"]}, {"a.md", "b.md"})

    async def test_rejects_non_md(self):
        files = {"file": ("notes.txt", b"hello", "text/plain")}
        resp = await self.client.post(
            f"/api/conversations/{self.conversation_id}/documents",
            files=files,
        )
        self.assertEqual(resp.status_code, 400)

    async def test_get_and_delete_document(self):
        files = {"file": ("a.md", b"hello", "text/markdown")}
        up = await self.client.post(
            f"/api/conversations/{self.conversation_id}/documents",
            files=files,
        )
        self.assertEqual(up.status_code, 200)
        doc_id = up.json()["id"]

        get = await self.client.get(
            f"/api/conversations/{self.conversation_id}/documents/{doc_id}"
        )
        self.assertEqual(get.status_code, 200)
        self.assertEqual(get.json()["content"], "hello")

        dele = await self.client.delete(
            f"/api/conversations/{self.conversation_id}/documents/{doc_id}"
        )
        self.assertEqual(dele.status_code, 200)
        self.assertTrue(dele.json()["ok"])

        get2 = await self.client.get(
            f"/api/conversations/{self.conversation_id}/documents/{doc_id}"
        )
        self.assertEqual(get2.status_code, 404)

38  backend/tests/test_docs_context.py  Normal file
@@ -0,0 +1,38 @@
import os
import shutil
import tempfile
import unittest
import importlib


class TestDocsContext(unittest.TestCase):
    def setUp(self):
        self._old_env = dict(os.environ)
        self.tmp = tempfile.mkdtemp(prefix="llm-council-docsctx-")
        os.environ["DOCS_DIR"] = self.tmp
        os.environ["MAX_DOC_BYTES"] = "1000000"

        import backend.config as config
        import backend.documents as documents
        import backend.docs_context as docs_context

        self.config = importlib.reload(config)
        self.documents = importlib.reload(documents)
        self.docs_context = importlib.reload(docs_context)

    def tearDown(self):
        os.environ.clear()
        os.environ.update(self._old_env)
        shutil.rmtree(self.tmp, ignore_errors=True)

    def test_build_docs_context_truncates(self):
        conv = "c1"
        self.documents.save_markdown_document(conv, "a.md", b"A" * 50)
        self.documents.save_markdown_document(conv, "b.md", b"B" * 50)

        ctx = self.docs_context.build_docs_context(conv, max_chars=60, max_docs=5)
        self.assertIsNotNone(ctx)
        self.assertIn("DOC:", ctx)
        self.assertTrue(len(ctx) <= 60)
48  backend/tests/test_documents.py  Normal file
@@ -0,0 +1,48 @@
import os
import shutil
import tempfile
import unittest
import importlib


class TestDocumentsStorage(unittest.TestCase):
    def setUp(self):
        self._old_env = dict(os.environ)
        self.tmp = tempfile.mkdtemp(prefix="llm-council-docs-")
        os.environ["DOCS_DIR"] = self.tmp
        os.environ["MAX_DOC_BYTES"] = "100"

        import backend.config as config
        import backend.documents as documents

        self.config = importlib.reload(config)
        self.documents = importlib.reload(documents)

    def tearDown(self):
        os.environ.clear()
        os.environ.update(self._old_env)
        shutil.rmtree(self.tmp, ignore_errors=True)

    def test_save_and_list_document(self):
        meta = self.documents.save_markdown_document(
            "conv1",
            "../weird/name.md",
            b"# Hello\n",
        )
        self.assertTrue(meta.id)
        self.assertEqual(meta.filename, "name.md")
        self.assertEqual(meta.bytes, 8)

        listed = self.documents.list_documents("conv1")
        self.assertEqual(len(listed), 1)
        self.assertEqual(listed[0].id, meta.id)
        self.assertEqual(listed[0].filename, "name.md")

        text = self.documents.read_document_text("conv1", meta.id)
        self.assertIn("# Hello", text)

    def test_rejects_too_large(self):
        with self.assertRaises(ValueError):
            self.documents.save_markdown_document("conv1", "a.md", b"x" * 101)
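Two behaviors matter in these storage tests: the path-traversal filename `../weird/name.md` is flattened to `name.md`, and anything over `MAX_DOC_BYTES` raises `ValueError`. A minimal sketch of both checks; the helper names are assumptions for illustration, not the actual internals of `backend/documents.py`:

```python
import os


def sanitize_filename(raw: str) -> str:
    """Strip directory components so an upload cannot escape the docs dir."""
    # Normalize Windows separators first, then keep only the final component.
    name = os.path.basename(raw.replace("\\", "/"))
    if not name or name in {".", ".."}:
        raise ValueError(f"invalid filename: {raw!r}")
    return name


def check_size(data: bytes, max_doc_bytes: int) -> None:
    """Reject documents larger than the configured byte limit."""
    if len(data) > max_doc_bytes:
        raise ValueError(f"document too large: {len(data)} > {max_doc_bytes} bytes")
```

Doing the sanitization on the server side, rather than trusting the multipart filename, is what makes the `../weird/name.md` test pass safely.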
65  backend/tests/test_llm_client.py  Normal file
@@ -0,0 +1,65 @@
import os
import unittest


class TestProviderSelection(unittest.TestCase):
    def setUp(self):
        self._old_env = dict(os.environ)

    def tearDown(self):
        os.environ.clear()
        os.environ.update(self._old_env)

    def test_always_returns_openai_compat(self):
        """Provider is always 'openai_compat' now (OpenRouter removed)."""
        from backend.llm_client import _get_provider_name

        # Should always return openai_compat regardless of env vars
        self.assertEqual(_get_provider_name(), "openai_compat")

        # Test with different env var combinations
        os.environ["OPENAI_COMPAT_BASE_URL"] = "http://gpu:8000"
        self.assertEqual(_get_provider_name(), "openai_compat")

        os.environ.pop("OPENAI_COMPAT_BASE_URL", None)
        self.assertEqual(_get_provider_name(), "openai_compat")


class TestParallelConcurrency(unittest.IsolatedAsyncioTestCase):
    async def test_query_models_parallel_respects_llm_max_concurrency(self):
        import asyncio
        import backend.llm_client as lc

        old_env = dict(os.environ)
        old_query_model = lc.query_model

        in_flight = 0
        max_in_flight = 0
        lock = asyncio.Lock()

        async def fake_query_model(model, messages, timeout=120.0, max_tokens_override=None):
            nonlocal in_flight, max_in_flight
            async with lock:
                in_flight += 1
                max_in_flight = max(max_in_flight, in_flight)
            # ensure overlap is possible without the semaphore
            await asyncio.sleep(0.02)
            async with lock:
                in_flight -= 1
            return {"content": model}

        try:
            os.environ["LLM_MAX_CONCURRENCY"] = "1"
            lc.query_model = fake_query_model

            models = ["m1", "m2", "m3"]
            out = await lc.query_models_parallel(models, [{"role": "user", "content": "hi"}])

            self.assertEqual(set(out.keys()), set(models))
            self.assertEqual(max_in_flight, 1)
        finally:
            lc.query_model = old_query_model
            os.environ.clear()
            os.environ.update(old_env)
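The property this test asserts (`max_in_flight == 1` when `LLM_MAX_CONCURRENCY=1`) is the standard semaphore fan-out pattern. A self-contained sketch, with `query_model` passed in as a parameter for testability; the real `query_models_parallel` in `backend/llm_client.py` reads it from module scope, which is why the test monkeypatches `lc.query_model`:

```python
import asyncio
import os


async def query_models_parallel(models, messages, query_model):
    """Query all models concurrently, capped by LLM_MAX_CONCURRENCY."""
    limit = int(os.environ.get("LLM_MAX_CONCURRENCY", "4"))
    sem = asyncio.Semaphore(limit)

    async def one(model):
        # The semaphore is what bounds the number of in-flight requests.
        async with sem:
            return model, await query_model(model, messages)

    pairs = await asyncio.gather(*(one(m) for m in models))
    return dict(pairs)
```

Reading the limit at call time (rather than import time) is what lets the test set the env var without reloading the module.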
36  backend/tests/test_llm_status.py  Normal file
@@ -0,0 +1,36 @@
import importlib
import os
import unittest

import httpx


class TestLlmStatusEndpoint(unittest.IsolatedAsyncioTestCase):
    async def asyncSetUp(self):
        self._old_env = dict(os.environ)
        os.environ["OPENAI_COMPAT_BASE_URL"] = "http://localhost:11434"
        os.environ.pop("USE_LOCAL_OLLAMA", None)  # Clear this so OPENAI_COMPAT_BASE_URL is used

        import backend.config as config
        import backend.main as main

        importlib.reload(config)  # Reload config to pick up env changes
        self.main = importlib.reload(main)
        self.client = httpx.AsyncClient(
            transport=httpx.ASGITransport(app=self.main.app),
            base_url="http://test",
        )

    async def asyncTearDown(self):
        await self.client.aclose()
        os.environ.clear()
        os.environ.update(self._old_env)

    async def test_status_without_probe(self):
        resp = await self.client.get("/api/llm/status")
        self.assertEqual(resp.status_code, 200)
        data = resp.json()
        self.assertEqual(data["provider"], "openai_compat")
        self.assertEqual(data["base_url"], "http://localhost:11434")
87  backend/tests/test_openai_compat.py  Normal file
@@ -0,0 +1,87 @@
import json
import unittest

import httpx

from backend.openai_compat import (
    _resolve_chat_completions_url,
    _resolve_models_url,
    query_model,
    list_models,
)


class TestOpenAICompatUrl(unittest.TestCase):
    def test_resolve_url_when_no_v1(self):
        self.assertEqual(
            _resolve_chat_completions_url("http://gpu:8000"),
            "http://gpu:8000/v1/chat/completions",
        )

    def test_resolve_url_when_v1(self):
        self.assertEqual(
            _resolve_chat_completions_url("http://gpu:8000/v1"),
            "http://gpu:8000/v1/chat/completions",
        )

    def test_resolve_url_when_v1_with_trailing_slash(self):
        self.assertEqual(
            _resolve_chat_completions_url("http://gpu:8000/v1/"),
            "http://gpu:8000/v1/chat/completions",
        )

    def test_resolve_models_url(self):
        self.assertEqual(
            _resolve_models_url("http://gpu:8000"),
            "http://gpu:8000/v1/models",
        )


class TestOpenAICompatRequest(unittest.IsolatedAsyncioTestCase):
    async def test_query_model_builds_payload_and_parses_response(self):
        captured = {}

        def handler(request: httpx.Request) -> httpx.Response:
            captured["url"] = str(request.url)
            captured["auth"] = request.headers.get("authorization")
            captured["json"] = json.loads(request.content.decode("utf-8"))
            return httpx.Response(
                200,
                json={
                    "choices": [
                        {
                            "message": {"content": "hello", "reasoning_details": None},
                        }
                    ]
                },
            )

        transport = httpx.MockTransport(handler)
        async with httpx.AsyncClient(transport=transport, timeout=10.0) as client:
            out = await query_model(
                "my-model",
                [{"role": "user", "content": "hi"}],
                base_url="http://gpu:8000",
                api_key="secret",
                max_tokens=123,
                timeout=10.0,
                client=client,
            )

        self.assertEqual(captured["url"], "http://gpu:8000/v1/chat/completions")
        self.assertEqual(captured["auth"], "Bearer secret")
        self.assertEqual(captured["json"]["model"], "my-model")
        self.assertEqual(captured["json"]["max_tokens"], 123)
        self.assertEqual(out["content"], "hello")

    async def test_list_models_parses_ids(self):
        def handler(request: httpx.Request) -> httpx.Response:
            return httpx.Response(
                200,
                json={"data": [{"id": "a"}, {"id": "b"}, {"nope": "c"}]},
            )

        transport = httpx.MockTransport(handler)
        async with httpx.AsyncClient(transport=transport, timeout=10.0) as client:
            ids = await list_models(
                base_url="http://gpu:8000",
                client=client,
            )
        self.assertEqual(ids, ["a", "b"])
278  docs/DEPLOYMENT.md  Normal file
@@ -0,0 +1,278 @@
# Deployment Guide

## Overview

LLM Council can be deployed in several configurations depending on your needs:
- **Local Development**: Everything runs on your local machine
- **Hybrid**: Frontend/backend local, LLM server on a remote GPU VM
- **Full Remote**: Everything on a server/VM
- **Production**: Professional deployment with proper infrastructure

## Architecture Options

### Option 1: Hybrid (Recommended for Development)

**Setup:**
- Frontend + Backend: run on your local machine
- LLM Server (Ollama): runs on a remote GPU VM

**Pros:**
- Easy development and debugging
- GPU resources available remotely
- No need to deploy frontend/backend code
- Fast iteration

**Cons:**
- Requires network connectivity to the GPU VM
- Added latency for LLM requests

**Configuration:**
```bash
# .env on local machine
USE_LOCAL_OLLAMA=false
OPENAI_COMPAT_BASE_URL=http://your-gpu-vm-ip:11434
```

### Option 2: Full Remote Deployment

**Setup:**
- Everything runs on the GPU VM or a dedicated server

**Pros:**
- Centralized deployment
- Can be accessed from multiple machines
- Better for team use

**Cons:**
- More complex setup
- Requires proper security configuration
- Slower development iteration

### Option 3: Production Deployment (Professional)

**Recommended Stack:**
- **Frontend**: Serve the static build via nginx/CDN
- **Backend**: Run via systemd/gunicorn/uvicorn behind a reverse proxy
- **LLM Server**: Separate service on the GPU VM
- **Security**: TLS/HTTPS, authentication, rate limiting

## GPU VM Setup

### Prerequisites

1. GPU VM with:
   - NVIDIA GPU with CUDA support
   - Sufficient VRAM for your models
   - Network access from your local machine

2. Ollama installed on the GPU VM:
   ```bash
   curl -fsSL https://ollama.ai/install.sh | sh
   ```

### Step 1: Configure Ollama to Accept Remote Connections

**On GPU VM:**

```bash
# Option A: Environment variable (temporary)
export OLLAMA_HOST=0.0.0.0:11434

# Option B: Systemd service (persistent - recommended)
sudo systemctl edit ollama
```

Add to the override file:
```ini
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
Environment="OLLAMA_KEEP_ALIVE=24h"
Environment="OLLAMA_MAX_LOADED_MODELS=3"
```

Then restart:
```bash
sudo systemctl daemon-reload
sudo systemctl restart ollama
```

### Step 2: Configure Firewall

**On GPU VM:**

```bash
# Allow port 11434 from your local network
sudo ufw allow from YOUR_LOCAL_IP to any port 11434
# Or allow from the entire subnet (less secure)
sudo ufw allow 11434/tcp
```

### Step 3: Pull Required Models

**On GPU VM:**

```bash
ollama pull qwen2.5:7b
ollama pull llama3.1:8b
ollama pull qwen2.5:14b
ollama pull qwen2:latest
```

### Step 3.5 (GPU VM): Ensure Ollama Uses GPU + Stores Data on /mnt/data

If your VM has a small root disk, keep Ollama's storage and HOME off `/`; a full root disk is a common cause of hard-to-diagnose failures.
Also note that on this setup, `OLLAMA_LLM_LIBRARY=cuda` caused Ollama to *skip* its CUDA libraries; use `cuda_v12` instead.

**On GPU VM:**

```bash
sudo mkdir -p /etc/systemd/system/ollama.service.d
sudo tee /etc/systemd/system/ollama.service.d/override.conf >/dev/null <<'EOF'
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
Environment="OLLAMA_KEEP_ALIVE=24h"
Environment="OLLAMA_MODELS=/mnt/data/ollama"
Environment="HOME=/mnt/data/ollama/home"
Environment="OLLAMA_LLM_LIBRARY=cuda_v12"
Environment="LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12"
EOF

sudo systemctl daemon-reload
sudo systemctl restart ollama
```

**Verify GPU offload (on GPU VM):**

```bash
ollama run qwen2:latest "Write 80 words about GPUs."
ollama ps
```

### Step 4: Verify Remote Access

**From local machine:**

```bash
curl http://YOUR_GPU_VM_IP:11434/api/tags
# Should return the list of available models
curl http://YOUR_GPU_VM_IP:11434/v1/models
```

### Step 5: Configure LLM Council

**On local machine `.env`:**

```bash
USE_LOCAL_OLLAMA=false
OPENAI_COMPAT_BASE_URL=http://YOUR_GPU_VM_IP:11434
# Local (small) example:
# COUNCIL_MODELS=llama3.2:1b,qwen2.5:0.5b,gemma2:2b
# CHAIRMAN_MODEL=llama3.2:3b

# GPU (available models):
COUNCIL_MODELS=qwen2.5:7b,llama3.1:8b,qwen2:latest
CHAIRMAN_MODEL=qwen2.5:14b
```

## Security Considerations

### For Development/Internal Use

1. **Network Security:**
   - Use a VPN or private network
   - Restrict the firewall to specific IPs
   - Consider an SSH tunnel for extra security

2. **Ollama Security:**
   - Ollama has no built-in authentication
   - Only expose it on trusted networks
   - Consider a reverse proxy with auth (nginx + basic auth)

### For Production

1. **Authentication:**
   - Add API key authentication to the backend
   - Use session-based auth for the frontend
   - Implement rate limiting

2. **Network Security:**
   - Use HTTPS/TLS everywhere
   - Set up proper firewall rules
   - Consider a reverse proxy (nginx/traefik)

3. **Infrastructure:**
   - Use container orchestration (Docker Compose/Kubernetes)
   - Set up monitoring and logging
   - Implement a backup strategy for conversations

## Deployment Scripts

### Quick Start (Local + Remote Ollama)

```bash
# 1. Start Ollama on the GPU VM (already running if systemd is configured)
# 2. On local machine:
./start.sh
```

### Full Remote Deployment

See `docs/DEPLOYMENT_FULL.md` for complete remote deployment instructions.

## Troubleshooting

### Connection Timeouts

1. Check that Ollama is listening on all interfaces:
   ```bash
   # On GPU VM
   sudo netstat -tlnp | grep 11434
   # Should show 0.0.0.0:11434, not 127.0.0.1:11434
   ```

2. Check firewall rules:
   ```bash
   # On GPU VM
   sudo ufw status
   ```

3. Test connectivity:
   ```bash
   # From local machine
   curl -v http://GPU_VM_IP:11434/api/tags
   ```

### Model Loading Issues

1. Check available VRAM:
   ```bash
   nvidia-smi
   ```

2. Adjust `OLLAMA_MAX_LOADED_MODELS` if needed

3. Check model sizes against available memory

## Performance Tuning

### Ollama Settings

```bash
# On GPU VM, edit the systemd override:
Environment="OLLAMA_KEEP_ALIVE=24h"        # Keep models loaded
Environment="OLLAMA_MAX_LOADED_MODELS=3"   # Max concurrently loaded models
Environment="OLLAMA_NUM_PARALLEL=1"        # Parallel requests per model
```

### LLM Council Timeouts

Adjust in `.env`:
```bash
LLM_TIMEOUT_SECONDS=600.0          # For slow models
CHAIRMAN_TIMEOUT_SECONDS=600.0
OPENAI_COMPAT_TIMEOUT_SECONDS=600.0
OPENAI_COMPAT_CONNECT_TIMEOUT_SECONDS=30.0
OPENAI_COMPAT_WRITE_TIMEOUT_SECONDS=30.0
OPENAI_COMPAT_POOL_TIMEOUT_SECONDS=30.0
```
178  docs/DEPLOYMENT_RECOMMENDATIONS.md  Normal file
@@ -0,0 +1,178 @@
# Professional Deployment Recommendations

## Recommended Architecture

### For Development/Personal Use

**Hybrid Approach (Recommended):**
```
┌─────────────────┐         ┌──────────────────┐
│  Local Machine  │         │      GPU VM      │
│                 │         │                  │
│  Frontend       │         │  Ollama Server   │
│  (React/Vite)   │         │  (LLM Models)    │
│                 │◄────────┤                  │
│  Backend        │  HTTP   │  Port 11434      │
│  (FastAPI)      │         │                  │
└─────────────────┘         └──────────────────┘
```

**Why this is best:**
- ✅ Fast development iteration
- ✅ Easy debugging (logs on the local machine)
- ✅ GPU resources available remotely
- ✅ No complex deployment needed
- ✅ Can work offline (if models are cached locally)

### For Production/Team Use

**Full Remote Deployment:**
```
        ┌─────────────────┐
        │      Users      │
        │   (Browsers)    │
        └────────┬────────┘
                 │ HTTPS
                 ▼
        ┌─────────────────┐
        │  Reverse Proxy  │
        │ (nginx/traefik) │
        └────────┬────────┘
                 │
            ┌────┴────┐
            │         │
            ▼         ▼
       ┌────────┐ ┌──────────┐
       │Frontend│ │ Backend  │
       │(Static)│ │(FastAPI) │
       └────────┘ └────┬─────┘
                       │ HTTP
                       ▼
               ┌──────────────┐
               │    GPU VM    │
               │    Ollama    │
               └──────────────┘
```

## Comparison of Approaches

| Aspect | Hybrid (Local + Remote) | Full Remote | Production |
|--------|------------------------|-------------|------------|
| **Setup Complexity** | Low | Medium | High |
| **Development Speed** | Fast | Medium | Slow |
| **Security** | Medium | Medium | High |
| **Scalability** | Low | Medium | High |
| **Cost** | Low | Medium | High |
| **Best For** | Dev/Personal | Team/Internal | Public/Enterprise |

## Security Best Practices

### Development/Internal Network

1. **Network Isolation:**
   - Use a private network/VPN
   - Restrict the firewall to specific IPs
   - Consider an SSH tunnel for Ollama

2. **Ollama Access:**
   ```bash
   # Only allow from specific IPs
   sudo ufw allow from YOUR_IP to any port 11434
   ```

3. **SSH Tunnel Alternative:**
   ```bash
   # More secure - no direct network exposure
   ssh -L 11434:localhost:11434 user@gpu-vm
   # Then use localhost:11434 in .env
   ```

### Production Deployment

1. **Authentication:**
   - Add API keys to the backend
   - Implement user sessions
   - Use OAuth2/JWT for the API

2. **Network Security:**
   - HTTPS/TLS everywhere
   - WAF (Web Application Firewall)
   - Rate limiting
   - DDoS protection

3. **Infrastructure:**
   - Container orchestration (Docker/K8s)
   - Service mesh for internal communication
   - Monitoring and alerting
   - Automated backups

## Deployment Checklist

### Hybrid Setup (Recommended for Dev)

- [ ] GPU VM has Ollama installed
- [ ] Ollama configured to listen on 0.0.0.0:11434
- [ ] Firewall allows connections from the local machine
- [ ] Models pulled on the GPU VM
- [ ] Local `.env` configured with the GPU VM IP
- [ ] Test connection: `curl http://GPU_VM_IP:11434/api/tags`

### Full Remote Setup

- [ ] Server/VM provisioned
- [ ] Frontend built and served (nginx/static host)
- [ ] Backend running as a service (systemd/supervisor)
- [ ] Reverse proxy configured (nginx/traefik)
- [ ] SSL certificates installed
- [ ] Authentication implemented
- [ ] Monitoring set up
- [ ] Backup strategy in place

### Production Setup

- [ ] All of the Full Remote checklist
- [ ] Load balancing configured
- [ ] Database for conversations (optional upgrade)
- [ ] Logging and monitoring (Prometheus/Grafana)
- [ ] CI/CD pipeline
- [ ] Security audit
- [ ] Documentation for the ops team
- [ ] Disaster recovery plan

## Cost Considerations

### Hybrid (Local + Remote GPU VM)
- **Cost**: GPU VM only (~$0.50-2/hour depending on GPU)
- **Best for**: Development, personal projects, small teams

### Full Remote
- **Cost**: GPU VM + application server (~$1-3/hour)
- **Best for**: Teams, internal tools

### Production
- **Cost**: $100-1000+/month depending on scale
- **Best for**: Public services, enterprise

## Migration Path

1. **Start**: Hybrid (local dev, remote GPU)
2. **Grow**: Full remote (when the team needs it)
3. **Scale**: Production (when going public/enterprise)

## Recommendation

**For your use case (development/personal):**

Use the **Hybrid approach**:
- Run the frontend + backend locally
- Connect to Ollama on the GPU VM
- Use an SSH tunnel for extra security if needed
- Simple, fast, cost-effective

This gives you:
- Fast development iteration
- Easy debugging
- GPU resources when needed
- Minimal infrastructure complexity
- Low cost
93  docs/GPU_VM_SETUP.md  Normal file
@@ -0,0 +1,93 @@
# GPU VM Setup - Quick Reference

## Quick Setup Steps

### 1. On GPU VM: Configure Ollama to Accept Remote Connections

```bash
# Create systemd override
sudo mkdir -p /etc/systemd/system/ollama.service.d
sudo tee /etc/systemd/system/ollama.service.d/override.conf > /dev/null <<'EOF'
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
Environment="OLLAMA_KEEP_ALIVE=24h"

# Keep Ollama storage off the root disk (recommended) and ensure the service user
# has a writable HOME (runners/keys/cache).
Environment="OLLAMA_MODELS=/mnt/data/ollama"
Environment="HOME=/mnt/data/ollama/home"

# IMPORTANT (GPU): on this VM, OLLAMA_LLM_LIBRARY=cuda caused Ollama to SKIP CUDA.
# Use the libdir selector instead.
Environment="OLLAMA_LLM_LIBRARY=cuda_v12"

# Ensure the dynamic linker can resolve Ollama's bundled CUDA + ggml libs.
Environment="LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12"
EOF

# Reload and restart
sudo systemctl daemon-reload
sudo systemctl restart ollama

# Verify
curl http://localhost:11434/api/tags
```

**Verify GPU is actually used (on GPU VM):**

```bash
ollama run qwen2:latest "Write 80 words about GPUs."
ollama ps
watch -n 0.2 nvidia-smi
```

### 2. On GPU VM: Configure Firewall

```bash
# Allow port 11434 (adjust IP/subnet as needed)
sudo ufw allow from YOUR_LOCAL_IP to any port 11434
# Or allow from the entire subnet (less secure)
sudo ufw allow 11434/tcp
```

### 3. On GPU VM: Pull Required Models

```bash
ollama pull qwen2.5:7b
ollama pull llama3.1:8b
ollama pull qwen2.5:14b
ollama pull qwen2:latest
```

### 4. On Local Machine: Configure .env

```bash
USE_LOCAL_OLLAMA=false
OPENAI_COMPAT_BASE_URL=http://YOUR_GPU_VM_IP:11434
# Local (small) example:
# COUNCIL_MODELS=llama3.2:1b,qwen2.5:0.5b,gemma2:2b
# CHAIRMAN_MODEL=llama3.2:3b

# GPU (available models):
COUNCIL_MODELS=qwen2.5:7b,llama3.1:8b,qwen2:latest
CHAIRMAN_MODEL=qwen2.5:14b
```

### 5. Test Connection

```bash
# From local machine
curl http://YOUR_GPU_VM_IP:11434/api/tags
curl http://YOUR_GPU_VM_IP:11434/v1/models
```

## Troubleshooting

- **Connection timeout**: Check Ollama is listening on `0.0.0.0:11434` (not `127.0.0.1`)
- **Firewall blocking**: Check `sudo ufw status` and allow port 11434
- **CPU instead of GPU**: Run `ollama ps` and confirm it doesn't say `100% CPU`. If it does:
  - Ensure `OLLAMA_LLM_LIBRARY=cuda_v12` (not `cuda`)
  - Ensure `LD_LIBRARY_PATH` includes `/usr/local/lib/ollama` and `/usr/local/lib/ollama/cuda_v12`
  - Ensure `OLLAMA_MODELS` and `HOME` are on a disk with free space (a full root disk can break the runner/cache)

See [DEPLOYMENT.md](DEPLOYMENT.md) for detailed instructions.
11  docs/README.md  Normal file
@@ -0,0 +1,11 @@
# Documentation

## Getting Started
- **[README](../README.md)** - Main project documentation
- **[Architecture](../ARCHITECTURE.md)** - System architecture overview

## Deployment
- **[Deployment Guide](DEPLOYMENT.md)** - Complete deployment instructions with testing steps
- **[Deployment Recommendations](DEPLOYMENT_RECOMMENDATIONS.md)** - Professional deployment options and best practices
- **[GPU VM Setup](GPU_VM_SETUP.md)** - Quick reference for GPU VM configuration
24  frontend/.gitignore  vendored  Normal file
@@ -0,0 +1,24 @@
# Logs
logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
pnpm-debug.log*
lerna-debug.log*

node_modules
dist
dist-ssr
*.local

# Editor directories and files
.vscode/*
!.vscode/extensions.json
.idea
.DS_Store
*.suo
*.ntvs*
*.njsproj
*.sln
*.sw?
16  frontend/README.md  Normal file
@@ -0,0 +1,16 @@
# React + Vite

This template provides a minimal setup to get React working in Vite with HMR and some ESLint rules.

Currently, two official plugins are available:

- [@vitejs/plugin-react](https://github.com/vitejs/vite-plugin-react/blob/main/packages/plugin-react) uses [Babel](https://babeljs.io/) (or [oxc](https://oxc.rs) when used in [rolldown-vite](https://vite.dev/guide/rolldown)) for Fast Refresh
- [@vitejs/plugin-react-swc](https://github.com/vitejs/vite-plugin-react/blob/main/packages/plugin-react-swc) uses [SWC](https://swc.rs/) for Fast Refresh

## React Compiler

The React Compiler is not enabled on this template because of its impact on dev and build performance. To add it, see [this documentation](https://react.dev/learn/react-compiler/installation).

## Expanding the ESLint configuration

If you are developing a production application, we recommend using TypeScript with type-aware lint rules enabled. Check out the [TS template](https://github.com/vitejs/vite/tree/main/packages/create-vite/template-react-ts) for information on how to integrate TypeScript and [`typescript-eslint`](https://typescript-eslint.io) in your project.
29  frontend/eslint.config.js  Normal file
@@ -0,0 +1,29 @@
import js from '@eslint/js'
import globals from 'globals'
import reactHooks from 'eslint-plugin-react-hooks'
import reactRefresh from 'eslint-plugin-react-refresh'
import { defineConfig, globalIgnores } from 'eslint/config'

export default defineConfig([
  globalIgnores(['dist']),
  {
    files: ['**/*.{js,jsx}'],
    extends: [
      js.configs.recommended,
      reactHooks.configs.flat.recommended,
      reactRefresh.configs.vite,
    ],
    languageOptions: {
      ecmaVersion: 2020,
      globals: globals.browser,
      parserOptions: {
        ecmaVersion: 'latest',
        ecmaFeatures: { jsx: true },
        sourceType: 'module',
      },
    },
    rules: {
      'no-unused-vars': ['error', { varsIgnorePattern: '^[A-Z_]' }],
    },
  },
])
13  frontend/index.html  Normal file
@@ -0,0 +1,13 @@
<!doctype html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <link rel="icon" type="image/svg+xml" href="/vite.svg" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>frontend</title>
  </head>
  <body>
    <div id="root"></div>
    <script type="module" src="/src/main.jsx"></script>
  </body>
</html>
3649  frontend/package-lock.json  generated  Normal file
File diff suppressed because it is too large. Load Diff
29  frontend/package.json  Normal file
@@ -0,0 +1,29 @@
{
  "name": "frontend",
  "private": true,
  "version": "0.0.0",
  "type": "module",
  "scripts": {
    "dev": "vite",
    "build": "vite build",
    "lint": "eslint .",
    "test": "node --test",
    "preview": "vite preview"
  },
  "dependencies": {
    "react": "^19.2.0",
    "react-dom": "^19.2.0",
    "react-markdown": "^10.1.0"
  },
  "devDependencies": {
    "@eslint/js": "^9.39.1",
    "@types/react": "^19.2.5",
    "@types/react-dom": "^19.2.3",
    "@vitejs/plugin-react": "^4.3.1",
    "eslint": "^9.39.1",
    "eslint-plugin-react-hooks": "^7.0.1",
    "eslint-plugin-react-refresh": "^0.4.24",
    "globals": "^16.5.0",
    "vite": "^5.4.11"
  }
}
1
frontend/public/vite.svg
Normal file
1
frontend/public/vite.svg
Normal file
@ -0,0 +1 @@
|
||||
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" class="iconify iconify--logos" width="31.88" height="32" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 257"><defs><linearGradient id="IconifyId1813088fe1fbc01fb466" x1="-.828%" x2="57.636%" y1="7.652%" y2="78.411%"><stop offset="0%" stop-color="#41D1FF"></stop><stop offset="100%" stop-color="#BD34FE"></stop></linearGradient><linearGradient id="IconifyId1813088fe1fbc01fb467" x1="43.376%" x2="50.316%" y1="2.242%" y2="89.03%"><stop offset="0%" stop-color="#FFEA83"></stop><stop offset="8.333%" stop-color="#FFDD35"></stop><stop offset="100%" stop-color="#FFA800"></stop></linearGradient></defs><path fill="url(#IconifyId1813088fe1fbc01fb466)" d="M255.153 37.938L134.897 252.976c-2.483 4.44-8.862 4.466-11.382.048L.875 37.958c-2.746-4.814 1.371-10.646 6.827-9.67l120.385 21.517a6.537 6.537 0 0 0 2.322-.004l117.867-21.483c5.438-.991 9.574 4.796 6.877 9.62Z"></path><path fill="url(#IconifyId1813088fe1fbc01fb467)" d="M185.432.063L96.44 17.501a3.268 3.268 0 0 0-2.634 3.014l-5.474 92.456a3.268 3.268 0 0 0 3.997 3.378l24.777-5.718c2.318-.535 4.413 1.507 3.936 3.838l-7.361 36.047c-.495 2.426 1.782 4.5 4.151 3.78l15.304-4.649c2.372-.72 4.652 1.36 4.15 3.788l-11.698 56.621c-.732 3.542 3.979 5.473 5.943 2.437l1.313-2.028l72.516-144.72c1.215-2.423-.88-5.186-3.54-4.672l-25.505 4.922c-2.396.462-4.435-1.77-3.759-4.114l16.646-57.705c.677-2.35-1.37-4.583-3.769-4.113Z"></path></svg>
After Width: | Height: | Size: 1.5 KiB

frontend/src/App.css (new file, 16 lines)
@@ -0,0 +1,16 @@
* {
  box-sizing: border-box;
}

.app {
  display: flex;
  height: 100vh;
  width: 100vw;
  overflow: hidden;
  background: var(--bg-primary);
  color: var(--text-primary);
  transition: background-color 0.2s, color 0.2s;
  font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', 'Roboto', 'Oxygen',
    'Ubuntu', 'Cantarell', 'Fira Sans', 'Droid Sans', 'Helvetica Neue',
    sans-serif;
}

frontend/src/App.jsx (new file, 462 lines)
@@ -0,0 +1,462 @@
import { useState, useEffect } from 'react';
import Sidebar from './components/Sidebar';
import ChatInterface from './components/ChatInterface';
import { api } from './api';
import './App.css';

function App() {
  // Theme state - default to light, or load from localStorage
  const [theme, setTheme] = useState(() => {
    const saved = localStorage.getItem('llm-council-theme');
    if (saved) return saved;
    const envDefault = (import.meta.env.VITE_DEFAULT_THEME || '').toLowerCase();
    if (envDefault === 'dark' || envDefault === 'light') return envDefault;
    return 'light';
  });
  const [conversations, setConversations] = useState([]);
  const [currentConversationId, setCurrentConversationId] = useState(null);
  const [currentConversation, setCurrentConversation] = useState(null);
  const [isLoading, setIsLoading] = useState(false);
  const [documents, setDocuments] = useState([]);
  const [isUploadingDoc, setIsUploadingDoc] = useState(false);
  const [selectedDoc, setSelectedDoc] = useState(null); // {id, filename, bytes}
  const [selectedDocContent, setSelectedDocContent] = useState('');
  const [isLoadingDoc, setIsLoadingDoc] = useState(false);
  const [searchQuery, setSearchQuery] = useState('');
  const [filteredConversations, setFilteredConversations] = useState([]);
  const [prefillMessage, setPrefillMessage] = useState('');

  // Apply theme class to document root
  useEffect(() => {
    document.documentElement.setAttribute('data-theme', theme);
    localStorage.setItem('llm-council-theme', theme);
  }, [theme]);

  const toggleTheme = () => {
    setTheme(prev => prev === 'light' ? 'dark' : 'light');
  };

  // Load conversations on mount
  useEffect(() => {
    loadConversations();
  }, []);

  // Read message from URL once (used to prefill the textarea; not auto-sent)
  useEffect(() => {
    const urlParams = new URLSearchParams(window.location.search);
    const msg = urlParams.get('message');
    if (msg) setPrefillMessage(msg);
  }, []);

  // Check for conversation ID in URL after conversations load
  useEffect(() => {
    if (conversations.length > 0) {
      const urlParams = new URLSearchParams(window.location.search);
      const convIdFromUrl = urlParams.get('conversation');
      if (convIdFromUrl && !currentConversationId) {
        const exists = conversations.some(c => c.id === convIdFromUrl);
        if (exists) {
          setCurrentConversationId(convIdFromUrl);
        } else {
          console.warn('Conversation from URL not found:', convIdFromUrl);
          alert(`Conversation not found: ${convIdFromUrl}\n\nIt may have been deleted, or you're pointing at a different data directory/server.`);
          // Remove the invalid param so we don't keep trying.
          urlParams.delete('conversation');
          const newQuery = urlParams.toString();
          const newUrl = `${window.location.pathname}${newQuery ? `?${newQuery}` : ''}${window.location.hash || ''}`;
          window.history.replaceState({}, '', newUrl);
        }
      }
    }
  }, [conversations, currentConversationId]);

  // Filter conversations based on search
  useEffect(() => {
    if (!searchQuery.trim()) {
      setFilteredConversations(conversations);
      return;
    }

    const search = async () => {
      try {
        const results = await api.searchConversations(searchQuery);
        setFilteredConversations(results);
      } catch (error) {
        console.error('Search failed:', error);
        // Fallback to client-side search
        const query = searchQuery.toLowerCase();
        const filtered = conversations.filter(conv =>
          (conv.title || '').toLowerCase().includes(query)
        );
        setFilteredConversations(filtered);
      }
    };

    const timeoutId = setTimeout(search, 300); // Debounce
    return () => clearTimeout(timeoutId);
  }, [searchQuery, conversations]);

  // Load conversation details when selected
  useEffect(() => {
    if (currentConversationId) {
      loadConversation(currentConversationId);
      loadDocuments(currentConversationId);
    }
  }, [currentConversationId]);

  const loadConversations = async () => {
    try {
      const convs = await api.listConversations();
      setConversations(convs);
    } catch (error) {
      console.error('Failed to load conversations:', error);
    }
  };

  const loadConversation = async (id) => {
    try {
      const conv = await api.getConversation(id);
      setCurrentConversation(conv);
    } catch (error) {
      console.error('Failed to load conversation:', error);
    }
  };

  const loadDocuments = async (id) => {
    try {
      const docs = await api.listDocuments(id);
      setDocuments(docs);
      // Clear selection if it no longer exists
      setSelectedDoc((prev) => {
        if (!prev) return prev;
        return docs.some((d) => d.id === prev.id) ? prev : null;
      });
    } catch (error) {
      console.error('Failed to load documents:', error);
      setDocuments([]);
    }
  };

  const handleNewConversation = async () => {
    try {
      const newConv = await api.createConversation();
      setConversations([
        { id: newConv.id, created_at: newConv.created_at, message_count: 0 },
        ...conversations,
      ]);
      setCurrentConversationId(newConv.id);
    } catch (error) {
      console.error('Failed to create conversation:', error);
    }
  };

  const handleSelectConversation = (id) => {
    setCurrentConversationId(id);
  };

  const handleSendMessage = async (content) => {
    if (!currentConversationId) return;

    setIsLoading(true);
    try {
      // Optimistically add user message to UI
      const userMessage = { role: 'user', content };
      setCurrentConversation((prev) => ({
        ...prev,
        messages: [...prev.messages, userMessage],
      }));

      // Create a partial assistant message that will be updated progressively
      const assistantMessage = {
        role: 'assistant',
        stage1: null,
        stage2: null,
        stage3: null,
        metadata: null,
        loading: {
          stage1: false,
          stage2: false,
          stage3: false,
        },
      };

      // Add the partial assistant message
      setCurrentConversation((prev) => ({
        ...prev,
        messages: [...prev.messages, assistantMessage],
      }));

      // Send message with streaming
      await api.sendMessageStream(currentConversationId, content, (eventType, event) => {
        console.log('[Stream Event]', eventType, event); // Debug logging
        switch (eventType) {
          case 'stage1_start':
            setCurrentConversation((prev) => {
              const messages = [...prev.messages];
              const lastMsg = messages[messages.length - 1];
              lastMsg.loading.stage1 = true;
              lastMsg.stage1 = []; // Initialize empty array
              return { ...prev, messages };
            });
            break;

          case 'stage1_response':
            // Individual response completed - add it immediately
            setCurrentConversation((prev) => {
              const messages = [...prev.messages];
              const lastMsg = messages[messages.length - 1];
              if (!lastMsg.stage1) lastMsg.stage1 = [];
              // Check if this model already exists; if so update it, otherwise add it
              const existingIdx = lastMsg.stage1.findIndex(r => r.model === event.model);
              if (existingIdx >= 0) {
                lastMsg.stage1[existingIdx] = event.response;
              } else {
                lastMsg.stage1.push(event.response);
              }
              return { ...prev, messages };
            });
            break;

          case 'stage1_complete':
            setCurrentConversation((prev) => {
              const messages = [...prev.messages];
              const lastMsg = messages[messages.length - 1];
              lastMsg.stage1 = event.data;
              if (!lastMsg.metadata) lastMsg.metadata = {};
              lastMsg.metadata.stage1_metadata = event.metadata;
              lastMsg.loading.stage1 = false;
              return { ...prev, messages };
            });
            break;

          case 'stage2_start':
            setCurrentConversation((prev) => {
              const messages = [...prev.messages];
              const lastMsg = messages[messages.length - 1];
              lastMsg.loading.stage2 = true;
              return { ...prev, messages };
            });
            break;

          case 'stage2_complete':
            setCurrentConversation((prev) => {
              const messages = [...prev.messages];
              const lastMsg = messages[messages.length - 1];
              lastMsg.stage2 = event.data;
              if (!lastMsg.metadata) lastMsg.metadata = {};
              lastMsg.metadata = { ...lastMsg.metadata, ...event.metadata };
              lastMsg.metadata.stage2_metadata = event.metadata.stage2_metadata;
              lastMsg.loading.stage2 = false;
              return { ...prev, messages };
            });
            break;

          case 'stage3_start':
            setCurrentConversation((prev) => {
              const messages = [...prev.messages];
              const lastMsg = messages[messages.length - 1];
              lastMsg.loading.stage3 = true;
              return { ...prev, messages };
            });
            break;

          case 'stage3_complete':
            setCurrentConversation((prev) => {
              const messages = [...prev.messages];
              const lastMsg = messages[messages.length - 1];
              lastMsg.stage3 = event.data;
              if (!lastMsg.metadata) lastMsg.metadata = {};
              lastMsg.metadata.stage3_metadata = event.metadata;
              if (event.metadata?.duration_seconds && lastMsg.metadata.stage1_metadata && lastMsg.metadata.stage2_metadata) {
                const total = (lastMsg.metadata.stage1_metadata.duration_seconds || 0) +
                  (lastMsg.metadata.stage2_metadata.duration_seconds || 0) +
                  (event.metadata.duration_seconds || 0);
                lastMsg.metadata.total_duration_seconds = Math.round(total * 100) / 100;
              }
              lastMsg.loading.stage3 = false;
              return { ...prev, messages };
            });
            break;

          case 'title_complete':
            // Reload conversations to get updated title
            loadConversations();
            break;

          case 'complete':
            // Stream complete; reload conversations list and current conversation
            loadConversations();
            if (currentConversationId) {
              // Reload the conversation to get the saved assistant message
              setTimeout(() => {
                loadConversation(currentConversationId);
              }, 100);
            }
            setIsLoading(false);
            break;

          case 'error':
            console.error('Stream error:', event.message);
            alert(`Error: ${event.message}`);
            setIsLoading(false);
            break;

          case 'stage1_response_failed':
            // Model failed - show notification
            setCurrentConversation((prev) => {
              const messages = [...prev.messages];
              const lastMsg = messages[messages.length - 1];
              if (!lastMsg.failed_models) lastMsg.failed_models = [];
              if (!lastMsg.failed_models.includes(event.model)) {
                lastMsg.failed_models.push(event.model);
              }
              return { ...prev, messages };
            });
            break;

          default:
            console.log('Unknown event type:', eventType);
        }
      });
    } catch (error) {
      console.error('Failed to send message:', error);
      // Remove optimistic messages on error
      setCurrentConversation((prev) => ({
        ...prev,
        messages: prev.messages.slice(0, -2),
      }));
      setIsLoading(false);
    }
  };

  const handleUploadDocument = async (files) => {
    if (!currentConversationId) return;
    setIsUploadingDoc(true);
    try {
      const list = Array.isArray(files) ? files : [files];
      if (list.length <= 1) {
        await api.uploadDocument(currentConversationId, list[0]);
      } else {
        await api.uploadDocuments(currentConversationId, list);
      }
      await loadDocuments(currentConversationId);
    } catch (error) {
      console.error('Failed to upload document:', error);
      alert(error?.message || 'Failed to upload document');
    } finally {
      setIsUploadingDoc(false);
    }
  };

  const handleViewDocument = async (doc) => {
    if (!currentConversationId) return;
    setSelectedDoc(doc);
    setIsLoadingDoc(true);
    try {
      const res = await api.getDocument(currentConversationId, doc.id);
      setSelectedDocContent(res.content || '');
    } catch (error) {
      console.error('Failed to get document:', error);
      alert(error?.message || 'Failed to load document');
      setSelectedDoc(null);
      setSelectedDocContent('');
    } finally {
      setIsLoadingDoc(false);
    }
  };

  const handleDeleteDocument = async (doc) => {
    if (!currentConversationId) return;
    const ok = confirm(`Delete ${doc.filename}?`);
    if (!ok) return;

    try {
      await api.deleteDocument(currentConversationId, doc.id);
      await loadDocuments(currentConversationId);
      if (selectedDoc?.id === doc.id) {
        setSelectedDoc(null);
        setSelectedDocContent('');
      }
    } catch (error) {
      console.error('Failed to delete document:', error);
      alert(error?.message || 'Failed to delete document');
    }
  };

  const handleExportReport = async () => {
    if (!currentConversationId) return;
    try {
      await api.exportConversation(currentConversationId);
    } catch (error) {
      console.error('Failed to export conversation:', error);
      alert(error?.message || 'Failed to export conversation');
    }
  };

  const handleDeleteConversation = async (conversationId, e) => {
    e.stopPropagation(); // Prevent selecting the conversation when clicking delete
    const ok = confirm('Delete this conversation? This cannot be undone.');
    if (!ok) return;

    try {
      await api.deleteConversation(conversationId);
      await loadConversations();
      if (currentConversationId === conversationId) {
        setCurrentConversationId(null);
        setCurrentConversation(null);
      }
    } catch (error) {
      console.error('Failed to delete conversation:', error);
      alert(error?.message || 'Failed to delete conversation');
    }
  };

  const handleRenameConversation = async (conversationId, newTitle) => {
    try {
      await api.updateConversationTitle(conversationId, newTitle);
      await loadConversations();
      // Update current conversation if it's the one being renamed
      if (currentConversationId === conversationId) {
        await loadConversation(conversationId);
      }
    } catch (error) {
      console.error('Failed to rename conversation:', error);
      alert(error?.message || 'Failed to rename conversation');
    }
  };

  return (
    <div className={`app ${theme}`}>
      <Sidebar
        conversations={searchQuery ? filteredConversations : conversations}
        currentConversationId={currentConversationId}
        onSelectConversation={handleSelectConversation}
        onNewConversation={handleNewConversation}
        onDeleteConversation={handleDeleteConversation}
        onRenameConversation={handleRenameConversation}
        theme={theme}
        onToggleTheme={toggleTheme}
        searchQuery={searchQuery}
        onSearchChange={setSearchQuery}
      />
      <ChatInterface
        conversation={currentConversation}
        onSendMessage={handleSendMessage}
        isLoading={isLoading}
        prefillMessage={prefillMessage}
        onPrefillConsumed={() => setPrefillMessage('')}
        documents={documents}
        isUploadingDoc={isUploadingDoc}
        onUploadDocument={handleUploadDocument}
        selectedDoc={selectedDoc}
        selectedDocContent={selectedDocContent}
        isLoadingDoc={isLoadingDoc}
        onViewDocument={handleViewDocument}
        onDeleteDocument={handleDeleteDocument}
        onExportReport={handleExportReport}
      />
    </div>
  );
}

export default App;
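The invalid-`conversation` cleanup in App.jsx boils down to deleting one query parameter and rewriting the URL from its remaining parts. A minimal standalone sketch of that step (the helper name `stripQueryParam` is ours, not part of the app):

```javascript
// Remove one query parameter and rebuild the path + query + hash,
// mirroring the history.replaceState cleanup in App.jsx.
function stripQueryParam(href, key) {
  const u = new URL(href);
  u.searchParams.delete(key); // mutating searchParams updates u.search
  return `${u.pathname}${u.search}${u.hash}`;
}

// stripQueryParam('http://localhost:5173/?conversation=abc', 'conversation') → '/'
```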

frontend/src/api.js (new file, 286 lines)
@@ -0,0 +1,286 @@
/**
 * API client for the LLM Council backend.
 */

const API_BASE = 'http://localhost:8001';

export const api = {
  /**
   * List all conversations.
   */
  async listConversations() {
    const response = await fetch(`${API_BASE}/api/conversations`);
    if (!response.ok) {
      throw new Error('Failed to list conversations');
    }
    return response.json();
  },

  /**
   * Create a new conversation.
   */
  async createConversation() {
    const response = await fetch(`${API_BASE}/api/conversations`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({}),
    });
    if (!response.ok) {
      throw new Error('Failed to create conversation');
    }
    return response.json();
  },

  /**
   * Get a specific conversation.
   */
  async getConversation(conversationId) {
    const response = await fetch(
      `${API_BASE}/api/conversations/${conversationId}`
    );
    if (!response.ok) {
      throw new Error('Failed to get conversation');
    }
    return response.json();
  },

  /**
   * Delete a conversation.
   */
  async deleteConversation(conversationId) {
    const response = await fetch(
      `${API_BASE}/api/conversations/${conversationId}`,
      { method: 'DELETE' }
    );
    if (!response.ok) {
      throw new Error('Failed to delete conversation');
    }
    return response.json();
  },

  /**
   * Update conversation title.
   */
  async updateConversationTitle(conversationId, title) {
    const response = await fetch(
      `${API_BASE}/api/conversations/${conversationId}/title`,
      {
        method: 'PATCH',
        headers: {
          'Content-Type': 'application/json',
        },
        body: JSON.stringify({ title }),
      }
    );
    if (!response.ok) {
      throw new Error('Failed to update conversation title');
    }
    return response.json();
  },

  /**
   * Search conversations by title and content.
   */
  async searchConversations(query) {
    const response = await fetch(
      `${API_BASE}/api/conversations/search?q=${encodeURIComponent(query)}`
    );
    if (!response.ok) {
      throw new Error('Failed to search conversations');
    }
    return response.json();
  },

  /**
   * List uploaded markdown documents for a conversation.
   */
  async listDocuments(conversationId) {
    const response = await fetch(
      `${API_BASE}/api/conversations/${conversationId}/documents`
    );
    if (!response.ok) {
      throw new Error('Failed to list documents');
    }
    return response.json();
  },

  /**
   * Upload a markdown (.md) document for a conversation.
   */
  async uploadDocument(conversationId, file) {
    // Keep single-file backward compatibility (server accepts "file")
    const form = new FormData();
    // Support both File and Blob (handy for tests and some environments)
    form.append('file', file, file?.name || 'document.md');

    const response = await fetch(
      `${API_BASE}/api/conversations/${conversationId}/documents`,
      {
        method: 'POST',
        body: form,
      }
    );
    if (!response.ok) {
      const text = await response.text().catch(() => '');
      throw new Error(text || 'Failed to upload document');
    }
    return response.json();
  },

  /**
   * Upload multiple markdown (.md) documents for a conversation in one request.
   * Returns either { uploaded: [...] } or a single {id, filename, bytes} if only one file was uploaded.
   */
  async uploadDocuments(conversationId, files) {
    const list = Array.isArray(files) ? files : [files];
    const form = new FormData();
    for (const f of list) {
      form.append('files', f, f?.name || 'document.md');
    }

    const response = await fetch(
      `${API_BASE}/api/conversations/${conversationId}/documents`,
      {
        method: 'POST',
        body: form,
      }
    );
    if (!response.ok) {
      const text = await response.text().catch(() => '');
      throw new Error(text || 'Failed to upload documents');
    }
    return response.json();
  },

  /**
   * Fetch a document's markdown content.
   */
  async getDocument(conversationId, docId) {
    const response = await fetch(
      `${API_BASE}/api/conversations/${conversationId}/documents/${docId}`
    );
    if (!response.ok) {
      throw new Error('Failed to get document');
    }
    return response.json();
  },

  /**
   * Delete a document.
   */
  async deleteDocument(conversationId, docId) {
    const response = await fetch(
      `${API_BASE}/api/conversations/${conversationId}/documents/${docId}`,
      { method: 'DELETE' }
    );
    if (!response.ok) {
      throw new Error('Failed to delete document');
    }
    return response.json();
  },

  /**
   * Export conversation as markdown report.
   */
  async exportConversation(conversationId) {
    const response = await fetch(
      `${API_BASE}/api/conversations/${conversationId}/export`
    );
    if (!response.ok) {
      throw new Error('Failed to export conversation');
    }
    const blob = await response.blob();
    const url = window.URL.createObjectURL(blob);
    const a = document.createElement('a');
    a.href = url;
    const contentDisposition = response.headers.get('Content-Disposition');
    if (contentDisposition) {
      const filenameMatch = contentDisposition.match(/filename="(.+)"/);
      if (filenameMatch) {
        a.download = filenameMatch[1];
      }
    } else {
      a.download = `conversation_${conversationId}.md`;
    }
    document.body.appendChild(a);
    a.click();
    window.URL.revokeObjectURL(url);
    document.body.removeChild(a);
  },

  /**
   * Send a message in a conversation.
   */
  async sendMessage(conversationId, content) {
    const response = await fetch(
      `${API_BASE}/api/conversations/${conversationId}/message`,
      {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
        },
        body: JSON.stringify({ content }),
      }
    );
    if (!response.ok) {
      throw new Error('Failed to send message');
    }
    return response.json();
  },

  /**
   * Send a message and receive streaming updates.
   * @param {string} conversationId - The conversation ID
   * @param {string} content - The message content
   * @param {function} onEvent - Callback function for each event: (eventType, data) => void
   * @returns {Promise<void>}
   */
  async sendMessageStream(conversationId, content, onEvent) {
    const response = await fetch(
      `${API_BASE}/api/conversations/${conversationId}/message/stream`,
      {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
        },
        body: JSON.stringify({ content }),
      }
    );

    if (!response.ok) {
      throw new Error('Failed to send message');
    }

    const reader = response.body.getReader();
    const decoder = new TextDecoder();
    // Buffer partial lines: an SSE event can be split across reads.
    let buffer = '';

    while (true) {
      const { done, value } = await reader.read();
      if (done) {
        console.log('[Stream] Done reading');
        break;
      }

      buffer += decoder.decode(value, { stream: true });
      const lines = buffer.split('\n');
      buffer = lines.pop(); // Keep the (possibly incomplete) last line for the next chunk

      for (const line of lines) {
        if (line.startsWith('data: ')) {
          const data = line.slice(6);
          try {
            const event = JSON.parse(data);
            console.log('[Stream] Received event:', event.type);
            onEvent(event.type, event);
          } catch (e) {
            console.error('Failed to parse SSE event:', e, 'Raw data:', data);
          }
        } else if (line.trim()) {
          console.log('[Stream] Non-data line:', line);
        }
      }
    }
  },
};
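The streaming client in api.js frames events as SSE `data:` lines carrying JSON payloads. Factored out as a pure function for illustration (the name `parseSseChunk` is ours, not part of api.js):

```javascript
// Parse the "data: {...}" lines of an SSE chunk into event objects,
// mirroring the line handling inside sendMessageStream.
function parseSseChunk(chunk) {
  const events = [];
  for (const line of chunk.split('\n')) {
    if (!line.startsWith('data: ')) continue; // skip blanks and non-data fields
    try {
      events.push(JSON.parse(line.slice(6)));
    } catch {
      // Malformed or partial payload; the real client logs it and continues.
    }
  }
  return events;
}
```

Keeping the framing logic pure like this makes it easy to unit-test without a live backend.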

frontend/src/assets/react.svg (new file, 1 line)
@@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" class="iconify iconify--logos" width="35.93" height="32" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 228"><path fill="#00D8FF" d="M210.483 73.824a171.49 171.49 0 0 0-8.24-2.597c.465-1.9.893-3.777 1.273-5.621c6.238-30.281 2.16-54.676-11.769-62.708c-13.355-7.7-35.196.329-57.254 19.526a171.23 171.23 0 0 0-6.375 5.848a155.866 155.866 0 0 0-4.241-3.917C100.759 3.829 77.587-4.822 63.673 3.233C50.33 10.957 46.379 33.89 51.995 62.588a170.974 170.974 0 0 0 1.892 8.48c-3.28.932-6.445 1.924-9.474 2.98C17.309 83.498 0 98.307 0 113.668c0 15.865 18.582 31.778 46.812 41.427a145.52 145.52 0 0 0 6.921 2.165a167.467 167.467 0 0 0-2.01 9.138c-5.354 28.2-1.173 50.591 12.134 58.266c13.744 7.926 36.812-.22 59.273-19.855a145.567 145.567 0 0 0 5.342-4.923a168.064 168.064 0 0 0 6.92 6.314c21.758 18.722 43.246 26.282 56.54 18.586c13.731-7.949 18.194-32.003 12.4-61.268a145.016 145.016 0 0 0-1.535-6.842c1.62-.48 3.21-.974 4.76-1.488c29.348-9.723 48.443-25.443 48.443-41.52c0-15.417-17.868-30.326-45.517-39.844Zm-6.365 70.984c-1.4.463-2.836.91-4.3 1.345c-3.24-10.257-7.612-21.163-12.963-32.432c5.106-11 9.31-21.767 12.459-31.957c2.619.758 5.16 1.557 7.61 2.4c23.69 8.156 38.14 20.213 38.14 29.504c0 9.896-15.606 22.743-40.946 31.14Zm-10.514 20.834c2.562 12.94 2.927 24.64 1.23 33.787c-1.524 8.219-4.59 13.698-8.382 15.893c-8.067 4.67-25.32-1.4-43.927-17.412a156.726 156.726 0 0 1-6.437-5.87c7.214-7.889 14.423-17.06 21.459-27.246c12.376-1.098 24.068-2.894 34.671-5.345a134.17 134.17 0 0 1 1.386 6.193ZM87.276 214.515c-7.882 2.783-14.16 2.863-17.955.675c-8.075-4.657-11.432-22.636-6.853-46.752a156.923 156.923 0 0 1 1.869-8.499c10.486 2.32 22.093 3.988 34.498 4.994c7.084 9.967 14.501 19.128 21.976 27.15a134.668 134.668 0 0 1-4.877 4.492c-9.933 8.682-19.886 14.842-28.658 17.94ZM50.35 144.747c-12.483-4.267-22.792-9.812-29.858-15.863c-6.35-5.437-9.555-10.836-9.555-15.216c0-9.322 
13.897-21.212 37.076-29.293c2.813-.98 5.757-1.905 8.812-2.773c3.204 10.42 7.406 21.315 12.477 32.332c-5.137 11.18-9.399 22.249-12.634 32.792a134.718 134.718 0 0 1-6.318-1.979Zm12.378-84.26c-4.811-24.587-1.616-43.134 6.425-47.789c8.564-4.958 27.502 2.111 47.463 19.835a144.318 144.318 0 0 1 3.841 3.545c-7.438 7.987-14.787 17.08-21.808 26.988c-12.04 1.116-23.565 2.908-34.161 5.309a160.342 160.342 0 0 1-1.76-7.887Zm110.427 27.268a347.8 347.8 0 0 0-7.785-12.803c8.168 1.033 15.994 2.404 23.343 4.08c-2.206 7.072-4.956 14.465-8.193 22.045a381.151 381.151 0 0 0-7.365-13.322Zm-45.032-43.861c5.044 5.465 10.096 11.566 15.065 18.186a322.04 322.04 0 0 0-30.257-.006c4.974-6.559 10.069-12.652 15.192-18.18ZM82.802 87.83a323.167 323.167 0 0 0-7.227 13.238c-3.184-7.553-5.909-14.98-8.134-22.152c7.304-1.634 15.093-2.97 23.209-3.984a321.524 321.524 0 0 0-7.848 12.897Zm8.081 65.352c-8.385-.936-16.291-2.203-23.593-3.793c2.26-7.3 5.045-14.885 8.298-22.6a321.187 321.187 0 0 0 7.257 13.246c2.594 4.48 5.28 8.868 8.038 13.147Zm37.542 31.03c-5.184-5.592-10.354-11.779-15.403-18.433c4.902.192 9.899.29 14.978.29c5.218 0 10.376-.117 15.453-.343c-4.985 6.774-10.018 12.97-15.028 18.486Zm52.198-57.817c3.422 7.8 6.306 15.345 8.596 22.52c-7.422 1.694-15.436 3.058-23.88 4.071a382.417 382.417 0 0 0 7.859-13.026a347.403 347.403 0 0 0 7.425-13.565Zm-16.898 8.101a358.557 358.557 0 0 1-12.281 19.815a329.4 329.4 0 0 1-23.444.823c-7.967 0-15.716-.248-23.178-.732a310.202 310.202 0 0 1-12.513-19.846h.001a307.41 307.41 0 0 1-10.923-20.627a310.278 310.278 0 0 1 10.89-20.637l-.001.001a307.318 307.318 0 0 1 12.413-19.761c7.613-.576 15.42-.876 23.31-.876H128c7.926 0 15.743.303 23.354.883a329.357 329.357 0 0 1 12.335 19.695a358.489 358.489 0 0 1 11.036 20.54a329.472 329.472 0 0 1-11 20.722Zm22.56-122.124c8.572 4.944 11.906 24.881 6.52 51.026c-.344 1.668-.73 3.367-1.15 5.09c-10.622-2.452-22.155-4.275-34.23-5.408c-7.034-10.017-14.323-19.124-21.64-27.008a160.789 160.789 0 0 1 5.888-5.4c18.9-16.447 36.564-22.941 
44.612-18.3ZM128 90.808c12.625 0 22.86 10.235 22.86 22.86s-10.235 22.86-22.86 22.86s-22.86-10.235-22.86-22.86s10.235-22.86 22.86-22.86Z"></path></svg>
After Width: | Height: | Size: 4.0 KiB

frontend/src/components/ChatInterface.css (new file, 460 lines)
@@ -0,0 +1,460 @@
.chat-interface {
  flex: 1;
  display: flex;
  flex-direction: column;
  height: 100vh;
  background: var(--bg-primary);
  transition: background-color 0.2s;
}

.docs-bar {
  display: flex;
  align-items: center;
  justify-content: space-between;
  gap: 12px;
  padding: 14px 16px;
  border-bottom: 1px solid var(--border-light);
  background: var(--bg-tertiary);
  transition: background-color 0.2s, border-color 0.2s;
}

.docs-left {
  display: flex;
  flex-direction: column;
  gap: 2px;
}

.docs-title {
  font-size: 12px;
  font-weight: 700;
  letter-spacing: 0.4px;
  text-transform: uppercase;
  color: var(--text-primary);
}

.docs-subtitle {
  font-size: 12px;
  color: var(--text-secondary);
}

.docs-right {
  display: flex;
  gap: 8px;
  align-items: center;
}

.docs-export-btn,
.docs-upload-btn {
  padding: 10px 14px;
  border-radius: 8px;
  border: 1px solid var(--border-color);
  background: var(--bg-primary);
  color: var(--text-primary);
  font-weight: 600;
  cursor: pointer;
  font-size: 13px;
  transition: background-color 0.2s, border-color 0.2s;
}

.docs-export-btn:hover,
.docs-upload-btn:hover {
  background: var(--hover-bg);
}

.docs-upload-btn:disabled,
.docs-export-btn:disabled {
  opacity: 0.6;
  cursor: not-allowed;
}

.docs-list {
  padding: 10px 16px;
  border-bottom: 1px solid var(--border-light);
  background: var(--bg-primary);
  display: flex;
  flex-direction: column;
  gap: 6px;
  transition: background-color 0.2s, border-color 0.2s;
}

.doc-item {
  display: flex;
  justify-content: space-between;
  gap: 12px;
  font-size: 13px;
  color: var(--text-primary);
}

.doc-name-btn {
  font-weight: 700;
  border: 1px solid transparent;
  background: transparent;
  padding: 4px 8px;
  border-radius: 8px;
  text-align: left;
  cursor: pointer;
  overflow: hidden;
  text-overflow: ellipsis;
  white-space: nowrap;
  display: flex;
  align-items: center;
  gap: 6px;
  color: var(--text-primary);
}

.doc-number {
  font-weight: 600;
  color: #4a90e2;
  font-size: 12px;
  background: rgba(74, 144, 226, 0.1);
  padding: 2px 6px;
  border-radius: 4px;
  flex-shrink: 0;
}

.doc-name-btn:hover {
  background: var(--hover-bg);
}

.doc-name-btn.active {
  border-color: var(--border-hover);
  background: var(--hover-bg-light);
}

.doc-actions {
  display: flex;
  align-items: center;
  gap: 10px;
  flex-shrink: 0;
}

.doc-bytes {
  color: var(--text-secondary);
  font-variant-numeric: tabular-nums;
  flex-shrink: 0;
}

.doc-delete-btn {
  padding: 6px 10px;
  border-radius: 8px;
  border: 1px solid var(--border-color);
  background: var(--bg-primary);
  color: var(--text-primary);
  cursor: pointer;
  font-weight: 600;
  transition: background-color 0.2s, border-color 0.2s;
}

.doc-delete-btn:hover {
  border-color: #ffb4b4;
  background: #fff5f5;
}

.doc-preview {
  border-bottom: 1px solid var(--border-light);
  background: var(--bg-primary);
  transition: background-color 0.2s, border-color 0.2s;
||||
}
|
||||
|
||||
.doc-preview-header {
|
||||
padding: 10px 16px;
|
||||
display: flex;
|
||||
justify-content: space-between;
|
||||
gap: 12px;
|
||||
border-bottom: 1px solid var(--border-light);
|
||||
}
|
||||
|
||||
.doc-preview-title {
|
||||
font-weight: 800;
|
||||
font-size: 13px;
|
||||
color: var(--text-primary);
|
||||
overflow: hidden;
|
||||
text-overflow: ellipsis;
|
||||
white-space: nowrap;
|
||||
}
|
||||
|
||||
.doc-preview-meta {
|
||||
font-size: 12px;
|
||||
color: var(--text-secondary);
|
||||
flex-shrink: 0;
|
||||
}
|
||||
|
||||
.doc-preview-body {
|
||||
padding: 12px 16px;
|
||||
max-height: 240px;
|
||||
overflow: auto;
|
||||
}
|
||||
|
||||
.doc-preview-loading {
|
||||
color: var(--text-secondary);
|
||||
font-style: italic;
|
||||
}
|
||||
|
||||
.messages-container {
|
||||
flex: 1;
|
||||
overflow-y: auto;
|
||||
padding: 24px;
|
||||
}
|
||||
|
||||
.empty-state {
|
||||
display: flex;
|
||||
flex-direction: column;
|
||||
align-items: center;
|
||||
justify-content: center;
|
||||
height: 100%;
|
||||
color: var(--text-secondary);
|
||||
text-align: center;
|
||||
}
|
||||
|
||||
.empty-state h2 {
|
||||
margin: 0 0 8px 0;
|
||||
font-size: 24px;
|
||||
color: var(--text-primary);
|
||||
}
|
||||
|
||||
.empty-state p {
|
||||
margin: 0;
|
||||
font-size: 16px;
|
||||
}
|
||||
|
||||
.message-group {
|
||||
margin-bottom: 32px;
|
||||
}
|
||||
|
||||
.user-message,
|
||||
.assistant-message {
|
||||
margin-bottom: 16px;
|
||||
}
|
||||
|
||||
.message-label {
|
||||
font-size: 12px;
|
||||
font-weight: 600;
|
||||
color: var(--text-secondary);
|
||||
margin-bottom: 8px;
|
||||
text-transform: uppercase;
|
||||
letter-spacing: 0.5px;
|
||||
}
|
||||
|
||||
.user-message .message-content {
|
||||
background: var(--hover-bg-light);
|
||||
padding: 16px;
|
||||
border-radius: 8px;
|
||||
border: 1px solid var(--border-hover);
|
||||
color: var(--text-primary);
|
||||
transition: background-color 0.2s, border-color 0.2s;
|
||||
line-height: 1.6;
|
||||
max-width: 80%;
|
||||
white-space: pre-wrap;
|
||||
}
|
||||
|
||||
.loading-indicator {
|
||||
display: flex;
|
||||
align-items: center;
|
||||
gap: 12px;
|
||||
padding: 16px;
|
||||
color: var(--text-secondary);
|
||||
font-size: 14px;
|
||||
}
|
||||
|
||||
.stage-loading {
|
||||
display: flex;
|
||||
align-items: center;
|
||||
gap: 12px;
|
||||
padding: 16px;
|
||||
margin: 12px 0;
|
||||
background: var(--bg-secondary);
|
||||
border-radius: 8px;
|
||||
border: 1px solid var(--border-color);
|
||||
color: var(--text-secondary);
|
||||
font-size: 14px;
|
||||
font-style: italic;
|
||||
transition: background-color 0.2s, border-color 0.2s;
|
||||
}
|
||||
|
||||
.spinner {
|
||||
width: 20px;
|
||||
height: 20px;
|
||||
border: 2px solid #e0e0e0;
|
||||
border-top-color: #4a90e2;
|
||||
border-radius: 50%;
|
||||
animation: spin 0.8s linear infinite;
|
||||
}
|
||||
|
||||
@keyframes spin {
|
||||
to {
|
||||
transform: rotate(360deg);
|
||||
}
|
||||
}
|
||||
|
||||
.input-form {
|
||||
display: flex;
|
||||
align-items: flex-end;
|
||||
gap: 12px;
|
||||
padding: 24px;
|
||||
border-top: 1px solid var(--border-color);
|
||||
background: var(--bg-secondary);
|
||||
transition: background-color 0.2s, border-color 0.2s;
|
||||
}
|
||||
|
||||
.input-wrapper {
|
||||
flex: 1;
|
||||
position: relative;
|
||||
}
|
||||
|
||||
.message-input {
|
||||
width: 100%;
|
||||
padding: 16px;
|
||||
background: var(--bg-primary);
|
||||
color: var(--text-primary);
|
||||
border: 1px solid var(--border-color);
|
||||
border-radius: 8px;
|
||||
transition: background-color 0.2s, border-color 0.2s, color 0.2s;
|
||||
font-size: 16px;
|
||||
font-family: inherit;
|
||||
line-height: 1.6;
|
||||
outline: none;
|
||||
resize: vertical;
|
||||
min-height: 120px;
|
||||
max-height: 400px;
|
||||
box-sizing: border-box;
|
||||
}
|
||||
|
||||
.message-input:focus {
|
||||
border-color: #4a90e2;
|
||||
box-shadow: 0 0 0 3px rgba(74, 144, 226, 0.1);
|
||||
}
|
||||
|
||||
.message-input:disabled {
|
||||
opacity: 0.5;
|
||||
cursor: not-allowed;
|
||||
background: var(--bg-secondary);
|
||||
}
|
||||
|
||||
.send-button {
|
||||
padding: 14px 28px;
|
||||
background: #4a90e2;
|
||||
border: 1px solid #4a90e2;
|
||||
border-radius: 8px;
|
||||
color: #fff;
|
||||
font-size: 15px;
|
||||
font-weight: 600;
|
||||
cursor: pointer;
|
||||
transition: background 0.2s;
|
||||
white-space: nowrap;
|
||||
align-self: flex-end;
|
||||
}
|
||||
|
||||
.send-button:hover:not(:disabled) {
|
||||
background: #357abd;
|
||||
border-color: #357abd;
|
||||
}
|
||||
|
||||
.send-button:disabled {
|
||||
opacity: 0.5;
|
||||
cursor: not-allowed;
|
||||
background: #ccc;
|
||||
border-color: #ccc;
|
||||
}
|
||||
|
||||
.debug-console {
|
||||
border-top: 2px solid var(--border-color);
|
||||
background: var(--bg-secondary);
|
||||
max-height: 300px;
|
||||
display: flex;
|
||||
flex-direction: column;
|
||||
transition: background-color 0.2s, border-color 0.2s;
|
||||
}
|
||||
|
||||
.debug-console-header {
|
||||
padding: 8px 12px;
|
||||
border-bottom: 1px solid var(--border-color);
|
||||
display: flex;
|
||||
justify-content: space-between;
|
||||
align-items: center;
|
||||
font-size: 12px;
|
||||
color: var(--text-secondary);
|
||||
}
|
||||
|
||||
.debug-console-header button {
|
||||
padding: 4px 8px;
|
||||
font-size: 11px;
|
||||
background: var(--bg-primary);
|
||||
border: 1px solid var(--border-color);
|
||||
border-radius: 4px;
|
||||
color: var(--text-primary);
|
||||
cursor: pointer;
|
||||
}
|
||||
|
||||
.debug-console-content {
|
||||
flex: 1;
|
||||
overflow: auto;
|
||||
padding: 12px;
|
||||
font-size: 11px;
|
||||
font-family: monospace;
|
||||
color: var(--text-primary);
|
||||
}
|
||||
|
||||
.debug-console-content pre {
|
||||
margin: 0;
|
||||
white-space: pre-wrap;
|
||||
word-break: break-all;
|
||||
}
|
||||
|
||||
.input-wrapper {
|
||||
flex: 1;
|
||||
position: relative;
|
||||
}
|
||||
|
||||
.autocomplete-dropdown {
|
||||
position: absolute;
|
||||
background: var(--bg-primary);
|
||||
border: 1px solid var(--border-color);
|
||||
border-radius: 8px;
|
||||
box-shadow: 0 4px 12px rgba(0, 0, 0, 0.15);
|
||||
z-index: 1000;
|
||||
max-height: 200px;
|
||||
overflow-y: auto;
|
||||
min-width: 300px;
|
||||
bottom: calc(100% + 8px);
|
||||
left: 0;
|
||||
}
|
||||
|
||||
.autocomplete-item {
|
||||
display: flex;
|
||||
align-items: center;
|
||||
gap: 8px;
|
||||
padding: 10px 12px;
|
||||
cursor: pointer;
|
||||
border-bottom: 1px solid var(--border-light);
|
||||
transition: background-color 0.15s;
|
||||
}
|
||||
|
||||
.autocomplete-item:last-child {
|
||||
border-bottom: none;
|
||||
}
|
||||
|
||||
.autocomplete-item:hover,
|
||||
.autocomplete-item.selected {
|
||||
background: var(--hover-bg);
|
||||
}
|
||||
|
||||
.autocomplete-number {
|
||||
font-weight: 600;
|
||||
color: #4a90e2;
|
||||
font-size: 13px;
|
||||
background: rgba(74, 144, 226, 0.1);
|
||||
padding: 3px 8px;
|
||||
border-radius: 4px;
|
||||
flex-shrink: 0;
|
||||
min-width: 32px;
|
||||
text-align: center;
|
||||
}
|
||||
|
||||
.autocomplete-filename {
|
||||
flex: 1;
|
||||
overflow: hidden;
|
||||
text-overflow: ellipsis;
|
||||
white-space: nowrap;
|
||||
font-size: 14px;
|
||||
color: var(--text-primary);
|
||||
}
|
||||
`frontend/src/components/ChatInterface.jsx` (new file, 424 lines)

```jsx
import { useState, useEffect, useRef } from 'react';
import ReactMarkdown from 'react-markdown';
import Stage1 from './Stage1';
import Stage2 from './Stage2';
import Stage3 from './Stage3';
import './ChatInterface.css';

export default function ChatInterface({
  conversation,
  onSendMessage,
  isLoading,
  prefillMessage = '',
  onPrefillConsumed,
  documents = [],
  isUploadingDoc = false,
  onUploadDocument,
  selectedDoc = null,
  selectedDocContent = '',
  isLoadingDoc = false,
  onViewDocument,
  onDeleteDocument,
  onExportReport,
}) {
  const [input, setInput] = useState('');
  const [showAutocomplete, setShowAutocomplete] = useState(false);
  const [selectedAutocompleteIndex, setSelectedAutocompleteIndex] = useState(0);
  const messagesEndRef = useRef(null);
  const fileInputRef = useRef(null);
  const messageInputRef = useRef(null);
  const autocompleteRef = useRef(null);

  const scrollToBottom = () => {
    messagesEndRef.current?.scrollIntoView({ behavior: 'smooth' });
  };

  useEffect(() => {
    scrollToBottom();
  }, [conversation]);

  // Auto-focus input when a new conversation is created or selected
  useEffect(() => {
    if (conversation && !isLoading) {
      // Small delay to ensure the textarea is rendered
      const timer = setTimeout(() => {
        messageInputRef.current?.focus();
      }, 100);
      return () => clearTimeout(timer);
    }
  }, [conversation?.id, isLoading]);

  // Prefill input (e.g. from make test-setup URL param). Never overrides non-empty input.
  useEffect(() => {
    if (!conversation?.id) return;
    if (!prefillMessage) return;
    if (input.trim() !== '') return;

    setInput(prefillMessage);
    onPrefillConsumed?.();
    // Ensure caret at end
    setTimeout(() => {
      const el = messageInputRef.current;
      if (!el) return;
      el.focus();
      const end = el.value.length;
      el.setSelectionRange(end, end);
    }, 0);
  }, [conversation?.id, prefillMessage, input, onPrefillConsumed]);

  // Close autocomplete when clicking outside
  useEffect(() => {
    const handleClickOutside = (event) => {
      if (
        autocompleteRef.current &&
        !autocompleteRef.current.contains(event.target) &&
        messageInputRef.current &&
        !messageInputRef.current.contains(event.target)
      ) {
        setShowAutocomplete(false);
      }
    };

    if (showAutocomplete) {
      document.addEventListener('mousedown', handleClickOutside);
      return () => {
        document.removeEventListener('mousedown', handleClickOutside);
      };
    }
  }, [showAutocomplete]);

  const handleSubmit = (e) => {
    e.preventDefault();
    if (input.trim() && !isLoading) {
      onSendMessage(input);
      setInput('');
    }
  };

  const handleKeyDown = (e) => {
    // Handle autocomplete navigation
    if (showAutocomplete) {
      if (e.key === 'ArrowDown') {
        e.preventDefault();
        setSelectedAutocompleteIndex((prev) =>
          Math.min(prev + 1, documents.length - 1)
        );
        return;
      }
      if (e.key === 'ArrowUp') {
        e.preventDefault();
        setSelectedAutocompleteIndex((prev) => Math.max(prev - 1, 0));
        return;
      }
      if (e.key === 'Enter' || e.key === 'Tab') {
        e.preventDefault();
        insertDocumentReference(selectedAutocompleteIndex);
        return;
      }
      if (e.key === 'Escape') {
        e.preventDefault();
        setShowAutocomplete(false);
        return;
      }
    }

    // Submit on Enter (without Shift)
    if (e.key === 'Enter' && !e.shiftKey && !showAutocomplete) {
      e.preventDefault();
      handleSubmit(e);
    }
  };

  const insertDocumentReference = (index) => {
    const ref = `@${index + 1}`;
    const textarea = messageInputRef.current;
    if (!textarea) return;

    const cursorPos = textarea.selectionStart;
    const textBefore = input.substring(0, cursorPos);
    const textAfter = input.substring(cursorPos);

    // Find the @ symbol position
    const atPos = textBefore.lastIndexOf('@');
    if (atPos === -1) return;

    // Replace from @ to cursor with the reference
    const newText = input.substring(0, atPos) + ref + ' ' + textAfter;
    setInput(newText);
    setShowAutocomplete(false);

    // Set cursor position after the inserted reference
    setTimeout(() => {
      const newCursorPos = atPos + ref.length + 1;
      textarea.setSelectionRange(newCursorPos, newCursorPos);
      textarea.focus();
    }, 0);
  };

  const handleInputChange = (e) => {
    const value = e.target.value;
    setInput(value);

    const cursorPos = e.target.selectionStart;
    const textBefore = value.substring(0, cursorPos);
    const lastAt = textBefore.lastIndexOf('@');

    // Check if we're typing after an @ symbol
    if (lastAt !== -1) {
      const textAfterAt = textBefore.substring(lastAt + 1);
      // Only show autocomplete if there's no space or newline after @
      if (!textAfterAt.match(/[\s\n]/) && documents.length > 0) {
        setShowAutocomplete(true);
        setSelectedAutocompleteIndex(0);
      } else {
        setShowAutocomplete(false);
      }
    } else {
      setShowAutocomplete(false);
    }
  };

  if (!conversation) {
    return (
      <div className="chat-interface">
        <div className="empty-state">
          <h2>Welcome to LLM Council</h2>
          <p>Create a new conversation to get started</p>
        </div>
      </div>
    );
  }

  const handlePickFile = () => {
    if (!fileInputRef.current) return;
    fileInputRef.current.value = '';
    fileInputRef.current.click();
  };

  const handleFileChange = async (e) => {
    const selected = Array.from(e.target.files || []);
    if (selected.length === 0) return;
    const nonMd = selected.find((f) => !f.name.toLowerCase().endsWith('.md'));
    if (nonMd) {
      alert('Only .md files are supported');
      return;
    }
    if (onUploadDocument) {
      await onUploadDocument(selected);
    }
  };

  return (
    <div className="chat-interface">
      <div className="docs-bar">
        <div className="docs-left">
          <div className="docs-title">Documents</div>
          <div className="docs-subtitle">
            {documents.length === 0
              ? 'No .md files attached'
              : `${documents.length} attached`}
          </div>
        </div>

        <div className="docs-right">
          <input
            ref={fileInputRef}
            type="file"
            multiple
            accept=".md,text/markdown"
            style={{ display: 'none' }}
            onChange={handleFileChange}
          />
          {onExportReport && conversation?.messages?.length > 0 && (
            <button
              type="button"
              className="docs-export-btn"
              onClick={() => onExportReport()}
              disabled={isLoading}
              title="Export conversation as markdown report"
            >
              📄 Export Report
            </button>
          )}
          <button
            type="button"
            className="docs-upload-btn"
            onClick={handlePickFile}
            disabled={isUploadingDoc || isLoading}
          >
            {isUploadingDoc ? 'Uploading…' : 'Upload .md'}
          </button>
        </div>
      </div>

      {documents.length > 0 && (
        <div className="docs-list">
          {documents.map((d, index) => (
            <div key={d.id} className="doc-item">
              <button
                type="button"
                className={`doc-name-btn ${selectedDoc?.id === d.id ? 'active' : ''}`}
                onClick={() => onViewDocument && onViewDocument(d)}
                title="View document"
              >
                <span className="doc-number">@{index + 1}</span> {d.filename}
              </button>
              <div className="doc-actions">
                <span className="doc-bytes">{d.bytes} bytes</span>
                <button
                  type="button"
                  className="doc-delete-btn"
                  onClick={() => onDeleteDocument && onDeleteDocument(d)}
                  title="Remove document"
                >
                  Remove
                </button>
              </div>
            </div>
          ))}
        </div>
      )}

      {selectedDoc && (
        <div className="doc-preview">
          <div className="doc-preview-header">
            <div className="doc-preview-title">{selectedDoc.filename}</div>
            <div className="doc-preview-meta">{selectedDoc.bytes} bytes</div>
          </div>
          <div className="doc-preview-body">
            {isLoadingDoc ? (
              <div className="doc-preview-loading">Loading…</div>
            ) : (
              <div className="markdown-content">
                <ReactMarkdown>{selectedDocContent}</ReactMarkdown>
              </div>
            )}
          </div>
        </div>
      )}

      <div className="messages-container">
        {conversation.messages.length === 0 ? (
          <div className="empty-state">
            <h2>Start a conversation</h2>
            <p>Ask a question to consult the LLM Council</p>
          </div>
        ) : (
          conversation.messages.map((msg, index) => (
            <div key={index} className="message-group">
              {msg.role === 'user' ? (
                <div className="user-message">
                  <div className="message-label">You</div>
                  <div className="message-content">
                    <div className="markdown-content">
                      <ReactMarkdown>{msg.content}</ReactMarkdown>
                    </div>
                  </div>
                </div>
              ) : (
                <div className="assistant-message">
                  <div className="message-label">LLM Council</div>

                  {/* Stage 1 */}
                  {msg.loading?.stage1 && (
                    <div className="stage-loading">
                      <div className="spinner"></div>
                      <span>Running Stage 1: Collecting individual responses...</span>
                    </div>
                  )}
                  {msg.stage1 && (
                    <Stage1
                      responses={msg.stage1}
                      metadata={msg.metadata?.stage1_metadata}
                    />
                  )}

                  {/* Stage 2 */}
                  {msg.loading?.stage2 && (
                    <div className="stage-loading">
                      <div className="spinner"></div>
                      <span>Running Stage 2: Peer rankings...</span>
                    </div>
                  )}
                  {msg.stage2 && (
                    <Stage2
                      rankings={msg.stage2}
                      labelToModel={msg.metadata?.label_to_model}
                      aggregateRankings={msg.metadata?.aggregate_rankings}
                      metadata={msg.metadata?.stage2_metadata}
                    />
                  )}

                  {/* Stage 3 */}
                  {msg.loading?.stage3 && (
                    <div className="stage-loading">
                      <div className="spinner"></div>
                      <span>Running Stage 3: Final synthesis...</span>
                    </div>
                  )}
                  {msg.stage3 && (
                    <Stage3
                      finalResponse={msg.stage3}
                      metadata={msg.metadata?.stage3_metadata}
                      totalDuration={msg.metadata?.total_duration_seconds}
                    />
                  )}
                </div>
              )}
            </div>
          ))
        )}

        {isLoading && (
          <div className="loading-indicator">
            <div className="spinner"></div>
            <span>Consulting the council...</span>
          </div>
        )}

        <div ref={messagesEndRef} />
      </div>

      <form className="input-form" onSubmit={handleSubmit}>
        <div className="input-wrapper">
          <textarea
            ref={messageInputRef}
            className="message-input"
            placeholder="Ask your question... (Type @ to reference documents, Shift+Enter for new line, Enter to send)"
            value={input}
            onChange={handleInputChange}
            onKeyDown={handleKeyDown}
            disabled={isLoading}
          />
          {showAutocomplete && documents.length > 0 && (
            <div
              ref={autocompleteRef}
              className="autocomplete-dropdown"
            >
              {documents.map((doc, index) => (
                <div
                  key={doc.id}
                  className={`autocomplete-item ${
                    selectedAutocompleteIndex === index ? 'selected' : ''
                  }`}
                  onClick={() => insertDocumentReference(index)}
                  onMouseEnter={() => setSelectedAutocompleteIndex(index)}
                >
                  <span className="autocomplete-number">@{index + 1}</span>
                  <span className="autocomplete-filename">{doc.filename}</span>
                </div>
              ))}
            </div>
          )}
        </div>
        <button
          type="submit"
          className="send-button"
          disabled={!input.trim() || isLoading}
        >
          Send
        </button>
      </form>
    </div>
  );
}
```
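The string manipulation inside `insertDocumentReference` can be exercised outside React as a pure function. This is a sketch for illustration only; `insertRef` is a hypothetical extraction, not part of the diff:

```javascript
// Sketch of the replacement logic: everything from the last '@' before the
// cursor up to the cursor is replaced with a numbered reference like "@3 ".
function insertRef(text, cursorPos, index) {
  const textBefore = text.substring(0, cursorPos);
  const textAfter = text.substring(cursorPos);
  const atPos = textBefore.lastIndexOf('@');
  if (atPos === -1) return text; // no '@' to complete; leave input untouched
  const ref = `@${index + 1}`;
  return text.substring(0, atPos) + ref + ' ' + textAfter;
}

// Completing a partial "@re" at the end of the input with document #3:
console.log(insertRef('Check @re', 9, 2)); // → "Check @3 "
```

Keeping this logic free of DOM access would also make it straightforward to unit-test the fuzzy @filename matching described in the PR summary.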
`frontend/src/components/Sidebar.css` (new file, 214 lines)

```css
.sidebar {
  width: 260px;
  background: var(--bg-secondary);
  border-right: 1px solid var(--border-color);
  display: flex;
  flex-direction: column;
  height: 100vh;
  transition: background-color 0.2s, border-color 0.2s;
}

.sidebar-header {
  padding: 16px;
  border-bottom: 1px solid var(--border-color);
}

.sidebar-header-top {
  display: flex;
  justify-content: space-between;
  align-items: center;
  width: 100%;
  margin-bottom: 12px;
}

.sidebar-header-actions {
  display: flex;
  gap: 6px;
  align-items: center;
}

.sidebar-header h1 {
  font-size: 18px;
  margin: 0;
  color: var(--text-primary);
}

.theme-toggle-btn,
.debug-toggle-btn {
  background: var(--bg-primary);
  border: 1px solid var(--border-color);
  border-radius: 6px;
  padding: 6px 10px;
  font-size: 16px;
  cursor: pointer;
  transition: background 0.2s, border-color 0.2s;
  line-height: 1;
  color: var(--text-primary);
  min-width: 36px;
  display: flex;
  align-items: center;
  justify-content: center;
}

.theme-toggle-btn:hover,
.debug-toggle-btn:hover {
  background: var(--hover-bg);
  border-color: var(--active-border);
}

.debug-toggle-btn {
  font-size: 14px;
}

.new-conversation-btn {
  width: 100%;
  padding: 10px;
  background: #4a90e2;
  border: 1px solid #4a90e2;
  border-radius: 6px;
  color: #fff;
  cursor: pointer;
  font-size: 14px;
  transition: background 0.2s;
  font-weight: 500;
}

.new-conversation-btn:hover {
  background: #357abd;
  border-color: #357abd;
}

.search-box {
  padding: 12px 16px;
  border-bottom: 1px solid var(--border-color);
}

.search-input {
  width: 100%;
  padding: 8px 12px;
  background: var(--bg-primary);
  border: 1px solid var(--border-color);
  border-radius: 6px;
  color: var(--text-primary);
  font-size: 14px;
  outline: none;
  transition: border-color 0.2s;
}

.search-input:focus {
  border-color: #4a90e2;
  box-shadow: 0 0 0 2px rgba(74, 144, 226, 0.1);
}

.search-input::placeholder {
  color: var(--text-tertiary);
}

.conversation-list {
  flex: 1;
  overflow-y: auto;
  padding: 8px;
}

.no-conversations {
  padding: 16px;
  text-align: center;
  color: var(--text-tertiary);
  font-size: 14px;
}

.conversation-item {
  padding: 12px;
  margin-bottom: 4px;
  border-radius: 6px;
  cursor: pointer;
  transition: background 0.2s;
  display: flex;
  justify-content: space-between;
  align-items: center;
}

.conversation-item:hover {
  background: var(--hover-bg);
}

.conversation-item.active {
  background: var(--hover-bg-light);
  border: 1px solid #4a90e2;
}

.conversation-content {
  flex: 1;
  min-width: 0;
}

.conversation-title {
  color: var(--text-primary);
  font-size: 14px;
  margin-bottom: 4px;
  word-wrap: break-word;
}

.conversation-meta {
  color: var(--text-tertiary);
  font-size: 12px;
}

.conversation-actions {
  display: flex;
  gap: 4px;
  margin-left: 8px;
  opacity: 0;
  transition: opacity 0.2s;
}

.conversation-item:hover .conversation-actions {
  opacity: 1;
}

.conversation-delete-btn {
  background: transparent;
  border: none;
  cursor: pointer;
  padding: 4px 8px;
  border-radius: 4px;
  font-size: 16px;
  color: var(--text-secondary);
  transition: background 0.2s, color 0.2s;
  line-height: 1;
}

.conversation-delete-btn:hover {
  background: #fee;
  color: #c33;
}

.conversation-rename-btn {
  background: transparent;
  border: none;
  cursor: pointer;
  padding: 4px 8px;
  border-radius: 4px;
  font-size: 14px;
  color: var(--text-secondary);
  transition: background 0.2s, color 0.2s;
  line-height: 1;
}

.conversation-rename-btn:hover {
  background: var(--hover-bg);
  color: var(--text-primary);
}

.conversation-title-edit {
  width: 100%;
  padding: 4px 8px;
  background: var(--bg-primary);
  border: 1px solid #4a90e2;
  border-radius: 4px;
  color: var(--text-primary);
  font-size: 14px;
  font-family: inherit;
  outline: none;
}
```
`frontend/src/components/Sidebar.jsx` (new file, 141 lines)

```jsx
import { useState, useEffect } from 'react';
import './Sidebar.css';

export default function Sidebar({
  conversations,
  currentConversationId,
  onSelectConversation,
  onNewConversation,
  onDeleteConversation,
  onRenameConversation,
  theme = 'light',
  onToggleTheme,
  searchQuery = '',
  onSearchChange,
}) {
  const [editingId, setEditingId] = useState(null);
  const [editValue, setEditValue] = useState('');

  const handleStartEdit = (conv, e) => {
    e.stopPropagation();
    setEditingId(conv.id);
    setEditValue(conv.title || 'New Conversation');
  };

  const handleSaveEdit = async (convId, e) => {
    e.stopPropagation();
    if (onRenameConversation && editValue.trim()) {
      await onRenameConversation(convId, editValue.trim());
    }
    setEditingId(null);
    setEditValue('');
  };

  const handleCancelEdit = (e) => {
    e.stopPropagation();
    setEditingId(null);
    setEditValue('');
  };

  const handleKeyDown = (e, convId) => {
    if (e.key === 'Enter') {
      handleSaveEdit(convId, e);
    } else if (e.key === 'Escape') {
      handleCancelEdit(e);
    }
  };

  return (
    <div className="sidebar">
      <div className="sidebar-header">
        <div className="sidebar-header-top">
          <h1>LLM Council</h1>
          <div className="sidebar-header-actions">
            {onToggleTheme && (
              <button
                className="theme-toggle-btn"
                onClick={onToggleTheme}
                title={theme === 'light' ? 'Switch to dark mode' : 'Switch to light mode'}
              >
                {theme === 'light' ? '🌙' : '☀️'}
              </button>
            )}
          </div>
        </div>
        <button className="new-conversation-btn" onClick={onNewConversation}>
          + New Conversation
        </button>
      </div>

      <div className="search-box">
        <input
          type="text"
          className="search-input"
          placeholder="Search conversations..."
          value={searchQuery}
          onChange={(e) => onSearchChange && onSearchChange(e.target.value)}
        />
      </div>

      <div className="conversation-list">
        {conversations.length === 0 ? (
          <div className="no-conversations">No conversations yet</div>
        ) : (
          conversations.map((conv) => (
            <div
              key={conv.id}
              className={`conversation-item ${
                conv.id === currentConversationId ? 'active' : ''
              }`}
              onClick={() => onSelectConversation(conv.id)}
            >
              <div className="conversation-content">
                {editingId === conv.id ? (
                  <input
                    type="text"
                    className="conversation-title-edit"
                    value={editValue}
                    onChange={(e) => setEditValue(e.target.value)}
                    onBlur={(e) => handleSaveEdit(conv.id, e)}
                    onKeyDown={(e) => handleKeyDown(e, conv.id)}
                    onClick={(e) => e.stopPropagation()}
                    autoFocus
                  />
                ) : (
                  <>
                    <div className="conversation-title">
                      {conv.title || 'New Conversation'}
                    </div>
                    <div className="conversation-meta">
                      {conv.message_count} messages
                    </div>
                  </>
                )}
              </div>
              <div className="conversation-actions" onClick={(e) => e.stopPropagation()}>
                {editingId !== conv.id && onRenameConversation && (
                  <button
                    className="conversation-rename-btn"
                    onClick={(e) => handleStartEdit(conv, e)}
                    title="Rename"
                  >
                    ✏️
                  </button>
                )}
                {onDeleteConversation && (
                  <button
                    className="conversation-delete-btn"
                    onClick={(e) => onDeleteConversation(conv.id, e)}
                    title="Delete"
                  >
                    ×
                  </button>
                )}
              </div>
            </div>
          ))
        )}
      </div>
    </div>
  );
}
```
`frontend/src/components/Stage1.css` (new file, 80 lines)

```css
.stage {
  margin: 24px 0;
  padding: 20px;
  background: var(--bg-secondary);
  border-radius: 8px;
  border: 1px solid var(--border-color);
  transition: background-color 0.2s, border-color 0.2s;
}

.stage-header {
  display: flex;
  justify-content: space-between;
  align-items: center;
  margin-bottom: 12px;
}

.stage-meta {
  font-size: 12px;
  color: var(--text-secondary);
  font-weight: normal;
}

.stage-title {
  margin: 0;
  color: var(--text-primary);
  font-size: 16px;
  font-weight: 600;
}

.tabs {
  display: flex;
  gap: 8px;
  margin-bottom: 16px;
  flex-wrap: wrap;
}

.tab {
  padding: 8px 16px;
  background: var(--bg-primary);
  border: 1px solid var(--border-color);
  border-radius: 6px 6px 0 0;
  color: var(--text-secondary);
  cursor: pointer;
  font-size: 14px;
  transition: all 0.2s;
}

.tab:hover {
  background: var(--hover-bg);
  color: var(--text-primary);
  border-color: #4a90e2;
}

.tab.active {
  background: var(--bg-primary);
  color: #4a90e2;
  border-color: #4a90e2;
  border-bottom-color: var(--bg-primary);
  font-weight: 600;
}

.tab-content {
  background: var(--bg-primary);
  padding: 16px;
  border-radius: 6px;
  border: 1px solid var(--border-color);
  transition: background-color 0.2s, border-color 0.2s;
}

.model-name {
  color: var(--text-secondary);
  font-size: 12px;
  margin-bottom: 12px;
  font-family: monospace;
}

.response-text {
  color: var(--text-primary);
  line-height: 1.6;
}
```
47
frontend/src/components/Stage1.jsx
Normal file
47
frontend/src/components/Stage1.jsx
Normal file
@ -0,0 +1,47 @@
import { useState } from 'react';
import ReactMarkdown from 'react-markdown';
import './Stage1.css';

export default function Stage1({ responses, metadata }) {
  const [activeTab, setActiveTab] = useState(0);

  if (!responses || responses.length === 0) {
    return null;
  }

  const duration = metadata?.duration_seconds;
  const successful = metadata?.successful_models?.length || 0;
  const total = metadata?.total_models || 0;

  return (
    <div className="stage stage1">
      <div className="stage-header">
        <h3 className="stage-title">Stage 1: Individual Responses</h3>
        {duration && (
          <div className="stage-meta">
            {duration}s | {successful}/{total} models
          </div>
        )}
      </div>

      <div className="tabs">
        {responses.map((resp, index) => (
          <button
            key={index}
            className={`tab ${activeTab === index ? 'active' : ''}`}
            onClick={() => setActiveTab(index)}
          >
            {resp.model.split('/')[1] || resp.model}
          </button>
        ))}
      </div>

      <div className="tab-content">
        <div className="model-name">{responses[activeTab].model}</div>
        <div className="response-text markdown-content">
          <ReactMarkdown>{responses[activeTab].response}</ReactMarkdown>
        </div>
      </div>
    </div>
  );
}
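The `model.split('/')[1] || model` idiom for shortening `provider/model` IDs recurs in every stage component above. A sketch of factoring it out (the name `shortModelName` is hypothetical, not part of the codebase):

```javascript
// Hypothetical helper: drop the provider prefix from a "provider/model" ID.
// Falls back to the full string when there is no "/" separator, matching
// the `model.split('/')[1] || model` idiom used in the stage components.
function shortModelName(model) {
  return model.split('/')[1] || model;
}

console.log(shortModelName('openai/gpt-4o')); // "gpt-4o"
console.log(shortModelName('llama3.1:8b'));   // "llama3.1:8b"
```

Each component could then render `{shortModelName(resp.model)}` instead of repeating the split.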
frontend/src/components/Stage2.css (Normal file, 170 lines)
@@ -0,0 +1,170 @@
.stage-header {
  display: flex;
  justify-content: space-between;
  align-items: center;
  margin-bottom: 12px;
}

.stage-meta {
  font-size: 12px;
  color: var(--text-secondary);
  font-weight: normal;
}

.stage2 {
  background: var(--bg-secondary);
}

.stage2 h4 {
  margin: 20px 0 8px 0;
  color: var(--text-primary);
  font-size: 14px;
  font-weight: 600;
}

.stage2 h4:first-of-type {
  margin-top: 0;
}

.stage-description {
  margin: 0 0 12px 0;
  color: var(--text-secondary);
  font-size: 13px;
  line-height: 1.5;
}

.aggregate-rankings {
  background: var(--hover-bg-light);
  padding: 16px;
  border-radius: 8px;
  margin-bottom: 20px;
  border: 2px solid var(--border-hover);
  transition: background-color 0.2s, border-color 0.2s;
}

.aggregate-rankings h4 {
  margin: 0 0 12px 0;
  color: #2a7ae2;
  font-size: 15px;
}

.aggregate-list {
  display: flex;
  flex-direction: column;
  gap: 8px;
}

.aggregate-item {
  display: flex;
  align-items: center;
  gap: 12px;
  padding: 10px;
  background: var(--bg-primary);
  border-radius: 6px;
  border: 1px solid var(--border-hover);
  transition: background-color 0.2s, border-color 0.2s;
}

.rank-position {
  color: #2a7ae2;
  font-weight: 700;
  font-size: 16px;
  min-width: 35px;
}

.rank-model {
  flex: 1;
  color: var(--text-primary);
  font-family: monospace;
  font-size: 14px;
  font-weight: 500;
}

.rank-score {
  color: var(--text-secondary);
  font-size: 13px;
  font-family: monospace;
}

.stage2 .tabs {
  display: flex;
  gap: 8px;
  margin-bottom: 16px;
  flex-wrap: wrap;
}

.stage2 .tab {
  padding: 8px 16px;
  background: var(--bg-primary);
  border: 1px solid var(--border-color);
  border-radius: 6px 6px 0 0;
  color: var(--text-secondary);
  cursor: pointer;
  font-size: 14px;
  transition: all 0.2s;
}

.stage2 .tab:hover {
  background: var(--hover-bg);
  color: var(--text-primary);
  border-color: #4a90e2;
}

.stage2 .tab.active {
  background: var(--bg-primary);
  color: #4a90e2;
  border-color: #4a90e2;
  border-bottom-color: var(--bg-primary);
  font-weight: 600;
}

.stage2 .tab-content {
  background: var(--bg-primary);
  padding: 16px;
  border-radius: 6px;
  border: 1px solid var(--border-color);
  margin-bottom: 20px;
  transition: background-color 0.2s, border-color 0.2s;
}

.ranking-model {
  color: var(--text-secondary);
  font-size: 12px;
  font-family: monospace;
  margin-bottom: 12px;
}

.ranking-content {
  color: var(--text-primary);
  line-height: 1.6;
  font-size: 14px;
}

.parsed-ranking {
  margin-top: 16px;
  padding-top: 16px;
  border-top: 2px solid var(--border-color);
  transition: border-color 0.2s;
}

.parsed-ranking strong {
  color: #2a7ae2;
  font-size: 13px;
}

.parsed-ranking ol {
  margin: 8px 0 0 0;
  padding-left: 24px;
  color: var(--text-primary);
}

.parsed-ranking li {
  margin: 4px 0;
  font-family: monospace;
  font-size: 13px;
}

.rank-count {
  color: var(--text-tertiary);
  font-size: 12px;
}

frontend/src/components/Stage2.jsx (Normal file, 110 lines)
@@ -0,0 +1,110 @@
import { useState } from 'react';
import ReactMarkdown from 'react-markdown';
import './Stage2.css';

function deAnonymizeText(text, labelToModel) {
  if (!labelToModel) return text;

  let result = text;
  // Replace each "Response X" with the actual model name
  Object.entries(labelToModel).forEach(([label, model]) => {
    const modelShortName = model.split('/')[1] || model;
    result = result.replace(new RegExp(label, 'g'), `**${modelShortName}**`);
  });
  return result;
}

export default function Stage2({ rankings, labelToModel, aggregateRankings, metadata }) {
  const [activeTab, setActiveTab] = useState(0);

  if (!rankings || rankings.length === 0) {
    return null;
  }

  const duration = metadata?.duration_seconds;
  const successful = metadata?.successful_models?.length || 0;
  const total = metadata?.total_models || 0;

  return (
    <div className="stage stage2">
      <div className="stage-header">
        <h3 className="stage-title">Stage 2: Peer Rankings</h3>
        {duration && (
          <div className="stage-meta">
            {duration}s | {successful}/{total} models
          </div>
        )}
      </div>

      <h4>Raw Evaluations</h4>
      <p className="stage-description">
        Each model evaluated all responses (anonymized as Response A, B, C, etc.) and provided rankings.
        Below, model names are shown in <strong>bold</strong> for readability, but the original evaluation used anonymous labels.
      </p>

      <div className="tabs">
        {rankings.map((rank, index) => (
          <button
            key={index}
            className={`tab ${activeTab === index ? 'active' : ''}`}
            onClick={() => setActiveTab(index)}
          >
            {rank.model.split('/')[1] || rank.model}
          </button>
        ))}
      </div>

      <div className="tab-content">
        <div className="ranking-model">
          {rankings[activeTab].model}
        </div>
        <div className="ranking-content markdown-content">
          <ReactMarkdown>
            {deAnonymizeText(rankings[activeTab].ranking, labelToModel)}
          </ReactMarkdown>
        </div>

        {rankings[activeTab].parsed_ranking &&
          rankings[activeTab].parsed_ranking.length > 0 && (
            <div className="parsed-ranking">
              <strong>Extracted Ranking:</strong>
              <ol>
                {rankings[activeTab].parsed_ranking.map((label, i) => (
                  <li key={i}>
                    {labelToModel && labelToModel[label]
                      ? labelToModel[label].split('/')[1] || labelToModel[label]
                      : label}
                  </li>
                ))}
              </ol>
            </div>
          )}
      </div>

      {aggregateRankings && aggregateRankings.length > 0 && (
        <div className="aggregate-rankings">
          <h4>Aggregate Rankings (Street Cred)</h4>
          <p className="stage-description">
            Combined results across all peer evaluations (lower score is better):
          </p>
          <div className="aggregate-list">
            {aggregateRankings.map((agg, index) => (
              <div key={index} className="aggregate-item">
                <span className="rank-position">#{index + 1}</span>
                <span className="rank-model">
                  {agg.model.split('/')[1] || agg.model}
                </span>
                <span className="rank-score">
                  Avg: {agg.average_rank.toFixed(2)}
                </span>
                <span className="rank-count">
                  ({agg.rankings_count} votes)
                </span>
              </div>
            ))}
          </div>
        </div>
      )}
    </div>
  );
}
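`deAnonymizeText` builds a `RegExp` directly from each label. That is safe for labels like `Response A`, but would misbehave if a label ever contained regex metacharacters. A defensive variant could escape the label first; a sketch (the `escapeRegExp` and `replaceLabel` helpers are illustrations, not part of the codebase):

```javascript
// Escape regex metacharacters so a label is matched literally.
function escapeRegExp(s) {
  return s.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
}

// Same replacement deAnonymizeText performs per label, but robust to
// labels that contain regex special characters.
function replaceLabel(text, label, modelShortName) {
  return text.replace(new RegExp(escapeRegExp(label), 'g'), `**${modelShortName}**`);
}

console.log(replaceLabel('Response A wins.', 'Response A', 'gpt-4o')); // "**gpt-4o** wins."
```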
frontend/src/components/Stage3.css (Normal file, 47 lines)
@@ -0,0 +1,47 @@
.stage-header {
  display: flex;
  justify-content: space-between;
  align-items: center;
  margin-bottom: 12px;
}

.stage-meta {
  font-size: 12px;
  color: var(--text-secondary);
  font-weight: normal;
}

.total-duration {
  margin-top: 12px;
  padding-top: 12px;
  border-top: 1px solid var(--border-light);
  font-size: 13px;
  color: var(--text-secondary);
}

.stage3 {
  background: var(--bg-secondary);
  border-color: var(--border-color);
}

.final-response {
  background: var(--bg-primary);
  padding: 20px;
  border-radius: 6px;
  border: 1px solid var(--border-color);
  transition: background-color 0.2s, border-color 0.2s;
}

.chairman-label {
  color: var(--text-primary);
  font-size: 12px;
  font-family: monospace;
  margin-bottom: 12px;
  font-weight: 600;
}

.final-text {
  color: var(--text-primary);
  line-height: 1.7;
  font-size: 15px;
}

frontend/src/components/Stage3.jsx (Normal file, 37 lines)
@@ -0,0 +1,37 @@
import ReactMarkdown from 'react-markdown';
import './Stage3.css';

export default function Stage3({ finalResponse, metadata, totalDuration }) {
  if (!finalResponse) {
    return null;
  }

  const duration = metadata?.duration_seconds;
  const model = metadata?.model || finalResponse.model;

  return (
    <div className="stage stage3">
      <div className="stage-header">
        <h3 className="stage-title">Stage 3: Final Council Answer</h3>
        {duration && (
          <div className="stage-meta">
            {duration}s
          </div>
        )}
      </div>
      <div className="final-response">
        <div className="chairman-label">
          Chairman: {model.split('/')[1] || model}
        </div>
        <div className="final-text markdown-content">
          <ReactMarkdown>{finalResponse.response}</ReactMarkdown>
        </div>
        {totalDuration && (
          <div className="total-duration">
            Total processing time: <strong>{totalDuration}s</strong>
          </div>
        )}
      </div>
    </div>
  );
}
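One subtlety shared by all three stage components: the `{duration && (...)}` guards treat a `duration_seconds` of `0` as "no duration" and hide the badge, since `0` is falsy. If a zero duration should still render, an explicit null check is needed; a sketch (the `hasDuration` name is an invention for illustration):

```javascript
// Treat 0 as a real duration; hide the badge only for null/undefined.
function hasDuration(duration) {
  return duration !== null && duration !== undefined;
}

console.log(hasDuration(0));         // true
console.log(hasDuration(undefined)); // false
console.log(hasDuration(3.2));       // true
```

In JSX the guard would then read `{hasDuration(duration) && (...)}`.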
frontend/src/index.css (Normal file, 140 lines)
@@ -0,0 +1,140 @@
:root {
  font-family: system-ui, -apple-system, BlinkMacSystemFont, 'Segoe UI', 'Roboto', sans-serif;
  line-height: 1.5;
  font-weight: 400;

  font-synthesis: none;
  text-rendering: optimizeLegibility;
  -webkit-font-smoothing: antialiased;
  -moz-osx-font-smoothing: grayscale;
}

/* Light theme (default) */
:root,
[data-theme="light"] {
  --bg-primary: #ffffff;
  --bg-secondary: #f8f8f8;
  --bg-tertiary: #fbfbfb;
  --bg-code: #f5f5f5;
  --text-primary: #333333;
  --text-secondary: #666666;
  --text-tertiary: #999999;
  --border-color: #e0e0e0;
  --border-light: #eaeaea;
  --hover-bg: #f5f5f5;
  --hover-bg-light: #f0f7ff;
  --border-hover: #d0e7ff;
}

/* Dark theme - improved contrast */
[data-theme="dark"] {
  --bg-primary: #1a1a1a;
  --bg-secondary: #252525;
  --bg-tertiary: #2a2a2a;
  --bg-code: #2d2d2d;
  --text-primary: #ffffff;
  --text-secondary: #e0e0e0;
  --text-tertiary: #b0b0b0;
  --border-color: #404040;
  --border-light: #353535;
  --hover-bg: #303030;
  --hover-bg-light: #2a3a4a;
  --border-hover: #4a5a6a;
  --active-border: #5aa0f2;
  --user-message-bg: #1e2a3a;
  --user-message-border: #2a3a4a;
  --stage-bg: #252525;
  --stage-border: #404040;
  --stage3-bg: #1e2a1e;
  --stage3-border: #2a3a2a;
}

* {
  margin: 0;
  padding: 0;
  box-sizing: border-box;
}

body {
  margin: 0;
  min-width: 320px;
  min-height: 100vh;
  background: var(--bg-secondary);
  color: var(--text-primary);
  transition: background-color 0.2s, color 0.2s;
}

#root {
  height: 100vh;
  width: 100vw;
  overflow: hidden;
}

/* Global markdown styling */
.markdown-content {
  padding: 12px;
}

.markdown-content p {
  margin: 0 0 12px 0;
}

.markdown-content p:last-child {
  margin-bottom: 0;
}

.markdown-content h1,
.markdown-content h2,
.markdown-content h3,
.markdown-content h4,
.markdown-content h5,
.markdown-content h6 {
  margin: 16px 0 8px 0;
}

.markdown-content h1:first-child,
.markdown-content h2:first-child,
.markdown-content h3:first-child,
.markdown-content h4:first-child,
.markdown-content h5:first-child,
.markdown-content h6:first-child {
  margin-top: 0;
}

.markdown-content ul,
.markdown-content ol {
  margin: 0 0 12px 0;
  padding-left: 24px;
}

.markdown-content li {
  margin: 4px 0;
}

.markdown-content pre {
  background: var(--bg-code);
  padding: 12px;
  border-radius: 4px;
  overflow-x: auto;
  margin: 0 0 12px 0;
}

.markdown-content code {
  background: var(--bg-code);
  padding: 2px 6px;
  border-radius: 3px;
  font-family: monospace;
  font-size: 0.9em;
}

.markdown-content pre code {
  background: none;
  padding: 0;
}

.markdown-content blockquote {
  margin: 0 0 12px 0;
  padding-left: 16px;
  border-left: 4px solid var(--border-color);
  color: var(--text-secondary);
}
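Both themes hang off a single `data-theme` attribute, so every custom property above flips as a set when that attribute changes. A minimal sketch of the toggle logic (the `nextTheme` name is hypothetical; in the app it would be applied with `document.documentElement.setAttribute('data-theme', ...)`):

```javascript
// Pure toggle: compute the next theme name from the current one.
// Anything other than 'dark' (including undefined) flips to 'dark'.
function nextTheme(current) {
  return current === 'dark' ? 'light' : 'dark';
}

console.log(nextTheme('light')); // "dark"
console.log(nextTheme('dark'));  // "light"
```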
frontend/src/main.jsx (Normal file, 10 lines)
@@ -0,0 +1,10 @@
import { StrictMode } from 'react'
import { createRoot } from 'react-dom/client'
import './index.css'
import App from './App.jsx'

createRoot(document.getElementById('root')).render(
  <StrictMode>
    <App />
  </StrictMode>,
)

frontend/tests/api.test.js (Normal file, 101 lines)
@@ -0,0 +1,101 @@
import test from 'node:test';
import assert from 'node:assert/strict';

import { api } from '../src/api.js';

test('api.listDocuments calls correct endpoint', async () => {
  const calls = [];
  globalThis.fetch = async (url, options) => {
    calls.push({ url: String(url), options });
    return {
      ok: true,
      json: async () => [{ id: '1', filename: 'a.md', bytes: 3 }],
    };
  };

  const out = await api.listDocuments('conv123');
  assert.equal(calls.length, 1);
  assert.match(calls[0].url, /\/api\/conversations\/conv123\/documents$/);
  assert.deepEqual(out, [{ id: '1', filename: 'a.md', bytes: 3 }]);
});

test('api.uploadDocument uses multipart form and POST', async () => {
  const calls = [];
  globalThis.fetch = async (url, options) => {
    calls.push({ url: String(url), options });
    return {
      ok: true,
      json: async () => ({ id: 'doc1', filename: 'x.md', bytes: 10 }),
    };
  };

  // Node 18 doesn't provide a global File, but it does provide Blob.
  // Our api.uploadDocument supports Blob + filename.
  const file = new Blob(['hello'], { type: 'text/markdown' });
  file.name = 'x.md';
  const out = await api.uploadDocument('conv999', file);

  assert.equal(calls.length, 1);
  assert.match(calls[0].url, /\/api\/conversations\/conv999\/documents$/);
  assert.equal(calls[0].options.method, 'POST');
  assert.ok(calls[0].options.body instanceof FormData);
  assert.equal(out.id, 'doc1');
});

test('api.uploadDocuments uses multipart form and POST', async () => {
  const calls = [];
  globalThis.fetch = async (url, options) => {
    calls.push({ url: String(url), options });
    return {
      ok: true,
      json: async () => ({ uploaded: [{ id: 'a' }, { id: 'b' }] }),
    };
  };

  const a = new Blob(['one'], { type: 'text/markdown' });
  a.name = 'a.md';
  const b = new Blob(['two'], { type: 'text/markdown' });
  b.name = 'b.md';

  const out = await api.uploadDocuments('conv999', [a, b]);

  assert.equal(calls.length, 1);
  assert.match(calls[0].url, /\/api\/conversations\/conv999\/documents$/);
  assert.equal(calls[0].options.method, 'POST');
  assert.ok(calls[0].options.body instanceof FormData);
  assert.equal(out.uploaded.length, 2);
});

test('api.getDocument calls correct endpoint', async () => {
  const calls = [];
  globalThis.fetch = async (url) => {
    calls.push({ url: String(url) });
    return {
      ok: true,
      json: async () => ({ id: 'd1', content: '# hi' }),
    };
  };

  const out = await api.getDocument('c1', 'd1');
  assert.equal(calls.length, 1);
  assert.match(calls[0].url, /\/api\/conversations\/c1\/documents\/d1$/);
  assert.equal(out.content, '# hi');
});

test('api.deleteDocument uses DELETE', async () => {
  const calls = [];
  globalThis.fetch = async (url, options) => {
    calls.push({ url: String(url), options });
    return { ok: true, json: async () => ({ ok: true }) };
  };

  const out = await api.deleteDocument('c1', 'd1');
  assert.equal(calls.length, 1);
  assert.match(calls[0].url, /\/api\/conversations\/c1\/documents\/d1$/);
  assert.equal(calls[0].options.method, 'DELETE');
  assert.equal(out.ok, true);
});
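Each test above re-implements the same fetch stub. A refactoring sketch that deduplicates the pattern (the `stubFetch` helper is an invention for illustration, not part of the test suite):

```javascript
// Install a fetch stub that records every call and answers with a
// canned JSON payload. Returns the calls array for assertions.
function stubFetch(payload) {
  const calls = [];
  globalThis.fetch = async (url, options) => {
    calls.push({ url: String(url), options });
    return { ok: true, json: async () => payload };
  };
  return calls;
}
```

A test would then start with `const calls = stubFetch({ ok: true });` and assert on `calls[0].url` and `calls[0].options.method` exactly as before.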
frontend/tests/chatinterface.ssr.test.js (Normal file, 23 lines)
@@ -0,0 +1,23 @@
import test from 'node:test';
import assert from 'node:assert/strict';
import fs from 'node:fs';
import path from 'node:path';

// Node 18 can't import .jsx without a loader; do a lightweight source-based test instead.
test('ChatInterface includes Documents UI and always-visible input form', () => {
  const filePath = path.resolve(
    process.cwd(),
    'src/components/ChatInterface.jsx'
  );
  const src = fs.readFileSync(filePath, 'utf8');

  assert.ok(src.includes('className="docs-bar"'));
  assert.ok(src.includes('Upload .md'));
  assert.ok(src.includes('className="docs-list"'));
  assert.ok(src.includes('Delete'));
  assert.ok(src.includes('doc-preview'));
  assert.ok(src.includes('className="input-form"'));
  assert.ok(src.includes('className="message-input"'));
});

frontend/vite.config.js (Normal file, 7 lines)
@@ -0,0 +1,7 @@
import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'

// https://vite.dev/config/
export default defineConfig({
  plugins: [react()],
})

pyproject.toml (Normal file, 14 lines)
@@ -0,0 +1,14 @@
[project]
name = "llm-council"
version = "0.1.0"
description = "Your LLM Council"
readme = "README.md"
requires-python = ">=3.10"
dependencies = [
    "fastapi>=0.115.0",
    "uvicorn[standard]>=0.32.0",
    "python-dotenv>=1.0.0",
    "httpx>=0.27.0",
    "pydantic>=2.9.0",
    "python-multipart>=0.0.9",
]

scripts/check_firewall.sh (Normal file, 24 lines)
@@ -0,0 +1,24 @@
#!/bin/bash
# Check if firewall might be blocking the connection

GPU_VM="10.0.30.63"

echo "=== Firewall Check ==="
echo ""
echo "The fact that Ollama shows ':::11434' means it's listening on all interfaces."
echo "If curl still times out, it's likely a firewall issue."
echo ""
echo "On the GPU VM, check firewall:"
echo "  # Check if firewall is running:"
echo "  sudo ufw status"
echo "  # OR"
echo "  sudo iptables -L -n | grep 11434"
echo ""
echo "If firewall is blocking, allow the port:"
echo "  sudo ufw allow 11434/tcp"
echo "  # OR for iptables:"
echo "  sudo iptables -A INPUT -p tcp --dport 11434 -j ACCEPT"
echo ""
echo "Testing connection from local machine..."
timeout 3 curl -v http://$GPU_VM:11434/api/tags 2>&1 | grep -E "Connected|timeout|refused|Connection" | head -3

scripts/check_ollama_models.sh (Executable file, 60 lines)
@@ -0,0 +1,60 @@
#!/bin/bash
# Check if Ollama models still exist on GPU VM
# Run this ON THE GPU VM

echo "=== Checking Ollama Models ==="
echo ""

# Check Ollama's model storage locations
echo "1. Checking Ollama API for models:"
curl -s http://localhost:11434/api/tags | python3 -m json.tool 2>/dev/null | grep -E '"name"|"model"' | head -10

echo ""
echo "2. Checking common model storage locations:"
echo "   ~/.ollama/models:"
if [ -d ~/.ollama/models ]; then
    du -sh ~/.ollama/models
    ls -lh ~/.ollama/models | head -5
else
    echo "   ✗ Not found"
fi

echo ""
echo "   /usr/share/ollama/models:"
if [ -d /usr/share/ollama/models ]; then
    du -sh /usr/share/ollama/models
    ls -lh /usr/share/ollama/models | head -5
else
    echo "   ✗ Not found"
fi

echo ""
echo "   /var/lib/ollama/models:"
if [ -d /var/lib/ollama/models ]; then
    du -sh /var/lib/ollama/models
    ls -lh /var/lib/ollama/models | head -5
else
    echo "   ✗ Not found"
fi

echo ""
echo "3. Finding Ollama data directory:"
if command -v ollama > /dev/null; then
    ollama show 2>&1 | head -5
fi

echo ""
echo "4. Checking systemd service for OLLAMA_MODELS path:"
systemctl show ollama | grep -i model || echo "   No OLLAMA_MODELS env var set"

echo ""
echo "=== What we did ==="
echo "We only created: /etc/systemd/system/ollama.service.d/override.conf"
echo "This file only sets: OLLAMA_HOST=0.0.0.0:11434"
echo "It does NOT delete models."
echo ""
echo "If models are missing, they might be:"
echo "  1. In a different location (check above)"
echo "  2. Ollama needs to be restarted to see them"
echo "  3. Models were deleted separately (not by our script)"

scripts/configure_ollama_gpu_vm.sh (Executable file, 40 lines)
@@ -0,0 +1,40 @@
#!/bin/bash
# Script to configure Ollama on GPU VM to listen on all interfaces
# Run this on the GPU VM (10.0.30.63)

echo "Configuring Ollama to listen on all interfaces..."

# Method 1: Set environment variable (temporary, until reboot)
export OLLAMA_HOST=0.0.0.0:11434
echo "✓ Set OLLAMA_HOST=0.0.0.0:11434 (temporary)"

# Method 2: Create systemd override (permanent)
echo ""
echo "Creating systemd override for permanent configuration..."
sudo mkdir -p /etc/systemd/system/ollama.service.d
sudo tee /etc/systemd/system/ollama.service.d/override.conf > /dev/null <<EOF
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
EOF

echo "✓ Created override file: /etc/systemd/system/ollama.service.d/override.conf"

# Reload systemd and restart Ollama
echo ""
echo "Reloading systemd and restarting Ollama..."
sudo systemctl daemon-reload
sudo systemctl restart ollama

echo ""
echo "✓ Ollama restarted"
echo ""
echo "Verifying configuration..."
sleep 2
curl -s http://localhost:11434/api/tags | head -c 200
echo ""
echo ""
echo "✓ Configuration complete!"
echo ""
echo "You can now test from your local machine:"
echo "  curl http://10.0.30.63:11434/api/tags"

scripts/diagnose_gpu_vm.sh (Executable file, 57 lines)
@@ -0,0 +1,57 @@
#!/bin/bash
# Quick diagnostic script for GPU VM connection

GPU_VM="10.0.30.63"

echo "=== GPU VM Connection Diagnostics ==="
echo ""

echo "1. Testing basic connectivity..."
if ping -c 1 -W 2 $GPU_VM > /dev/null 2>&1; then
    echo "   ✓ GPU VM is reachable"
else
    echo "   ✗ Cannot reach GPU VM - check network/firewall"
    exit 1
fi

echo ""
echo "2. Testing port 11434 (Ollama default)..."
if timeout 3 curl -s http://$GPU_VM:11434/api/tags > /dev/null 2>&1; then
    echo "   ✓ Port 11434 is open and responding"
    echo "   Models available:"
    curl -s http://$GPU_VM:11434/api/tags | python3 -m json.tool 2>/dev/null | grep -E '"name"|"model"' | head -5
else
    echo "   ✗ Port 11434 not responding"
    echo "   Error details:"
    timeout 3 curl -v http://$GPU_VM:11434/api/tags 2>&1 | grep -E "Connection|timeout|refused" | head -3
fi

echo ""
echo "3. Testing port 8000 (alternative)..."
if timeout 3 curl -s http://$GPU_VM:8000/v1/models > /dev/null 2>&1; then
    echo "   ✓ Port 8000 is open and responding"
else
    echo "   ✗ Port 8000 not responding"
fi

echo ""
echo "4. Checking your .env configuration..."
if [ -f .env ]; then
    echo "   OPENAI_COMPAT_BASE_URL: $(grep OPENAI_COMPAT_BASE_URL .env | grep -v '^#' | cut -d'=' -f2)"
    echo "   USE_LOCAL_OLLAMA: $(grep USE_LOCAL_OLLAMA .env | grep -v '^#' | cut -d'=' -f2)"
else
    echo "   ✗ .env file not found"
fi

echo ""
echo "=== Recommendations ==="
echo ""
echo "If port 11434 is not working:"
echo "  1. SSH to GPU VM: ssh root@$GPU_VM"
echo "  2. Check if Ollama is running: systemctl status ollama"
echo "  3. Check what port Ollama is listening on: netstat -tlnp | grep ollama"
echo "  4. If only listening on 127.0.0.1, configure it to listen on 0.0.0.0"
echo ""
echo "If you need to use a different port, update .env:"
echo "  OPENAI_COMPAT_BASE_URL=http://$GPU_VM:PORT"

scripts/find_ollama_models.sh (Executable file, 59 lines)
@@ -0,0 +1,59 @@
#!/bin/bash
# Find and verify Ollama models on GPU VM
# Run this ON THE GPU VM

echo "=== Finding Ollama Models ==="
echo ""

echo "1. Check what Ollama API reports:"
echo " Running: curl http://localhost:11434/api/tags"
curl -s http://localhost:11434/api/tags | python3 -m json.tool 2>/dev/null || curl -s http://localhost:11434/api/tags

echo ""
echo ""
echo "2. Find Ollama data directory:"
echo " Checking common locations..."

# Check for OLLAMA_MODELS env var
if [ -n "$OLLAMA_MODELS" ]; then
    echo " OLLAMA_MODELS env var: $OLLAMA_MODELS"
    if [ -d "$OLLAMA_MODELS" ]; then
        echo " ✓ Found! Size: $(du -sh "$OLLAMA_MODELS" 2>/dev/null | cut -f1)"
        echo " Models:"
        ls -lh "$OLLAMA_MODELS" | head -10
    fi
fi

# Check common locations
for dir in ~/.ollama/models ~/.ollama /usr/share/ollama/models /usr/share/ollama /var/lib/ollama/models /var/lib/ollama; do
    if [ -d "$dir" ]; then
        echo " Found: $dir"
        echo " Size: $(du -sh "$dir" 2>/dev/null | cut -f1)"
        if [ -d "$dir/models" ]; then
            echo " Models in subdirectory:"
            ls -lh "$dir/models" 2>/dev/null | head -5
        fi
        find "$dir" -name "*.gguf" -o -name "*.bin" 2>/dev/null | head -5
    fi
done

echo ""
echo "3. Check Ollama process environment:"
sudo cat /proc/$(pgrep -f ollama | head -1)/environ 2>/dev/null | tr '\0' '\n' | grep -i model || echo " No OLLAMA_MODELS in process env"

echo ""
echo "4. Check systemd service environment:"
systemctl show ollama | grep -i environment

echo ""
echo "=== If models are missing ==="
echo "They might be in a different location. Ollama stores models in:"
echo " - Default: ~/.ollama/models (or /usr/share/ollama/models)"
echo " - Or wherever OLLAMA_MODELS env var points"
echo ""
echo "To re-download models:"
echo " ollama pull qwen2:latest"
echo " ollama pull qwen2.5:14b"
echo " ollama pull llama3.1:8b"
echo " ollama pull qwen2.5:7b"
40
scripts/fix_firewall_gpu_vm.sh
Executable file
@ -0,0 +1,40 @@
#!/bin/bash
# Run this ON THE GPU VM to fix firewall

echo "=== Fixing Firewall for Ollama ==="
echo ""

# Check what firewall is running
if command -v ufw > /dev/null 2>&1; then
    echo "Detected UFW firewall"
    echo "Current status:"
    sudo ufw status | head -5
    echo ""
    echo "Allowing port 11434..."
    sudo ufw allow 11434/tcp
    echo "✓ Port 11434 allowed"
elif command -v firewall-cmd > /dev/null 2>&1; then
    echo "Detected firewalld"
    sudo firewall-cmd --permanent --add-port=11434/tcp
    sudo firewall-cmd --reload
    echo "✓ Port 11434 allowed"
else
    echo "Checking iptables..."
    if sudo iptables -L -n | grep -q 11434; then
        echo "Found iptables rules for 11434"
    else
        echo "Adding iptables rule..."
        sudo iptables -A INPUT -p tcp --dport 11434 -j ACCEPT
        echo "✓ Rule added (may need to save: sudo iptables-save)"
    fi
fi

echo ""
echo "Verifying Ollama is accessible..."
sleep 1
curl -s http://localhost:11434/api/tags | head -c 100
echo ""
echo ""
echo "✓ Done! Test from your local machine:"
echo " curl http://10.0.30.63:11434/api/tags"
57
scripts/fix_ollama_remote.sh
Executable file
@ -0,0 +1,57 @@
#!/bin/bash
# Run this ON THE GPU VM (10.0.30.63) to fix Ollama remote access

echo "=== Fixing Ollama Remote Access ==="
echo ""

# Check if Ollama is running
if ! systemctl is-active --quiet ollama; then
    echo "✗ Ollama is not running. Starting it..."
    sudo systemctl start ollama
    sleep 2
fi

echo "✓ Ollama is running"
echo ""

# Check what it's listening on
echo "Current Ollama listening status:"
sudo netstat -tlnp 2>/dev/null | grep 11434 || ss -tlnp 2>/dev/null | grep 11434
echo ""

# Create systemd override
echo "Creating systemd override to listen on all interfaces..."
sudo mkdir -p /etc/systemd/system/ollama.service.d

sudo tee /etc/systemd/system/ollama.service.d/override.conf > /dev/null <<'EOF'
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
EOF

echo "✓ Override file created"
echo ""

# Reload and restart
echo "Reloading systemd and restarting Ollama..."
sudo systemctl daemon-reload
sudo systemctl restart ollama
sleep 3

echo "✓ Ollama restarted"
echo ""

# Verify
echo "Verifying configuration..."
if sudo netstat -tlnp 2>/dev/null | grep -q "0.0.0.0:11434" || sudo ss -tlnp 2>/dev/null | grep -q "0.0.0.0:11434"; then
    echo "✓ SUCCESS! Ollama is now listening on 0.0.0.0:11434"
    echo ""
    echo "Test from your local machine:"
    echo " curl http://10.0.30.63:11434/api/tags"
else
    echo "✗ Still not listening on 0.0.0.0 - checking status..."
    sudo systemctl status ollama --no-pager | head -10
    echo ""
    echo "Try checking Ollama logs:"
    echo " sudo journalctl -u ollama -n 20"
fi
87
scripts/fix_ollama_storage.sh
Executable file
@ -0,0 +1,87 @@
#!/bin/bash
# Configure Ollama to use /mnt/data for model storage
# Run this ON THE GPU VM

echo "=== Fixing Ollama Storage Location ==="
echo ""

# Check current disk usage
echo "Current disk usage:"
df -h | grep -E "Filesystem|/dev/sda"
echo ""

# Create models directory on /mnt/data
echo "Creating Ollama models directory on /mnt/data..."
sudo mkdir -p /mnt/data/ollama/models
sudo chown -R ollama:ollama /mnt/data/ollama 2>/dev/null || sudo chown -R "$(whoami):$(whoami)" /mnt/data/ollama
echo "✓ Directory created: /mnt/data/ollama/models"
echo ""

# Check if there are existing models to move
if [ -d ~/.ollama/models ] && [ "$(ls -A ~/.ollama/models 2>/dev/null)" ]; then
    echo "Found existing models in ~/.ollama/models"
    echo "Moving to /mnt/data/ollama/models..."
    sudo mv ~/.ollama/models/* /mnt/data/ollama/models/ 2>/dev/null
    echo "✓ Models moved"
elif [ -d /usr/share/ollama/models ] && [ "$(ls -A /usr/share/ollama/models 2>/dev/null)" ]; then
    echo "Found existing models in /usr/share/ollama/models"
    echo "Moving to /mnt/data/ollama/models..."
    sudo mv /usr/share/ollama/models/* /mnt/data/ollama/models/ 2>/dev/null
    echo "✓ Models moved"
else
    echo "No existing models found to move"
fi

echo ""

# Update systemd service to use new location
echo "Updating systemd service configuration..."
sudo mkdir -p /etc/systemd/system/ollama.service.d

# Check if override.conf exists and update it, or create new
if [ -f /etc/systemd/system/ollama.service.d/override.conf ]; then
    echo "Updating existing override.conf..."
    # Add OLLAMA_MODELS if not already there
    if ! grep -q "OLLAMA_MODELS" /etc/systemd/system/ollama.service.d/override.conf; then
        sudo sed -i '/\[Service\]/a Environment="OLLAMA_MODELS=/mnt/data/ollama/models"' /etc/systemd/system/ollama.service.d/override.conf
    else
        # Match up to (not including) the closing quote so it is preserved
        sudo sed -i 's|OLLAMA_MODELS=[^"]*|OLLAMA_MODELS=/mnt/data/ollama/models|' /etc/systemd/system/ollama.service.d/override.conf
    fi
else
    echo "Creating new override.conf..."
    sudo tee /etc/systemd/system/ollama.service.d/override.conf > /dev/null <<EOF
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
Environment="OLLAMA_MODELS=/mnt/data/ollama/models"
EOF
fi

echo "✓ Systemd configuration updated"
echo ""

# Reload and restart
echo "Reloading systemd and restarting Ollama..."
sudo systemctl daemon-reload
sudo systemctl restart ollama
sleep 3

echo "✓ Ollama restarted with new storage location"
echo ""

# Verify
echo "Verifying configuration:"
echo " Storage location: /mnt/data/ollama/models"
echo " Disk space available:"
df -h /mnt/data | tail -1
echo ""
echo " Checking if Ollama is running:"
systemctl is-active ollama && echo " ✓ Ollama is running" || echo " ✗ Ollama is not running"

echo ""
echo "=== Done! ==="
echo "You can now pull models:"
echo " ollama pull qwen2:latest"
echo " ollama pull qwen2.5:14b"
echo " ollama pull llama3.1:8b"
echo " ollama pull qwen2.5:7b"
54
scripts/test_gpu_vm.py
Executable file
@ -0,0 +1,54 @@
#!/usr/bin/env python3
"""Quick test script to check GPU VM connection."""

import asyncio
import sys

sys.path.insert(0, '.')

from backend.llm_client import list_models
from backend.config import OPENAI_COMPAT_BASE_URL, COUNCIL_MODELS, CHAIRMAN_MODEL


async def test_connection():
    print(f"Testing connection to: {OPENAI_COMPAT_BASE_URL}")
    print(f"Configured council models: {COUNCIL_MODELS}")
    print(f"Chairman model: {CHAIRMAN_MODEL}")
    print("-" * 60)

    try:
        models = await list_models()
        if models is None:
            print("✗ Unable to list models (connection error, timeout, or incompatible endpoint).")
            print("")
            print("Next checks:")
            print(f"  - curl {OPENAI_COMPAT_BASE_URL.rstrip('/')}/api/tags")
            print(f"  - curl {OPENAI_COMPAT_BASE_URL.rstrip('/')}/v1/models")
            print("")
            print("If you're using Ollama remotely, the port is usually 11434.")
            return
        if models:
            print("✓ Connection successful!")
            print(f"Found {len(models)} available models:\n")
            for model in models:
                marker = "✓" if model in COUNCIL_MODELS else " "
                chairman_marker = " (CHAIRMAN)" if model == CHAIRMAN_MODEL else ""
                print(f"  {marker} {model}{chairman_marker}")

            print("\n" + "-" * 60)
            missing = [m for m in COUNCIL_MODELS if m not in models]
            if missing:
                print(f"⚠ Warning: {len(missing)} configured models not found:")
                for m in missing:
                    print(f"  - {m}")
            else:
                print("✓ All configured council models are available!")
        else:
            print("✗ Connected, but the server returned an empty model list.")
            print("  This is unusual for Ollama; double-check the base URL/port and server.")
    except Exception as e:
        print(f"✗ Connection failed: {type(e).__name__}: {e}")
        import traceback
        traceback.print_exc()


if __name__ == "__main__":
    asyncio.run(test_connection())
62
scripts/test_gpu_vm_detailed.py
Normal file
@ -0,0 +1,62 @@
#!/usr/bin/env python3
"""Detailed test script to check GPU VM connection and endpoints."""

import asyncio

import httpx


async def test_endpoints():
    base_url = "http://10.0.30.63"
    ports = [8000, 11434]
    endpoints = [
        "/v1/models",
        "/api/tags",
        "/api/version",
        "/",
    ]

    print("Testing GPU VM connection...")
    print("=" * 60)

    for port in ports:
        print(f"\nTesting port {port}:")
        print("-" * 60)
        for endpoint in endpoints:
            url = f"{base_url}:{port}{endpoint}"
            try:
                async with httpx.AsyncClient(timeout=5.0) as client:
                    resp = await client.get(url)
                    print(f"  {endpoint:20} -> Status: {resp.status_code}")
                    if resp.status_code == 200:
                        try:
                            data = resp.json()
                            if isinstance(data, dict):
                                if 'data' in data:
                                    models = data['data']
                                    print(f"     Found {len(models)} models via /v1/models")
                                    if models:
                                        print(f"     First model: {models[0].get('id', models[0])}")
                                elif 'models' in data:
                                    models = data['models']
                                    print(f"     Found {len(models)} models via /api/tags")
                                    if models:
                                        print(f"     First model: {models[0].get('name', models[0])}")
                                else:
                                    print(f"     Response keys: {list(data.keys())[:5]}")
                            elif isinstance(data, list):
                                print(f"     Found {len(data)} items")
                        except Exception:
                            print(f"     Response (first 200 chars): {resp.text[:200]}")
            except httpx.TimeoutException:
                print(f"  {endpoint:20} -> Timeout")
            except httpx.ConnectError:
                print(f"  {endpoint:20} -> Connection refused")
            except Exception as e:
                print(f"  {endpoint:20} -> Error: {type(e).__name__}")


if __name__ == "__main__":
    asyncio.run(test_endpoints())
48
scripts/test_model_query.py
Normal file
@ -0,0 +1,48 @@
#!/usr/bin/env python3
"""Test that we can actually query a model and get a response."""

import asyncio
import sys

sys.path.insert(0, '.')

from backend.llm_client import query_model
from backend.config import COUNCIL_MODELS


async def test_query():
    """Test querying one of the council models."""
    if not COUNCIL_MODELS:
        print("✗ No council models configured")
        return False

    test_model = COUNCIL_MODELS[0]
    print(f"Testing query to model: {test_model}")
    print("-" * 60)

    try:
        response = await query_model(
            model=test_model,
            messages=[{"role": "user", "content": "Say 'Hello, GPU Ollama is working!' in one sentence."}],
            max_tokens_override=50,  # Short response for quick test
            timeout=30.0
        )

        if response and response.get('content'):
            content = response['content'].strip()
            print("✓ Query successful!")
            print("\nResponse:")
            print(f"  {content}")
            return True
        else:
            print("✗ Query returned no content")
            print(f"Response: {response}")
            return False
    except Exception as e:
        print(f"✗ Query failed: {type(e).__name__}: {e}")
        import traceback
        traceback.print_exc()
        return False


if __name__ == "__main__":
    success = asyncio.run(test_query())
    sys.exit(0 if success else 1)
70
scripts/test_model_timeout.py
Normal file
@ -0,0 +1,70 @@
#!/usr/bin/env python3
"""Test script to diagnose model timeout issues."""

import asyncio
import sys
import time

import httpx

sys.path.insert(0, '.')

from backend.config import OPENAI_COMPAT_BASE_URL, LLM_TIMEOUT_SECONDS


async def test_model(model: str, max_tokens: int = 10):
    """Test a single model query."""
    print(f"\n{'='*60}")
    print(f"Testing model: {model}")
    print(f"Timeout: {LLM_TIMEOUT_SECONDS}s")
    print(f"Base URL: {OPENAI_COMPAT_BASE_URL}")
    print(f"{'='*60}")

    url = f"{OPENAI_COMPAT_BASE_URL}/v1/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": "Say hello"}],
        "max_tokens": max_tokens
    }

    start_time = time.time()
    try:
        async with httpx.AsyncClient(timeout=LLM_TIMEOUT_SECONDS) as client:
            print(f"[{time.time() - start_time:.1f}s] Sending request...")
            response = await client.post(url, json=payload)
            elapsed = time.time() - start_time
            print(f"[{elapsed:.1f}s] Response received: Status {response.status_code}")

            if response.status_code == 200:
                data = response.json()
                content = data.get("choices", [{}])[0].get("message", {}).get("content", "")
                print(f"✓ Success! Response: {content[:100]}")
                return True
            else:
                print(f"✗ Error: {response.status_code}")
                print(f"  Response: {response.text[:200]}")
                return False
    except httpx.TimeoutException:
        elapsed = time.time() - start_time
        print(f"✗ Timeout after {elapsed:.1f}s (limit was {LLM_TIMEOUT_SECONDS}s)")
        return False
    except Exception as e:
        elapsed = time.time() - start_time
        print(f"✗ Error after {elapsed:.1f}s: {type(e).__name__}: {e}")
        return False


async def main():
    models = ["llama3.2:1b", "qwen2.5:0.5b", "gemma2:2b"]

    print("Testing models sequentially to diagnose timeout issues...")
    print(f"Current timeout setting: {LLM_TIMEOUT_SECONDS}s")

    results = {}
    for model in models:
        results[model] = await test_model(model)
        # Small delay between tests
        await asyncio.sleep(1)

    print(f"\n{'='*60}")
    print("Summary:")
    for model, success in results.items():
        status = "✓ PASS" if success else "✗ FAIL"
        print(f"  {model}: {status}")
    print(f"{'='*60}")


if __name__ == "__main__":
    asyncio.run(main())
33
scripts/test_ollama_direct.py
Normal file
@ -0,0 +1,33 @@
#!/usr/bin/env python3
"""Test direct connection to Ollama on GPU VM."""

import asyncio

import httpx


async def test():
    base = "http://10.0.30.63:11434"
    urls = [
        f"{base}/v1/models",
        f"{base}/api/tags",
    ]

    for url in urls:
        print(f"\nTesting: {url}")
        try:
            async with httpx.AsyncClient(timeout=5.0) as client:
                resp = await client.get(url)
                print(f"  Status: {resp.status_code}")
                if resp.status_code == 200:
                    data = resp.json()
                    print(f"  Keys: {list(data.keys())}")
                    if 'models' in data:
                        models = [m.get('name', m.get('model', m)) for m in data['models']]
                        print(f"  Found {len(models)} models: {models}")
                    elif 'data' in data:
                        models = [m.get('id', m) for m in data['data']]
                        print(f"  Found {len(models)} models: {models}")
        except httpx.TimeoutException:
            print("  ✗ Timeout - Ollama may only be listening on localhost")
        except Exception as e:
            print(f"  ✗ Error: {e}")


asyncio.run(test())
140
scripts/test_setup.py
Executable file
@ -0,0 +1,140 @@
#!/usr/bin/env python3
"""Set up a test conversation with optional message and documents."""

import os
import sys
import time
from datetime import datetime
from pathlib import Path
from urllib.parse import quote

import httpx
from dotenv import load_dotenv

# Load .env file if it exists
load_dotenv()

API_BASE = "http://localhost:8001"


def create_test_conversation():
    """Create a new conversation with today's date/time as title."""
    now = datetime.now()
    title = now.strftime("%Y-%m-%d %H:%M:%S")

    with httpx.Client(timeout=10.0) as client:
        # Create conversation
        response = client.post(f"{API_BASE}/api/conversations", json={})
        if response.status_code != 200:
            print(f"Error creating conversation: {response.text}", file=sys.stderr)
            sys.exit(1)

        conv = response.json()
        conv_id = conv["id"]

        # Update title
        response = client.patch(
            f"{API_BASE}/api/conversations/{conv_id}/title",
            json={"title": title}
        )
        if response.status_code != 200:
            print(f"Warning: Could not set title: {response.text}", file=sys.stderr)

    print(f"Created conversation: {conv_id}")
    print(f"Title: {title}")
    return conv_id


def upload_document(conv_id, filepath):
    """Upload a document to a conversation."""
    path = Path(filepath)
    if not path.exists():
        print(f"Warning: File not found: {filepath}", file=sys.stderr)
        return False

    with httpx.Client(timeout=30.0) as client:
        with open(path, 'rb') as f:
            files = {'file': (path.name, f.read(), 'text/markdown')}
        response = client.post(
            f"{API_BASE}/api/conversations/{conv_id}/documents",
            files=files
        )
        if response.status_code != 200:
            print(f"Warning: Could not upload {filepath}: {response.text}", file=sys.stderr)
            return False

    print(f"Uploaded: {path.name}")
    return True


def send_message(conv_id, message):
    """Send a message to a conversation."""
    # Use a very long timeout since the council process can take several minutes.
    # Default to 10 minutes, but allow override via env var.
    timeout_seconds = float(os.getenv("TEST_MESSAGE_TIMEOUT_SECONDS", "600.0"))

    print(f"Sending message (this may take several minutes, timeout: {timeout_seconds}s)...")
    with httpx.Client(timeout=timeout_seconds) as client:
        try:
            response = client.post(
                f"{API_BASE}/api/conversations/{conv_id}/message",
                json={"content": message}
            )
            if response.status_code != 200:
                print(f"Warning: Could not send message: {response.text}", file=sys.stderr)
                return False
            print(f"✓ Message sent and processed: {message[:50]}...")
            return True
        except httpx.ReadTimeout:
            print(f"Error: Request timed out after {timeout_seconds}s", file=sys.stderr)
            print("The council process is still running. You can check the conversation in the UI.", file=sys.stderr)
            print(f"Conversation ID: {conv_id}", file=sys.stderr)
            return False
        except httpx.RequestError as e:
            print(f"Error sending message: {e}", file=sys.stderr)
            return False


def main():
    # Wait for backend to be ready
    max_retries = 10
    for i in range(max_retries):
        try:
            with httpx.Client(timeout=1.0) as client:
                response = client.get(f"{API_BASE}/")
                if response.status_code == 200:
                    break
        except httpx.RequestError:
            if i < max_retries - 1:
                time.sleep(1)
            else:
                print(f"Error: Backend not available at {API_BASE}", file=sys.stderr)
                print("Make sure the backend is running: uv run python -m backend.main", file=sys.stderr)
                sys.exit(1)

    # Create conversation
    conv_id = create_test_conversation()

    # Upload documents if specified
    test_docs = os.getenv("TEST_DOCS", "")
    if test_docs:
        doc_paths = [p.strip() for p in test_docs.split(",") if p.strip()]
        for doc_path in doc_paths:
            upload_document(conv_id, doc_path)

    # Note: TEST_MESSAGE is NOT sent automatically.
    # It's provided here for reference - the user should type it in the UI.
    test_message = os.getenv("TEST_MESSAGE", "")
    open_url = f"http://localhost:5173/?conversation={conv_id}"
    if test_message:
        open_url += f"&message={quote(test_message)}"

    print(f"\n✓ Conversation created: {conv_id}")
    print(f"CONVERSATION_ID={conv_id}")  # For Makefile to parse
    print(f"OPEN_URL={open_url}")  # For Makefile to parse
    print(f"Open in browser: {open_url}")

    if test_message:
        print("\n💡 Pre-filled message (copy/paste into input):")
        print(f"  {test_message}")

    return conv_id


if __name__ == "__main__":
    main()
31
start.sh
Executable file
@ -0,0 +1,31 @@
#!/bin/bash

# LLM Council - Start script

echo "Starting LLM Council..."
echo ""

# Start backend
echo "Starting backend on http://localhost:8001..."
uv run python -m backend.main &
BACKEND_PID=$!

# Wait a bit for backend to start
sleep 2

# Start frontend
echo "Starting frontend on http://localhost:5173..."
cd frontend
npm run dev &
FRONTEND_PID=$!

echo ""
echo "✓ LLM Council is running!"
echo " Backend: http://localhost:8001"
echo " Frontend: http://localhost:5173"
echo ""
echo "Press Ctrl+C to stop both servers"

# Wait for Ctrl+C
trap "kill $BACKEND_PID $FRONTEND_PID 2>/dev/null; exit" SIGINT SIGTERM
wait
30
test-story.md
Normal file
@ -0,0 +1,30 @@
# The Digital Garden

Once upon a time, in a world where code grew like vines, there lived a developer named Alex who discovered something magical in their repository.

Alex had been debugging a particularly stubborn bug for three days straight. The error messages were cryptic, the stack traces were confusing, and coffee had stopped working its usual magic.

## The Discovery

On the fourth morning, while scrolling through documentation that made less sense than the bug itself, Alex noticed something strange. Every time they ran their tests, a new file appeared in the `mysteries/` directory—a file that hadn't been there before.

The file was called `clue.md` and it contained only this:

> "Look not in the code, but between the lines."

## The Revelation

At first, Alex thought it was a prank from a coworker or a stray script. But as the days passed, the file began to change. New clues appeared, each more cryptic than the last.

It wasn't until Alex uploaded one of these files to their LLM Council that everything clicked. The council of AI models analyzed the patterns, the timing, the syntax. They saw connections Alex had missed.

## The Solution

The bug wasn't in Alex's code at all. It was in the dependencies, hidden in a nested module that updated itself every time the tests ran. The mysterious files were breadcrumbs left by a self-modifying dependency that was trying to communicate.

Alex fixed the issue, updated the dependencies, and the mysterious files stopped appearing. But they kept one—the first `clue.md`—as a reminder that sometimes the solution lies not in what you're looking for, but in what finds you.

---

*The end. Or is it just the beginning?*
145
tests/Allan + burridge - taboos.md
Normal file
@ -0,0 +1,145 @@
|
||||
#author: allan & Burridge
|
||||
#text: taboos
|
||||
|
||||
## Definition
|
||||
|
||||
- Taboo = a “proscription of behaviour that affects everyday life” (p. 1).
|
||||
- Taboo arises where behaviors “can cause discomfort, harm or injury” (p. 1), encompassing both physical and metaphysical risks, including contact with sacred or dangerous persons or objects.
|
||||
- The concept extends far beyond ritual acts or special circumstances.
|
||||
- Risk can be metaphysical (contaminating soul), moral, or physical (exposure to disease or power), with possible “contamination” of others (p. 1).
|
||||
- Can be protective: rules against incest, waste management, food taboos, and avoidance speech styles are evolutionarily or functionally sensible, though motivation may become obscured as rituals develop (p. 8).
|
||||
|
||||
### Domains (p. 1)
|
||||
|
||||
- bodies and their effluvia (sweat, snot, faeces, menstrual fluid)
|
||||
- organs and acts of sex
|
||||
- micturition and defecation; disease
|
||||
- death and killing
|
||||
- naming, addressing, touching, and viewing persons and sacred beings
|
||||
- food gathering, preparation, and consumption
|
||||
|
||||
### Characteristics
|
||||
|
||||
1. not static: relative, context-dependent, historically shifting
|
||||
2. multi-functional: protects, polices, empowers, and creates
|
||||
3. systemic: operates across language, ritual, law, social identity
|
||||
4. generative: produces linguistic creativity and cultural cohesion even as it restricts
|
||||
|
||||
### Key Mechanisms
|
||||
|
||||
1. orthophemism (straight talking)
|
||||
2. euphemism (sweet talking)
|
||||
3. dysphemism (speaking offensively)
|
||||
|
||||
## Informal and Formal Social & Legal Control
|
||||
|
||||
- Taboo is imposed externally: constraints “imposed by someone or some physical or metaphysical force that the individual believes has authority or power over them—the law, the gods, the society in which one lives, even proprioceptions” (p. 7–8).
|
||||
- Informal vs. formal control: Many proscribed acts are maintained “by unwritten conventions governing behavioural standards” in families, teams, or workplaces, and only some become written law (pp. 7–8).
|
||||
- Consequences span biological, social, supernatural, and legal domains.
|
||||
- “Infractions ... can lead to illness or death, as well as to the lesser penalties of corporal punishment, incarceration, social ostracism or mere disapproval” (p. 1).
|
||||
- Examples: death for incest in Hawaii, stoning for adultery under Sharia, execution for murder (p. 5), retrospective social blame (“bad karma,” “bad luck”) (p. 6).
|
||||
- Consequences can follow accidental or intentional actions.
|
||||
- “Even an unintended contravention ... risks condemnation and censure; generally, people can and do avoid tabooed behaviour unless they intend to violate a taboo” (p. 1).
|
||||
|
||||
## Everyday Censoring vs. Institutional Censorship
|
||||
|
||||
- Everyday censoring: “People constantly censor the language they use (we differentiate this from the institutionalized imposition of censorship)” (p. 1).
|
||||
- “By default we are polite, euphemistic, orthophemistic and inoffensive and we censor our language use to eschew tabooed topics in pursuit of well-being for ourselves and for others” (p. 1).
|
||||
- Institutional censorship: “Censorship is the suppression or prohibition of speech or writing that is condemned as subversive of the common good” (p. 13); formal, often governmental or religious, with legally enforced penalties.
|
||||
- Focus shifted over time—from blasphemy/heresy toward indecency, sedition, political and ethnic slurs (pp. 13–15).
|
||||
|
||||
## Language Change and Innovation
|
||||
|
||||
- Taboo motivates invention: “Poetic inventiveness” and new vocabulary arise as taboos push speakers to “create highly inventive and often playful new expressions” (p. 2).
|
||||
- Taboo prompts “word addition, word loss, sound change and semantic shift” (p. 2).
|
||||
- New terms constantly “arise by a changed form for the tabooed expression and by figurative language sparked by perceptions ... of the denotata about faeces, menstrual blood, genitals, death and so on” (p. 2).
|
||||
- Taboos “play havoc with the standard methods of historical linguistics by undermining the supposed arbitrary link between the meaning and form of words” (p. 2).
|
||||
|
||||
## Origins and Colonial Encounter: Word “Taboo”
|
||||
|
||||
- “The English word taboo derives from the Tongan tabu… [which] means simply ‘to forbid’, ‘forbidden’, and can be applied to any sort of prohibition,” from etiquette to “an order issued by a chief” (p. 2–3).
|
||||
- Accounts of Tahitian food taboos and gendered prohibitions (“the women never upon any account eat with the men, but always by themselves”) (p. 3); legitimacy or social status marked through eating habits.
- Social function parallels US caste/race systems—exclusion practices reinforce social hierarchy (p. 4).

## Fatal, Demonic, and Dangerous Taboos

- Inherent danger: “A nineteenth-century view, attributable directly to Wundt’s folk psychology, is a belief... that there is a ‘demonic’ power within a tabooed object comparable with the dangerous power of a Polynesian chief or the Emperor of Japan or Satan himself” (p. 5).
- Psychosomatic effect: “Cases are on record in which persons who had unwittingly broken a taboo actually died of terror on discovering their fatal error,” writes Frazer (p. 5).
- Some anthropologists argue only certain extremely dangerous taboos (e.g., incest, sacrilege) are believed to carry supernatural consequences (p. 5).

## Rituals and Repair After Violation

- Violations often require purification—e.g., “the elders take a sheep and place it on the woman’s shoulders, and it is then killed ... [announcing] that they are severing the bond of blood relationship that exists between the pair. A medicine man then comes and purifies the couple” (Kikuyu, p. 6).
- Balinese build only one-story houses to avoid being under “unclean” garments; Christians confess sins; spitting is used for expiation in southern Africa (p. 6).
- Ritual persists even as original motivations fade: many taboos become “symbolic idiom” after the source is forgotten (p. 8).

## Taboos as Strategy: Exploitation and Symbolic Power

- Strategic tabooing keeps others away (e.g., land, trees, or objects marked as belonging to chiefs or spirits to deter theft) (p. 7).
- Gender and power: women’s genital organs viewed as a “potent means of defeating evil”; exposing the vulva to demons drives evil away (pp. 7–8).
- Symbolic authority: names, body parts, and behaviors may be tabooed for status/power or to assert territorial/customary rights (p. 7).

## The Relativity and Contextuality of Taboo

- No universal taboo: “Nothing is taboo for all people, under all circumstances, for all time... Every taboo must be specified for a particular community of people, for a specified context, at a given place and time” (pp. 10–11).
- Variation examples: “Women ... exposed one or both breasts in public ... as a display of youth and beauty” in 17th-century Europe; taboo today. Visual representations of nudity or death vary dramatically by culture (p. 10).
- Taboo applies to behavior, not objects: “Where something physical or metaphysical is said to be tabooed, what is in fact tabooed is its interaction with an individual... In short, a taboo applies to behaviour” (p. 11).
- Deliberate flouting: during conscious violation, “the so-called taboo is flouted... [it] does not function as a taboo for the perpetrator” (p. 10).

## 12. Psycholinguistic Contamination and “Strong Language”

- Core idea: taboo words are “contaminated” by their association with taboo referents; the denotatum taints the word itself.
- Quote: “Society’s perception of a dirty word’s tainted denotatum contaminates the word itself and we discuss how the saliency of obscenity and dysphemism makes the description ‘strong language’ particularly appropriate” (p. 1).
- Expansion: Burridge suggests a quasi-psychological mechanism; the signifier inherits the stigma of the signified, explaining euphemism cycles (“lavatory” → “toilet” → “bathroom” → “restroom”).
- Implication: Taboo operates semiotically—contamination spreads through linguistic signs, not just physical contact.

## 13. Mary Douglas’s “Matter Out of Place”

- Core idea: Douglas’s theory of pollution is structuralist—dirt is “matter out of place”; taboo marks category boundaries.
- Quote: “Mary Douglas’s anthropological study of ritual pollution offers insights here. As she saw it, the distinction between cleanliness and filth stems from the basic human need to structure experience and render it understandable. That which is taboo threatens chaos and disorder” (p. 8).
- Expansion: Taboo is not about the intrinsic danger of substances (blood, corpses, menstruation) but about violations of categorical boundaries (life/death, male/female, pure/impure).
- Implication: Taboo is a cognitive and cultural ordering device enforcing symbolic boundaries.

## 14. Social Cohesion and Boundary-Work

- Core idea: shared taboos bind communities together.
- Quote: “Shared taboos are therefore a sign of social cohesion. Moreover, as part of a wider belief system, they provide the basis people need to function in an otherwise confused and hostile environment” (p. 8).
- Expansion: Taboo is constitutive—it creates “us” vs. “them,” reinforcing group identity.
- Implication: Taboo performs boundary-work, policing acceptable speech/behavior to define the community.

## 15. Language as Shield, Weapon, and Release

- Core idea: language is strategic in relation to taboo.
- Quote: “Language is used as a shield against malign fate and the disapprobation of fellow human beings; it is used as a weapon against enemies and as a release valve when we are angry, frustrated or hurt” (p. 2).
- Expansion: Euphemism shields, dysphemism weaponizes, swearing releases tension; taboo structures the emotional economy of language.
- Implication: Taboo enables expressive force, catharsis, and aggression, not merely avoidance.

## 16. Authority Behind Taboos

- Core idea: taboos are enforced by multiple sources of authority.
- Quote: “The constraint on behaviour is imposed by someone or some physical or metaphysical force that the individual believes has authority or power over them—the law, the gods, the society in which one lives, even proprioceptions” (pp. 7–8).
- Expansion: Authority is plural—legal, religious, social, bodily—undermining single explanatory models.
- Implication: Taboo is a multi-level regulatory system spanning external institutions and internalized bodily responses.

## 17. Political Correctness and Contemporary Censorship

- Core idea: modern tabooing behavior manifests as political correctness and linguistic prescription.
- Quote: “Political correctness and linguistic prescription are aspects of tabooing behaviour” (p. 1).
- Expansion: As older taboos policed sexuality or blasphemy, contemporary taboos police race, gender, and identity categories.
- Quote: “There is software to sanitize DVDs … filters sensitive to sex, drug use, some violence, profanity and crude language and bodily humor to skip scenes. It is nationalistic and politically biased” (p. 26).
- Implication: Taboo evolves with cultural priorities; the mechanisms of censoring remain constant.

## 18. Taboo as Creative Force in Language

- Core idea: taboo drives euphemistic innovation and linguistic creativity.
- Quote: “Taboo and the consequent censoring of language motivate language change by promoting the creation of highly inventive and often playful new expressions… These creations occasionally rival Shakespeare” (p. 2).
- Expansion: Avoidance of taboo generates cycles of euphemism → contamination → replacement, acting as a motor of semantic change.
- Implication: Taboo is paradoxically productive. It restricts speech but enriches language.

## 🧩 Synthesis

- Taboo is not static: relative, context-dependent, historically shifting.
- Taboo is multi-functional: protects, polices, empowers, and creates.
- Taboo is systemic: operates across language, ritual, law, and social identity.
- Taboo is generative: produces linguistic creativity and cultural cohesion even as it restricts.
tests/allan-burridge.md (new file, 1127 lines; diff suppressed because it is too large)

tests/test-story.md (new file, 30 lines):
# The Digital Garden

Once upon a time, in a world where code grew like vines, there lived a developer named Alex who discovered something magical in their repository.

Alex had been debugging a particularly stubborn bug for three days straight. The error messages were cryptic, the stack traces were confusing, and coffee had stopped working its usual magic.

## The Discovery

On the fourth morning, while scrolling through documentation that made less sense than the bug itself, Alex noticed something strange. Every time they ran their tests, a new file appeared in the `mysteries/` directory—a file that hadn't been there before.

The file was called `clue.md` and it contained only this:

> "Look not in the code, but between the lines."

## The Revelation

At first, Alex thought it was a prank from a coworker or a stray script. But as the days passed, the file began to change. New clues appeared, each more cryptic than the last.

It wasn't until Alex uploaded one of these files to their LLM Council that everything clicked. The council of AI models analyzed the patterns, the timing, the syntax. They saw connections Alex had missed.

## The Solution

The bug wasn't in Alex's code at all. It was in the dependencies, hidden in a nested module that updated itself every time the tests ran. The mysterious files were breadcrumbs left by a self-modifying dependency that was trying to communicate.

Alex fixed the issue, updated the dependencies, and the mysterious files stopped appearing. But they kept one—the first `clue.md`—as a reminder that sometimes the solution lies not in what you're looking for, but in what finds you.

---

*The end. Or is it just the beginning?*
uv.lock (new file, generated, 694 lines):
version = 1
revision = 3
requires-python = ">=3.10"

[[package]]
name = "annotated-doc"
version = "0.0.4"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/57/ba/046ceea27344560984e26a590f90bc7f4a75b06701f653222458922b558c/annotated_doc-0.0.4.tar.gz", hash = "sha256:fbcda96e87e9c92ad167c2e53839e57503ecfda18804ea28102353485033faa4", size = 7288, upload-time = "2025-11-10T22:07:42.062Z" }
wheels = [
    { url = "https://files.pythonhosted.org/packages/1e/d3/26bf1008eb3d2daa8ef4cacc7f3bfdc11818d111f7e2d0201bc6e3b49d45/annotated_doc-0.0.4-py3-none-any.whl", hash = "sha256:571ac1dc6991c450b25a9c2d84a3705e2ae7a53467b5d111c24fa8baabbed320", size = 5303, upload-time = "2025-11-10T22:07:40.673Z" },
]

[[package]]
name = "annotated-types"
version = "0.7.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/ee/67/531ea369ba64dcff5ec9c3402f9f51bf748cec26dde048a2f973a4eea7f5/annotated_types-0.7.0.tar.gz", hash = "sha256:aff07c09a53a08bc8cfccb9c85b05f1aa9a2a6f23728d790723543408344ce89", size = 16081, upload-time = "2024-05-20T21:33:25.928Z" }
wheels = [
    { url = "https://files.pythonhosted.org/packages/78/b6/6307fbef88d9b5ee7421e68d78a9f162e0da4900bc5f5793f6d3d0e34fb8/annotated_types-0.7.0-py3-none-any.whl", hash = "sha256:1f02e8b43a8fbbc3f3e0d4f0f4bfc8131bcb4eebe8849b8e5c773f3a1c582a53", size = 13643, upload-time = "2024-05-20T21:33:24.1Z" },
]
[[package]]
name = "anyio"
version = "4.11.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
    { name = "exceptiongroup", marker = "python_full_version < '3.11'" },
    { name = "idna" },
    { name = "sniffio" },
    { name = "typing-extensions", marker = "python_full_version < '3.13'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/c6/78/7d432127c41b50bccba979505f272c16cbcadcc33645d5fa3a738110ae75/anyio-4.11.0.tar.gz", hash = "sha256:82a8d0b81e318cc5ce71a5f1f8b5c4e63619620b63141ef8c995fa0db95a57c4", size = 219094, upload-time = "2025-09-23T09:19:12.58Z" }
wheels = [
    { url = "https://files.pythonhosted.org/packages/15/b3/9b1a8074496371342ec1e796a96f99c82c945a339cd81a8e73de28b4cf9e/anyio-4.11.0-py3-none-any.whl", hash = "sha256:0287e96f4d26d4149305414d4e3bc32f0dcd0862365a4bddea19d7a1ec38c4fc", size = 109097, upload-time = "2025-09-23T09:19:10.601Z" },
]
[[package]]
name = "certifi"
version = "2025.11.12"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/a2/8c/58f469717fa48465e4a50c014a0400602d3c437d7c0c468e17ada824da3a/certifi-2025.11.12.tar.gz", hash = "sha256:d8ab5478f2ecd78af242878415affce761ca6bc54a22a27e026d7c25357c3316", size = 160538, upload-time = "2025-11-12T02:54:51.517Z" }
wheels = [
    { url = "https://files.pythonhosted.org/packages/70/7d/9bc192684cea499815ff478dfcdc13835ddf401365057044fb721ec6bddb/certifi-2025.11.12-py3-none-any.whl", hash = "sha256:97de8790030bbd5c2d96b7ec782fc2f7820ef8dba6db909ccf95449f2d062d4b", size = 159438, upload-time = "2025-11-12T02:54:49.735Z" },
]

[[package]]
name = "click"
version = "8.3.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
    { name = "colorama", marker = "sys_platform == 'win32'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/3d/fa/656b739db8587d7b5dfa22e22ed02566950fbfbcdc20311993483657a5c0/click-8.3.1.tar.gz", hash = "sha256:12ff4785d337a1bb490bb7e9c2b1ee5da3112e94a8622f26a6c77f5d2fc6842a", size = 295065, upload-time = "2025-11-15T20:45:42.706Z" }
wheels = [
    { url = "https://files.pythonhosted.org/packages/98/78/01c019cdb5d6498122777c1a43056ebb3ebfeef2076d9d026bfe15583b2b/click-8.3.1-py3-none-any.whl", hash = "sha256:981153a64e25f12d547d3426c367a4857371575ee7ad18df2a6183ab0545b2a6", size = 108274, upload-time = "2025-11-15T20:45:41.139Z" },
]
[[package]]
name = "colorama"
version = "0.4.6"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/d8/53/6f443c9a4a8358a93a6792e2acffb9d9d5cb0a5cfd8802644b7b1c9a02e4/colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44", size = 27697, upload-time = "2022-10-25T02:36:22.414Z" }
wheels = [
    { url = "https://files.pythonhosted.org/packages/d1/d6/3965ed04c63042e047cb6a3e6ed1a63a35087b6a609aa3a15ed8ac56c221/colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6", size = 25335, upload-time = "2022-10-25T02:36:20.889Z" },
]

[[package]]
name = "exceptiongroup"
version = "1.3.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
    { name = "typing-extensions", marker = "python_full_version < '3.13'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/50/79/66800aadf48771f6b62f7eb014e352e5d06856655206165d775e675a02c9/exceptiongroup-1.3.1.tar.gz", hash = "sha256:8b412432c6055b0b7d14c310000ae93352ed6754f70fa8f7c34141f91c4e3219", size = 30371, upload-time = "2025-11-21T23:01:54.787Z" }
wheels = [
    { url = "https://files.pythonhosted.org/packages/8a/0e/97c33bf5009bdbac74fd2beace167cab3f978feb69cc36f1ef79360d6c4e/exceptiongroup-1.3.1-py3-none-any.whl", hash = "sha256:a7a39a3bd276781e98394987d3a5701d0c4edffb633bb7a5144577f82c773598", size = 16740, upload-time = "2025-11-21T23:01:53.443Z" },
]
[[package]]
name = "fastapi"
version = "0.121.3"
source = { registry = "https://pypi.org/simple" }
dependencies = [
    { name = "annotated-doc" },
    { name = "pydantic" },
    { name = "starlette" },
    { name = "typing-extensions" },
]
sdist = { url = "https://files.pythonhosted.org/packages/80/f0/086c442c6516195786131b8ca70488c6ef11d2f2e33c9a893576b2b0d3f7/fastapi-0.121.3.tar.gz", hash = "sha256:0055bc24fe53e56a40e9e0ad1ae2baa81622c406e548e501e717634e2dfbc40b", size = 344501, upload-time = "2025-11-19T16:53:39.243Z" }
wheels = [
    { url = "https://files.pythonhosted.org/packages/98/b6/4f620d7720fc0a754c8c1b7501d73777f6ba43b57c8ab99671f4d7441eb8/fastapi-0.121.3-py3-none-any.whl", hash = "sha256:0c78fc87587fcd910ca1bbf5bc8ba37b80e119b388a7206b39f0ecc95ebf53e9", size = 109801, upload-time = "2025-11-19T16:53:37.918Z" },
]

[[package]]
name = "h11"
version = "0.16.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/01/ee/02a2c011bdab74c6fb3c75474d40b3052059d95df7e73351460c8588d963/h11-0.16.0.tar.gz", hash = "sha256:4e35b956cf45792e4caa5885e69fba00bdbc6ffafbfa020300e549b208ee5ff1", size = 101250, upload-time = "2025-04-24T03:35:25.427Z" }
wheels = [
    { url = "https://files.pythonhosted.org/packages/04/4b/29cac41a4d98d144bf5f6d33995617b185d14b22401f75ca86f384e87ff1/h11-0.16.0-py3-none-any.whl", hash = "sha256:63cf8bbe7522de3bf65932fda1d9c2772064ffb3dae62d55932da54b31cb6c86", size = 37515, upload-time = "2025-04-24T03:35:24.344Z" },
]
[[package]]
name = "httpcore"
version = "1.0.9"
source = { registry = "https://pypi.org/simple" }
dependencies = [
    { name = "certifi" },
    { name = "h11" },
]
sdist = { url = "https://files.pythonhosted.org/packages/06/94/82699a10bca87a5556c9c59b5963f2d039dbd239f25bc2a63907a05a14cb/httpcore-1.0.9.tar.gz", hash = "sha256:6e34463af53fd2ab5d807f399a9b45ea31c3dfa2276f15a2c3f00afff6e176e8", size = 85484, upload-time = "2025-04-24T22:06:22.219Z" }
wheels = [
    { url = "https://files.pythonhosted.org/packages/7e/f5/f66802a942d491edb555dd61e3a9961140fd64c90bce1eafd741609d334d/httpcore-1.0.9-py3-none-any.whl", hash = "sha256:2d400746a40668fc9dec9810239072b40b4484b640a8c38fd654a024c7a1bf55", size = 78784, upload-time = "2025-04-24T22:06:20.566Z" },
]
[[package]]
name = "httptools"
version = "0.7.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/b5/46/120a669232c7bdedb9d52d4aeae7e6c7dfe151e99dc70802e2fc7a5e1993/httptools-0.7.1.tar.gz", hash = "sha256:abd72556974f8e7c74a259655924a717a2365b236c882c3f6f8a45fe94703ac9", size = 258961, upload-time = "2025-10-10T03:55:08.559Z" }
wheels = [
    { url = "https://files.pythonhosted.org/packages/c7/e5/c07e0bcf4ec8db8164e9f6738c048b2e66aabf30e7506f440c4cc6953f60/httptools-0.7.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:11d01b0ff1fe02c4c32d60af61a4d613b74fad069e47e06e9067758c01e9ac78", size = 204531, upload-time = "2025-10-10T03:54:20.887Z" },
    { url = "https://files.pythonhosted.org/packages/7e/4f/35e3a63f863a659f92ffd92bef131f3e81cf849af26e6435b49bd9f6f751/httptools-0.7.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:84d86c1e5afdc479a6fdabf570be0d3eb791df0ae727e8dbc0259ed1249998d4", size = 109408, upload-time = "2025-10-10T03:54:22.455Z" },
    { url = "https://files.pythonhosted.org/packages/f5/71/b0a9193641d9e2471ac541d3b1b869538a5fb6419d52fd2669fa9c79e4b8/httptools-0.7.1-cp310-cp310-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:c8c751014e13d88d2be5f5f14fc8b89612fcfa92a9cc480f2bc1598357a23a05", size = 440889, upload-time = "2025-10-10T03:54:23.753Z" },
    { url = "https://files.pythonhosted.org/packages/eb/d9/2e34811397b76718750fea44658cb0205b84566e895192115252e008b152/httptools-0.7.1-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:654968cb6b6c77e37b832a9be3d3ecabb243bbe7a0b8f65fbc5b6b04c8fcabed", size = 440460, upload-time = "2025-10-10T03:54:25.313Z" },
    { url = "https://files.pythonhosted.org/packages/01/3f/a04626ebeacc489866bb4d82362c0657b2262bef381d68310134be7f40bb/httptools-0.7.1-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:b580968316348b474b020edf3988eecd5d6eec4634ee6561e72ae3a2a0e00a8a", size = 425267, upload-time = "2025-10-10T03:54:26.81Z" },
    { url = "https://files.pythonhosted.org/packages/a5/99/adcd4f66614db627b587627c8ad6f4c55f18881549bab10ecf180562e7b9/httptools-0.7.1-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:d496e2f5245319da9d764296e86c5bb6fcf0cf7a8806d3d000717a889c8c0b7b", size = 424429, upload-time = "2025-10-10T03:54:28.174Z" },
    { url = "https://files.pythonhosted.org/packages/d5/72/ec8fc904a8fd30ba022dfa85f3bbc64c3c7cd75b669e24242c0658e22f3c/httptools-0.7.1-cp310-cp310-win_amd64.whl", hash = "sha256:cbf8317bfccf0fed3b5680c559d3459cccf1abe9039bfa159e62e391c7270568", size = 86173, upload-time = "2025-10-10T03:54:29.5Z" },
    { url = "https://files.pythonhosted.org/packages/9c/08/17e07e8d89ab8f343c134616d72eebfe03798835058e2ab579dcc8353c06/httptools-0.7.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:474d3b7ab469fefcca3697a10d11a32ee2b9573250206ba1e50d5980910da657", size = 206521, upload-time = "2025-10-10T03:54:31.002Z" },
    { url = "https://files.pythonhosted.org/packages/aa/06/c9c1b41ff52f16aee526fd10fbda99fa4787938aa776858ddc4a1ea825ec/httptools-0.7.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:a3c3b7366bb6c7b96bd72d0dbe7f7d5eead261361f013be5f6d9590465ea1c70", size = 110375, upload-time = "2025-10-10T03:54:31.941Z" },
    { url = "https://files.pythonhosted.org/packages/cc/cc/10935db22fda0ee34c76f047590ca0a8bd9de531406a3ccb10a90e12ea21/httptools-0.7.1-cp311-cp311-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:379b479408b8747f47f3b253326183d7c009a3936518cdb70db58cffd369d9df", size = 456621, upload-time = "2025-10-10T03:54:33.176Z" },
    { url = "https://files.pythonhosted.org/packages/0e/84/875382b10d271b0c11aa5d414b44f92f8dd53e9b658aec338a79164fa548/httptools-0.7.1-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:cad6b591a682dcc6cf1397c3900527f9affef1e55a06c4547264796bbd17cf5e", size = 454954, upload-time = "2025-10-10T03:54:34.226Z" },
    { url = "https://files.pythonhosted.org/packages/30/e1/44f89b280f7e46c0b1b2ccee5737d46b3bb13136383958f20b580a821ca0/httptools-0.7.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:eb844698d11433d2139bbeeb56499102143beb582bd6c194e3ba69c22f25c274", size = 440175, upload-time = "2025-10-10T03:54:35.942Z" },
    { url = "https://files.pythonhosted.org/packages/6f/7e/b9287763159e700e335028bc1824359dc736fa9b829dacedace91a39b37e/httptools-0.7.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:f65744d7a8bdb4bda5e1fa23e4ba16832860606fcc09d674d56e425e991539ec", size = 440310, upload-time = "2025-10-10T03:54:37.1Z" },
    { url = "https://files.pythonhosted.org/packages/b3/07/5b614f592868e07f5c94b1f301b5e14a21df4e8076215a3bccb830a687d8/httptools-0.7.1-cp311-cp311-win_amd64.whl", hash = "sha256:135fbe974b3718eada677229312e97f3b31f8a9c8ffa3ae6f565bf808d5b6bcb", size = 86875, upload-time = "2025-10-10T03:54:38.421Z" },
    { url = "https://files.pythonhosted.org/packages/53/7f/403e5d787dc4942316e515e949b0c8a013d84078a915910e9f391ba9b3ed/httptools-0.7.1-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:38e0c83a2ea9746ebbd643bdfb521b9aa4a91703e2cd705c20443405d2fd16a5", size = 206280, upload-time = "2025-10-10T03:54:39.274Z" },
    { url = "https://files.pythonhosted.org/packages/2a/0d/7f3fd28e2ce311ccc998c388dd1c53b18120fda3b70ebb022b135dc9839b/httptools-0.7.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:f25bbaf1235e27704f1a7b86cd3304eabc04f569c828101d94a0e605ef7205a5", size = 110004, upload-time = "2025-10-10T03:54:40.403Z" },
    { url = "https://files.pythonhosted.org/packages/84/a6/b3965e1e146ef5762870bbe76117876ceba51a201e18cc31f5703e454596/httptools-0.7.1-cp312-cp312-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:2c15f37ef679ab9ecc06bfc4e6e8628c32a8e4b305459de7cf6785acd57e4d03", size = 517655, upload-time = "2025-10-10T03:54:41.347Z" },
    { url = "https://files.pythonhosted.org/packages/11/7d/71fee6f1844e6fa378f2eddde6c3e41ce3a1fb4b2d81118dd544e3441ec0/httptools-0.7.1-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:7fe6e96090df46b36ccfaf746f03034e5ab723162bc51b0a4cf58305324036f2", size = 511440, upload-time = "2025-10-10T03:54:42.452Z" },
    { url = "https://files.pythonhosted.org/packages/22/a5/079d216712a4f3ffa24af4a0381b108aa9c45b7a5cc6eb141f81726b1823/httptools-0.7.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:f72fdbae2dbc6e68b8239defb48e6a5937b12218e6ffc2c7846cc37befa84362", size = 495186, upload-time = "2025-10-10T03:54:43.937Z" },
    { url = "https://files.pythonhosted.org/packages/e9/9e/025ad7b65278745dee3bd0ebf9314934c4592560878308a6121f7f812084/httptools-0.7.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:e99c7b90a29fd82fea9ef57943d501a16f3404d7b9ee81799d41639bdaae412c", size = 499192, upload-time = "2025-10-10T03:54:45.003Z" },
    { url = "https://files.pythonhosted.org/packages/6d/de/40a8f202b987d43afc4d54689600ff03ce65680ede2f31df348d7f368b8f/httptools-0.7.1-cp312-cp312-win_amd64.whl", hash = "sha256:3e14f530fefa7499334a79b0cf7e7cd2992870eb893526fb097d51b4f2d0f321", size = 86694, upload-time = "2025-10-10T03:54:45.923Z" },
    { url = "https://files.pythonhosted.org/packages/09/8f/c77b1fcbfd262d422f12da02feb0d218fa228d52485b77b953832105bb90/httptools-0.7.1-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:6babce6cfa2a99545c60bfef8bee0cc0545413cb0018f617c8059a30ad985de3", size = 202889, upload-time = "2025-10-10T03:54:47.089Z" },
    { url = "https://files.pythonhosted.org/packages/0a/1a/22887f53602feaa066354867bc49a68fc295c2293433177ee90870a7d517/httptools-0.7.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:601b7628de7504077dd3dcb3791c6b8694bbd967148a6d1f01806509254fb1ca", size = 108180, upload-time = "2025-10-10T03:54:48.052Z" },
    { url = "https://files.pythonhosted.org/packages/32/6a/6aaa91937f0010d288d3d124ca2946d48d60c3a5ee7ca62afe870e3ea011/httptools-0.7.1-cp313-cp313-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:04c6c0e6c5fb0739c5b8a9eb046d298650a0ff38cf42537fc372b28dc7e4472c", size = 478596, upload-time = "2025-10-10T03:54:48.919Z" },
    { url = "https://files.pythonhosted.org/packages/6d/70/023d7ce117993107be88d2cbca566a7c1323ccbaf0af7eabf2064fe356f6/httptools-0.7.1-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:69d4f9705c405ae3ee83d6a12283dc9feba8cc6aaec671b412917e644ab4fa66", size = 473268, upload-time = "2025-10-10T03:54:49.993Z" },
    { url = "https://files.pythonhosted.org/packages/32/4d/9dd616c38da088e3f436e9a616e1d0cc66544b8cdac405cc4e81c8679fc7/httptools-0.7.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:44c8f4347d4b31269c8a9205d8a5ee2df5322b09bbbd30f8f862185bb6b05346", size = 455517, upload-time = "2025-10-10T03:54:51.066Z" },
    { url = "https://files.pythonhosted.org/packages/1d/3a/a6c595c310b7df958e739aae88724e24f9246a514d909547778d776799be/httptools-0.7.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:465275d76db4d554918aba40bf1cbebe324670f3dfc979eaffaa5d108e2ed650", size = 458337, upload-time = "2025-10-10T03:54:52.196Z" },
    { url = "https://files.pythonhosted.org/packages/fd/82/88e8d6d2c51edc1cc391b6e044c6c435b6aebe97b1abc33db1b0b24cd582/httptools-0.7.1-cp313-cp313-win_amd64.whl", hash = "sha256:322d00c2068d125bd570f7bf78b2d367dad02b919d8581d7476d8b75b294e3e6", size = 85743, upload-time = "2025-10-10T03:54:53.448Z" },
    { url = "https://files.pythonhosted.org/packages/34/50/9d095fcbb6de2d523e027a2f304d4551855c2f46e0b82befd718b8b20056/httptools-0.7.1-cp314-cp314-macosx_10_13_universal2.whl", hash = "sha256:c08fe65728b8d70b6923ce31e3956f859d5e1e8548e6f22ec520a962c6757270", size = 203619, upload-time = "2025-10-10T03:54:54.321Z" },
    { url = "https://files.pythonhosted.org/packages/07/f0/89720dc5139ae54b03f861b5e2c55a37dba9a5da7d51e1e824a1f343627f/httptools-0.7.1-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:7aea2e3c3953521c3c51106ee11487a910d45586e351202474d45472db7d72d3", size = 108714, upload-time = "2025-10-10T03:54:55.163Z" },
    { url = "https://files.pythonhosted.org/packages/b3/cb/eea88506f191fb552c11787c23f9a405f4c7b0c5799bf73f2249cd4f5228/httptools-0.7.1-cp314-cp314-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:0e68b8582f4ea9166be62926077a3334064d422cf08ab87d8b74664f8e9058e1", size = 472909, upload-time = "2025-10-10T03:54:56.056Z" },
    { url = "https://files.pythonhosted.org/packages/e0/4a/a548bdfae6369c0d078bab5769f7b66f17f1bfaa6fa28f81d6be6959066b/httptools-0.7.1-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:df091cf961a3be783d6aebae963cc9b71e00d57fa6f149025075217bc6a55a7b", size = 470831, upload-time = "2025-10-10T03:54:57.219Z" },
    { url = "https://files.pythonhosted.org/packages/4d/31/14df99e1c43bd132eec921c2e7e11cda7852f65619bc0fc5bdc2d0cb126c/httptools-0.7.1-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:f084813239e1eb403ddacd06a30de3d3e09a9b76e7894dcda2b22f8a726e9c60", size = 452631, upload-time = "2025-10-10T03:54:58.219Z" },
    { url = "https://files.pythonhosted.org/packages/22/d2/b7e131f7be8d854d48cb6d048113c30f9a46dca0c9a8b08fcb3fcd588cdc/httptools-0.7.1-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:7347714368fb2b335e9063bc2b96f2f87a9ceffcd9758ac295f8bbcd3ffbc0ca", size = 452910, upload-time = "2025-10-10T03:54:59.366Z" },
    { url = "https://files.pythonhosted.org/packages/53/cf/878f3b91e4e6e011eff6d1fa9ca39f7eb17d19c9d7971b04873734112f30/httptools-0.7.1-cp314-cp314-win_amd64.whl", hash = "sha256:cfabda2a5bb85aa2a904ce06d974a3f30fb36cc63d7feaddec05d2050acede96", size = 88205, upload-time = "2025-10-10T03:55:00.389Z" },
]
[[package]]
name = "httpx"
version = "0.28.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
    { name = "anyio" },
    { name = "certifi" },
    { name = "httpcore" },
    { name = "idna" },
]
sdist = { url = "https://files.pythonhosted.org/packages/b1/df/48c586a5fe32a0f01324ee087459e112ebb7224f646c0b5023f5e79e9956/httpx-0.28.1.tar.gz", hash = "sha256:75e98c5f16b0f35b567856f597f06ff2270a374470a5c2392242528e3e3e42fc", size = 141406, upload-time = "2024-12-06T15:37:23.222Z" }
wheels = [
    { url = "https://files.pythonhosted.org/packages/2a/39/e50c7c3a983047577ee07d2a9e53faf5a69493943ec3f6a384bdc792deb2/httpx-0.28.1-py3-none-any.whl", hash = "sha256:d909fcccc110f8c7faf814ca82a9a4d816bc5a6dbfea25d6591d6985b8ba59ad", size = 73517, upload-time = "2024-12-06T15:37:21.509Z" },
]

[[package]]
name = "idna"
version = "3.11"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/6f/6d/0703ccc57f3a7233505399edb88de3cbd678da106337b9fcde432b65ed60/idna-3.11.tar.gz", hash = "sha256:795dafcc9c04ed0c1fb032c2aa73654d8e8c5023a7df64a53f39190ada629902", size = 194582, upload-time = "2025-10-12T14:55:20.501Z" }
wheels = [
    { url = "https://files.pythonhosted.org/packages/0e/61/66938bbb5fc52dbdf84594873d5b51fb1f7c7794e9c0f5bd885f30bc507b/idna-3.11-py3-none-any.whl", hash = "sha256:771a87f49d9defaf64091e6e6fe9c18d4833f140bd19464795bc32d966ca37ea", size = 71008, upload-time = "2025-10-12T14:55:18.883Z" },
]
[[package]]
name = "llm-council"
version = "0.1.0"
source = { virtual = "." }
dependencies = [
    { name = "fastapi" },
    { name = "httpx" },
    { name = "pydantic" },
    { name = "python-dotenv" },
    { name = "python-multipart" },
    { name = "uvicorn", extra = ["standard"] },
]

[package.metadata]
requires-dist = [
    { name = "fastapi", specifier = ">=0.115.0" },
    { name = "httpx", specifier = ">=0.27.0" },
    { name = "pydantic", specifier = ">=2.9.0" },
    { name = "python-dotenv", specifier = ">=1.0.0" },
    { name = "python-multipart", specifier = ">=0.0.9" },
    { name = "uvicorn", extras = ["standard"], specifier = ">=0.32.0" },
]
[[package]]
name = "pydantic"
version = "2.12.4"
source = { registry = "https://pypi.org/simple" }
dependencies = [
    { name = "annotated-types" },
    { name = "pydantic-core" },
    { name = "typing-extensions" },
    { name = "typing-inspection" },
]
sdist = { url = "https://files.pythonhosted.org/packages/96/ad/a17bc283d7d81837c061c49e3eaa27a45991759a1b7eae1031921c6bd924/pydantic-2.12.4.tar.gz", hash = "sha256:0f8cb9555000a4b5b617f66bfd2566264c4984b27589d3b845685983e8ea85ac", size = 821038, upload-time = "2025-11-05T10:50:08.59Z" }
wheels = [
    { url = "https://files.pythonhosted.org/packages/82/2f/e68750da9b04856e2a7ec56fc6f034a5a79775e9b9a81882252789873798/pydantic-2.12.4-py3-none-any.whl", hash = "sha256:92d3d202a745d46f9be6df459ac5a064fdaa3c1c4cd8adcfa332ccf3c05f871e", size = 463400, upload-time = "2025-11-05T10:50:06.732Z" },
]
[[package]]
|
||||
name = "pydantic-core"
|
||||
version = "2.41.5"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "typing-extensions" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/71/70/23b021c950c2addd24ec408e9ab05d59b035b39d97cdc1130e1bce647bb6/pydantic_core-2.41.5.tar.gz", hash = "sha256:08daa51ea16ad373ffd5e7606252cc32f07bc72b28284b6bc9c6df804816476e", size = 460952, upload-time = "2025-11-04T13:43:49.098Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/c6/90/32c9941e728d564b411d574d8ee0cf09b12ec978cb22b294995bae5549a5/pydantic_core-2.41.5-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:77b63866ca88d804225eaa4af3e664c5faf3568cea95360d21f4725ab6e07146", size = 2107298, upload-time = "2025-11-04T13:39:04.116Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/fb/a8/61c96a77fe28993d9a6fb0f4127e05430a267b235a124545d79fea46dd65/pydantic_core-2.41.5-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:dfa8a0c812ac681395907e71e1274819dec685fec28273a28905df579ef137e2", size = 1901475, upload-time = "2025-11-04T13:39:06.055Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/5d/b6/338abf60225acc18cdc08b4faef592d0310923d19a87fba1faf05af5346e/pydantic_core-2.41.5-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5921a4d3ca3aee735d9fd163808f5e8dd6c6972101e4adbda9a4667908849b97", size = 1918815, upload-time = "2025-11-04T13:39:10.41Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/d1/1c/2ed0433e682983d8e8cba9c8d8ef274d4791ec6a6f24c58935b90e780e0a/pydantic_core-2.41.5-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:e25c479382d26a2a41b7ebea1043564a937db462816ea07afa8a44c0866d52f9", size = 2065567, upload-time = "2025-11-04T13:39:12.244Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/b3/24/cf84974ee7d6eae06b9e63289b7b8f6549d416b5c199ca2d7ce13bbcf619/pydantic_core-2.41.5-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:f547144f2966e1e16ae626d8ce72b4cfa0caedc7fa28052001c94fb2fcaa1c52", size = 2230442, upload-time = "2025-11-04T13:39:13.962Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/fd/21/4e287865504b3edc0136c89c9c09431be326168b1eb7841911cbc877a995/pydantic_core-2.41.5-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:6f52298fbd394f9ed112d56f3d11aabd0d5bd27beb3084cc3d8ad069483b8941", size = 2350956, upload-time = "2025-11-04T13:39:15.889Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/a8/76/7727ef2ffa4b62fcab916686a68a0426b9b790139720e1934e8ba797e238/pydantic_core-2.41.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:100baa204bb412b74fe285fb0f3a385256dad1d1879f0a5cb1499ed2e83d132a", size = 2068253, upload-time = "2025-11-04T13:39:17.403Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/d5/8c/a4abfc79604bcb4c748e18975c44f94f756f08fb04218d5cb87eb0d3a63e/pydantic_core-2.41.5-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:05a2c8852530ad2812cb7914dc61a1125dc4e06252ee98e5638a12da6cc6fb6c", size = 2177050, upload-time = "2025-11-04T13:39:19.351Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/67/b1/de2e9a9a79b480f9cb0b6e8b6ba4c50b18d4e89852426364c66aa82bb7b3/pydantic_core-2.41.5-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:29452c56df2ed968d18d7e21f4ab0ac55e71dc59524872f6fc57dcf4a3249ed2", size = 2147178, upload-time = "2025-11-04T13:39:21Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/16/c1/dfb33f837a47b20417500efaa0378adc6635b3c79e8369ff7a03c494b4ac/pydantic_core-2.41.5-cp310-cp310-musllinux_1_1_armv7l.whl", hash = "sha256:d5160812ea7a8a2ffbe233d8da666880cad0cbaf5d4de74ae15c313213d62556", size = 2341833, upload-time = "2025-11-04T13:39:22.606Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/47/36/00f398642a0f4b815a9a558c4f1dca1b4020a7d49562807d7bc9ff279a6c/pydantic_core-2.41.5-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:df3959765b553b9440adfd3c795617c352154e497a4eaf3752555cfb5da8fc49", size = 2321156, upload-time = "2025-11-04T13:39:25.843Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/7e/70/cad3acd89fde2010807354d978725ae111ddf6d0ea46d1ea1775b5c1bd0c/pydantic_core-2.41.5-cp310-cp310-win32.whl", hash = "sha256:1f8d33a7f4d5a7889e60dc39856d76d09333d8a6ed0f5f1190635cbec70ec4ba", size = 1989378, upload-time = "2025-11-04T13:39:27.92Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/76/92/d338652464c6c367e5608e4488201702cd1cbb0f33f7b6a85a60fe5f3720/pydantic_core-2.41.5-cp310-cp310-win_amd64.whl", hash = "sha256:62de39db01b8d593e45871af2af9e497295db8d73b085f6bfd0b18c83c70a8f9", size = 2013622, upload-time = "2025-11-04T13:39:29.848Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/e8/72/74a989dd9f2084b3d9530b0915fdda64ac48831c30dbf7c72a41a5232db8/pydantic_core-2.41.5-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:a3a52f6156e73e7ccb0f8cced536adccb7042be67cb45f9562e12b319c119da6", size = 2105873, upload-time = "2025-11-04T13:39:31.373Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/12/44/37e403fd9455708b3b942949e1d7febc02167662bf1a7da5b78ee1ea2842/pydantic_core-2.41.5-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:7f3bf998340c6d4b0c9a2f02d6a400e51f123b59565d74dc60d252ce888c260b", size = 1899826, upload-time = "2025-11-04T13:39:32.897Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/33/7f/1d5cab3ccf44c1935a359d51a8a2a9e1a654b744b5e7f80d41b88d501eec/pydantic_core-2.41.5-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:378bec5c66998815d224c9ca994f1e14c0c21cb95d2f52b6021cc0b2a58f2a5a", size = 1917869, upload-time = "2025-11-04T13:39:34.469Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/6e/6a/30d94a9674a7fe4f4744052ed6c5e083424510be1e93da5bc47569d11810/pydantic_core-2.41.5-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:e7b576130c69225432866fe2f4a469a85a54ade141d96fd396dffcf607b558f8", size = 2063890, upload-time = "2025-11-04T13:39:36.053Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/50/be/76e5d46203fcb2750e542f32e6c371ffa9b8ad17364cf94bb0818dbfb50c/pydantic_core-2.41.5-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:6cb58b9c66f7e4179a2d5e0f849c48eff5c1fca560994d6eb6543abf955a149e", size = 2229740, upload-time = "2025-11-04T13:39:37.753Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/d3/ee/fed784df0144793489f87db310a6bbf8118d7b630ed07aa180d6067e653a/pydantic_core-2.41.5-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:88942d3a3dff3afc8288c21e565e476fc278902ae4d6d134f1eeda118cc830b1", size = 2350021, upload-time = "2025-11-04T13:39:40.94Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/c8/be/8fed28dd0a180dca19e72c233cbf58efa36df055e5b9d90d64fd1740b828/pydantic_core-2.41.5-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f31d95a179f8d64d90f6831d71fa93290893a33148d890ba15de25642c5d075b", size = 2066378, upload-time = "2025-11-04T13:39:42.523Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/b0/3b/698cf8ae1d536a010e05121b4958b1257f0b5522085e335360e53a6b1c8b/pydantic_core-2.41.5-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:c1df3d34aced70add6f867a8cf413e299177e0c22660cc767218373d0779487b", size = 2175761, upload-time = "2025-11-04T13:39:44.553Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/b8/ba/15d537423939553116dea94ce02f9c31be0fa9d0b806d427e0308ec17145/pydantic_core-2.41.5-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:4009935984bd36bd2c774e13f9a09563ce8de4abaa7226f5108262fa3e637284", size = 2146303, upload-time = "2025-11-04T13:39:46.238Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/58/7f/0de669bf37d206723795f9c90c82966726a2ab06c336deba4735b55af431/pydantic_core-2.41.5-cp311-cp311-musllinux_1_1_armv7l.whl", hash = "sha256:34a64bc3441dc1213096a20fe27e8e128bd3ff89921706e83c0b1ac971276594", size = 2340355, upload-time = "2025-11-04T13:39:48.002Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/e5/de/e7482c435b83d7e3c3ee5ee4451f6e8973cff0eb6007d2872ce6383f6398/pydantic_core-2.41.5-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:c9e19dd6e28fdcaa5a1de679aec4141f691023916427ef9bae8584f9c2fb3b0e", size = 2319875, upload-time = "2025-11-04T13:39:49.705Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/fe/e6/8c9e81bb6dd7560e33b9053351c29f30c8194b72f2d6932888581f503482/pydantic_core-2.41.5-cp311-cp311-win32.whl", hash = "sha256:2c010c6ded393148374c0f6f0bf89d206bf3217f201faa0635dcd56bd1520f6b", size = 1987549, upload-time = "2025-11-04T13:39:51.842Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/11/66/f14d1d978ea94d1bc21fc98fcf570f9542fe55bfcc40269d4e1a21c19bf7/pydantic_core-2.41.5-cp311-cp311-win_amd64.whl", hash = "sha256:76ee27c6e9c7f16f47db7a94157112a2f3a00e958bc626e2f4ee8bec5c328fbe", size = 2011305, upload-time = "2025-11-04T13:39:53.485Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/56/d8/0e271434e8efd03186c5386671328154ee349ff0354d83c74f5caaf096ed/pydantic_core-2.41.5-cp311-cp311-win_arm64.whl", hash = "sha256:4bc36bbc0b7584de96561184ad7f012478987882ebf9f9c389b23f432ea3d90f", size = 1972902, upload-time = "2025-11-04T13:39:56.488Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/5f/5d/5f6c63eebb5afee93bcaae4ce9a898f3373ca23df3ccaef086d0233a35a7/pydantic_core-2.41.5-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:f41a7489d32336dbf2199c8c0a215390a751c5b014c2c1c5366e817202e9cdf7", size = 2110990, upload-time = "2025-11-04T13:39:58.079Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/aa/32/9c2e8ccb57c01111e0fd091f236c7b371c1bccea0fa85247ac55b1e2b6b6/pydantic_core-2.41.5-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:070259a8818988b9a84a449a2a7337c7f430a22acc0859c6b110aa7212a6d9c0", size = 1896003, upload-time = "2025-11-04T13:39:59.956Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/68/b8/a01b53cb0e59139fbc9e4fda3e9724ede8de279097179be4ff31f1abb65a/pydantic_core-2.41.5-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e96cea19e34778f8d59fe40775a7a574d95816eb150850a85a7a4c8f4b94ac69", size = 1919200, upload-time = "2025-11-04T13:40:02.241Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/38/de/8c36b5198a29bdaade07b5985e80a233a5ac27137846f3bc2d3b40a47360/pydantic_core-2.41.5-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:ed2e99c456e3fadd05c991f8f437ef902e00eedf34320ba2b0842bd1c3ca3a75", size = 2052578, upload-time = "2025-11-04T13:40:04.401Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/00/b5/0e8e4b5b081eac6cb3dbb7e60a65907549a1ce035a724368c330112adfdd/pydantic_core-2.41.5-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:65840751b72fbfd82c3c640cff9284545342a4f1eb1586ad0636955b261b0b05", size = 2208504, upload-time = "2025-11-04T13:40:06.072Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/77/56/87a61aad59c7c5b9dc8caad5a41a5545cba3810c3e828708b3d7404f6cef/pydantic_core-2.41.5-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:e536c98a7626a98feb2d3eaf75944ef6f3dbee447e1f841eae16f2f0a72d8ddc", size = 2335816, upload-time = "2025-11-04T13:40:07.835Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/0d/76/941cc9f73529988688a665a5c0ecff1112b3d95ab48f81db5f7606f522d3/pydantic_core-2.41.5-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:eceb81a8d74f9267ef4081e246ffd6d129da5d87e37a77c9bde550cb04870c1c", size = 2075366, upload-time = "2025-11-04T13:40:09.804Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/d3/43/ebef01f69baa07a482844faaa0a591bad1ef129253ffd0cdaa9d8a7f72d3/pydantic_core-2.41.5-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:d38548150c39b74aeeb0ce8ee1d8e82696f4a4e16ddc6de7b1d8823f7de4b9b5", size = 2171698, upload-time = "2025-11-04T13:40:12.004Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/b1/87/41f3202e4193e3bacfc2c065fab7706ebe81af46a83d3e27605029c1f5a6/pydantic_core-2.41.5-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:c23e27686783f60290e36827f9c626e63154b82b116d7fe9adba1fda36da706c", size = 2132603, upload-time = "2025-11-04T13:40:13.868Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/49/7d/4c00df99cb12070b6bccdef4a195255e6020a550d572768d92cc54dba91a/pydantic_core-2.41.5-cp312-cp312-musllinux_1_1_armv7l.whl", hash = "sha256:482c982f814460eabe1d3bb0adfdc583387bd4691ef00b90575ca0d2b6fe2294", size = 2329591, upload-time = "2025-11-04T13:40:15.672Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/cc/6a/ebf4b1d65d458f3cda6a7335d141305dfa19bdc61140a884d165a8a1bbc7/pydantic_core-2.41.5-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:bfea2a5f0b4d8d43adf9d7b8bf019fb46fdd10a2e5cde477fbcb9d1fa08c68e1", size = 2319068, upload-time = "2025-11-04T13:40:17.532Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/49/3b/774f2b5cd4192d5ab75870ce4381fd89cf218af999515baf07e7206753f0/pydantic_core-2.41.5-cp312-cp312-win32.whl", hash = "sha256:b74557b16e390ec12dca509bce9264c3bbd128f8a2c376eaa68003d7f327276d", size = 1985908, upload-time = "2025-11-04T13:40:19.309Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/86/45/00173a033c801cacf67c190fef088789394feaf88a98a7035b0e40d53dc9/pydantic_core-2.41.5-cp312-cp312-win_amd64.whl", hash = "sha256:1962293292865bca8e54702b08a4f26da73adc83dd1fcf26fbc875b35d81c815", size = 2020145, upload-time = "2025-11-04T13:40:21.548Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/f9/22/91fbc821fa6d261b376a3f73809f907cec5ca6025642c463d3488aad22fb/pydantic_core-2.41.5-cp312-cp312-win_arm64.whl", hash = "sha256:1746d4a3d9a794cacae06a5eaaccb4b8643a131d45fbc9af23e353dc0a5ba5c3", size = 1976179, upload-time = "2025-11-04T13:40:23.393Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/87/06/8806241ff1f70d9939f9af039c6c35f2360cf16e93c2ca76f184e76b1564/pydantic_core-2.41.5-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:941103c9be18ac8daf7b7adca8228f8ed6bb7a1849020f643b3a14d15b1924d9", size = 2120403, upload-time = "2025-11-04T13:40:25.248Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/94/02/abfa0e0bda67faa65fef1c84971c7e45928e108fe24333c81f3bfe35d5f5/pydantic_core-2.41.5-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:112e305c3314f40c93998e567879e887a3160bb8689ef3d2c04b6cc62c33ac34", size = 1896206, upload-time = "2025-11-04T13:40:27.099Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/15/df/a4c740c0943e93e6500f9eb23f4ca7ec9bf71b19e608ae5b579678c8d02f/pydantic_core-2.41.5-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0cbaad15cb0c90aa221d43c00e77bb33c93e8d36e0bf74760cd00e732d10a6a0", size = 1919307, upload-time = "2025-11-04T13:40:29.806Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/9a/e3/6324802931ae1d123528988e0e86587c2072ac2e5394b4bc2bc34b61ff6e/pydantic_core-2.41.5-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:03ca43e12fab6023fc79d28ca6b39b05f794ad08ec2feccc59a339b02f2b3d33", size = 2063258, upload-time = "2025-11-04T13:40:33.544Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/c9/d4/2230d7151d4957dd79c3044ea26346c148c98fbf0ee6ebd41056f2d62ab5/pydantic_core-2.41.5-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:dc799088c08fa04e43144b164feb0c13f9a0bc40503f8df3e9fde58a3c0c101e", size = 2214917, upload-time = "2025-11-04T13:40:35.479Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/e6/9f/eaac5df17a3672fef0081b6c1bb0b82b33ee89aa5cec0d7b05f52fd4a1fa/pydantic_core-2.41.5-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:97aeba56665b4c3235a0e52b2c2f5ae9cd071b8a8310ad27bddb3f7fb30e9aa2", size = 2332186, upload-time = "2025-11-04T13:40:37.436Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/cf/4e/35a80cae583a37cf15604b44240e45c05e04e86f9cfd766623149297e971/pydantic_core-2.41.5-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:406bf18d345822d6c21366031003612b9c77b3e29ffdb0f612367352aab7d586", size = 2073164, upload-time = "2025-11-04T13:40:40.289Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/bf/e3/f6e262673c6140dd3305d144d032f7bd5f7497d3871c1428521f19f9efa2/pydantic_core-2.41.5-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:b93590ae81f7010dbe380cdeab6f515902ebcbefe0b9327cc4804d74e93ae69d", size = 2179146, upload-time = "2025-11-04T13:40:42.809Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/75/c7/20bd7fc05f0c6ea2056a4565c6f36f8968c0924f19b7d97bbfea55780e73/pydantic_core-2.41.5-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:01a3d0ab748ee531f4ea6c3e48ad9dac84ddba4b0d82291f87248f2f9de8d740", size = 2137788, upload-time = "2025-11-04T13:40:44.752Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/3a/8d/34318ef985c45196e004bc46c6eab2eda437e744c124ef0dbe1ff2c9d06b/pydantic_core-2.41.5-cp313-cp313-musllinux_1_1_armv7l.whl", hash = "sha256:6561e94ba9dacc9c61bce40e2d6bdc3bfaa0259d3ff36ace3b1e6901936d2e3e", size = 2340133, upload-time = "2025-11-04T13:40:46.66Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/9c/59/013626bf8c78a5a5d9350d12e7697d3d4de951a75565496abd40ccd46bee/pydantic_core-2.41.5-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:915c3d10f81bec3a74fbd4faebe8391013ba61e5a1a8d48c4455b923bdda7858", size = 2324852, upload-time = "2025-11-04T13:40:48.575Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/1a/d9/c248c103856f807ef70c18a4f986693a46a8ffe1602e5d361485da502d20/pydantic_core-2.41.5-cp313-cp313-win32.whl", hash = "sha256:650ae77860b45cfa6e2cdafc42618ceafab3a2d9a3811fcfbd3bbf8ac3c40d36", size = 1994679, upload-time = "2025-11-04T13:40:50.619Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/9e/8b/341991b158ddab181cff136acd2552c9f35bd30380422a639c0671e99a91/pydantic_core-2.41.5-cp313-cp313-win_amd64.whl", hash = "sha256:79ec52ec461e99e13791ec6508c722742ad745571f234ea6255bed38c6480f11", size = 2019766, upload-time = "2025-11-04T13:40:52.631Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/73/7d/f2f9db34af103bea3e09735bb40b021788a5e834c81eedb541991badf8f5/pydantic_core-2.41.5-cp313-cp313-win_arm64.whl", hash = "sha256:3f84d5c1b4ab906093bdc1ff10484838aca54ef08de4afa9de0f5f14d69639cd", size = 1981005, upload-time = "2025-11-04T13:40:54.734Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/ea/28/46b7c5c9635ae96ea0fbb779e271a38129df2550f763937659ee6c5dbc65/pydantic_core-2.41.5-cp314-cp314-macosx_10_12_x86_64.whl", hash = "sha256:3f37a19d7ebcdd20b96485056ba9e8b304e27d9904d233d7b1015db320e51f0a", size = 2119622, upload-time = "2025-11-04T13:40:56.68Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/74/1a/145646e5687e8d9a1e8d09acb278c8535ebe9e972e1f162ed338a622f193/pydantic_core-2.41.5-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:1d1d9764366c73f996edd17abb6d9d7649a7eb690006ab6adbda117717099b14", size = 1891725, upload-time = "2025-11-04T13:40:58.807Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/23/04/e89c29e267b8060b40dca97bfc64a19b2a3cf99018167ea1677d96368273/pydantic_core-2.41.5-cp314-cp314-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:25e1c2af0fce638d5f1988b686f3b3ea8cd7de5f244ca147c777769e798a9cd1", size = 1915040, upload-time = "2025-11-04T13:41:00.853Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/84/a3/15a82ac7bd97992a82257f777b3583d3e84bdb06ba6858f745daa2ec8a85/pydantic_core-2.41.5-cp314-cp314-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:506d766a8727beef16b7adaeb8ee6217c64fc813646b424d0804d67c16eddb66", size = 2063691, upload-time = "2025-11-04T13:41:03.504Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/74/9b/0046701313c6ef08c0c1cf0e028c67c770a4e1275ca73131563c5f2a310a/pydantic_core-2.41.5-cp314-cp314-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4819fa52133c9aa3c387b3328f25c1facc356491e6135b459f1de698ff64d869", size = 2213897, upload-time = "2025-11-04T13:41:05.804Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/8a/cd/6bac76ecd1b27e75a95ca3a9a559c643b3afcd2dd62086d4b7a32a18b169/pydantic_core-2.41.5-cp314-cp314-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2b761d210c9ea91feda40d25b4efe82a1707da2ef62901466a42492c028553a2", size = 2333302, upload-time = "2025-11-04T13:41:07.809Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/4c/d2/ef2074dc020dd6e109611a8be4449b98cd25e1b9b8a303c2f0fca2f2bcf7/pydantic_core-2.41.5-cp314-cp314-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:22f0fb8c1c583a3b6f24df2470833b40207e907b90c928cc8d3594b76f874375", size = 2064877, upload-time = "2025-11-04T13:41:09.827Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/18/66/e9db17a9a763d72f03de903883c057b2592c09509ccfe468187f2a2eef29/pydantic_core-2.41.5-cp314-cp314-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:2782c870e99878c634505236d81e5443092fba820f0373997ff75f90f68cd553", size = 2180680, upload-time = "2025-11-04T13:41:12.379Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/d3/9e/3ce66cebb929f3ced22be85d4c2399b8e85b622db77dad36b73c5387f8f8/pydantic_core-2.41.5-cp314-cp314-musllinux_1_1_aarch64.whl", hash = "sha256:0177272f88ab8312479336e1d777f6b124537d47f2123f89cb37e0accea97f90", size = 2138960, upload-time = "2025-11-04T13:41:14.627Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/a6/62/205a998f4327d2079326b01abee48e502ea739d174f0a89295c481a2272e/pydantic_core-2.41.5-cp314-cp314-musllinux_1_1_armv7l.whl", hash = "sha256:63510af5e38f8955b8ee5687740d6ebf7c2a0886d15a6d65c32814613681bc07", size = 2339102, upload-time = "2025-11-04T13:41:16.868Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/3c/0d/f05e79471e889d74d3d88f5bd20d0ed189ad94c2423d81ff8d0000aab4ff/pydantic_core-2.41.5-cp314-cp314-musllinux_1_1_x86_64.whl", hash = "sha256:e56ba91f47764cc14f1daacd723e3e82d1a89d783f0f5afe9c364b8bb491ccdb", size = 2326039, upload-time = "2025-11-04T13:41:18.934Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/ec/e1/e08a6208bb100da7e0c4b288eed624a703f4d129bde2da475721a80cab32/pydantic_core-2.41.5-cp314-cp314-win32.whl", hash = "sha256:aec5cf2fd867b4ff45b9959f8b20ea3993fc93e63c7363fe6851424c8a7e7c23", size = 1995126, upload-time = "2025-11-04T13:41:21.418Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/48/5d/56ba7b24e9557f99c9237e29f5c09913c81eeb2f3217e40e922353668092/pydantic_core-2.41.5-cp314-cp314-win_amd64.whl", hash = "sha256:8e7c86f27c585ef37c35e56a96363ab8de4e549a95512445b85c96d3e2f7c1bf", size = 2015489, upload-time = "2025-11-04T13:41:24.076Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/4e/bb/f7a190991ec9e3e0ba22e4993d8755bbc4a32925c0b5b42775c03e8148f9/pydantic_core-2.41.5-cp314-cp314-win_arm64.whl", hash = "sha256:e672ba74fbc2dc8eea59fb6d4aed6845e6905fc2a8afe93175d94a83ba2a01a0", size = 1977288, upload-time = "2025-11-04T13:41:26.33Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/92/ed/77542d0c51538e32e15afe7899d79efce4b81eee631d99850edc2f5e9349/pydantic_core-2.41.5-cp314-cp314t-macosx_10_12_x86_64.whl", hash = "sha256:8566def80554c3faa0e65ac30ab0932b9e3a5cd7f8323764303d468e5c37595a", size = 2120255, upload-time = "2025-11-04T13:41:28.569Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/bb/3d/6913dde84d5be21e284439676168b28d8bbba5600d838b9dca99de0fad71/pydantic_core-2.41.5-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:b80aa5095cd3109962a298ce14110ae16b8c1aece8b72f9dafe81cf597ad80b3", size = 1863760, upload-time = "2025-11-04T13:41:31.055Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/5a/f0/e5e6b99d4191da102f2b0eb9687aaa7f5bea5d9964071a84effc3e40f997/pydantic_core-2.41.5-cp314-cp314t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3006c3dd9ba34b0c094c544c6006cc79e87d8612999f1a5d43b769b89181f23c", size = 1878092, upload-time = "2025-11-04T13:41:33.21Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/71/48/36fb760642d568925953bcc8116455513d6e34c4beaa37544118c36aba6d/pydantic_core-2.41.5-cp314-cp314t-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:72f6c8b11857a856bcfa48c86f5368439f74453563f951e473514579d44aa612", size = 2053385, upload-time = "2025-11-04T13:41:35.508Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/20/25/92dc684dd8eb75a234bc1c764b4210cf2646479d54b47bf46061657292a8/pydantic_core-2.41.5-cp314-cp314t-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5cb1b2f9742240e4bb26b652a5aeb840aa4b417c7748b6f8387927bc6e45e40d", size = 2218832, upload-time = "2025-11-04T13:41:37.732Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/e2/09/f53e0b05023d3e30357d82eb35835d0f6340ca344720a4599cd663dca599/pydantic_core-2.41.5-cp314-cp314t-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:bd3d54f38609ff308209bd43acea66061494157703364ae40c951f83ba99a1a9", size = 2327585, upload-time = "2025-11-04T13:41:40Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/aa/4e/2ae1aa85d6af35a39b236b1b1641de73f5a6ac4d5a7509f77b814885760c/pydantic_core-2.41.5-cp314-cp314t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2ff4321e56e879ee8d2a879501c8e469414d948f4aba74a2d4593184eb326660", size = 2041078, upload-time = "2025-11-04T13:41:42.323Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/cd/13/2e215f17f0ef326fc72afe94776edb77525142c693767fc347ed6288728d/pydantic_core-2.41.5-cp314-cp314t-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:d0d2568a8c11bf8225044aa94409e21da0cb09dcdafe9ecd10250b2baad531a9", size = 2173914, upload-time = "2025-11-04T13:41:45.221Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/02/7a/f999a6dcbcd0e5660bc348a3991c8915ce6599f4f2c6ac22f01d7a10816c/pydantic_core-2.41.5-cp314-cp314t-musllinux_1_1_aarch64.whl", hash = "sha256:a39455728aabd58ceabb03c90e12f71fd30fa69615760a075b9fec596456ccc3", size = 2129560, upload-time = "2025-11-04T13:41:47.474Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/3a/b1/6c990ac65e3b4c079a4fb9f5b05f5b013afa0f4ed6780a3dd236d2cbdc64/pydantic_core-2.41.5-cp314-cp314t-musllinux_1_1_armv7l.whl", hash = "sha256:239edca560d05757817c13dc17c50766136d21f7cd0fac50295499ae24f90fdf", size = 2329244, upload-time = "2025-11-04T13:41:49.992Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/d9/02/3c562f3a51afd4d88fff8dffb1771b30cfdfd79befd9883ee094f5b6c0d8/pydantic_core-2.41.5-cp314-cp314t-musllinux_1_1_x86_64.whl", hash = "sha256:2a5e06546e19f24c6a96a129142a75cee553cc018ffee48a460059b1185f4470", size = 2331955, upload-time = "2025-11-04T13:41:54.079Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/5c/96/5fb7d8c3c17bc8c62fdb031c47d77a1af698f1d7a406b0f79aaa1338f9ad/pydantic_core-2.41.5-cp314-cp314t-win32.whl", hash = "sha256:b4ececa40ac28afa90871c2cc2b9ffd2ff0bf749380fbdf57d165fd23da353aa", size = 1988906, upload-time = "2025-11-04T13:41:56.606Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/22/ed/182129d83032702912c2e2d8bbe33c036f342cc735737064668585dac28f/pydantic_core-2.41.5-cp314-cp314t-win_amd64.whl", hash = "sha256:80aa89cad80b32a912a65332f64a4450ed00966111b6615ca6816153d3585a8c", size = 1981607, upload-time = "2025-11-04T13:41:58.889Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/9f/ed/068e41660b832bb0b1aa5b58011dea2a3fe0ba7861ff38c4d4904c1c1a99/pydantic_core-2.41.5-cp314-cp314t-win_arm64.whl", hash = "sha256:35b44f37a3199f771c3eaa53051bc8a70cd7b54f333531c59e29fd4db5d15008", size = 1974769, upload-time = "2025-11-04T13:42:01.186Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/11/72/90fda5ee3b97e51c494938a4a44c3a35a9c96c19bba12372fb9c634d6f57/pydantic_core-2.41.5-graalpy311-graalpy242_311_native-macosx_10_12_x86_64.whl", hash = "sha256:b96d5f26b05d03cc60f11a7761a5ded1741da411e7fe0909e27a5e6a0cb7b034", size = 2115441, upload-time = "2025-11-04T13:42:39.557Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/1f/53/8942f884fa33f50794f119012dc6a1a02ac43a56407adaac20463df8e98f/pydantic_core-2.41.5-graalpy311-graalpy242_311_native-macosx_11_0_arm64.whl", hash = "sha256:634e8609e89ceecea15e2d61bc9ac3718caaaa71963717bf3c8f38bfde64242c", size = 1930291, upload-time = "2025-11-04T13:42:42.169Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/79/c8/ecb9ed9cd942bce09fc888ee960b52654fbdbede4ba6c2d6e0d3b1d8b49c/pydantic_core-2.41.5-graalpy311-graalpy242_311_native-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:93e8740d7503eb008aa2df04d3b9735f845d43ae845e6dcd2be0b55a2da43cd2", size = 1948632, upload-time = "2025-11-04T13:42:44.564Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/2e/1b/687711069de7efa6af934e74f601e2a4307365e8fdc404703afc453eab26/pydantic_core-2.41.5-graalpy311-graalpy242_311_native-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f15489ba13d61f670dcc96772e733aad1a6f9c429cc27574c6cdaed82d0146ad", size = 2138905, upload-time = "2025-11-04T13:42:47.156Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/09/32/59b0c7e63e277fa7911c2fc70ccfb45ce4b98991e7ef37110663437005af/pydantic_core-2.41.5-graalpy312-graalpy250_312_native-macosx_10_12_x86_64.whl", hash = "sha256:7da7087d756b19037bc2c06edc6c170eeef3c3bafcb8f532ff17d64dc427adfd", size = 2110495, upload-time = "2025-11-04T13:42:49.689Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/aa/81/05e400037eaf55ad400bcd318c05bb345b57e708887f07ddb2d20e3f0e98/pydantic_core-2.41.5-graalpy312-graalpy250_312_native-macosx_11_0_arm64.whl", hash = "sha256:aabf5777b5c8ca26f7824cb4a120a740c9588ed58df9b2d196ce92fba42ff8dc", size = 1915388, upload-time = "2025-11-04T13:42:52.215Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/6e/0d/e3549b2399f71d56476b77dbf3cf8937cec5cd70536bdc0e374a421d0599/pydantic_core-2.41.5-graalpy312-graalpy250_312_native-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c007fe8a43d43b3969e8469004e9845944f1a80e6acd47c150856bb87f230c56", size = 1942879, upload-time = "2025-11-04T13:42:56.483Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/f7/07/34573da085946b6a313d7c42f82f16e8920bfd730665de2d11c0c37a74b5/pydantic_core-2.41.5-graalpy312-graalpy250_312_native-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:76d0819de158cd855d1cbb8fcafdf6f5cf1eb8e470abe056d5d161106e38062b", size = 2139017, upload-time = "2025-11-04T13:42:59.471Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/e6/b0/1a2aa41e3b5a4ba11420aba2d091b2d17959c8d1519ece3627c371951e73/pydantic_core-2.41.5-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:b5819cd790dbf0c5eb9f82c73c16b39a65dd6dd4d1439dcdea7816ec9adddab8", size = 2103351, upload-time = "2025-11-04T13:43:02.058Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/a4/ee/31b1f0020baaf6d091c87900ae05c6aeae101fa4e188e1613c80e4f1ea31/pydantic_core-2.41.5-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:5a4e67afbc95fa5c34cf27d9089bca7fcab4e51e57278d710320a70b956d1b9a", size = 1925363, upload-time = "2025-11-04T13:43:05.159Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/e1/89/ab8e86208467e467a80deaca4e434adac37b10a9d134cd2f99b28a01e483/pydantic_core-2.41.5-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ece5c59f0ce7d001e017643d8d24da587ea1f74f6993467d85ae8a5ef9d4f42b", size = 2135615, upload-time = "2025-11-04T13:43:08.116Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/99/0a/99a53d06dd0348b2008f2f30884b34719c323f16c3be4e6cc1203b74a91d/pydantic_core-2.41.5-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:16f80f7abe3351f8ea6858914ddc8c77e02578544a0ebc15b4c2e1a0e813b0b2", size = 2175369, upload-time = "2025-11-04T13:43:12.49Z" },
{ url = "https://files.pythonhosted.org/packages/6d/94/30ca3b73c6d485b9bb0bc66e611cff4a7138ff9736b7e66bcf0852151636/pydantic_core-2.41.5-pp310-pypy310_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:33cb885e759a705b426baada1fe68cbb0a2e68e34c5d0d0289a364cf01709093", size = 2144218, upload-time = "2025-11-04T13:43:15.431Z" },
{ url = "https://files.pythonhosted.org/packages/87/57/31b4f8e12680b739a91f472b5671294236b82586889ef764b5fbc6669238/pydantic_core-2.41.5-pp310-pypy310_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:c8d8b4eb992936023be7dee581270af5c6e0697a8559895f527f5b7105ecd36a", size = 2329951, upload-time = "2025-11-04T13:43:18.062Z" },
{ url = "https://files.pythonhosted.org/packages/7d/73/3c2c8edef77b8f7310e6fb012dbc4b8551386ed575b9eb6fb2506e28a7eb/pydantic_core-2.41.5-pp310-pypy310_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:242a206cd0318f95cd21bdacff3fcc3aab23e79bba5cac3db5a841c9ef9c6963", size = 2318428, upload-time = "2025-11-04T13:43:20.679Z" },
{ url = "https://files.pythonhosted.org/packages/2f/02/8559b1f26ee0d502c74f9cca5c0d2fd97e967e083e006bbbb4e97f3a043a/pydantic_core-2.41.5-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:d3a978c4f57a597908b7e697229d996d77a6d3c94901e9edee593adada95ce1a", size = 2147009, upload-time = "2025-11-04T13:43:23.286Z" },
{ url = "https://files.pythonhosted.org/packages/5f/9b/1b3f0e9f9305839d7e84912f9e8bfbd191ed1b1ef48083609f0dabde978c/pydantic_core-2.41.5-pp311-pypy311_pp73-macosx_10_12_x86_64.whl", hash = "sha256:b2379fa7ed44ddecb5bfe4e48577d752db9fc10be00a6b7446e9663ba143de26", size = 2101980, upload-time = "2025-11-04T13:43:25.97Z" },
{ url = "https://files.pythonhosted.org/packages/a4/ed/d71fefcb4263df0da6a85b5d8a7508360f2f2e9b3bf5814be9c8bccdccc1/pydantic_core-2.41.5-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:266fb4cbf5e3cbd0b53669a6d1b039c45e3ce651fd5442eff4d07c2cc8d66808", size = 1923865, upload-time = "2025-11-04T13:43:28.763Z" },
{ url = "https://files.pythonhosted.org/packages/ce/3a/626b38db460d675f873e4444b4bb030453bbe7b4ba55df821d026a0493c4/pydantic_core-2.41.5-pp311-pypy311_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:58133647260ea01e4d0500089a8c4f07bd7aa6ce109682b1426394988d8aaacc", size = 2134256, upload-time = "2025-11-04T13:43:31.71Z" },
{ url = "https://files.pythonhosted.org/packages/83/d9/8412d7f06f616bbc053d30cb4e5f76786af3221462ad5eee1f202021eb4e/pydantic_core-2.41.5-pp311-pypy311_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:287dad91cfb551c363dc62899a80e9e14da1f0e2b6ebde82c806612ca2a13ef1", size = 2174762, upload-time = "2025-11-04T13:43:34.744Z" },
{ url = "https://files.pythonhosted.org/packages/55/4c/162d906b8e3ba3a99354e20faa1b49a85206c47de97a639510a0e673f5da/pydantic_core-2.41.5-pp311-pypy311_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:03b77d184b9eb40240ae9fd676ca364ce1085f203e1b1256f8ab9984dca80a84", size = 2143141, upload-time = "2025-11-04T13:43:37.701Z" },
{ url = "https://files.pythonhosted.org/packages/1f/f2/f11dd73284122713f5f89fc940f370d035fa8e1e078d446b3313955157fe/pydantic_core-2.41.5-pp311-pypy311_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:a668ce24de96165bb239160b3d854943128f4334822900534f2fe947930e5770", size = 2330317, upload-time = "2025-11-04T13:43:40.406Z" },
{ url = "https://files.pythonhosted.org/packages/88/9d/b06ca6acfe4abb296110fb1273a4d848a0bfb2ff65f3ee92127b3244e16b/pydantic_core-2.41.5-pp311-pypy311_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:f14f8f046c14563f8eb3f45f499cc658ab8d10072961e07225e507adb700e93f", size = 2316992, upload-time = "2025-11-04T13:43:43.602Z" },
{ url = "https://files.pythonhosted.org/packages/36/c7/cfc8e811f061c841d7990b0201912c3556bfeb99cdcb7ed24adc8d6f8704/pydantic_core-2.41.5-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:56121965f7a4dc965bff783d70b907ddf3d57f6eba29b6d2e5dabfaf07799c51", size = 2145302, upload-time = "2025-11-04T13:43:46.64Z" },
]
[[package]]
name = "python-dotenv"
version = "1.2.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/f0/26/19cadc79a718c5edbec86fd4919a6b6d3f681039a2f6d66d14be94e75fb9/python_dotenv-1.2.1.tar.gz", hash = "sha256:42667e897e16ab0d66954af0e60a9caa94f0fd4ecf3aaf6d2d260eec1aa36ad6", size = 44221, upload-time = "2025-10-26T15:12:10.434Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/14/1b/a298b06749107c305e1fe0f814c6c74aea7b2f1e10989cb30f544a1b3253/python_dotenv-1.2.1-py3-none-any.whl", hash = "sha256:b81ee9561e9ca4004139c6cbba3a238c32b03e4894671e181b671e8cb8425d61", size = 21230, upload-time = "2025-10-26T15:12:09.109Z" },
]
[[package]]
name = "python-multipart"
version = "0.0.21"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/78/96/804520d0850c7db98e5ccb70282e29208723f0964e88ffd9d0da2f52ea09/python_multipart-0.0.21.tar.gz", hash = "sha256:7137ebd4d3bbf70ea1622998f902b97a29434a9e8dc40eb203bbcf7c2a2cba92", size = 37196, upload-time = "2025-12-17T09:24:22.446Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/aa/76/03af049af4dcee5d27442f71b6924f01f3efb5d2bd34f23fcd563f2cc5f5/python_multipart-0.0.21-py3-none-any.whl", hash = "sha256:cf7a6713e01c87aa35387f4774e812c4361150938d20d232800f75ffcf266090", size = 24541, upload-time = "2025-12-17T09:24:21.153Z" },
]
[[package]]
name = "pyyaml"
version = "6.0.3"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/05/8e/961c0007c59b8dd7729d542c61a4d537767a59645b82a0b521206e1e25c2/pyyaml-6.0.3.tar.gz", hash = "sha256:d76623373421df22fb4cf8817020cbb7ef15c725b9d5e45f17e189bfc384190f", size = 130960, upload-time = "2025-09-25T21:33:16.546Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/f4/a0/39350dd17dd6d6c6507025c0e53aef67a9293a6d37d3511f23ea510d5800/pyyaml-6.0.3-cp310-cp310-macosx_10_13_x86_64.whl", hash = "sha256:214ed4befebe12df36bcc8bc2b64b396ca31be9304b8f59e25c11cf94a4c033b", size = 184227, upload-time = "2025-09-25T21:31:46.04Z" },
{ url = "https://files.pythonhosted.org/packages/05/14/52d505b5c59ce73244f59c7a50ecf47093ce4765f116cdb98286a71eeca2/pyyaml-6.0.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:02ea2dfa234451bbb8772601d7b8e426c2bfa197136796224e50e35a78777956", size = 174019, upload-time = "2025-09-25T21:31:47.706Z" },
{ url = "https://files.pythonhosted.org/packages/43/f7/0e6a5ae5599c838c696adb4e6330a59f463265bfa1e116cfd1fbb0abaaae/pyyaml-6.0.3-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:b30236e45cf30d2b8e7b3e85881719e98507abed1011bf463a8fa23e9c3e98a8", size = 740646, upload-time = "2025-09-25T21:31:49.21Z" },
{ url = "https://files.pythonhosted.org/packages/2f/3a/61b9db1d28f00f8fd0ae760459a5c4bf1b941baf714e207b6eb0657d2578/pyyaml-6.0.3-cp310-cp310-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:66291b10affd76d76f54fad28e22e51719ef9ba22b29e1d7d03d6777a9174198", size = 840793, upload-time = "2025-09-25T21:31:50.735Z" },
{ url = "https://files.pythonhosted.org/packages/7a/1e/7acc4f0e74c4b3d9531e24739e0ab832a5edf40e64fbae1a9c01941cabd7/pyyaml-6.0.3-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:9c7708761fccb9397fe64bbc0395abcae8c4bf7b0eac081e12b809bf47700d0b", size = 770293, upload-time = "2025-09-25T21:31:51.828Z" },
{ url = "https://files.pythonhosted.org/packages/8b/ef/abd085f06853af0cd59fa5f913d61a8eab65d7639ff2a658d18a25d6a89d/pyyaml-6.0.3-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:418cf3f2111bc80e0933b2cd8cd04f286338bb88bdc7bc8e6dd775ebde60b5e0", size = 732872, upload-time = "2025-09-25T21:31:53.282Z" },
{ url = "https://files.pythonhosted.org/packages/1f/15/2bc9c8faf6450a8b3c9fc5448ed869c599c0a74ba2669772b1f3a0040180/pyyaml-6.0.3-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:5e0b74767e5f8c593e8c9b5912019159ed0533c70051e9cce3e8b6aa699fcd69", size = 758828, upload-time = "2025-09-25T21:31:54.807Z" },
{ url = "https://files.pythonhosted.org/packages/a3/00/531e92e88c00f4333ce359e50c19b8d1de9fe8d581b1534e35ccfbc5f393/pyyaml-6.0.3-cp310-cp310-win32.whl", hash = "sha256:28c8d926f98f432f88adc23edf2e6d4921ac26fb084b028c733d01868d19007e", size = 142415, upload-time = "2025-09-25T21:31:55.885Z" },
{ url = "https://files.pythonhosted.org/packages/2a/fa/926c003379b19fca39dd4634818b00dec6c62d87faf628d1394e137354d4/pyyaml-6.0.3-cp310-cp310-win_amd64.whl", hash = "sha256:bdb2c67c6c1390b63c6ff89f210c8fd09d9a1217a465701eac7316313c915e4c", size = 158561, upload-time = "2025-09-25T21:31:57.406Z" },
{ url = "https://files.pythonhosted.org/packages/6d/16/a95b6757765b7b031c9374925bb718d55e0a9ba8a1b6a12d25962ea44347/pyyaml-6.0.3-cp311-cp311-macosx_10_13_x86_64.whl", hash = "sha256:44edc647873928551a01e7a563d7452ccdebee747728c1080d881d68af7b997e", size = 185826, upload-time = "2025-09-25T21:31:58.655Z" },
{ url = "https://files.pythonhosted.org/packages/16/19/13de8e4377ed53079ee996e1ab0a9c33ec2faf808a4647b7b4c0d46dd239/pyyaml-6.0.3-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:652cb6edd41e718550aad172851962662ff2681490a8a711af6a4d288dd96824", size = 175577, upload-time = "2025-09-25T21:32:00.088Z" },
{ url = "https://files.pythonhosted.org/packages/0c/62/d2eb46264d4b157dae1275b573017abec435397aa59cbcdab6fc978a8af4/pyyaml-6.0.3-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:10892704fc220243f5305762e276552a0395f7beb4dbf9b14ec8fd43b57f126c", size = 775556, upload-time = "2025-09-25T21:32:01.31Z" },
{ url = "https://files.pythonhosted.org/packages/10/cb/16c3f2cf3266edd25aaa00d6c4350381c8b012ed6f5276675b9eba8d9ff4/pyyaml-6.0.3-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:850774a7879607d3a6f50d36d04f00ee69e7fc816450e5f7e58d7f17f1ae5c00", size = 882114, upload-time = "2025-09-25T21:32:03.376Z" },
{ url = "https://files.pythonhosted.org/packages/71/60/917329f640924b18ff085ab889a11c763e0b573da888e8404ff486657602/pyyaml-6.0.3-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:b8bb0864c5a28024fac8a632c443c87c5aa6f215c0b126c449ae1a150412f31d", size = 806638, upload-time = "2025-09-25T21:32:04.553Z" },
{ url = "https://files.pythonhosted.org/packages/dd/6f/529b0f316a9fd167281a6c3826b5583e6192dba792dd55e3203d3f8e655a/pyyaml-6.0.3-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:1d37d57ad971609cf3c53ba6a7e365e40660e3be0e5175fa9f2365a379d6095a", size = 767463, upload-time = "2025-09-25T21:32:06.152Z" },
{ url = "https://files.pythonhosted.org/packages/f2/6a/b627b4e0c1dd03718543519ffb2f1deea4a1e6d42fbab8021936a4d22589/pyyaml-6.0.3-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:37503bfbfc9d2c40b344d06b2199cf0e96e97957ab1c1b546fd4f87e53e5d3e4", size = 794986, upload-time = "2025-09-25T21:32:07.367Z" },
{ url = "https://files.pythonhosted.org/packages/45/91/47a6e1c42d9ee337c4839208f30d9f09caa9f720ec7582917b264defc875/pyyaml-6.0.3-cp311-cp311-win32.whl", hash = "sha256:8098f252adfa6c80ab48096053f512f2321f0b998f98150cea9bd23d83e1467b", size = 142543, upload-time = "2025-09-25T21:32:08.95Z" },
{ url = "https://files.pythonhosted.org/packages/da/e3/ea007450a105ae919a72393cb06f122f288ef60bba2dc64b26e2646fa315/pyyaml-6.0.3-cp311-cp311-win_amd64.whl", hash = "sha256:9f3bfb4965eb874431221a3ff3fdcddc7e74e3b07799e0e84ca4a0f867d449bf", size = 158763, upload-time = "2025-09-25T21:32:09.96Z" },
{ url = "https://files.pythonhosted.org/packages/d1/33/422b98d2195232ca1826284a76852ad5a86fe23e31b009c9886b2d0fb8b2/pyyaml-6.0.3-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:7f047e29dcae44602496db43be01ad42fc6f1cc0d8cd6c83d342306c32270196", size = 182063, upload-time = "2025-09-25T21:32:11.445Z" },
{ url = "https://files.pythonhosted.org/packages/89/a0/6cf41a19a1f2f3feab0e9c0b74134aa2ce6849093d5517a0c550fe37a648/pyyaml-6.0.3-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:fc09d0aa354569bc501d4e787133afc08552722d3ab34836a80547331bb5d4a0", size = 173973, upload-time = "2025-09-25T21:32:12.492Z" },
{ url = "https://files.pythonhosted.org/packages/ed/23/7a778b6bd0b9a8039df8b1b1d80e2e2ad78aa04171592c8a5c43a56a6af4/pyyaml-6.0.3-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:9149cad251584d5fb4981be1ecde53a1ca46c891a79788c0df828d2f166bda28", size = 775116, upload-time = "2025-09-25T21:32:13.652Z" },
{ url = "https://files.pythonhosted.org/packages/65/30/d7353c338e12baef4ecc1b09e877c1970bd3382789c159b4f89d6a70dc09/pyyaml-6.0.3-cp312-cp312-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:5fdec68f91a0c6739b380c83b951e2c72ac0197ace422360e6d5a959d8d97b2c", size = 844011, upload-time = "2025-09-25T21:32:15.21Z" },
{ url = "https://files.pythonhosted.org/packages/8b/9d/b3589d3877982d4f2329302ef98a8026e7f4443c765c46cfecc8858c6b4b/pyyaml-6.0.3-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:ba1cc08a7ccde2d2ec775841541641e4548226580ab850948cbfda66a1befcdc", size = 807870, upload-time = "2025-09-25T21:32:16.431Z" },
{ url = "https://files.pythonhosted.org/packages/05/c0/b3be26a015601b822b97d9149ff8cb5ead58c66f981e04fedf4e762f4bd4/pyyaml-6.0.3-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:8dc52c23056b9ddd46818a57b78404882310fb473d63f17b07d5c40421e47f8e", size = 761089, upload-time = "2025-09-25T21:32:17.56Z" },
{ url = "https://files.pythonhosted.org/packages/be/8e/98435a21d1d4b46590d5459a22d88128103f8da4c2d4cb8f14f2a96504e1/pyyaml-6.0.3-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:41715c910c881bc081f1e8872880d3c650acf13dfa8214bad49ed4cede7c34ea", size = 790181, upload-time = "2025-09-25T21:32:18.834Z" },
{ url = "https://files.pythonhosted.org/packages/74/93/7baea19427dcfbe1e5a372d81473250b379f04b1bd3c4c5ff825e2327202/pyyaml-6.0.3-cp312-cp312-win32.whl", hash = "sha256:96b533f0e99f6579b3d4d4995707cf36df9100d67e0c8303a0c55b27b5f99bc5", size = 137658, upload-time = "2025-09-25T21:32:20.209Z" },
{ url = "https://files.pythonhosted.org/packages/86/bf/899e81e4cce32febab4fb42bb97dcdf66bc135272882d1987881a4b519e9/pyyaml-6.0.3-cp312-cp312-win_amd64.whl", hash = "sha256:5fcd34e47f6e0b794d17de1b4ff496c00986e1c83f7ab2fb8fcfe9616ff7477b", size = 154003, upload-time = "2025-09-25T21:32:21.167Z" },
{ url = "https://files.pythonhosted.org/packages/1a/08/67bd04656199bbb51dbed1439b7f27601dfb576fb864099c7ef0c3e55531/pyyaml-6.0.3-cp312-cp312-win_arm64.whl", hash = "sha256:64386e5e707d03a7e172c0701abfb7e10f0fb753ee1d773128192742712a98fd", size = 140344, upload-time = "2025-09-25T21:32:22.617Z" },
{ url = "https://files.pythonhosted.org/packages/d1/11/0fd08f8192109f7169db964b5707a2f1e8b745d4e239b784a5a1dd80d1db/pyyaml-6.0.3-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:8da9669d359f02c0b91ccc01cac4a67f16afec0dac22c2ad09f46bee0697eba8", size = 181669, upload-time = "2025-09-25T21:32:23.673Z" },
{ url = "https://files.pythonhosted.org/packages/b1/16/95309993f1d3748cd644e02e38b75d50cbc0d9561d21f390a76242ce073f/pyyaml-6.0.3-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:2283a07e2c21a2aa78d9c4442724ec1eb15f5e42a723b99cb3d822d48f5f7ad1", size = 173252, upload-time = "2025-09-25T21:32:25.149Z" },
{ url = "https://files.pythonhosted.org/packages/50/31/b20f376d3f810b9b2371e72ef5adb33879b25edb7a6d072cb7ca0c486398/pyyaml-6.0.3-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ee2922902c45ae8ccada2c5b501ab86c36525b883eff4255313a253a3160861c", size = 767081, upload-time = "2025-09-25T21:32:26.575Z" },
{ url = "https://files.pythonhosted.org/packages/49/1e/a55ca81e949270d5d4432fbbd19dfea5321eda7c41a849d443dc92fd1ff7/pyyaml-6.0.3-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:a33284e20b78bd4a18c8c2282d549d10bc8408a2a7ff57653c0cf0b9be0afce5", size = 841159, upload-time = "2025-09-25T21:32:27.727Z" },
{ url = "https://files.pythonhosted.org/packages/74/27/e5b8f34d02d9995b80abcef563ea1f8b56d20134d8f4e5e81733b1feceb2/pyyaml-6.0.3-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:0f29edc409a6392443abf94b9cf89ce99889a1dd5376d94316ae5145dfedd5d6", size = 801626, upload-time = "2025-09-25T21:32:28.878Z" },
{ url = "https://files.pythonhosted.org/packages/f9/11/ba845c23988798f40e52ba45f34849aa8a1f2d4af4b798588010792ebad6/pyyaml-6.0.3-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:f7057c9a337546edc7973c0d3ba84ddcdf0daa14533c2065749c9075001090e6", size = 753613, upload-time = "2025-09-25T21:32:30.178Z" },
{ url = "https://files.pythonhosted.org/packages/3d/e0/7966e1a7bfc0a45bf0a7fb6b98ea03fc9b8d84fa7f2229e9659680b69ee3/pyyaml-6.0.3-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:eda16858a3cab07b80edaf74336ece1f986ba330fdb8ee0d6c0d68fe82bc96be", size = 794115, upload-time = "2025-09-25T21:32:31.353Z" },
{ url = "https://files.pythonhosted.org/packages/de/94/980b50a6531b3019e45ddeada0626d45fa85cbe22300844a7983285bed3b/pyyaml-6.0.3-cp313-cp313-win32.whl", hash = "sha256:d0eae10f8159e8fdad514efdc92d74fd8d682c933a6dd088030f3834bc8e6b26", size = 137427, upload-time = "2025-09-25T21:32:32.58Z" },
{ url = "https://files.pythonhosted.org/packages/97/c9/39d5b874e8b28845e4ec2202b5da735d0199dbe5b8fb85f91398814a9a46/pyyaml-6.0.3-cp313-cp313-win_amd64.whl", hash = "sha256:79005a0d97d5ddabfeeea4cf676af11e647e41d81c9a7722a193022accdb6b7c", size = 154090, upload-time = "2025-09-25T21:32:33.659Z" },
{ url = "https://files.pythonhosted.org/packages/73/e8/2bdf3ca2090f68bb3d75b44da7bbc71843b19c9f2b9cb9b0f4ab7a5a4329/pyyaml-6.0.3-cp313-cp313-win_arm64.whl", hash = "sha256:5498cd1645aa724a7c71c8f378eb29ebe23da2fc0d7a08071d89469bf1d2defb", size = 140246, upload-time = "2025-09-25T21:32:34.663Z" },
{ url = "https://files.pythonhosted.org/packages/9d/8c/f4bd7f6465179953d3ac9bc44ac1a8a3e6122cf8ada906b4f96c60172d43/pyyaml-6.0.3-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:8d1fab6bb153a416f9aeb4b8763bc0f22a5586065f86f7664fc23339fc1c1fac", size = 181814, upload-time = "2025-09-25T21:32:35.712Z" },
{ url = "https://files.pythonhosted.org/packages/bd/9c/4d95bb87eb2063d20db7b60faa3840c1b18025517ae857371c4dd55a6b3a/pyyaml-6.0.3-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:34d5fcd24b8445fadc33f9cf348c1047101756fd760b4dacb5c3e99755703310", size = 173809, upload-time = "2025-09-25T21:32:36.789Z" },
{ url = "https://files.pythonhosted.org/packages/92/b5/47e807c2623074914e29dabd16cbbdd4bf5e9b2db9f8090fa64411fc5382/pyyaml-6.0.3-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:501a031947e3a9025ed4405a168e6ef5ae3126c59f90ce0cd6f2bfc477be31b7", size = 766454, upload-time = "2025-09-25T21:32:37.966Z" },
{ url = "https://files.pythonhosted.org/packages/02/9e/e5e9b168be58564121efb3de6859c452fccde0ab093d8438905899a3a483/pyyaml-6.0.3-cp314-cp314-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:b3bc83488de33889877a0f2543ade9f70c67d66d9ebb4ac959502e12de895788", size = 836355, upload-time = "2025-09-25T21:32:39.178Z" },
{ url = "https://files.pythonhosted.org/packages/88/f9/16491d7ed2a919954993e48aa941b200f38040928474c9e85ea9e64222c3/pyyaml-6.0.3-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:c458b6d084f9b935061bc36216e8a69a7e293a2f1e68bf956dcd9e6cbcd143f5", size = 794175, upload-time = "2025-09-25T21:32:40.865Z" },
{ url = "https://files.pythonhosted.org/packages/dd/3f/5989debef34dc6397317802b527dbbafb2b4760878a53d4166579111411e/pyyaml-6.0.3-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:7c6610def4f163542a622a73fb39f534f8c101d690126992300bf3207eab9764", size = 755228, upload-time = "2025-09-25T21:32:42.084Z" },
{ url = "https://files.pythonhosted.org/packages/d7/ce/af88a49043cd2e265be63d083fc75b27b6ed062f5f9fd6cdc223ad62f03e/pyyaml-6.0.3-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:5190d403f121660ce8d1d2c1bb2ef1bd05b5f68533fc5c2ea899bd15f4399b35", size = 789194, upload-time = "2025-09-25T21:32:43.362Z" },
{ url = "https://files.pythonhosted.org/packages/23/20/bb6982b26a40bb43951265ba29d4c246ef0ff59c9fdcdf0ed04e0687de4d/pyyaml-6.0.3-cp314-cp314-win_amd64.whl", hash = "sha256:4a2e8cebe2ff6ab7d1050ecd59c25d4c8bd7e6f400f5f82b96557ac0abafd0ac", size = 156429, upload-time = "2025-09-25T21:32:57.844Z" },
{ url = "https://files.pythonhosted.org/packages/f4/f4/a4541072bb9422c8a883ab55255f918fa378ecf083f5b85e87fc2b4eda1b/pyyaml-6.0.3-cp314-cp314-win_arm64.whl", hash = "sha256:93dda82c9c22deb0a405ea4dc5f2d0cda384168e466364dec6255b293923b2f3", size = 143912, upload-time = "2025-09-25T21:32:59.247Z" },
{ url = "https://files.pythonhosted.org/packages/7c/f9/07dd09ae774e4616edf6cda684ee78f97777bdd15847253637a6f052a62f/pyyaml-6.0.3-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:02893d100e99e03eda1c8fd5c441d8c60103fd175728e23e431db1b589cf5ab3", size = 189108, upload-time = "2025-09-25T21:32:44.377Z" },
{ url = "https://files.pythonhosted.org/packages/4e/78/8d08c9fb7ce09ad8c38ad533c1191cf27f7ae1effe5bb9400a46d9437fcf/pyyaml-6.0.3-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:c1ff362665ae507275af2853520967820d9124984e0f7466736aea23d8611fba", size = 183641, upload-time = "2025-09-25T21:32:45.407Z" },
{ url = "https://files.pythonhosted.org/packages/7b/5b/3babb19104a46945cf816d047db2788bcaf8c94527a805610b0289a01c6b/pyyaml-6.0.3-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:6adc77889b628398debc7b65c073bcb99c4a0237b248cacaf3fe8a557563ef6c", size = 831901, upload-time = "2025-09-25T21:32:48.83Z" },
{ url = "https://files.pythonhosted.org/packages/8b/cc/dff0684d8dc44da4d22a13f35f073d558c268780ce3c6ba1b87055bb0b87/pyyaml-6.0.3-cp314-cp314t-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:a80cb027f6b349846a3bf6d73b5e95e782175e52f22108cfa17876aaeff93702", size = 861132, upload-time = "2025-09-25T21:32:50.149Z" },
{ url = "https://files.pythonhosted.org/packages/b1/5e/f77dc6b9036943e285ba76b49e118d9ea929885becb0a29ba8a7c75e29fe/pyyaml-6.0.3-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:00c4bdeba853cc34e7dd471f16b4114f4162dc03e6b7afcc2128711f0eca823c", size = 839261, upload-time = "2025-09-25T21:32:51.808Z" },
{ url = "https://files.pythonhosted.org/packages/ce/88/a9db1376aa2a228197c58b37302f284b5617f56a5d959fd1763fb1675ce6/pyyaml-6.0.3-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:66e1674c3ef6f541c35191caae2d429b967b99e02040f5ba928632d9a7f0f065", size = 805272, upload-time = "2025-09-25T21:32:52.941Z" },
{ url = "https://files.pythonhosted.org/packages/da/92/1446574745d74df0c92e6aa4a7b0b3130706a4142b2d1a5869f2eaa423c6/pyyaml-6.0.3-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:16249ee61e95f858e83976573de0f5b2893b3677ba71c9dd36b9cf8be9ac6d65", size = 829923, upload-time = "2025-09-25T21:32:54.537Z" },
{ url = "https://files.pythonhosted.org/packages/f0/7a/1c7270340330e575b92f397352af856a8c06f230aa3e76f86b39d01b416a/pyyaml-6.0.3-cp314-cp314t-win_amd64.whl", hash = "sha256:4ad1906908f2f5ae4e5a8ddfce73c320c2a1429ec52eafd27138b7f1cbe341c9", size = 174062, upload-time = "2025-09-25T21:32:55.767Z" },
{ url = "https://files.pythonhosted.org/packages/f1/12/de94a39c2ef588c7e6455cfbe7343d3b2dc9d6b6b2f40c4c6565744c873d/pyyaml-6.0.3-cp314-cp314t-win_arm64.whl", hash = "sha256:ebc55a14a21cb14062aa4162f906cd962b28e2e9ea38f9b4391244cd8de4ae0b", size = 149341, upload-time = "2025-09-25T21:32:56.828Z" },
]
[[package]]
name = "sniffio"
version = "1.3.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/a2/87/a6771e1546d97e7e041b6ae58d80074f81b7d5121207425c964ddf5cfdbd/sniffio-1.3.1.tar.gz", hash = "sha256:f4324edc670a0f49750a81b895f35c3adb843cca46f0530f79fc1babb23789dc", size = 20372, upload-time = "2024-02-25T23:20:04.057Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/e9/44/75a9c9421471a6c4805dbf2356f7c181a29c1879239abab1ea2cc8f38b40/sniffio-1.3.1-py3-none-any.whl", hash = "sha256:2f6da418d1f1e0fddd844478f41680e794e6051915791a034ff65e5f100525a2", size = 10235, upload-time = "2024-02-25T23:20:01.196Z" },
]
[[package]]
name = "starlette"
version = "0.50.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "anyio" },
{ name = "typing-extensions", marker = "python_full_version < '3.13'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/ba/b8/73a0e6a6e079a9d9cfa64113d771e421640b6f679a52eeb9b32f72d871a1/starlette-0.50.0.tar.gz", hash = "sha256:a2a17b22203254bcbc2e1f926d2d55f3f9497f769416b3190768befe598fa3ca", size = 2646985, upload-time = "2025-11-01T15:25:27.516Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/d9/52/1064f510b141bd54025f9b55105e26d1fa970b9be67ad766380a3c9b74b0/starlette-0.50.0-py3-none-any.whl", hash = "sha256:9e5391843ec9b6e472eed1365a78c8098cfceb7a74bfd4d6b1c0c0095efb3bca", size = 74033, upload-time = "2025-11-01T15:25:25.461Z" },
]
[[package]]
name = "typing-extensions"
version = "4.15.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/72/94/1a15dd82efb362ac84269196e94cf00f187f7ed21c242792a923cdb1c61f/typing_extensions-4.15.0.tar.gz", hash = "sha256:0cea48d173cc12fa28ecabc3b837ea3cf6f38c6d1136f85cbaaf598984861466", size = 109391, upload-time = "2025-08-25T13:49:26.313Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/18/67/36e9267722cc04a6b9f15c7f3441c2363321a3ea07da7ae0c0707beb2a9c/typing_extensions-4.15.0-py3-none-any.whl", hash = "sha256:f0fa19c6845758ab08074a0cfa8b7aecb71c999ca73d62883bc25cc018c4e548", size = 44614, upload-time = "2025-08-25T13:49:24.86Z" },
]
[[package]]
name = "typing-inspection"
version = "0.4.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "typing-extensions" },
]
sdist = { url = "https://files.pythonhosted.org/packages/55/e3/70399cb7dd41c10ac53367ae42139cf4b1ca5f36bb3dc6c9d33acdb43655/typing_inspection-0.4.2.tar.gz", hash = "sha256:ba561c48a67c5958007083d386c3295464928b01faa735ab8547c5692e87f464", size = 75949, upload-time = "2025-10-01T02:14:41.687Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/dc/9b/47798a6c91d8bdb567fe2698fe81e0c6b7cb7ef4d13da4114b41d239f65d/typing_inspection-0.4.2-py3-none-any.whl", hash = "sha256:4ed1cacbdc298c220f1bd249ed5287caa16f34d44ef4e9c3d0cbad5b521545e7", size = 14611, upload-time = "2025-10-01T02:14:40.154Z" },
]
[[package]]
name = "uvicorn"
version = "0.38.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "click" },
{ name = "h11" },
{ name = "typing-extensions", marker = "python_full_version < '3.11'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/cb/ce/f06b84e2697fef4688ca63bdb2fdf113ca0a3be33f94488f2cadb690b0cf/uvicorn-0.38.0.tar.gz", hash = "sha256:fd97093bdd120a2609fc0d3afe931d4d4ad688b6e75f0f929fde1bc36fe0e91d", size = 80605, upload-time = "2025-10-18T13:46:44.63Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/ee/d9/d88e73ca598f4f6ff671fb5fde8a32925c2e08a637303a1d12883c7305fa/uvicorn-0.38.0-py3-none-any.whl", hash = "sha256:48c0afd214ceb59340075b4a052ea1ee91c16fbc2a9b1469cca0e54566977b02", size = 68109, upload-time = "2025-10-18T13:46:42.958Z" },
]
[package.optional-dependencies]
standard = [
{ name = "colorama", marker = "sys_platform == 'win32'" },
{ name = "httptools" },
{ name = "python-dotenv" },
{ name = "pyyaml" },
{ name = "uvloop", marker = "platform_python_implementation != 'PyPy' and sys_platform != 'cygwin' and sys_platform != 'win32'" },
{ name = "watchfiles" },
{ name = "websockets" },
]
[[package]]
name = "uvloop"
version = "0.22.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/06/f0/18d39dbd1971d6d62c4629cc7fa67f74821b0dc1f5a77af43719de7936a7/uvloop-0.22.1.tar.gz", hash = "sha256:6c84bae345b9147082b17371e3dd5d42775bddce91f885499017f4607fdaf39f", size = 2443250, upload-time = "2025-10-16T22:17:19.342Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/eb/14/ecceb239b65adaaf7fde510aa8bd534075695d1e5f8dadfa32b5723d9cfb/uvloop-0.22.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:ef6f0d4cc8a9fa1f6a910230cd53545d9a14479311e87e3cb225495952eb672c", size = 1343335, upload-time = "2025-10-16T22:16:11.43Z" },
{ url = "https://files.pythonhosted.org/packages/ba/ae/6f6f9af7f590b319c94532b9567409ba11f4fa71af1148cab1bf48a07048/uvloop-0.22.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:7cd375a12b71d33d46af85a3343b35d98e8116134ba404bd657b3b1d15988792", size = 742903, upload-time = "2025-10-16T22:16:12.979Z" },
{ url = "https://files.pythonhosted.org/packages/09/bd/3667151ad0702282a1f4d5d29288fce8a13c8b6858bf0978c219cd52b231/uvloop-0.22.1-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ac33ed96229b7790eb729702751c0e93ac5bc3bcf52ae9eccbff30da09194b86", size = 3648499, upload-time = "2025-10-16T22:16:14.451Z" },
{ url = "https://files.pythonhosted.org/packages/b3/f6/21657bb3beb5f8c57ce8be3b83f653dd7933c2fd00545ed1b092d464799a/uvloop-0.22.1-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:481c990a7abe2c6f4fc3d98781cc9426ebd7f03a9aaa7eb03d3bfc68ac2a46bd", size = 3700133, upload-time = "2025-10-16T22:16:16.272Z" },
{ url = "https://files.pythonhosted.org/packages/09/e0/604f61d004ded805f24974c87ddd8374ef675644f476f01f1df90e4cdf72/uvloop-0.22.1-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:a592b043a47ad17911add5fbd087c76716d7c9ccc1d64ec9249ceafd735f03c2", size = 3512681, upload-time = "2025-10-16T22:16:18.07Z" },
{ url = "https://files.pythonhosted.org/packages/bb/ce/8491fd370b0230deb5eac69c7aae35b3be527e25a911c0acdffb922dc1cd/uvloop-0.22.1-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:1489cf791aa7b6e8c8be1c5a080bae3a672791fcb4e9e12249b05862a2ca9cec", size = 3615261, upload-time = "2025-10-16T22:16:19.596Z" },
{ url = "https://files.pythonhosted.org/packages/c7/d5/69900f7883235562f1f50d8184bb7dd84a2fb61e9ec63f3782546fdbd057/uvloop-0.22.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:c60ebcd36f7b240b30788554b6f0782454826a0ed765d8430652621b5de674b9", size = 1352420, upload-time = "2025-10-16T22:16:21.187Z" },
{ url = "https://files.pythonhosted.org/packages/a8/73/c4e271b3bce59724e291465cc936c37758886a4868787da0278b3b56b905/uvloop-0.22.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:3b7f102bf3cb1995cfeaee9321105e8f5da76fdb104cdad8986f85461a1b7b77", size = 748677, upload-time = "2025-10-16T22:16:22.558Z" },
{ url = "https://files.pythonhosted.org/packages/86/94/9fb7fad2f824d25f8ecac0d70b94d0d48107ad5ece03769a9c543444f78a/uvloop-0.22.1-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:53c85520781d84a4b8b230e24a5af5b0778efdb39142b424990ff1ef7c48ba21", size = 3753819, upload-time = "2025-10-16T22:16:23.903Z" },
{ url = "https://files.pythonhosted.org/packages/74/4f/256aca690709e9b008b7108bc85fba619a2bc37c6d80743d18abad16ee09/uvloop-0.22.1-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:56a2d1fae65fd82197cb8c53c367310b3eabe1bbb9fb5a04d28e3e3520e4f702", size = 3804529, upload-time = "2025-10-16T22:16:25.246Z" },
{ url = "https://files.pythonhosted.org/packages/7f/74/03c05ae4737e871923d21a76fe28b6aad57f5c03b6e6bfcfa5ad616013e4/uvloop-0.22.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:40631b049d5972c6755b06d0bfe8233b1bd9a8a6392d9d1c45c10b6f9e9b2733", size = 3621267, upload-time = "2025-10-16T22:16:26.819Z" },
{ url = "https://files.pythonhosted.org/packages/75/be/f8e590fe61d18b4a92070905497aec4c0e64ae1761498cad09023f3f4b3e/uvloop-0.22.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:535cc37b3a04f6cd2c1ef65fa1d370c9a35b6695df735fcff5427323f2cd5473", size = 3723105, upload-time = "2025-10-16T22:16:28.252Z" },
{ url = "https://files.pythonhosted.org/packages/3d/ff/7f72e8170be527b4977b033239a83a68d5c881cc4775fca255c677f7ac5d/uvloop-0.22.1-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:fe94b4564e865d968414598eea1a6de60adba0c040ba4ed05ac1300de402cd42", size = 1359936, upload-time = "2025-10-16T22:16:29.436Z" },
{ url = "https://files.pythonhosted.org/packages/c3/c6/e5d433f88fd54d81ef4be58b2b7b0cea13c442454a1db703a1eea0db1a59/uvloop-0.22.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:51eb9bd88391483410daad430813d982010f9c9c89512321f5b60e2cddbdddd6", size = 752769, upload-time = "2025-10-16T22:16:30.493Z" },
{ url = "https://files.pythonhosted.org/packages/24/68/a6ac446820273e71aa762fa21cdcc09861edd3536ff47c5cd3b7afb10eeb/uvloop-0.22.1-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:700e674a166ca5778255e0e1dc4e9d79ab2acc57b9171b79e65feba7184b3370", size = 4317413, upload-time = "2025-10-16T22:16:31.644Z" },
{ url = "https://files.pythonhosted.org/packages/5f/6f/e62b4dfc7ad6518e7eff2516f680d02a0f6eb62c0c212e152ca708a0085e/uvloop-0.22.1-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:7b5b1ac819a3f946d3b2ee07f09149578ae76066d70b44df3fa990add49a82e4", size = 4426307, upload-time = "2025-10-16T22:16:32.917Z" },
{ url = "https://files.pythonhosted.org/packages/90/60/97362554ac21e20e81bcef1150cb2a7e4ffdaf8ea1e5b2e8bf7a053caa18/uvloop-0.22.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:e047cc068570bac9866237739607d1313b9253c3051ad84738cbb095be0537b2", size = 4131970, upload-time = "2025-10-16T22:16:34.015Z" },
{ url = "https://files.pythonhosted.org/packages/99/39/6b3f7d234ba3964c428a6e40006340f53ba37993f46ed6e111c6e9141d18/uvloop-0.22.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:512fec6815e2dd45161054592441ef76c830eddaad55c8aa30952e6fe1ed07c0", size = 4296343, upload-time = "2025-10-16T22:16:35.149Z" },
{ url = "https://files.pythonhosted.org/packages/89/8c/182a2a593195bfd39842ea68ebc084e20c850806117213f5a299dfc513d9/uvloop-0.22.1-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:561577354eb94200d75aca23fbde86ee11be36b00e52a4eaf8f50fb0c86b7705", size = 1358611, upload-time = "2025-10-16T22:16:36.833Z" },
{ url = "https://files.pythonhosted.org/packages/d2/14/e301ee96a6dc95224b6f1162cd3312f6d1217be3907b79173b06785f2fe7/uvloop-0.22.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:1cdf5192ab3e674ca26da2eada35b288d2fa49fdd0f357a19f0e7c4e7d5077c8", size = 751811, upload-time = "2025-10-16T22:16:38.275Z" },
{ url = "https://files.pythonhosted.org/packages/b7/02/654426ce265ac19e2980bfd9ea6590ca96a56f10c76e63801a2df01c0486/uvloop-0.22.1-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:6e2ea3d6190a2968f4a14a23019d3b16870dd2190cd69c8180f7c632d21de68d", size = 4288562, upload-time = "2025-10-16T22:16:39.375Z" },
{ url = "https://files.pythonhosted.org/packages/15/c0/0be24758891ef825f2065cd5db8741aaddabe3e248ee6acc5e8a80f04005/uvloop-0.22.1-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:0530a5fbad9c9e4ee3f2b33b148c6a64d47bbad8000ea63704fa8260f4cf728e", size = 4366890, upload-time = "2025-10-16T22:16:40.547Z" },
{ url = "https://files.pythonhosted.org/packages/d2/53/8369e5219a5855869bcee5f4d317f6da0e2c669aecf0ef7d371e3d084449/uvloop-0.22.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:bc5ef13bbc10b5335792360623cc378d52d7e62c2de64660616478c32cd0598e", size = 4119472, upload-time = "2025-10-16T22:16:41.694Z" },
{ url = "https://files.pythonhosted.org/packages/f8/ba/d69adbe699b768f6b29a5eec7b47dd610bd17a69de51b251126a801369ea/uvloop-0.22.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:1f38ec5e3f18c8a10ded09742f7fb8de0108796eb673f30ce7762ce1b8550cad", size = 4239051, upload-time = "2025-10-16T22:16:43.224Z" },
{ url = "https://files.pythonhosted.org/packages/90/cd/b62bdeaa429758aee8de8b00ac0dd26593a9de93d302bff3d21439e9791d/uvloop-0.22.1-cp314-cp314-macosx_10_13_universal2.whl", hash = "sha256:3879b88423ec7e97cd4eba2a443aa26ed4e59b45e6b76aabf13fe2f27023a142", size = 1362067, upload-time = "2025-10-16T22:16:44.503Z" },
{ url = "https://files.pythonhosted.org/packages/0d/f8/a132124dfda0777e489ca86732e85e69afcd1ff7686647000050ba670689/uvloop-0.22.1-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:4baa86acedf1d62115c1dc6ad1e17134476688f08c6efd8a2ab076e815665c74", size = 752423, upload-time = "2025-10-16T22:16:45.968Z" },
{ url = "https://files.pythonhosted.org/packages/a3/94/94af78c156f88da4b3a733773ad5ba0b164393e357cc4bd0ab2e2677a7d6/uvloop-0.22.1-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:297c27d8003520596236bdb2335e6b3f649480bd09e00d1e3a99144b691d2a35", size = 4272437, upload-time = "2025-10-16T22:16:47.451Z" },
{ url = "https://files.pythonhosted.org/packages/b5/35/60249e9fd07b32c665192cec7af29e06c7cd96fa1d08b84f012a56a0b38e/uvloop-0.22.1-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:c1955d5a1dd43198244d47664a5858082a3239766a839b2102a269aaff7a4e25", size = 4292101, upload-time = "2025-10-16T22:16:49.318Z" },
{ url = "https://files.pythonhosted.org/packages/02/62/67d382dfcb25d0a98ce73c11ed1a6fba5037a1a1d533dcbb7cab033a2636/uvloop-0.22.1-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:b31dc2fccbd42adc73bc4e7cdbae4fc5086cf378979e53ca5d0301838c5682c6", size = 4114158, upload-time = "2025-10-16T22:16:50.517Z" },
{ url = "https://files.pythonhosted.org/packages/f0/7a/f1171b4a882a5d13c8b7576f348acfe6074d72eaf52cccef752f748d4a9f/uvloop-0.22.1-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:93f617675b2d03af4e72a5333ef89450dfaa5321303ede6e67ba9c9d26878079", size = 4177360, upload-time = "2025-10-16T22:16:52.646Z" },
{ url = "https://files.pythonhosted.org/packages/79/7b/b01414f31546caf0919da80ad57cbfe24c56b151d12af68cee1b04922ca8/uvloop-0.22.1-cp314-cp314t-macosx_10_13_universal2.whl", hash = "sha256:37554f70528f60cad66945b885eb01f1bb514f132d92b6eeed1c90fd54ed6289", size = 1454790, upload-time = "2025-10-16T22:16:54.355Z" },
{ url = "https://files.pythonhosted.org/packages/d4/31/0bb232318dd838cad3fa8fb0c68c8b40e1145b32025581975e18b11fab40/uvloop-0.22.1-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:b76324e2dc033a0b2f435f33eb88ff9913c156ef78e153fb210e03c13da746b3", size = 796783, upload-time = "2025-10-16T22:16:55.906Z" },
{ url = "https://files.pythonhosted.org/packages/42/38/c9b09f3271a7a723a5de69f8e237ab8e7803183131bc57c890db0b6bb872/uvloop-0.22.1-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:badb4d8e58ee08dad957002027830d5c3b06aea446a6a3744483c2b3b745345c", size = 4647548, upload-time = "2025-10-16T22:16:57.008Z" },
{ url = "https://files.pythonhosted.org/packages/c1/37/945b4ca0ac27e3dc4952642d4c900edd030b3da6c9634875af6e13ae80e5/uvloop-0.22.1-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:b91328c72635f6f9e0282e4a57da7470c7350ab1c9f48546c0f2866205349d21", size = 4467065, upload-time = "2025-10-16T22:16:58.206Z" },
{ url = "https://files.pythonhosted.org/packages/97/cc/48d232f33d60e2e2e0b42f4e73455b146b76ebe216487e862700457fbf3c/uvloop-0.22.1-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:daf620c2995d193449393d6c62131b3fbd40a63bf7b307a1527856ace637fe88", size = 4328384, upload-time = "2025-10-16T22:16:59.36Z" },
{ url = "https://files.pythonhosted.org/packages/e4/16/c1fd27e9549f3c4baf1dc9c20c456cd2f822dbf8de9f463824b0c0357e06/uvloop-0.22.1-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:6cde23eeda1a25c75b2e07d39970f3374105d5eafbaab2a4482be82f272d5a5e", size = 4296730, upload-time = "2025-10-16T22:17:00.744Z" },
]
[[package]]
name = "watchfiles"
version = "1.1.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "anyio" },
]
sdist = { url = "https://files.pythonhosted.org/packages/c2/c9/8869df9b2a2d6c59d79220a4db37679e74f807c559ffe5265e08b227a210/watchfiles-1.1.1.tar.gz", hash = "sha256:a173cb5c16c4f40ab19cecf48a534c409f7ea983ab8fed0741304a1c0a31b3f2", size = 94440, upload-time = "2025-10-14T15:06:21.08Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/a7/1a/206e8cf2dd86fddf939165a57b4df61607a1e0add2785f170a3f616b7d9f/watchfiles-1.1.1-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:eef58232d32daf2ac67f42dea51a2c80f0d03379075d44a587051e63cc2e368c", size = 407318, upload-time = "2025-10-14T15:04:18.753Z" },
{ url = "https://files.pythonhosted.org/packages/b3/0f/abaf5262b9c496b5dad4ed3c0e799cbecb1f8ea512ecb6ddd46646a9fca3/watchfiles-1.1.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:03fa0f5237118a0c5e496185cafa92878568b652a2e9a9382a5151b1a0380a43", size = 394478, upload-time = "2025-10-14T15:04:20.297Z" },
{ url = "https://files.pythonhosted.org/packages/b1/04/9cc0ba88697b34b755371f5ace8d3a4d9a15719c07bdc7bd13d7d8c6a341/watchfiles-1.1.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8ca65483439f9c791897f7db49202301deb6e15fe9f8fe2fed555bf986d10c31", size = 449894, upload-time = "2025-10-14T15:04:21.527Z" },
{ url = "https://files.pythonhosted.org/packages/d2/9c/eda4615863cd8621e89aed4df680d8c3ec3da6a4cf1da113c17decd87c7f/watchfiles-1.1.1-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:f0ab1c1af0cb38e3f598244c17919fb1a84d1629cc08355b0074b6d7f53138ac", size = 459065, upload-time = "2025-10-14T15:04:22.795Z" },
{ url = "https://files.pythonhosted.org/packages/84/13/f28b3f340157d03cbc8197629bc109d1098764abe1e60874622a0be5c112/watchfiles-1.1.1-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3bc570d6c01c206c46deb6e935a260be44f186a2f05179f52f7fcd2be086a94d", size = 488377, upload-time = "2025-10-14T15:04:24.138Z" },
{ url = "https://files.pythonhosted.org/packages/86/93/cfa597fa9389e122488f7ffdbd6db505b3b915ca7435ecd7542e855898c2/watchfiles-1.1.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e84087b432b6ac94778de547e08611266f1f8ffad28c0ee4c82e028b0fc5966d", size = 595837, upload-time = "2025-10-14T15:04:25.057Z" },
{ url = "https://files.pythonhosted.org/packages/57/1e/68c1ed5652b48d89fc24d6af905d88ee4f82fa8bc491e2666004e307ded1/watchfiles-1.1.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:620bae625f4cb18427b1bb1a2d9426dc0dd5a5ba74c7c2cdb9de405f7b129863", size = 473456, upload-time = "2025-10-14T15:04:26.497Z" },
{ url = "https://files.pythonhosted.org/packages/d5/dc/1a680b7458ffa3b14bb64878112aefc8f2e4f73c5af763cbf0bd43100658/watchfiles-1.1.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:544364b2b51a9b0c7000a4b4b02f90e9423d97fbbf7e06689236443ebcad81ab", size = 455614, upload-time = "2025-10-14T15:04:27.539Z" },
{ url = "https://files.pythonhosted.org/packages/61/a5/3d782a666512e01eaa6541a72ebac1d3aae191ff4a31274a66b8dd85760c/watchfiles-1.1.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:bbe1ef33d45bc71cf21364df962af171f96ecaeca06bd9e3d0b583efb12aec82", size = 630690, upload-time = "2025-10-14T15:04:28.495Z" },
{ url = "https://files.pythonhosted.org/packages/9b/73/bb5f38590e34687b2a9c47a244aa4dd50c56a825969c92c9c5fc7387cea1/watchfiles-1.1.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:1a0bb430adb19ef49389e1ad368450193a90038b5b752f4ac089ec6942c4dff4", size = 622459, upload-time = "2025-10-14T15:04:29.491Z" },
{ url = "https://files.pythonhosted.org/packages/f1/ac/c9bb0ec696e07a20bd58af5399aeadaef195fb2c73d26baf55180fe4a942/watchfiles-1.1.1-cp310-cp310-win32.whl", hash = "sha256:3f6d37644155fb5beca5378feb8c1708d5783145f2a0f1c4d5a061a210254844", size = 272663, upload-time = "2025-10-14T15:04:30.435Z" },
{ url = "https://files.pythonhosted.org/packages/11/a0/a60c5a7c2ec59fa062d9a9c61d02e3b6abd94d32aac2d8344c4bdd033326/watchfiles-1.1.1-cp310-cp310-win_amd64.whl", hash = "sha256:a36d8efe0f290835fd0f33da35042a1bb5dc0e83cbc092dcf69bce442579e88e", size = 287453, upload-time = "2025-10-14T15:04:31.53Z" },
{ url = "https://files.pythonhosted.org/packages/1f/f8/2c5f479fb531ce2f0564eda479faecf253d886b1ab3630a39b7bf7362d46/watchfiles-1.1.1-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:f57b396167a2565a4e8b5e56a5a1c537571733992b226f4f1197d79e94cf0ae5", size = 406529, upload-time = "2025-10-14T15:04:32.899Z" },
{ url = "https://files.pythonhosted.org/packages/fe/cd/f515660b1f32f65df671ddf6f85bfaca621aee177712874dc30a97397977/watchfiles-1.1.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:421e29339983e1bebc281fab40d812742268ad057db4aee8c4d2bce0af43b741", size = 394384, upload-time = "2025-10-14T15:04:33.761Z" },
{ url = "https://files.pythonhosted.org/packages/7b/c3/28b7dc99733eab43fca2d10f55c86e03bd6ab11ca31b802abac26b23d161/watchfiles-1.1.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6e43d39a741e972bab5d8100b5cdacf69db64e34eb19b6e9af162bccf63c5cc6", size = 448789, upload-time = "2025-10-14T15:04:34.679Z" },
{ url = "https://files.pythonhosted.org/packages/4a/24/33e71113b320030011c8e4316ccca04194bf0cbbaeee207f00cbc7d6b9f5/watchfiles-1.1.1-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:f537afb3276d12814082a2e9b242bdcf416c2e8fd9f799a737990a1dbe906e5b", size = 460521, upload-time = "2025-10-14T15:04:35.963Z" },
{ url = "https://files.pythonhosted.org/packages/f4/c3/3c9a55f255aa57b91579ae9e98c88704955fa9dac3e5614fb378291155df/watchfiles-1.1.1-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b2cd9e04277e756a2e2d2543d65d1e2166d6fd4c9b183f8808634fda23f17b14", size = 488722, upload-time = "2025-10-14T15:04:37.091Z" },
{ url = "https://files.pythonhosted.org/packages/49/36/506447b73eb46c120169dc1717fe2eff07c234bb3232a7200b5f5bd816e9/watchfiles-1.1.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5f3f58818dc0b07f7d9aa7fe9eb1037aecb9700e63e1f6acfed13e9fef648f5d", size = 596088, upload-time = "2025-10-14T15:04:38.39Z" },
{ url = "https://files.pythonhosted.org/packages/82/ab/5f39e752a9838ec4d52e9b87c1e80f1ee3ccdbe92e183c15b6577ab9de16/watchfiles-1.1.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:9bb9f66367023ae783551042d31b1d7fd422e8289eedd91f26754a66f44d5cff", size = 472923, upload-time = "2025-10-14T15:04:39.666Z" },
{ url = "https://files.pythonhosted.org/packages/af/b9/a419292f05e302dea372fa7e6fda5178a92998411f8581b9830d28fb9edb/watchfiles-1.1.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:aebfd0861a83e6c3d1110b78ad54704486555246e542be3e2bb94195eabb2606", size = 456080, upload-time = "2025-10-14T15:04:40.643Z" },
{ url = "https://files.pythonhosted.org/packages/b0/c3/d5932fd62bde1a30c36e10c409dc5d54506726f08cb3e1d8d0ba5e2bc8db/watchfiles-1.1.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:5fac835b4ab3c6487b5dbad78c4b3724e26bcc468e886f8ba8cc4306f68f6701", size = 629432, upload-time = "2025-10-14T15:04:41.789Z" },
{ url = "https://files.pythonhosted.org/packages/f7/77/16bddd9779fafb795f1a94319dc965209c5641db5bf1edbbccace6d1b3c0/watchfiles-1.1.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:399600947b170270e80134ac854e21b3ccdefa11a9529a3decc1327088180f10", size = 623046, upload-time = "2025-10-14T15:04:42.718Z" },
{ url = "https://files.pythonhosted.org/packages/46/ef/f2ecb9a0f342b4bfad13a2787155c6ee7ce792140eac63a34676a2feeef2/watchfiles-1.1.1-cp311-cp311-win32.whl", hash = "sha256:de6da501c883f58ad50db3a32ad397b09ad29865b5f26f64c24d3e3281685849", size = 271473, upload-time = "2025-10-14T15:04:43.624Z" },
{ url = "https://files.pythonhosted.org/packages/94/bc/f42d71125f19731ea435c3948cad148d31a64fccde3867e5ba4edee901f9/watchfiles-1.1.1-cp311-cp311-win_amd64.whl", hash = "sha256:35c53bd62a0b885bf653ebf6b700d1bf05debb78ad9292cf2a942b23513dc4c4", size = 287598, upload-time = "2025-10-14T15:04:44.516Z" },
{ url = "https://files.pythonhosted.org/packages/57/c9/a30f897351f95bbbfb6abcadafbaca711ce1162f4db95fc908c98a9165f3/watchfiles-1.1.1-cp311-cp311-win_arm64.whl", hash = "sha256:57ca5281a8b5e27593cb7d82c2ac927ad88a96ed406aa446f6344e4328208e9e", size = 277210, upload-time = "2025-10-14T15:04:45.883Z" },
{ url = "https://files.pythonhosted.org/packages/74/d5/f039e7e3c639d9b1d09b07ea412a6806d38123f0508e5f9b48a87b0a76cc/watchfiles-1.1.1-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:8c89f9f2f740a6b7dcc753140dd5e1ab9215966f7a3530d0c0705c83b401bd7d", size = 404745, upload-time = "2025-10-14T15:04:46.731Z" },
{ url = "https://files.pythonhosted.org/packages/a5/96/a881a13aa1349827490dab2d363c8039527060cfcc2c92cc6d13d1b1049e/watchfiles-1.1.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:bd404be08018c37350f0d6e34676bd1e2889990117a2b90070b3007f172d0610", size = 391769, upload-time = "2025-10-14T15:04:48.003Z" },
{ url = "https://files.pythonhosted.org/packages/4b/5b/d3b460364aeb8da471c1989238ea0e56bec24b6042a68046adf3d9ddb01c/watchfiles-1.1.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8526e8f916bb5b9a0a777c8317c23ce65de259422bba5b31325a6fa6029d33af", size = 449374, upload-time = "2025-10-14T15:04:49.179Z" },
{ url = "https://files.pythonhosted.org/packages/b9/44/5769cb62d4ed055cb17417c0a109a92f007114a4e07f30812a73a4efdb11/watchfiles-1.1.1-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:2edc3553362b1c38d9f06242416a5d8e9fe235c204a4072e988ce2e5bb1f69f6", size = 459485, upload-time = "2025-10-14T15:04:50.155Z" },
{ url = "https://files.pythonhosted.org/packages/19/0c/286b6301ded2eccd4ffd0041a1b726afda999926cf720aab63adb68a1e36/watchfiles-1.1.1-cp312-cp312-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:30f7da3fb3f2844259cba4720c3fc7138eb0f7b659c38f3bfa65084c7fc7abce", size = 488813, upload-time = "2025-10-14T15:04:51.059Z" },
{ url = "https://files.pythonhosted.org/packages/c7/2b/8530ed41112dd4a22f4dcfdb5ccf6a1baad1ff6eed8dc5a5f09e7e8c41c7/watchfiles-1.1.1-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:f8979280bdafff686ba5e4d8f97840f929a87ed9cdf133cbbd42f7766774d2aa", size = 594816, upload-time = "2025-10-14T15:04:52.031Z" },
{ url = "https://files.pythonhosted.org/packages/ce/d2/f5f9fb49489f184f18470d4f99f4e862a4b3e9ac2865688eb2099e3d837a/watchfiles-1.1.1-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:dcc5c24523771db3a294c77d94771abcfcb82a0e0ee8efd910c37c59ec1b31bb", size = 475186, upload-time = "2025-10-14T15:04:53.064Z" },
{ url = "https://files.pythonhosted.org/packages/cf/68/5707da262a119fb06fbe214d82dd1fe4a6f4af32d2d14de368d0349eb52a/watchfiles-1.1.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1db5d7ae38ff20153d542460752ff397fcf5c96090c1230803713cf3147a6803", size = 456812, upload-time = "2025-10-14T15:04:55.174Z" },
{ url = "https://files.pythonhosted.org/packages/66/ab/3cbb8756323e8f9b6f9acb9ef4ec26d42b2109bce830cc1f3468df20511d/watchfiles-1.1.1-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:28475ddbde92df1874b6c5c8aaeb24ad5be47a11f87cde5a28ef3835932e3e94", size = 630196, upload-time = "2025-10-14T15:04:56.22Z" },
{ url = "https://files.pythonhosted.org/packages/78/46/7152ec29b8335f80167928944a94955015a345440f524d2dfe63fc2f437b/watchfiles-1.1.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:36193ed342f5b9842edd3532729a2ad55c4160ffcfa3700e0d54be496b70dd43", size = 622657, upload-time = "2025-10-14T15:04:57.521Z" },
{ url = "https://files.pythonhosted.org/packages/0a/bf/95895e78dd75efe9a7f31733607f384b42eb5feb54bd2eb6ed57cc2e94f4/watchfiles-1.1.1-cp312-cp312-win32.whl", hash = "sha256:859e43a1951717cc8de7f4c77674a6d389b106361585951d9e69572823f311d9", size = 272042, upload-time = "2025-10-14T15:04:59.046Z" },
{ url = "https://files.pythonhosted.org/packages/87/0a/90eb755f568de2688cb220171c4191df932232c20946966c27a59c400850/watchfiles-1.1.1-cp312-cp312-win_amd64.whl", hash = "sha256:91d4c9a823a8c987cce8fa2690923b069966dabb196dd8d137ea2cede885fde9", size = 288410, upload-time = "2025-10-14T15:05:00.081Z" },
{ url = "https://files.pythonhosted.org/packages/36/76/f322701530586922fbd6723c4f91ace21364924822a8772c549483abed13/watchfiles-1.1.1-cp312-cp312-win_arm64.whl", hash = "sha256:a625815d4a2bdca61953dbba5a39d60164451ef34c88d751f6c368c3ea73d404", size = 278209, upload-time = "2025-10-14T15:05:01.168Z" },
{ url = "https://files.pythonhosted.org/packages/bb/f4/f750b29225fe77139f7ae5de89d4949f5a99f934c65a1f1c0b248f26f747/watchfiles-1.1.1-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:130e4876309e8686a5e37dba7d5e9bc77e6ed908266996ca26572437a5271e18", size = 404321, upload-time = "2025-10-14T15:05:02.063Z" },
{ url = "https://files.pythonhosted.org/packages/2b/f9/f07a295cde762644aa4c4bb0f88921d2d141af45e735b965fb2e87858328/watchfiles-1.1.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:5f3bde70f157f84ece3765b42b4a52c6ac1a50334903c6eaf765362f6ccca88a", size = 391783, upload-time = "2025-10-14T15:05:03.052Z" },
{ url = "https://files.pythonhosted.org/packages/bc/11/fc2502457e0bea39a5c958d86d2cb69e407a4d00b85735ca724bfa6e0d1a/watchfiles-1.1.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:14e0b1fe858430fc0251737ef3824c54027bedb8c37c38114488b8e131cf8219", size = 449279, upload-time = "2025-10-14T15:05:04.004Z" },
{ url = "https://files.pythonhosted.org/packages/e3/1f/d66bc15ea0b728df3ed96a539c777acfcad0eb78555ad9efcaa1274688f0/watchfiles-1.1.1-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:f27db948078f3823a6bb3b465180db8ebecf26dd5dae6f6180bd87383b6b4428", size = 459405, upload-time = "2025-10-14T15:05:04.942Z" },
{ url = "https://files.pythonhosted.org/packages/be/90/9f4a65c0aec3ccf032703e6db02d89a157462fbb2cf20dd415128251cac0/watchfiles-1.1.1-cp313-cp313-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:059098c3a429f62fc98e8ec62b982230ef2c8df68c79e826e37b895bc359a9c0", size = 488976, upload-time = "2025-10-14T15:05:05.905Z" },
{ url = "https://files.pythonhosted.org/packages/37/57/ee347af605d867f712be7029bb94c8c071732a4b44792e3176fa3c612d39/watchfiles-1.1.1-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:bfb5862016acc9b869bb57284e6cb35fdf8e22fe59f7548858e2f971d045f150", size = 595506, upload-time = "2025-10-14T15:05:06.906Z" },
{ url = "https://files.pythonhosted.org/packages/a8/78/cc5ab0b86c122047f75e8fc471c67a04dee395daf847d3e59381996c8707/watchfiles-1.1.1-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:319b27255aacd9923b8a276bb14d21a5f7ff82564c744235fc5eae58d95422ae", size = 474936, upload-time = "2025-10-14T15:05:07.906Z" },
{ url = "https://files.pythonhosted.org/packages/62/da/def65b170a3815af7bd40a3e7010bf6ab53089ef1b75d05dd5385b87cf08/watchfiles-1.1.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c755367e51db90e75b19454b680903631d41f9e3607fbd941d296a020c2d752d", size = 456147, upload-time = "2025-10-14T15:05:09.138Z" },
{ url = "https://files.pythonhosted.org/packages/57/99/da6573ba71166e82d288d4df0839128004c67d2778d3b566c138695f5c0b/watchfiles-1.1.1-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:c22c776292a23bfc7237a98f791b9ad3144b02116ff10d820829ce62dff46d0b", size = 630007, upload-time = "2025-10-14T15:05:10.117Z" },
{ url = "https://files.pythonhosted.org/packages/a8/51/7439c4dd39511368849eb1e53279cd3454b4a4dbace80bab88feeb83c6b5/watchfiles-1.1.1-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:3a476189be23c3686bc2f4321dd501cb329c0a0469e77b7b534ee10129ae6374", size = 622280, upload-time = "2025-10-14T15:05:11.146Z" },
{ url = "https://files.pythonhosted.org/packages/95/9c/8ed97d4bba5db6fdcdb2b298d3898f2dd5c20f6b73aee04eabe56c59677e/watchfiles-1.1.1-cp313-cp313-win32.whl", hash = "sha256:bf0a91bfb5574a2f7fc223cf95eeea79abfefa404bf1ea5e339c0c1560ae99a0", size = 272056, upload-time = "2025-10-14T15:05:12.156Z" },
{ url = "https://files.pythonhosted.org/packages/1f/f3/c14e28429f744a260d8ceae18bf58c1d5fa56b50d006a7a9f80e1882cb0d/watchfiles-1.1.1-cp313-cp313-win_amd64.whl", hash = "sha256:52e06553899e11e8074503c8e716d574adeeb7e68913115c4b3653c53f9bae42", size = 288162, upload-time = "2025-10-14T15:05:13.208Z" },
{ url = "https://files.pythonhosted.org/packages/dc/61/fe0e56c40d5cd29523e398d31153218718c5786b5e636d9ae8ae79453d27/watchfiles-1.1.1-cp313-cp313-win_arm64.whl", hash = "sha256:ac3cc5759570cd02662b15fbcd9d917f7ecd47efe0d6b40474eafd246f91ea18", size = 277909, upload-time = "2025-10-14T15:05:14.49Z" },
{ url = "https://files.pythonhosted.org/packages/79/42/e0a7d749626f1e28c7108a99fb9bf524b501bbbeb9b261ceecde644d5a07/watchfiles-1.1.1-cp313-cp313t-macosx_10_12_x86_64.whl", hash = "sha256:563b116874a9a7ce6f96f87cd0b94f7faf92d08d0021e837796f0a14318ef8da", size = 403389, upload-time = "2025-10-14T15:05:15.777Z" },
{ url = "https://files.pythonhosted.org/packages/15/49/08732f90ce0fbbc13913f9f215c689cfc9ced345fb1bcd8829a50007cc8d/watchfiles-1.1.1-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:3ad9fe1dae4ab4212d8c91e80b832425e24f421703b5a42ef2e4a1e215aff051", size = 389964, upload-time = "2025-10-14T15:05:16.85Z" },
{ url = "https://files.pythonhosted.org/packages/27/0d/7c315d4bd5f2538910491a0393c56bf70d333d51bc5b34bee8e68e8cea19/watchfiles-1.1.1-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ce70f96a46b894b36eba678f153f052967a0d06d5b5a19b336ab0dbbd029f73e", size = 448114, upload-time = "2025-10-14T15:05:17.876Z" },
{ url = "https://files.pythonhosted.org/packages/c3/24/9e096de47a4d11bc4df41e9d1e61776393eac4cb6eb11b3e23315b78b2cc/watchfiles-1.1.1-cp313-cp313t-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:cb467c999c2eff23a6417e58d75e5828716f42ed8289fe6b77a7e5a91036ca70", size = 460264, upload-time = "2025-10-14T15:05:18.962Z" },
{ url = "https://files.pythonhosted.org/packages/cc/0f/e8dea6375f1d3ba5fcb0b3583e2b493e77379834c74fd5a22d66d85d6540/watchfiles-1.1.1-cp313-cp313t-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:836398932192dae4146c8f6f737d74baeac8b70ce14831a239bdb1ca882fc261", size = 487877, upload-time = "2025-10-14T15:05:20.094Z" },
{ url = "https://files.pythonhosted.org/packages/ac/5b/df24cfc6424a12deb41503b64d42fbea6b8cb357ec62ca84a5a3476f654a/watchfiles-1.1.1-cp313-cp313t-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:743185e7372b7bc7c389e1badcc606931a827112fbbd37f14c537320fca08620", size = 595176, upload-time = "2025-10-14T15:05:21.134Z" },
{ url = "https://files.pythonhosted.org/packages/8f/b5/853b6757f7347de4e9b37e8cc3289283fb983cba1ab4d2d7144694871d9c/watchfiles-1.1.1-cp313-cp313t-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:afaeff7696e0ad9f02cbb8f56365ff4686ab205fcf9c4c5b6fdfaaa16549dd04", size = 473577, upload-time = "2025-10-14T15:05:22.306Z" },
{ url = "https://files.pythonhosted.org/packages/e1/f7/0a4467be0a56e80447c8529c9fce5b38eab4f513cb3d9bf82e7392a5696b/watchfiles-1.1.1-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3f7eb7da0eb23aa2ba036d4f616d46906013a68caf61b7fdbe42fc8b25132e77", size = 455425, upload-time = "2025-10-14T15:05:23.348Z" },
{ url = "https://files.pythonhosted.org/packages/8e/e0/82583485ea00137ddf69bc84a2db88bd92ab4a6e3c405e5fb878ead8d0e7/watchfiles-1.1.1-cp313-cp313t-musllinux_1_1_aarch64.whl", hash = "sha256:831a62658609f0e5c64178211c942ace999517f5770fe9436be4c2faeba0c0ef", size = 628826, upload-time = "2025-10-14T15:05:24.398Z" },
{ url = "https://files.pythonhosted.org/packages/28/9a/a785356fccf9fae84c0cc90570f11702ae9571036fb25932f1242c82191c/watchfiles-1.1.1-cp313-cp313t-musllinux_1_1_x86_64.whl", hash = "sha256:f9a2ae5c91cecc9edd47e041a930490c31c3afb1f5e6d71de3dc671bfaca02bf", size = 622208, upload-time = "2025-10-14T15:05:25.45Z" },
{ url = "https://files.pythonhosted.org/packages/c3/f4/0872229324ef69b2c3edec35e84bd57a1289e7d3fe74588048ed8947a323/watchfiles-1.1.1-cp314-cp314-macosx_10_12_x86_64.whl", hash = "sha256:d1715143123baeeaeadec0528bb7441103979a1d5f6fd0e1f915383fea7ea6d5", size = 404315, upload-time = "2025-10-14T15:05:26.501Z" },
{ url = "https://files.pythonhosted.org/packages/7b/22/16d5331eaed1cb107b873f6ae1b69e9ced582fcf0c59a50cd84f403b1c32/watchfiles-1.1.1-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:39574d6370c4579d7f5d0ad940ce5b20db0e4117444e39b6d8f99db5676c52fd", size = 390869, upload-time = "2025-10-14T15:05:27.649Z" },
{ url = "https://files.pythonhosted.org/packages/b2/7e/5643bfff5acb6539b18483128fdc0ef2cccc94a5b8fbda130c823e8ed636/watchfiles-1.1.1-cp314-cp314-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7365b92c2e69ee952902e8f70f3ba6360d0d596d9299d55d7d386df84b6941fb", size = 449919, upload-time = "2025-10-14T15:05:28.701Z" },
{ url = "https://files.pythonhosted.org/packages/51/2e/c410993ba5025a9f9357c376f48976ef0e1b1aefb73b97a5ae01a5972755/watchfiles-1.1.1-cp314-cp314-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:bfff9740c69c0e4ed32416f013f3c45e2ae42ccedd1167ef2d805c000b6c71a5", size = 460845, upload-time = "2025-10-14T15:05:30.064Z" },
{ url = "https://files.pythonhosted.org/packages/8e/a4/2df3b404469122e8680f0fcd06079317e48db58a2da2950fb45020947734/watchfiles-1.1.1-cp314-cp314-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b27cf2eb1dda37b2089e3907d8ea92922b673c0c427886d4edc6b94d8dfe5db3", size = 489027, upload-time = "2025-10-14T15:05:31.064Z" },
{ url = "https://files.pythonhosted.org/packages/ea/84/4587ba5b1f267167ee715b7f66e6382cca6938e0a4b870adad93e44747e6/watchfiles-1.1.1-cp314-cp314-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:526e86aced14a65a5b0ec50827c745597c782ff46b571dbfe46192ab9e0b3c33", size = 595615, upload-time = "2025-10-14T15:05:32.074Z" },
{ url = "https://files.pythonhosted.org/packages/6a/0f/c6988c91d06e93cd0bb3d4a808bcf32375ca1904609835c3031799e3ecae/watchfiles-1.1.1-cp314-cp314-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:04e78dd0b6352db95507fd8cb46f39d185cf8c74e4cf1e4fbad1d3df96faf510", size = 474836, upload-time = "2025-10-14T15:05:33.209Z" },
{ url = "https://files.pythonhosted.org/packages/b4/36/ded8aebea91919485b7bbabbd14f5f359326cb5ec218cd67074d1e426d74/watchfiles-1.1.1-cp314-cp314-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5c85794a4cfa094714fb9c08d4a218375b2b95b8ed1666e8677c349906246c05", size = 455099, upload-time = "2025-10-14T15:05:34.189Z" },
{ url = "https://files.pythonhosted.org/packages/98/e0/8c9bdba88af756a2fce230dd365fab2baf927ba42cd47521ee7498fd5211/watchfiles-1.1.1-cp314-cp314-musllinux_1_1_aarch64.whl", hash = "sha256:74d5012b7630714b66be7b7b7a78855ef7ad58e8650c73afc4c076a1f480a8d6", size = 630626, upload-time = "2025-10-14T15:05:35.216Z" },
{ url = "https://files.pythonhosted.org/packages/2a/84/a95db05354bf2d19e438520d92a8ca475e578c647f78f53197f5a2f17aaf/watchfiles-1.1.1-cp314-cp314-musllinux_1_1_x86_64.whl", hash = "sha256:8fbe85cb3201c7d380d3d0b90e63d520f15d6afe217165d7f98c9c649654db81", size = 622519, upload-time = "2025-10-14T15:05:36.259Z" },
{ url = "https://files.pythonhosted.org/packages/1d/ce/d8acdc8de545de995c339be67711e474c77d643555a9bb74a9334252bd55/watchfiles-1.1.1-cp314-cp314-win32.whl", hash = "sha256:3fa0b59c92278b5a7800d3ee7733da9d096d4aabcfabb9a928918bd276ef9b9b", size = 272078, upload-time = "2025-10-14T15:05:37.63Z" },
{ url = "https://files.pythonhosted.org/packages/c4/c9/a74487f72d0451524be827e8edec251da0cc1fcf111646a511ae752e1a3d/watchfiles-1.1.1-cp314-cp314-win_amd64.whl", hash = "sha256:c2047d0b6cea13b3316bdbafbfa0c4228ae593d995030fda39089d36e64fc03a", size = 287664, upload-time = "2025-10-14T15:05:38.95Z" },
{ url = "https://files.pythonhosted.org/packages/df/b8/8ac000702cdd496cdce998c6f4ee0ca1f15977bba51bdf07d872ebdfc34c/watchfiles-1.1.1-cp314-cp314-win_arm64.whl", hash = "sha256:842178b126593addc05acf6fce960d28bc5fae7afbaa2c6c1b3a7b9460e5be02", size = 277154, upload-time = "2025-10-14T15:05:39.954Z" },
{ url = "https://files.pythonhosted.org/packages/47/a8/e3af2184707c29f0f14b1963c0aace6529f9d1b8582d5b99f31bbf42f59e/watchfiles-1.1.1-cp314-cp314t-macosx_10_12_x86_64.whl", hash = "sha256:88863fbbc1a7312972f1c511f202eb30866370ebb8493aef2812b9ff28156a21", size = 403820, upload-time = "2025-10-14T15:05:40.932Z" },
{ url = "https://files.pythonhosted.org/packages/c0/ec/e47e307c2f4bd75f9f9e8afbe3876679b18e1bcec449beca132a1c5ffb2d/watchfiles-1.1.1-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:55c7475190662e202c08c6c0f4d9e345a29367438cf8e8037f3155e10a88d5a5", size = 390510, upload-time = "2025-10-14T15:05:41.945Z" },
{ url = "https://files.pythonhosted.org/packages/d5/a0/ad235642118090f66e7b2f18fd5c42082418404a79205cdfca50b6309c13/watchfiles-1.1.1-cp314-cp314t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3f53fa183d53a1d7a8852277c92b967ae99c2d4dcee2bfacff8868e6e30b15f7", size = 448408, upload-time = "2025-10-14T15:05:43.385Z" },
{ url = "https://files.pythonhosted.org/packages/df/85/97fa10fd5ff3332ae17e7e40e20784e419e28521549780869f1413742e9d/watchfiles-1.1.1-cp314-cp314t-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:6aae418a8b323732fa89721d86f39ec8f092fc2af67f4217a2b07fd3e93c6101", size = 458968, upload-time = "2025-10-14T15:05:44.404Z" },
{ url = "https://files.pythonhosted.org/packages/47/c2/9059c2e8966ea5ce678166617a7f75ecba6164375f3b288e50a40dc6d489/watchfiles-1.1.1-cp314-cp314t-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f096076119da54a6080e8920cbdaac3dbee667eb91dcc5e5b78840b87415bd44", size = 488096, upload-time = "2025-10-14T15:05:45.398Z" },
{ url = "https://files.pythonhosted.org/packages/94/44/d90a9ec8ac309bc26db808a13e7bfc0e4e78b6fc051078a554e132e80160/watchfiles-1.1.1-cp314-cp314t-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:00485f441d183717038ed2e887a7c868154f216877653121068107b227a2f64c", size = 596040, upload-time = "2025-10-14T15:05:46.502Z" },
{ url = "https://files.pythonhosted.org/packages/95/68/4e3479b20ca305cfc561db3ed207a8a1c745ee32bf24f2026a129d0ddb6e/watchfiles-1.1.1-cp314-cp314t-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a55f3e9e493158d7bfdb60a1165035f1cf7d320914e7b7ea83fe22c6023b58fc", size = 473847, upload-time = "2025-10-14T15:05:47.484Z" },
{ url = "https://files.pythonhosted.org/packages/4f/55/2af26693fd15165c4ff7857e38330e1b61ab8c37d15dc79118cdba115b7a/watchfiles-1.1.1-cp314-cp314t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8c91ed27800188c2ae96d16e3149f199d62f86c7af5f5f4d2c61a3ed8cd3666c", size = 455072, upload-time = "2025-10-14T15:05:48.928Z" },
{ url = "https://files.pythonhosted.org/packages/66/1d/d0d200b10c9311ec25d2273f8aad8c3ef7cc7ea11808022501811208a750/watchfiles-1.1.1-cp314-cp314t-musllinux_1_1_aarch64.whl", hash = "sha256:311ff15a0bae3714ffb603e6ba6dbfba4065ab60865d15a6ec544133bdb21099", size = 629104, upload-time = "2025-10-14T15:05:49.908Z" },
{ url = "https://files.pythonhosted.org/packages/e3/bd/fa9bb053192491b3867ba07d2343d9f2252e00811567d30ae8d0f78136fe/watchfiles-1.1.1-cp314-cp314t-musllinux_1_1_x86_64.whl", hash = "sha256:a916a2932da8f8ab582f242c065f5c81bed3462849ca79ee357dd9551b0e9b01", size = 622112, upload-time = "2025-10-14T15:05:50.941Z" },
{ url = "https://files.pythonhosted.org/packages/ba/4c/a888c91e2e326872fa4705095d64acd8aa2fb9c1f7b9bd0588f33850516c/watchfiles-1.1.1-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:17ef139237dfced9da49fb7f2232c86ca9421f666d78c264c7ffca6601d154c3", size = 409611, upload-time = "2025-10-14T15:06:05.809Z" },
{ url = "https://files.pythonhosted.org/packages/1e/c7/5420d1943c8e3ce1a21c0a9330bcf7edafb6aa65d26b21dbb3267c9e8112/watchfiles-1.1.1-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:672b8adf25b1a0d35c96b5888b7b18699d27d4194bac8beeae75be4b7a3fc9b2", size = 396889, upload-time = "2025-10-14T15:06:07.035Z" },
{ url = "https://files.pythonhosted.org/packages/0c/e5/0072cef3804ce8d3aaddbfe7788aadff6b3d3f98a286fdbee9fd74ca59a7/watchfiles-1.1.1-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:77a13aea58bc2b90173bc69f2a90de8e282648939a00a602e1dc4ee23e26b66d", size = 451616, upload-time = "2025-10-14T15:06:08.072Z" },
{ url = "https://files.pythonhosted.org/packages/83/4e/b87b71cbdfad81ad7e83358b3e447fedd281b880a03d64a760fe0a11fc2e/watchfiles-1.1.1-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0b495de0bb386df6a12b18335a0285dda90260f51bdb505503c02bcd1ce27a8b", size = 458413, upload-time = "2025-10-14T15:06:09.209Z" },
{ url = "https://files.pythonhosted.org/packages/d3/8e/e500f8b0b77be4ff753ac94dc06b33d8f0d839377fee1b78e8c8d8f031bf/watchfiles-1.1.1-pp311-pypy311_pp73-macosx_10_12_x86_64.whl", hash = "sha256:db476ab59b6765134de1d4fe96a1a9c96ddf091683599be0f26147ea1b2e4b88", size = 408250, upload-time = "2025-10-14T15:06:10.264Z" },
{ url = "https://files.pythonhosted.org/packages/bd/95/615e72cd27b85b61eec764a5ca51bd94d40b5adea5ff47567d9ebc4d275a/watchfiles-1.1.1-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:89eef07eee5e9d1fda06e38822ad167a044153457e6fd997f8a858ab7564a336", size = 396117, upload-time = "2025-10-14T15:06:11.28Z" },
{ url = "https://files.pythonhosted.org/packages/c9/81/e7fe958ce8a7fb5c73cc9fb07f5aeaf755e6aa72498c57d760af760c91f8/watchfiles-1.1.1-pp311-pypy311_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ce19e06cbda693e9e7686358af9cd6f5d61312ab8b00488bc36f5aabbaf77e24", size = 450493, upload-time = "2025-10-14T15:06:12.321Z" },
{ url = "https://files.pythonhosted.org/packages/6e/d4/ed38dd3b1767193de971e694aa544356e63353c33a85d948166b5ff58b9e/watchfiles-1.1.1-pp311-pypy311_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3e6f39af2eab0118338902798b5aa6664f46ff66bc0280de76fca67a7f262a49", size = 457546, upload-time = "2025-10-14T15:06:13.372Z" },
]
[[package]]
name = "websockets"
version = "15.0.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/21/e6/26d09fab466b7ca9c7737474c52be4f76a40301b08362eb2dbc19dcc16c1/websockets-15.0.1.tar.gz", hash = "sha256:82544de02076bafba038ce055ee6412d68da13ab47f0c60cab827346de828dee", size = 177016, upload-time = "2025-03-05T20:03:41.606Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/1e/da/6462a9f510c0c49837bbc9345aca92d767a56c1fb2939e1579df1e1cdcf7/websockets-15.0.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:d63efaa0cd96cf0c5fe4d581521d9fa87744540d4bc999ae6e08595a1014b45b", size = 175423, upload-time = "2025-03-05T20:01:35.363Z" },
{ url = "https://files.pythonhosted.org/packages/1c/9f/9d11c1a4eb046a9e106483b9ff69bce7ac880443f00e5ce64261b47b07e7/websockets-15.0.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:ac60e3b188ec7574cb761b08d50fcedf9d77f1530352db4eef1707fe9dee7205", size = 173080, upload-time = "2025-03-05T20:01:37.304Z" },
{ url = "https://files.pythonhosted.org/packages/d5/4f/b462242432d93ea45f297b6179c7333dd0402b855a912a04e7fc61c0d71f/websockets-15.0.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:5756779642579d902eed757b21b0164cd6fe338506a8083eb58af5c372e39d9a", size = 173329, upload-time = "2025-03-05T20:01:39.668Z" },
{ url = "https://files.pythonhosted.org/packages/6e/0c/6afa1f4644d7ed50284ac59cc70ef8abd44ccf7d45850d989ea7310538d0/websockets-15.0.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0fdfe3e2a29e4db3659dbd5bbf04560cea53dd9610273917799f1cde46aa725e", size = 182312, upload-time = "2025-03-05T20:01:41.815Z" },
{ url = "https://files.pythonhosted.org/packages/dd/d4/ffc8bd1350b229ca7a4db2a3e1c482cf87cea1baccd0ef3e72bc720caeec/websockets-15.0.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4c2529b320eb9e35af0fa3016c187dffb84a3ecc572bcee7c3ce302bfeba52bf", size = 181319, upload-time = "2025-03-05T20:01:43.967Z" },
{ url = "https://files.pythonhosted.org/packages/97/3a/5323a6bb94917af13bbb34009fac01e55c51dfde354f63692bf2533ffbc2/websockets-15.0.1-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ac1e5c9054fe23226fb11e05a6e630837f074174c4c2f0fe442996112a6de4fb", size = 181631, upload-time = "2025-03-05T20:01:46.104Z" },
{ url = "https://files.pythonhosted.org/packages/a6/cc/1aeb0f7cee59ef065724041bb7ed667b6ab1eeffe5141696cccec2687b66/websockets-15.0.1-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:5df592cd503496351d6dc14f7cdad49f268d8e618f80dce0cd5a36b93c3fc08d", size = 182016, upload-time = "2025-03-05T20:01:47.603Z" },
{ url = "https://files.pythonhosted.org/packages/79/f9/c86f8f7af208e4161a7f7e02774e9d0a81c632ae76db2ff22549e1718a51/websockets-15.0.1-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:0a34631031a8f05657e8e90903e656959234f3a04552259458aac0b0f9ae6fd9", size = 181426, upload-time = "2025-03-05T20:01:48.949Z" },
{ url = "https://files.pythonhosted.org/packages/c7/b9/828b0bc6753db905b91df6ae477c0b14a141090df64fb17f8a9d7e3516cf/websockets-15.0.1-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:3d00075aa65772e7ce9e990cab3ff1de702aa09be3940d1dc88d5abf1ab8a09c", size = 181360, upload-time = "2025-03-05T20:01:50.938Z" },
{ url = "https://files.pythonhosted.org/packages/89/fb/250f5533ec468ba6327055b7d98b9df056fb1ce623b8b6aaafb30b55d02e/websockets-15.0.1-cp310-cp310-win32.whl", hash = "sha256:1234d4ef35db82f5446dca8e35a7da7964d02c127b095e172e54397fb6a6c256", size = 176388, upload-time = "2025-03-05T20:01:52.213Z" },
{ url = "https://files.pythonhosted.org/packages/1c/46/aca7082012768bb98e5608f01658ff3ac8437e563eca41cf068bd5849a5e/websockets-15.0.1-cp310-cp310-win_amd64.whl", hash = "sha256:39c1fec2c11dc8d89bba6b2bf1556af381611a173ac2b511cf7231622058af41", size = 176830, upload-time = "2025-03-05T20:01:53.922Z" },
{ url = "https://files.pythonhosted.org/packages/9f/32/18fcd5919c293a398db67443acd33fde142f283853076049824fc58e6f75/websockets-15.0.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:823c248b690b2fd9303ba00c4f66cd5e2d8c3ba4aa968b2779be9532a4dad431", size = 175423, upload-time = "2025-03-05T20:01:56.276Z" },
{ url = "https://files.pythonhosted.org/packages/76/70/ba1ad96b07869275ef42e2ce21f07a5b0148936688c2baf7e4a1f60d5058/websockets-15.0.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:678999709e68425ae2593acf2e3ebcbcf2e69885a5ee78f9eb80e6e371f1bf57", size = 173082, upload-time = "2025-03-05T20:01:57.563Z" },
{ url = "https://files.pythonhosted.org/packages/86/f2/10b55821dd40eb696ce4704a87d57774696f9451108cff0d2824c97e0f97/websockets-15.0.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:d50fd1ee42388dcfb2b3676132c78116490976f1300da28eb629272d5d93e905", size = 173330, upload-time = "2025-03-05T20:01:59.063Z" },
{ url = "https://files.pythonhosted.org/packages/a5/90/1c37ae8b8a113d3daf1065222b6af61cc44102da95388ac0018fcb7d93d9/websockets-15.0.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d99e5546bf73dbad5bf3547174cd6cb8ba7273062a23808ffea025ecb1cf8562", size = 182878, upload-time = "2025-03-05T20:02:00.305Z" },
{ url = "https://files.pythonhosted.org/packages/8e/8d/96e8e288b2a41dffafb78e8904ea7367ee4f891dafc2ab8d87e2124cb3d3/websockets-15.0.1-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:66dd88c918e3287efc22409d426c8f729688d89a0c587c88971a0faa2c2f3792", size = 181883, upload-time = "2025-03-05T20:02:03.148Z" },
{ url = "https://files.pythonhosted.org/packages/93/1f/5d6dbf551766308f6f50f8baf8e9860be6182911e8106da7a7f73785f4c4/websockets-15.0.1-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8dd8327c795b3e3f219760fa603dcae1dcc148172290a8ab15158cf85a953413", size = 182252, upload-time = "2025-03-05T20:02:05.29Z" },
{ url = "https://files.pythonhosted.org/packages/d4/78/2d4fed9123e6620cbf1706c0de8a1632e1a28e7774d94346d7de1bba2ca3/websockets-15.0.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:8fdc51055e6ff4adeb88d58a11042ec9a5eae317a0a53d12c062c8a8865909e8", size = 182521, upload-time = "2025-03-05T20:02:07.458Z" },
{ url = "https://files.pythonhosted.org/packages/e7/3b/66d4c1b444dd1a9823c4a81f50231b921bab54eee2f69e70319b4e21f1ca/websockets-15.0.1-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:693f0192126df6c2327cce3baa7c06f2a117575e32ab2308f7f8216c29d9e2e3", size = 181958, upload-time = "2025-03-05T20:02:09.842Z" },
{ url = "https://files.pythonhosted.org/packages/08/ff/e9eed2ee5fed6f76fdd6032ca5cd38c57ca9661430bb3d5fb2872dc8703c/websockets-15.0.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:54479983bd5fb469c38f2f5c7e3a24f9a4e70594cd68cd1fa6b9340dadaff7cf", size = 181918, upload-time = "2025-03-05T20:02:11.968Z" },
{ url = "https://files.pythonhosted.org/packages/d8/75/994634a49b7e12532be6a42103597b71098fd25900f7437d6055ed39930a/websockets-15.0.1-cp311-cp311-win32.whl", hash = "sha256:16b6c1b3e57799b9d38427dda63edcbe4926352c47cf88588c0be4ace18dac85", size = 176388, upload-time = "2025-03-05T20:02:13.32Z" },
{ url = "https://files.pythonhosted.org/packages/98/93/e36c73f78400a65f5e236cd376713c34182e6663f6889cd45a4a04d8f203/websockets-15.0.1-cp311-cp311-win_amd64.whl", hash = "sha256:27ccee0071a0e75d22cb35849b1db43f2ecd3e161041ac1ee9d2352ddf72f065", size = 176828, upload-time = "2025-03-05T20:02:14.585Z" },
{ url = "https://files.pythonhosted.org/packages/51/6b/4545a0d843594f5d0771e86463606a3988b5a09ca5123136f8a76580dd63/websockets-15.0.1-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:3e90baa811a5d73f3ca0bcbf32064d663ed81318ab225ee4f427ad4e26e5aff3", size = 175437, upload-time = "2025-03-05T20:02:16.706Z" },
{ url = "https://files.pythonhosted.org/packages/f4/71/809a0f5f6a06522af902e0f2ea2757f71ead94610010cf570ab5c98e99ed/websockets-15.0.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:592f1a9fe869c778694f0aa806ba0374e97648ab57936f092fd9d87f8bc03665", size = 173096, upload-time = "2025-03-05T20:02:18.832Z" },
{ url = "https://files.pythonhosted.org/packages/3d/69/1a681dd6f02180916f116894181eab8b2e25b31e484c5d0eae637ec01f7c/websockets-15.0.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:0701bc3cfcb9164d04a14b149fd74be7347a530ad3bbf15ab2c678a2cd3dd9a2", size = 173332, upload-time = "2025-03-05T20:02:20.187Z" },
{ url = "https://files.pythonhosted.org/packages/a6/02/0073b3952f5bce97eafbb35757f8d0d54812b6174ed8dd952aa08429bcc3/websockets-15.0.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e8b56bdcdb4505c8078cb6c7157d9811a85790f2f2b3632c7d1462ab5783d215", size = 183152, upload-time = "2025-03-05T20:02:22.286Z" },
{ url = "https://files.pythonhosted.org/packages/74/45/c205c8480eafd114b428284840da0b1be9ffd0e4f87338dc95dc6ff961a1/websockets-15.0.1-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0af68c55afbd5f07986df82831c7bff04846928ea8d1fd7f30052638788bc9b5", size = 182096, upload-time = "2025-03-05T20:02:24.368Z" },
{ url = "https://files.pythonhosted.org/packages/14/8f/aa61f528fba38578ec553c145857a181384c72b98156f858ca5c8e82d9d3/websockets-15.0.1-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:64dee438fed052b52e4f98f76c5790513235efaa1ef7f3f2192c392cd7c91b65", size = 182523, upload-time = "2025-03-05T20:02:25.669Z" },
{ url = "https://files.pythonhosted.org/packages/ec/6d/0267396610add5bc0d0d3e77f546d4cd287200804fe02323797de77dbce9/websockets-15.0.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:d5f6b181bb38171a8ad1d6aa58a67a6aa9d4b38d0f8c5f496b9e42561dfc62fe", size = 182790, upload-time = "2025-03-05T20:02:26.99Z" },
{ url = "https://files.pythonhosted.org/packages/02/05/c68c5adbf679cf610ae2f74a9b871ae84564462955d991178f95a1ddb7dd/websockets-15.0.1-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:5d54b09eba2bada6011aea5375542a157637b91029687eb4fdb2dab11059c1b4", size = 182165, upload-time = "2025-03-05T20:02:30.291Z" },
{ url = "https://files.pythonhosted.org/packages/29/93/bb672df7b2f5faac89761cb5fa34f5cec45a4026c383a4b5761c6cea5c16/websockets-15.0.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:3be571a8b5afed347da347bfcf27ba12b069d9d7f42cb8c7028b5e98bbb12597", size = 182160, upload-time = "2025-03-05T20:02:31.634Z" },
{ url = "https://files.pythonhosted.org/packages/ff/83/de1f7709376dc3ca9b7eeb4b9a07b4526b14876b6d372a4dc62312bebee0/websockets-15.0.1-cp312-cp312-win32.whl", hash = "sha256:c338ffa0520bdb12fbc527265235639fb76e7bc7faafbb93f6ba80d9c06578a9", size = 176395, upload-time = "2025-03-05T20:02:33.017Z" },
{ url = "https://files.pythonhosted.org/packages/7d/71/abf2ebc3bbfa40f391ce1428c7168fb20582d0ff57019b69ea20fa698043/websockets-15.0.1-cp312-cp312-win_amd64.whl", hash = "sha256:fcd5cf9e305d7b8338754470cf69cf81f420459dbae8a3b40cee57417f4614a7", size = 176841, upload-time = "2025-03-05T20:02:34.498Z" },
{ url = "https://files.pythonhosted.org/packages/cb/9f/51f0cf64471a9d2b4d0fc6c534f323b664e7095640c34562f5182e5a7195/websockets-15.0.1-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:ee443ef070bb3b6ed74514f5efaa37a252af57c90eb33b956d35c8e9c10a1931", size = 175440, upload-time = "2025-03-05T20:02:36.695Z" },
{ url = "https://files.pythonhosted.org/packages/8a/05/aa116ec9943c718905997412c5989f7ed671bc0188ee2ba89520e8765d7b/websockets-15.0.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:5a939de6b7b4e18ca683218320fc67ea886038265fd1ed30173f5ce3f8e85675", size = 173098, upload-time = "2025-03-05T20:02:37.985Z" },
{ url = "https://files.pythonhosted.org/packages/ff/0b/33cef55ff24f2d92924923c99926dcce78e7bd922d649467f0eda8368923/websockets-15.0.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:746ee8dba912cd6fc889a8147168991d50ed70447bf18bcda7039f7d2e3d9151", size = 173329, upload-time = "2025-03-05T20:02:39.298Z" },
{ url = "https://files.pythonhosted.org/packages/31/1d/063b25dcc01faa8fada1469bdf769de3768b7044eac9d41f734fd7b6ad6d/websockets-15.0.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:595b6c3969023ecf9041b2936ac3827e4623bfa3ccf007575f04c5a6aa318c22", size = 183111, upload-time = "2025-03-05T20:02:40.595Z" },
{ url = "https://files.pythonhosted.org/packages/93/53/9a87ee494a51bf63e4ec9241c1ccc4f7c2f45fff85d5bde2ff74fcb68b9e/websockets-15.0.1-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3c714d2fc58b5ca3e285461a4cc0c9a66bd0e24c5da9911e30158286c9b5be7f", size = 182054, upload-time = "2025-03-05T20:02:41.926Z" },
{ url = "https://files.pythonhosted.org/packages/ff/b2/83a6ddf56cdcbad4e3d841fcc55d6ba7d19aeb89c50f24dd7e859ec0805f/websockets-15.0.1-cp313-cp313-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0f3c1e2ab208db911594ae5b4f79addeb3501604a165019dd221c0bdcabe4db8", size = 182496, upload-time = "2025-03-05T20:02:43.304Z" },
{ url = "https://files.pythonhosted.org/packages/98/41/e7038944ed0abf34c45aa4635ba28136f06052e08fc2168520bb8b25149f/websockets-15.0.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:229cf1d3ca6c1804400b0a9790dc66528e08a6a1feec0d5040e8b9eb14422375", size = 182829, upload-time = "2025-03-05T20:02:48.812Z" },
{ url = "https://files.pythonhosted.org/packages/e0/17/de15b6158680c7623c6ef0db361da965ab25d813ae54fcfeae2e5b9ef910/websockets-15.0.1-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:756c56e867a90fb00177d530dca4b097dd753cde348448a1012ed6c5131f8b7d", size = 182217, upload-time = "2025-03-05T20:02:50.14Z" },
{ url = "https://files.pythonhosted.org/packages/33/2b/1f168cb6041853eef0362fb9554c3824367c5560cbdaad89ac40f8c2edfc/websockets-15.0.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:558d023b3df0bffe50a04e710bc87742de35060580a293c2a984299ed83bc4e4", size = 182195, upload-time = "2025-03-05T20:02:51.561Z" },
{ url = "https://files.pythonhosted.org/packages/86/eb/20b6cdf273913d0ad05a6a14aed4b9a85591c18a987a3d47f20fa13dcc47/websockets-15.0.1-cp313-cp313-win32.whl", hash = "sha256:ba9e56e8ceeeedb2e080147ba85ffcd5cd0711b89576b83784d8605a7df455fa", size = 176393, upload-time = "2025-03-05T20:02:53.814Z" },
{ url = "https://files.pythonhosted.org/packages/1b/6c/c65773d6cab416a64d191d6ee8a8b1c68a09970ea6909d16965d26bfed1e/websockets-15.0.1-cp313-cp313-win_amd64.whl", hash = "sha256:e09473f095a819042ecb2ab9465aee615bd9c2028e4ef7d933600a8401c79561", size = 176837, upload-time = "2025-03-05T20:02:55.237Z" },
{ url = "https://files.pythonhosted.org/packages/02/9e/d40f779fa16f74d3468357197af8d6ad07e7c5a27ea1ca74ceb38986f77a/websockets-15.0.1-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:0c9e74d766f2818bb95f84c25be4dea09841ac0f734d1966f415e4edfc4ef1c3", size = 173109, upload-time = "2025-03-05T20:03:17.769Z" },
{ url = "https://files.pythonhosted.org/packages/bc/cd/5b887b8585a593073fd92f7c23ecd3985cd2c3175025a91b0d69b0551372/websockets-15.0.1-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:1009ee0c7739c08a0cd59de430d6de452a55e42d6b522de7aa15e6f67db0b8e1", size = 173343, upload-time = "2025-03-05T20:03:19.094Z" },
{ url = "https://files.pythonhosted.org/packages/fe/ae/d34f7556890341e900a95acf4886833646306269f899d58ad62f588bf410/websockets-15.0.1-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:76d1f20b1c7a2fa82367e04982e708723ba0e7b8d43aa643d3dcd404d74f1475", size = 174599, upload-time = "2025-03-05T20:03:21.1Z" },
{ url = "https://files.pythonhosted.org/packages/71/e6/5fd43993a87db364ec60fc1d608273a1a465c0caba69176dd160e197ce42/websockets-15.0.1-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f29d80eb9a9263b8d109135351caf568cc3f80b9928bccde535c235de55c22d9", size = 174207, upload-time = "2025-03-05T20:03:23.221Z" },
{ url = "https://files.pythonhosted.org/packages/2b/fb/c492d6daa5ec067c2988ac80c61359ace5c4c674c532985ac5a123436cec/websockets-15.0.1-pp310-pypy310_pp73-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b359ed09954d7c18bbc1680f380c7301f92c60bf924171629c5db97febb12f04", size = 174155, upload-time = "2025-03-05T20:03:25.321Z" },
{ url = "https://files.pythonhosted.org/packages/68/a1/dcb68430b1d00b698ae7a7e0194433bce4f07ded185f0ee5fb21e2a2e91e/websockets-15.0.1-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:cad21560da69f4ce7658ca2cb83138fb4cf695a2ba3e475e0559e05991aa8122", size = 176884, upload-time = "2025-03-05T20:03:27.934Z" },
{ url = "https://files.pythonhosted.org/packages/fa/a8/5b41e0da817d64113292ab1f8247140aac61cbf6cfd085d6a0fa77f4984f/websockets-15.0.1-py3-none-any.whl", hash = "sha256:f7a866fbc1e97b5c617ee4116daaa09b722101d4a3c170c787450ba409f9736f", size = 169743, upload-time = "2025-03-05T20:03:39.41Z" },
]