## Features Added
### Document Reference System
- Implemented numbered document references (@1, @2, etc.) with autocomplete dropdown
- Added fuzzy filename matching for @filename references
- Document filtering now prioritizes numeric refs > filename refs > all documents
- Autocomplete dropdown appears when typing @, with keyboard navigation (Up/Down, Enter/Tab, Escape)
- Document numbers displayed in UI for easy reference
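The filtering priority above (numeric refs, then filename refs, then all documents) can be sketched as follows. This is an illustrative helper, not the actual `docs_context.py` implementation; the `select_documents` name and the document dict shape are assumptions.

```python
import re
from typing import Dict, List

_REF_RE = re.compile(r"@(\w+)")

def select_documents(message: str, docs: List[Dict[str, str]]) -> List[Dict[str, str]]:
    """Pick referenced documents; docs entries look like {"number": "1", "filename": "notes.md"}."""
    refs = _REF_RE.findall(message)
    # Highest priority: exact numeric references (@1, @2, ...)
    numeric = [d for d in docs if d["number"] in refs]
    if numeric:
        return numeric
    # Next: fuzzy filename references (case-insensitive substring match)
    by_name = [
        d for d in docs
        if any(r.lower() in d["filename"].lower() for r in refs)
    ]
    if by_name:
        return by_name
    # No usable references: fall back to every document
    return docs
```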
### Conversation Management
- Added conversation rename functionality with inline editing
- Implemented conversation search (by title and content)
- Search box always visible, even when no conversations exist
- Export reports now replace @N references with actual filenames
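The @N-to-filename substitution in exported reports amounts to a regex replace over the report text. A minimal sketch (function name and mapping shape assumed, not taken from the codebase):

```python
import re
from typing import Dict

def substitute_refs(report: str, numbers_to_names: Dict[str, str]) -> str:
    """Replace @N tokens with the referenced document's filename.

    `numbers_to_names` maps "1" -> "notes.md", etc. Unknown refs are
    left untouched rather than dropped.
    """
    def repl(m: re.Match) -> str:
        return numbers_to_names.get(m.group(1), m.group(0))
    return re.sub(r"@(\d+)", repl, report)
```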
### UI/UX Improvements
- Removed debug toggle button
- Improved text contrast in dark mode
- Made input textarea expand to full available width
- Fixed file text color for better readability
- Enhanced document display with numbered badges
### Configuration & Timeouts
- Made HTTP client timeouts configurable (connect, write, pool)
- Added .env.example with all configuration options
- Updated timeout documentation
### Developer Experience
- Added `make test-setup` target for automated test conversation creation
- Test setup script supports TEST_MESSAGE and TEST_DOCS env vars
- Improved Makefile with dev and test-setup targets
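The env-var handling in the test-setup script presumably looks something like the following sketch (function name and defaults assumed; `TEST_MESSAGE` seeds the first message, `TEST_DOCS` lists markdown files to upload):

```python
import os

def read_test_config() -> dict:
    """Read test-setup overrides from the environment."""
    return {
        "message": os.getenv("TEST_MESSAGE", "Hello from test-setup"),
        # Comma-separated list of .md paths; empty entries are dropped
        "docs": [p for p in os.getenv("TEST_DOCS", "").split(",") if p],
    }
```

Invoked via `make test-setup`, e.g. `TEST_MESSAGE="smoke test" TEST_DOCS="a.md,b.md" make test-setup`.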
### Documentation
- Updated ARCHITECTURE.md with all new features
- Created comprehensive deployment documentation
- Added GPU VM setup guides
- Removed unnecessary markdown files (CLAUDE.md, CONTRIBUTING.md, header.jpg)
- Organized documentation in docs/ directory
### GPU VM / Ollama (Stability + GPU Offload)
- Updated GPU VM docs to reflect the working systemd environment for remote Ollama
- Standardized remote Ollama port to 11434 (and added /v1/models verification)
- Documented required env for GPU offload on this VM:
- `OLLAMA_MODELS=/mnt/data/ollama`, `HOME=/mnt/data/ollama/home`
- `OLLAMA_LLM_LIBRARY=cuda_v12` (not `cuda`)
- `LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12`
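Assuming Ollama runs under the stock systemd unit, the environment above would typically live in a drop-in override (the path and unit name below are assumptions, not taken from the docs):

```ini
# /etc/systemd/system/ollama.service.d/override.conf (assumed path)
[Service]
Environment="OLLAMA_MODELS=/mnt/data/ollama"
Environment="HOME=/mnt/data/ollama/home"
Environment="OLLAMA_LLM_LIBRARY=cuda_v12"
Environment="LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12"
```

After editing, run `systemctl daemon-reload && systemctl restart ollama`, then verify with the `/v1/models` check on port 11434.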
## Technical Changes
### Backend
- Enhanced `docs_context.py` with reference parsing (numeric and filename)
- Added `update_conversation_title` to storage.py
- New endpoints: PATCH /api/conversations/{id}/title, GET /api/conversations/search
- Improved report generation with filename substitution
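The new `update_conversation_title` in storage.py might look like the sketch below, assuming each conversation is persisted as a JSON file under a data directory (the storage layout is an assumption for illustration):

```python
import json
from pathlib import Path

def update_conversation_title(data_dir: Path, conversation_id: str, title: str) -> None:
    """Rewrite the stored conversation with a new title (hypothetical layout)."""
    path = data_dir / f"{conversation_id}.json"
    if not path.exists():
        raise FileNotFoundError("Conversation not found")
    conv = json.loads(path.read_text(encoding="utf-8"))
    conv["title"] = title
    path.write_text(json.dumps(conv), encoding="utf-8")
```

The PATCH /api/conversations/{id}/title endpoint would then be a thin wrapper over this call.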
### Frontend
- Removed debugMode state and related code
- Added autocomplete dropdown component
- Implemented search functionality in Sidebar
- Enhanced ChatInterface with autocomplete and improved textarea sizing
- Updated CSS for better contrast and responsive design
## Files Changed
- Backend: config.py, council.py, docs_context.py, main.py, storage.py
- Frontend: App.jsx, ChatInterface.jsx, Sidebar.jsx, and related CSS files
- Documentation: README.md, ARCHITECTURE.md, new docs/ directory
- Configuration: .env.example, Makefile
- Scripts: scripts/test_setup.py
## Breaking Changes
None; all changes are backward compatible.
## Testing
- All existing tests pass
- New test-setup script validates conversation creation workflow
- Manual testing of autocomplete, search, and rename features
"""Markdown document storage for conversations.
|
|
|
|
Stores uploaded .md files on disk under data/docs/<conversation_id>/.
|
|
"""
|
|
|
|
from __future__ import annotations
|
|
|
|
import os
|
|
import re
|
|
import uuid
|
|
from dataclasses import dataclass
|
|
from pathlib import Path
|
|
from typing import List
|
|
|
|
from .config import DOCS_DIR, MAX_DOC_BYTES
|
|
|
|
|
|
_SAFE_NAME_RE = re.compile(r"[^a-zA-Z0-9._ -]+")
|
|
|
|
|
|
def _safe_filename(name: str) -> str:
|
|
name = name.strip().replace("\\", "/").split("/")[-1] # drop any path
|
|
name = _SAFE_NAME_RE.sub("_", name)
|
|
name = name.strip(" .")
|
|
if not name:
|
|
name = "document.md"
|
|
if not name.lower().endswith(".md"):
|
|
name = f"{name}.md"
|
|
return name
|
|
|
|
|
|
def _conversation_dir(conversation_id: str) -> Path:
|
|
base = Path(DOCS_DIR)
|
|
return base / conversation_id
|
|
|
|
|
|
def ensure_docs_dir(conversation_id: str) -> Path:
|
|
d = _conversation_dir(conversation_id)
|
|
d.mkdir(parents=True, exist_ok=True)
|
|
return d
|
|
|
|
|
|
@dataclass(frozen=True)
|
|
class DocumentMeta:
|
|
id: str
|
|
filename: str
|
|
bytes: int
|
|
|
|
|
|
def save_markdown_document(conversation_id: str, filename: str, content: bytes) -> DocumentMeta:
|
|
if len(content) > MAX_DOC_BYTES:
|
|
raise ValueError(f"Document too large. Max {MAX_DOC_BYTES} bytes.")
|
|
|
|
safe_name = _safe_filename(filename)
|
|
doc_id = str(uuid.uuid4())
|
|
|
|
d = ensure_docs_dir(conversation_id)
|
|
path = d / f"{doc_id}__{safe_name}"
|
|
path.write_bytes(content)
|
|
return DocumentMeta(id=doc_id, filename=safe_name, bytes=len(content))
|
|
|
|
|
|
def list_documents(conversation_id: str) -> List[DocumentMeta]:
|
|
d = _conversation_dir(conversation_id)
|
|
if not d.exists():
|
|
return []
|
|
|
|
out: List[DocumentMeta] = []
|
|
for p in sorted(d.iterdir()):
|
|
if not p.is_file():
|
|
continue
|
|
if "__" not in p.name:
|
|
continue
|
|
doc_id, fname = p.name.split("__", 1)
|
|
out.append(DocumentMeta(id=doc_id, filename=fname, bytes=p.stat().st_size))
|
|
return out
|
|
|
|
|
|
def read_document_text(conversation_id: str, doc_id: str) -> str:
|
|
d = _conversation_dir(conversation_id)
|
|
if not d.exists():
|
|
raise FileNotFoundError("Conversation docs not found")
|
|
|
|
matches = [p for p in d.iterdir() if p.is_file() and p.name.startswith(f"{doc_id}__")]
|
|
if not matches:
|
|
raise FileNotFoundError("Document not found")
|
|
|
|
raw = matches[0].read_bytes()
|
|
# Best-effort UTF-8; replace invalid sequences
|
|
return raw.decode("utf-8", errors="replace")
|
|
|
|
|
|
def delete_document(conversation_id: str, doc_id: str) -> None:
|
|
d = _conversation_dir(conversation_id)
|
|
if not d.exists():
|
|
raise FileNotFoundError("Conversation docs not found")
|
|
|
|
matches = [p for p in d.iterdir() if p.is_file() and p.name.startswith(f"{doc_id}__")]
|
|
if not matches:
|
|
raise FileNotFoundError("Document not found")
|
|
matches[0].unlink()
|
|
|
|
|