## Features Added
### Document Reference System
- Implemented numbered document references (@1, @2, etc.) with autocomplete dropdown
- Added fuzzy filename matching for @filename references
- Document filtering now prioritizes numeric refs > filename refs > all documents
- Autocomplete dropdown appears when typing @ with keyboard navigation (Up/Down, Enter/Tab, Escape)
- Document numbers displayed in UI for easy reference
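The filtering priority above (numeric refs, then filename refs, then all documents) can be sketched as follows. This is a minimal illustration, not the project's actual code: the `number`/`filename` record shape and the subsequence-style fuzzy match are assumptions.

```python
import re

def filter_documents(query, documents):
    """Filter documents for the @-autocomplete dropdown.

    Priority: numeric refs (@1, @2, ...) first, then fuzzy filename
    matches, then all documents. `documents` is assumed to be a list of
    dicts with 'number' and 'filename' keys (hypothetical shape).
    """
    if re.fullmatch(r"\d+", query):
        # @N: match against the document's displayed number
        return [d for d in documents if str(d["number"]).startswith(query)]
    if query:
        # @filename: simple fuzzy match -- every query character appears
        # in order within the filename (one common fuzzy heuristic)
        def fuzzy(name):
            it = iter(name.lower())
            return all(ch in it for ch in query.lower())
        return [d for d in documents if fuzzy(d["filename"])]
    # bare '@': show everything
    return documents
```

A bare `@` therefore lists every document, while `@1` narrows to document 1 and `@rep` narrows to filenames containing `r`, `e`, `p` in order.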
### Conversation Management
- Added conversation rename functionality with inline editing
- Implemented conversation search (by title and content)
- Search box always visible, even when no conversations exist
- Export reports now replace @N references with actual filenames
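The @N-to-filename substitution for exports can be done with a single regex pass. A sketch under assumptions (the real report generator may track documents differently); unknown numbers are left untouched rather than dropped:

```python
import re

def substitute_refs(text, doc_names):
    """Replace @N references with actual filenames for exported reports.

    `doc_names` maps document numbers to filenames (assumed shape).
    References with no matching document are left as-is.
    """
    def repl(match):
        number = int(match.group(1))
        return doc_names.get(number, match.group(0))
    return re.sub(r"@(\d+)", repl, text)
```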
### UI/UX Improvements
- Removed debug toggle button
- Improved text contrast in dark mode for better visibility
- Made input textarea expand to full available width
- Fixed file text color for better readability
- Enhanced document display with numbered badges
### Configuration & Timeouts
- Made HTTP client timeouts configurable (connect, write, pool)
- Added .env.example with all configuration options
- Updated timeout documentation
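Reading the timeout settings might look like the sketch below. The environment variable names and defaults here are illustrative; see `.env.example` for the actual options the project defines.

```python
import os

def load_timeouts(env=os.environ):
    """Read HTTP client timeout settings from the environment.

    Variable names and defaults are illustrative -- check .env.example
    for the real ones. The resulting dict could be unpacked into the
    HTTP client's timeout configuration (e.g. httpx.Timeout(**timeouts)).
    """
    return {
        "connect": float(env.get("HTTP_CONNECT_TIMEOUT", "10")),
        "write": float(env.get("HTTP_WRITE_TIMEOUT", "30")),
        "pool": float(env.get("HTTP_POOL_TIMEOUT", "5")),
        "read": float(env.get("HTTP_READ_TIMEOUT", "60")),
    }
```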
### Developer Experience
- Added `make test-setup` target for automated test conversation creation
- Test setup script supports TEST_MESSAGE and TEST_DOCS env vars
- Improved Makefile with dev and test-setup targets
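How the setup script might pick up its overrides, as a sketch (the default message and parsing here are hypothetical; see `scripts/test_setup.py` for the actual behavior):

```python
import os

def read_test_config(env=os.environ):
    """Read TEST_MESSAGE and TEST_DOCS overrides for the test-setup run.

    TEST_DOCS is assumed to be a comma-separated list of document paths;
    the default message is hypothetical.
    """
    message = env.get("TEST_MESSAGE", "Hello from test setup")
    docs = [d for d in env.get("TEST_DOCS", "").split(",") if d]
    return message, docs
```

Typical invocation would then be `TEST_MESSAGE="Ping" TEST_DOCS="a.md,b.md" make test-setup`.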
### Documentation
- Updated ARCHITECTURE.md with all new features
- Created comprehensive deployment documentation
- Added GPU VM setup guides
- Removed unnecessary markdown files (CLAUDE.md, CONTRIBUTING.md, header.jpg)
- Organized documentation in docs/ directory
### GPU VM / Ollama (Stability + GPU Offload)
- Updated GPU VM docs to reflect the working systemd environment for remote Ollama
- Standardized remote Ollama port to 11434 (and added /v1/models verification)
- Documented required env for GPU offload on this VM:
- `OLLAMA_MODELS=/mnt/data/ollama`, `HOME=/mnt/data/ollama/home`
- `OLLAMA_LLM_LIBRARY=cuda_v12` (not `cuda`)
- `LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12`
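Wired into systemd, the documented environment could be applied as a drop-in override along these lines (the drop-in path is illustrative; the variable values are taken from the list above):

```ini
# /etc/systemd/system/ollama.service.d/override.conf
# Apply with: systemctl daemon-reload && systemctl restart ollama
[Service]
Environment="OLLAMA_MODELS=/mnt/data/ollama"
Environment="HOME=/mnt/data/ollama/home"
Environment="OLLAMA_LLM_LIBRARY=cuda_v12"
Environment="LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12"
```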
## Technical Changes
### Backend
- Enhanced `docs_context.py` with reference parsing (numeric and filename)
- Added `update_conversation_title` to storage.py
- New endpoints: `PATCH /api/conversations/{id}/title`, `GET /api/conversations/search`
- Improved report generation with filename substitution
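The rename and search behavior behind those endpoints can be sketched as below, using a plain dict as a stand-in for the persisted store; the actual `storage.py` functions and matching rules may differ.

```python
def update_conversation_title(conversations, conv_id, new_title):
    """Rename a conversation in the store (sketch of the storage-layer
    helper backing PATCH /api/conversations/{id}/title)."""
    conv = conversations.get(conv_id)
    if conv is None:
        raise KeyError(f"unknown conversation: {conv_id}")
    conv["title"] = new_title.strip()
    return conv

def search_conversations(conversations, query):
    """Case-insensitive search over titles and message content (sketch
    of GET /api/conversations/search; actual matching may differ)."""
    q = query.lower()
    return [
        c for c in conversations.values()
        if q in c["title"].lower()
        or any(q in m["content"].lower() for m in c.get("messages", []))
    ]
```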
### Frontend
- Removed debugMode state and related code
- Added autocomplete dropdown component
- Implemented search functionality in Sidebar
- Enhanced ChatInterface with autocomplete and improved textarea sizing
- Updated CSS for better contrast and responsive design
## Files Changed
- Backend: config.py, council.py, docs_context.py, main.py, storage.py
- Frontend: App.jsx, ChatInterface.jsx, Sidebar.jsx, and related CSS files
- Documentation: README.md, ARCHITECTURE.md, new docs/ directory
- Configuration: .env.example, Makefile
- Scripts: scripts/test_setup.py
## Breaking Changes
None; all changes are backward compatible.
## Testing
- All existing tests pass
- New test-setup script validates conversation creation workflow
- Manual testing of autocomplete, search, and rename features
## The Digital Garden

Once upon a time, in a world where code grew like vines, there lived a developer named Alex who discovered something magical in their repository.

Alex had been debugging a particularly stubborn bug for three days straight. The error messages were cryptic, the stack traces were confusing, and coffee had stopped working its usual magic.

### The Discovery

On the fourth morning, while scrolling through documentation that made less sense than the bug itself, Alex noticed something strange. Every time they ran their tests, a new file appeared in the `mysteries/` directory, a file that hadn't been there before.

The file was called `clue.md` and it contained only this:

> "Look not in the code, but between the lines."

### The Revelation

At first, Alex thought it was a prank from a coworker or a stray script. But as the days passed, the file began to change. New clues appeared, each more cryptic than the last.

It wasn't until Alex uploaded one of these files to their LLM Council that everything clicked. The council of AI models analyzed the patterns, the timing, the syntax. They saw connections Alex had missed.

### The Solution

The bug wasn't in Alex's code at all. It was in the dependencies, hidden in a nested module that updated itself every time the tests ran. The mysterious files were breadcrumbs left by a self-modifying dependency that was trying to communicate.

Alex fixed the issue, updated the dependencies, and the mysterious files stopped appearing. But they kept one, the first `clue.md`, as a reminder that sometimes the solution lies not in what you're looking for, but in what finds you.

The end. Or is it just the beginning?