## CI Status
Some checks failed:
- backend-test (push): passed in 4m9s
- frontend-test (push): **failing** after 3m48s
- lint-python (push): passed in 1m41s
- secret-scanning (push): passed in 1m20s
- dependency-scan (push): passed in 10m50s
- workflow-summary (push): passed in 1m11s
## Features Added
### Document Reference System
- Implemented numbered document references (@1, @2, etc.) with autocomplete dropdown
- Added fuzzy filename matching for @filename references
- Document filtering now prioritizes numeric references over filename matches, falling back to all documents when neither matches
- Autocomplete dropdown appears when typing @ with keyboard navigation (Up/Down, Enter/Tab, Escape)
- Document numbers displayed in UI for easy reference
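The priority order above (numeric refs, then filename refs, then all documents) can be sketched roughly as follows. This is an illustrative sketch, not the actual `docs_context.py` API; the function name, the fuzzy matching (plain case-insensitive substring here), and the document-list shape are all assumptions:

```python
import re

def resolve_references(message, documents):
    """Resolve @N and @filename tokens against an ordered filename list.

    @1 refers to documents[0]. Numeric references win over filename
    matches; if nothing matches, fall back to all documents.
    """
    tokens = re.findall(r"@(\w[\w.\-]*)", message)
    resolved = []
    for tok in tokens:
        if tok.isdigit():  # numeric reference: @1, @2, ...
            idx = int(tok) - 1
            if 0 <= idx < len(documents):
                resolved.append(documents[idx])
        else:  # filename reference: case-insensitive substring match
            resolved.extend(d for d in documents if tok.lower() in d.lower())
    # fall back to all documents when no reference resolved
    return resolved or list(documents)
```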
### Conversation Management
- Added conversation rename functionality with inline editing
- Implemented conversation search (by title and content)
- Search box always visible, even when no conversations exist
- Export reports now replace @N references with actual filenames
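The export-time substitution of @N references could look like this minimal sketch (the function name and regex are assumptions; the real report generation lives in the backend):

```python
import re

def expand_numeric_refs(text, documents):
    """Replace @N tokens with the corresponding filename for export.

    `documents` is the ordered filename list, so @1 maps to
    documents[0]. Out-of-range references are left untouched.
    """
    def repl(match):
        idx = int(match.group(1)) - 1
        if 0 <= idx < len(documents):
            return documents[idx]
        return match.group(0)

    return re.sub(r"@(\d+)\b", repl, text)
```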
### UI/UX Improvements
- Removed debug toggle button
- Improved text contrast in dark mode (better visibility)
- Made input textarea expand to full available width
- Fixed file text color for better readability
- Enhanced document display with numbered badges
### Configuration & Timeouts
- Made HTTP client timeouts configurable (connect, write, pool)
- Added .env.example with all configuration options
- Updated timeout documentation
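Environment-driven timeout configuration might be read along these lines; the variable names (`HTTP_CONNECT_TIMEOUT`, etc.) and defaults are assumptions, so check `.env.example` for the real ones:

```python
import os

def load_timeouts(env=os.environ):
    """Read HTTP client timeouts (seconds) from the environment,
    falling back to conservative defaults. Variable names are
    hypothetical placeholders."""
    return {
        "connect": float(env.get("HTTP_CONNECT_TIMEOUT", "10")),
        "write": float(env.get("HTTP_WRITE_TIMEOUT", "30")),
        "pool": float(env.get("HTTP_POOL_TIMEOUT", "10")),
    }
```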
### Developer Experience
- Added `make test-setup` target for automated test conversation creation
- Test setup script supports TEST_MESSAGE and TEST_DOCS env vars
- Improved Makefile with dev and test-setup targets
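The `TEST_MESSAGE`/`TEST_DOCS` overrides could be consumed as below; the default message and the comma-separated format for `TEST_DOCS` are assumptions, not confirmed by the source:

```python
import os

def read_test_config(env=os.environ):
    """Read test-setup overrides from the environment. TEST_DOCS is
    assumed to be a comma-separated filename list (format is a guess)."""
    message = env.get("TEST_MESSAGE", "Hello from test setup")
    docs = [d.strip() for d in env.get("TEST_DOCS", "").split(",") if d.strip()]
    return message, docs
```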
### Documentation
- Updated ARCHITECTURE.md with all new features
- Created comprehensive deployment documentation
- Added GPU VM setup guides
- Removed unnecessary markdown files (CLAUDE.md, CONTRIBUTING.md, header.jpg)
- Organized documentation in docs/ directory
### GPU VM / Ollama (Stability + GPU Offload)
- Updated GPU VM docs to reflect the working systemd environment for remote Ollama
- Standardized remote Ollama port to 11434 (and added /v1/models verification)
- Documented required env for GPU offload on this VM:
- `OLLAMA_MODELS=/mnt/data/ollama`, `HOME=/mnt/data/ollama/home`
- `OLLAMA_LLM_LIBRARY=cuda_v12` (not `cuda`)
- `LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12`
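Taken together, a systemd drop-in matching the documented environment might look like the following (assuming Ollama runs as a systemd service with a drop-in at `/etc/systemd/system/ollama.service.d/override.conf`):

```ini
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
Environment="OLLAMA_MODELS=/mnt/data/ollama"
Environment="HOME=/mnt/data/ollama/home"
Environment="OLLAMA_LLM_LIBRARY=cuda_v12"
Environment="LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama/cuda_v12"
```

Note that the attached storage script writes `OLLAMA_MODELS=/mnt/data/ollama/models`; confirm which of the two paths your install actually expects before applying both.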
## Technical Changes
### Backend
- Enhanced `docs_context.py` with reference parsing (numeric and filename)
- Added `update_conversation_title` to storage.py
- New endpoints: PATCH /api/conversations/{id}/title, GET /api/conversations/search
- Improved report generation with filename substitution
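A minimal in-memory sketch of the two storage-level operations these endpoints rely on (the real `storage.py` presumably persists to disk; everything here beyond the `update_conversation_title` name is illustrative):

```python
class ConversationStore:
    """Minimal in-memory stand-in for the storage layer."""

    def __init__(self):
        self._conversations = {}  # id -> {"title": str, "messages": [str]}

    def add(self, conv_id, title, messages=()):
        self._conversations[conv_id] = {"title": title, "messages": list(messages)}

    def update_conversation_title(self, conv_id, title):
        """Backs PATCH /api/conversations/{id}/title."""
        if conv_id not in self._conversations:
            raise KeyError(conv_id)
        self._conversations[conv_id]["title"] = title

    def search(self, query):
        """Backs GET /api/conversations/search: match title or content."""
        q = query.lower()
        return [
            cid for cid, c in self._conversations.items()
            if q in c["title"].lower()
            or any(q in m.lower() for m in c["messages"])
        ]
```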
### Frontend
- Removed debugMode state and related code
- Added autocomplete dropdown component
- Implemented search functionality in Sidebar
- Enhanced ChatInterface with autocomplete and improved textarea sizing
- Updated CSS for better contrast and responsive design
## Files Changed
- Backend: config.py, council.py, docs_context.py, main.py, storage.py
- Frontend: App.jsx, ChatInterface.jsx, Sidebar.jsx, and related CSS files
- Documentation: README.md, ARCHITECTURE.md, new docs/ directory
- Configuration: .env.example, Makefile
- Scripts: scripts/test_setup.py
## Breaking Changes
None - all changes are backward compatible
## Testing
- All existing tests pass
- New test-setup script validates conversation creation workflow
- Manual testing of autocomplete, search, and rename features
## Attached Script
Bash, executable, 88 lines (2.9 KiB):
```bash
#!/bin/bash
# Configure Ollama to use /mnt/data for model storage
# Run this ON THE GPU VM

echo "=== Fixing Ollama Storage Location ==="
echo ""

# Check current disk usage
echo "Current disk usage:"
df -h | grep -E "Filesystem|/dev/sda"
echo ""

# Create models directory on /mnt/data
echo "Creating Ollama models directory on /mnt/data..."
sudo mkdir -p /mnt/data/ollama/models
sudo chown -R ollama:ollama /mnt/data/ollama 2>/dev/null || sudo chown -R "$(whoami):$(whoami)" /mnt/data/ollama
echo "✓ Directory created: /mnt/data/ollama/models"
echo ""

# Check if there are existing models to move
if [ -d ~/.ollama/models ] && [ "$(ls -A ~/.ollama/models 2>/dev/null)" ]; then
    echo "Found existing models in ~/.ollama/models"
    echo "Moving to /mnt/data/ollama/models..."
    sudo mv ~/.ollama/models/* /mnt/data/ollama/models/ 2>/dev/null
    echo "✓ Models moved"
elif [ -d /usr/share/ollama/models ] && [ "$(ls -A /usr/share/ollama/models 2>/dev/null)" ]; then
    echo "Found existing models in /usr/share/ollama/models"
    echo "Moving to /mnt/data/ollama/models..."
    sudo mv /usr/share/ollama/models/* /mnt/data/ollama/models/ 2>/dev/null
    echo "✓ Models moved"
else
    echo "No existing models found to move"
fi

echo ""

# Update systemd service to use new location
echo "Updating systemd service configuration..."
sudo mkdir -p /etc/systemd/system/ollama.service.d

# Check if override.conf exists and update it, or create new
if [ -f /etc/systemd/system/ollama.service.d/override.conf ]; then
    echo "Updating existing override.conf..."
    # Add OLLAMA_MODELS if not already there
    if ! grep -q "OLLAMA_MODELS" /etc/systemd/system/ollama.service.d/override.conf; then
        sudo sed -i '/\[Service\]/a Environment="OLLAMA_MODELS=/mnt/data/ollama/models"' /etc/systemd/system/ollama.service.d/override.conf
    else
        # Replace only up to the closing quote so the line stays
        # well-formed as Environment="OLLAMA_MODELS=..."
        sudo sed -i 's|OLLAMA_MODELS=[^"]*|OLLAMA_MODELS=/mnt/data/ollama/models|' /etc/systemd/system/ollama.service.d/override.conf
    fi
else
    echo "Creating new override.conf..."
    sudo tee /etc/systemd/system/ollama.service.d/override.conf > /dev/null <<EOF
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
Environment="OLLAMA_MODELS=/mnt/data/ollama/models"
EOF
fi

echo "✓ Systemd configuration updated"
echo ""

# Reload and restart
echo "Reloading systemd and restarting Ollama..."
sudo systemctl daemon-reload
sudo systemctl restart ollama
sleep 3

echo "✓ Ollama restarted with new storage location"
echo ""

# Verify
echo "Verifying configuration:"
echo "  Storage location: /mnt/data/ollama/models"
echo "  Disk space available:"
df -h /mnt/data | tail -1
echo ""
echo "  Checking if Ollama is running:"
systemctl is-active ollama && echo "  ✓ Ollama is running" || echo "  ✗ Ollama is not running"

echo ""
echo "=== Done! ==="
echo "You can now pull models:"
echo "  ollama pull qwen2:latest"
echo "  ollama pull qwen2.5:14b"
echo "  ollama pull llama3.1:8b"
echo "  ollama pull qwen2.5:7b"
```