# Nanobot Setup Guide

This guide documents how to set up and run Ollama, the virtual environment, and nanobot.
## Prerequisites

- Python 3.11+
- NVIDIA GPU (for GPU acceleration)
- Ollama installed (`/usr/local/bin/ollama`)
## 1. Running Ollama with GPU Support

Ollama must be started with GPU support to ensure fast responses. The models are stored in `/mnt/data/ollama`.

### Start Ollama with GPU

```bash
# Stop any existing Ollama processes
pkill ollama

# Start Ollama with GPU support and custom models path
OLLAMA_NUM_GPU=1 OLLAMA_MODELS=/mnt/data/ollama ollama serve > /tmp/ollama.log 2>&1 &
```
### Verify Ollama is Running

```bash
# Check if Ollama is responding
curl http://localhost:11434/api/tags

# Check GPU usage (should show Ollama using GPU memory)
nvidia-smi

# Check if models are available (pretty-printed)
curl http://localhost:11434/api/tags | python3 -m json.tool
```
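If you prefer scripting the check, `/api/tags` returns JSON with a top-level `models` array. A minimal sketch that extracts the model names (the sample payload below is illustrative, trimmed to the fields this function reads):

```python
import json

def list_model_names(tags_json: str) -> list[str]:
    """Extract model names from an Ollama /api/tags response body."""
    data = json.loads(tags_json)
    return [m["name"] for m in data.get("models", [])]

# Illustrative payload in the shape /api/tags returns
sample = '{"models": [{"name": "llama3.1:8b", "size": 4920753328}]}'
print(list_model_names(sample))  # ['llama3.1:8b']
```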
### Make Ollama Permanent (Systemd Service)

To make Ollama start automatically with GPU support:

```bash
# Edit the systemd service (opens an override file)
sudo systemctl edit ollama
```

Add this content to the override:

```ini
[Service]
Environment="OLLAMA_NUM_GPU=1"
Environment="OLLAMA_MODELS=/mnt/data/ollama"
```

```bash
# Reload and restart
sudo systemctl daemon-reload
sudo systemctl restart ollama
sudo systemctl enable ollama
```
### Troubleshooting Ollama

- **Not using GPU**: Check `nvidia-smi`; if no Ollama process is using GPU memory, restart with `OLLAMA_NUM_GPU=1`
- **Models not found**: Ensure `OLLAMA_MODELS=/mnt/data/ollama` is set
- **Port already in use**: Stop the existing Ollama with `pkill ollama` or `sudo systemctl stop ollama`
## 2. Virtual Environment Setup (Optional)

**Note**: Nanobot is installed in the system Python and can run without a venv. However, if you prefer isolation or are developing, you can use the venv.

### Option A: Run Without Venv (Recommended)

Nanobot is already installed in the system Python:

```bash
# Just run directly
python3 -m nanobot.cli.commands agent -m "your message"
```

### Option B: Use the Virtual Environment

If you want to use the venv:

```bash
cd /root/code/nanobot
source .venv/bin/activate
python3 -m nanobot.cli.commands agent -m "your message"
```
### Install/Update Dependencies

If dependencies are missing in the system Python:

```bash
pip3 install -e /root/code/nanobot --break-system-packages
```

Or in the venv:

```bash
cd /root/code/nanobot
source .venv/bin/activate
pip install -e .
```
## 3. Running Nanobot

### Basic Usage (Without Venv)

```bash
python3 -m nanobot.cli.commands agent -m "your message here"
```

### Basic Usage (With Venv)

```bash
cd /root/code/nanobot
source .venv/bin/activate
python3 -m nanobot.cli.commands agent -m "your message here"
```
### Configuration

Nanobot configuration is stored in `~/.nanobot/config.json`.

Example configuration for Ollama:

```json
{
  "providers": {
    "custom": {
      "apiKey": "no-key",
      "apiBase": "http://localhost:11434/v1"
    }
  },
  "agents": {
    "defaults": {
      "model": "llama3.1:8b"
    }
  }
}
```
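A quick way to sanity-check the config before launching is to load it and confirm the fields shown in the example above are present. This sketch only checks those fields; it is not an exhaustive schema:

```python
import json
from pathlib import Path

def check_config(path: str) -> str:
    """Load a nanobot config and return the default model.

    Raises KeyError if the fields from the example config are missing.
    """
    cfg = json.loads(Path(path).expanduser().read_text())
    base = cfg["providers"]["custom"]["apiBase"]   # e.g. http://localhost:11434/v1
    model = cfg["agents"]["defaults"]["model"]     # e.g. llama3.1:8b
    print(f"provider base: {base}, default model: {model}")
    return model

# check_config("~/.nanobot/config.json")
```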
### Quick Start Script

Create an alias for convenience. Add one of these to your `~/.zshrc` or `~/.bashrc`:

```bash
# Option 1: Without venv (recommended - simpler)
alias nanobot='python3 -m nanobot.cli.commands'

# Option 2: With venv (if you prefer isolation)
alias nanobot='cd /root/code/nanobot && source .venv/bin/activate && python3 -m nanobot.cli.commands'
```

After adding the alias:

```bash
# Reload your shell configuration
source ~/.zshrc  # or: source ~/.bashrc

# Now you can use the shorter command:
nanobot agent -m "your message here"
```
Example usage with the alias:

```bash
# Simple message
nanobot agent -m "hello"

# Analyze an Excel file
nanobot agent -m "analyze /root/.nanobot/workspace/bakery_inventory.xlsx file and calculate total inventory value"

# Start a new session
nanobot agent -m "/new"
```
### Example: Analyze an Excel File

```bash
# Without venv (simpler)
python3 -m nanobot.cli.commands agent -m "analyze /root/.nanobot/workspace/bakery_inventory.xlsx file and calculate total inventory value"

# Or with venv
cd /root/code/nanobot && source .venv/bin/activate && python3 -m nanobot.cli.commands agent -m "analyze /root/.nanobot/workspace/bakery_inventory.xlsx file and calculate total inventory value"
```
## 4. Complete Startup Sequence

Here's the complete sequence to get everything running:

```bash
# 1. Start Ollama with GPU support
OLLAMA_NUM_GPU=1 OLLAMA_MODELS=/mnt/data/ollama ollama serve > /tmp/ollama.log 2>&1 &

# 2. Wait a few seconds for Ollama to start
sleep 3

# 3. Verify Ollama is running
curl http://localhost:11434/api/tags

# 4. Run nanobot (no venv needed)
python3 -m nanobot.cli.commands agent -m "hello"

# Or with venv (optional):
# cd /root/code/nanobot
# source .venv/bin/activate
# python3 -m nanobot.cli.commands agent -m "hello"
```
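Instead of a fixed `sleep 3`, a startup script can poll the API until it answers. A minimal sketch, assuming the default Ollama address used throughout this guide:

```python
import time
import urllib.error
import urllib.request

def wait_for_ollama(url: str = "http://localhost:11434/api/tags",
                    timeout: float = 30.0) -> bool:
    """Poll the Ollama API until it responds, or give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            time.sleep(0.5)  # not up yet; retry shortly
    return False
```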
## 5. Troubleshooting

### Nanobot Hangs or "Thinking Too Long"

- **Check Ollama**: Ensure Ollama is running and responding:

  ```bash
  curl http://localhost:11434/api/tags
  ```

- **Check GPU**: Verify Ollama is using the GPU (`nvidia-smi` should show GPU memory usage for the Ollama process)
- **Check Timeout**: The CustomProvider has a 120-second timeout. If requests take longer, Ollama may be overloaded.
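To see whether requests are approaching that 120-second ceiling, you can time one yourself. A sketch using only the standard library (the URL and payload would be the ones from the "Test Ollama directly" example; the function itself is illustrative, not part of nanobot):

```python
import json
import time
import urllib.error
import urllib.request

def timed_request(url: str, payload: dict, timeout: float = 120.0):
    """POST a JSON payload and return (success, elapsed_seconds).

    The default timeout mirrors the CustomProvider's 120-second limit.
    """
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    start = time.monotonic()
    try:
        with urllib.request.urlopen(req, timeout=timeout):
            return True, time.monotonic() - start
    except (urllib.error.URLError, OSError):
        return False, time.monotonic() - start
```

If the elapsed time regularly approaches 120 seconds, the model is the bottleneck: try a smaller model or confirm the GPU is actually in use.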
### Python Command Not Found

If nanobot invokes `python` instead of `python3`:

```bash
# Create a symlink
sudo ln -sf /usr/bin/python3 /usr/local/bin/python
```
### Pandas/Openpyxl Not Available

If nanobot needs to analyze Excel files:

```bash
# Install in system Python (for the exec tool)
pip3 install pandas openpyxl --break-system-packages

# Or ensure the python symlink exists (see above)
```
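Once the packages are installed, the "total inventory value" task from the examples reduces to a one-line pandas expression. A sketch with made-up data (the `quantity` and `unit_price` columns are illustrative; the real spreadsheet's column names may differ):

```python
import pandas as pd

# Stand-in for pd.read_excel("bakery_inventory.xlsx") -- illustrative data
df = pd.DataFrame({
    "item": ["croissant", "baguette", "rye loaf"],
    "quantity": [40, 25, 10],
    "unit_price": [2.50, 3.00, 5.00],
})

# Total value = sum of quantity * unit price per row
total = (df["quantity"] * df["unit_price"]).sum()
print(f"total inventory value: {total:.2f}")  # total inventory value: 225.00
```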
### Virtual Environment Issues

If `.venv` doesn't exist or is corrupted, recreate it:

```bash
cd /root/code/nanobot
python3 -m venv .venv
source .venv/bin/activate
pip install -e .
```
## 6. File Locations

- Nanobot code: `/root/code/nanobot`
- Nanobot config: `~/.nanobot/config.json`
- Nanobot workspace: `~/.nanobot/workspace`
- Ollama models: `/mnt/data/ollama`
- Ollama logs: `/tmp/ollama.log`
## 7. Environment Variables

### Ollama

- `OLLAMA_NUM_GPU=1` - Enable GPU support
- `OLLAMA_MODELS=/mnt/data/ollama` - Custom models directory
- `OLLAMA_HOST=http://127.0.0.1:11434` - Server address

### Nanobot

- Uses `~/.nanobot/config.json` for configuration
- Workspace defaults to `~/.nanobot/workspace`
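A client script would typically resolve the server address from `OLLAMA_HOST`, falling back to the default listed above. A sketch:

```python
import os

def ollama_base_url() -> str:
    """Resolve the Ollama server address, defaulting to the local server."""
    return os.environ.get("OLLAMA_HOST", "http://127.0.0.1:11434")

print(ollama_base_url())
```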
## 8. Performance Tips

- **Always use GPU**: Start Ollama with `OLLAMA_NUM_GPU=1` for much faster responses
- **Keep models loaded**: Ollama keeps frequently used models in GPU memory
- **Use an appropriate model size**: Smaller models (like llama3.1:8b) are faster than larger ones
- **Monitor GPU usage**: Use `nvidia-smi` to check whether the GPU is being utilized
## 9. Quick Reference

```bash
# Start Ollama
OLLAMA_NUM_GPU=1 OLLAMA_MODELS=/mnt/data/ollama ollama serve > /tmp/ollama.log 2>&1 &

# Run nanobot (no venv needed)
python3 -m nanobot.cli.commands agent -m "message"

# Or with venv (optional):
# cd /root/code/nanobot && source .venv/bin/activate && python3 -m nanobot.cli.commands agent -m "message"

# Check status
nvidia-smi                            # GPU usage
curl http://localhost:11434/api/tags  # Ollama models
ps aux | grep ollama                  # Ollama process
```
## 10. Common Commands

```bash
# Stop Ollama
pkill ollama

# Restart Ollama with GPU
pkill ollama && OLLAMA_NUM_GPU=1 OLLAMA_MODELS=/mnt/data/ollama ollama serve > /tmp/ollama.log 2>&1 &

# Check Ollama logs
tail -f /tmp/ollama.log

# Test Ollama directly
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model":"llama3.1:8b","messages":[{"role":"user","content":"hello"}],"max_tokens":10}'
```
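The same request can be issued from Python against the OpenAI-compatible endpoint. Building the request body is the self-contained part, sketched here; the field names mirror the curl example above:

```python
import json

def chat_payload(model: str, content: str, max_tokens: int = 10) -> str:
    """Build the JSON body for POST /v1/chat/completions."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": content}],
        "max_tokens": max_tokens,
    })

print(chat_payload("llama3.1:8b", "hello"))
```

POST this body to `http://localhost:11434/v1/chat/completions` with a `Content-Type: application/json` header, exactly as the curl command does.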
---

**Last Updated**: 2026-02-23
**Tested with**: Ollama 0.13.5, Python 3.11.2, nanobot 0.1.4