Add SETUP_GUIDE.md with improved alias instructions
- Enhanced alias section with clearer options
- Added examples for common nanobot commands
- Improved formatting and organization
# Nanobot Setup Guide

This guide documents how to set up and run Ollama, the virtual environment, and nanobot.

## Prerequisites

- Python 3.11+
- NVIDIA GPU (for GPU acceleration)
- Ollama installed (`/usr/local/bin/ollama`)
## 1. Running Ollama with GPU Support

Ollama must be started with GPU support to ensure fast responses. The models are stored in `/mnt/data/ollama`.

### Start Ollama with GPU

```bash
# Stop any existing Ollama processes
pkill ollama

# Start Ollama with GPU support and the custom models path
OLLAMA_NUM_GPU=1 OLLAMA_MODELS=/mnt/data/ollama ollama serve > /tmp/ollama.log 2>&1 &
```
### Verify Ollama is Running

```bash
# Check that Ollama is responding
curl http://localhost:11434/api/tags

# Check GPU usage (should show Ollama using GPU memory)
nvidia-smi

# Check which models are available, pretty-printed
curl http://localhost:11434/api/tags | python3 -m json.tool
```
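Scripts that start Ollama and immediately query it can race the server. The fixed `sleep` used later in this guide can be replaced with a small retry helper; this is our own sketch (the `wait_for` name and timeout handling are not part of Ollama or nanobot):

```bash
# Retry a command every second until it succeeds or the timeout expires.
wait_for() {
  local timeout="$1" elapsed=0
  shift
  until "$@" > /dev/null 2>&1; do
    elapsed=$((elapsed + 1))
    if [ "$elapsed" -gt "$timeout" ]; then
      echo "timed out after ${timeout}s waiting for: $*" >&2
      return 1
    fi
    sleep 1
  done
}

# Example: block for up to 30s until the API answers
# wait_for 30 curl -sf http://localhost:11434/api/tags
```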
### Make Ollama Permanent (Systemd Service)

To make Ollama start automatically with GPU support:

```bash
# Edit the systemd service override
sudo systemctl edit ollama
```

Add this content to the override file:

```ini
[Service]
Environment="OLLAMA_NUM_GPU=1"
Environment="OLLAMA_MODELS=/mnt/data/ollama"
```

Then reload and restart:

```bash
sudo systemctl daemon-reload
sudo systemctl restart ollama
sudo systemctl enable ollama
```
### Troubleshooting Ollama

- **Not using GPU**: Check `nvidia-smi`; if no Ollama process is using GPU memory, restart with `OLLAMA_NUM_GPU=1`.
- **Models not found**: Ensure `OLLAMA_MODELS=/mnt/data/ollama` is set.
- **Port already in use**: Stop the existing Ollama with `pkill ollama` or `sudo systemctl stop ollama`.
## 2. Virtual Environment Setup (Optional)

**Note**: Nanobot is installed in the system Python and can run without a venv. However, if you prefer isolation or are developing, you can use the venv.

### Option A: Run Without Venv (Recommended)

Nanobot is already installed in the system Python:

```bash
# Just run directly
python3 -m nanobot.cli.commands agent -m "your message"
```
### Option B: Use Virtual Environment

If you want to use the venv:

```bash
cd /root/code/nanobot
source .venv/bin/activate
python3 -m nanobot.cli.commands agent -m "your message"
```
### Install/Update Dependencies

If dependencies are missing in the system Python:

```bash
pip3 install -e /root/code/nanobot --break-system-packages
```

Or in the venv:

```bash
cd /root/code/nanobot
source .venv/bin/activate
pip install -e .
```
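To confirm the interpreter actually has what nanobot needs, a quick import check helps. The `check_module` helper below is our own sketch, not a nanobot command:

```bash
# Report whether a Python module is importable by the interpreter nanobot uses.
check_module() {
  if python3 -c "import $1" 2> /dev/null; then
    echo "$1: OK"
  else
    echo "$1: missing" >&2
    return 1
  fi
}

# Example:
# check_module nanobot
# check_module pandas
```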
## 3. Running Nanobot

### Basic Usage (Without Venv)

```bash
python3 -m nanobot.cli.commands agent -m "your message here"
```

### Basic Usage (With Venv)

```bash
cd /root/code/nanobot
source .venv/bin/activate
python3 -m nanobot.cli.commands agent -m "your message here"
```
### Configuration

Nanobot configuration is stored in `~/.nanobot/config.json`.

Example configuration for Ollama:

```json
{
  "providers": {
    "custom": {
      "apiKey": "no-key",
      "apiBase": "http://localhost:11434/v1"
    }
  },
  "agents": {
    "defaults": {
      "model": "llama3.1:8b"
    }
  }
}
```
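A malformed `config.json` is a common source of startup errors. Before launching, it can be syntax-checked with the stdlib `json.tool` module; the `validate_config` wrapper is our own sketch:

```bash
# Syntax-check a JSON file and report the result.
validate_config() {
  if python3 -m json.tool "$1" > /dev/null 2>&1; then
    echo "valid: $1"
  else
    echo "invalid JSON: $1" >&2
    return 1
  fi
}

# Example:
# validate_config ~/.nanobot/config.json
```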
### Quick Start Script

Create an alias for convenience. Add one of these to your `~/.zshrc` or `~/.bashrc`:

**Option 1: Without venv (Recommended, simpler)**
```bash
alias nanobot='python3 -m nanobot.cli.commands'
```

**Option 2: With venv (if you prefer isolation)**
```bash
alias nanobot='cd /root/code/nanobot && source .venv/bin/activate && python3 -m nanobot.cli.commands'
```

**After adding the alias:**
```bash
# Reload your shell configuration
source ~/.zshrc   # or: source ~/.bashrc

# Now you can use the shorter command:
nanobot agent -m "your message here"
```
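One caveat with Option 2: the alias changes your current directory and activates the venv in your interactive shell every time it runs. A shell function that does its work in a subshell (a sketch, assuming the same paths as above) avoids both side effects:

```bash
# Run nanobot from its venv without leaking cd/activate into your shell.
nanobot() {
  (
    cd /root/code/nanobot || exit 1
    source .venv/bin/activate
    python3 -m nanobot.cli.commands "$@"
  )
}
```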
**Example usage with alias:**
```bash
# Simple message
nanobot agent -m "hello"

# Analyze an Excel file
nanobot agent -m "analyze /root/.nanobot/workspace/bakery_inventory.xlsx file and calculate total inventory value"

# Start a new session
nanobot agent -m "/new"
```
### Example: Analyze Excel File

```bash
# Without venv (simpler)
python3 -m nanobot.cli.commands agent -m "analyze /root/.nanobot/workspace/bakery_inventory.xlsx file and calculate total inventory value"

# Or with venv
cd /root/code/nanobot && source .venv/bin/activate && python3 -m nanobot.cli.commands agent -m "analyze /root/.nanobot/workspace/bakery_inventory.xlsx file and calculate total inventory value"
```
## 4. Complete Startup Sequence

Here's the complete sequence to get everything running:

```bash
# 1. Start Ollama with GPU support
OLLAMA_NUM_GPU=1 OLLAMA_MODELS=/mnt/data/ollama ollama serve > /tmp/ollama.log 2>&1 &

# 2. Wait a few seconds for Ollama to start
sleep 3

# 3. Verify Ollama is running
curl http://localhost:11434/api/tags

# 4. Run nanobot (no venv needed)
python3 -m nanobot.cli.commands agent -m "hello"

# Or with venv (optional):
# cd /root/code/nanobot
# source .venv/bin/activate
# python3 -m nanobot.cli.commands agent -m "hello"
```
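Before running the sequence above from a script, it is worth verifying that the needed binaries are on `PATH`. The `require` helper below is our own sketch, not part of Ollama or nanobot:

```bash
# Verify each named command exists before starting anything.
require() {
  local missing=0 cmd
  for cmd in "$@"; do
    command -v "$cmd" > /dev/null 2>&1 || { echo "missing: $cmd" >&2; missing=1; }
  done
  return "$missing"
}

# Example:
# require ollama nvidia-smi python3 curl || exit 1
```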
## 5. Troubleshooting

### Nanobot Hangs or "Thinking Too Long"

- **Check Ollama**: Ensure Ollama is running and responding:

  ```bash
  curl http://localhost:11434/api/tags
  ```

- **Check GPU**: Verify Ollama is using the GPU (it should show GPU memory usage in `nvidia-smi`):

  ```bash
  nvidia-smi
  ```

- **Check the timeout**: The CustomProvider has a 120-second timeout. If requests take longer, Ollama may be overloaded.

### Python Command Not Found

If nanobot uses `python` instead of `python3`:

```bash
# Create a symlink
sudo ln -sf /usr/bin/python3 /usr/local/bin/python
```

### Pandas/Openpyxl Not Available

If nanobot needs to analyze Excel files:

```bash
# Install in the system Python (for the exec tool)
pip3 install pandas openpyxl --break-system-packages

# Or ensure the python symlink exists (see above)
```

### Virtual Environment Issues

If `.venv` doesn't exist or is corrupted:

```bash
cd /root/code/nanobot
python3 -m venv .venv
source .venv/bin/activate
pip install -e .
```
## 6. File Locations

- **Nanobot code**: `/root/code/nanobot`
- **Nanobot config**: `~/.nanobot/config.json`
- **Nanobot workspace**: `~/.nanobot/workspace`
- **Ollama models**: `/mnt/data/ollama`
- **Ollama logs**: `/tmp/ollama.log`
## 7. Environment Variables

### Ollama

- `OLLAMA_NUM_GPU=1`: Enable GPU support
- `OLLAMA_MODELS=/mnt/data/ollama`: Custom models directory
- `OLLAMA_HOST=http://127.0.0.1:11434`: Server address

### Nanobot

- Uses `~/.nanobot/config.json` for configuration
- Workspace defaults to `~/.nanobot/workspace`
## 8. Performance Tips

1. **Always use the GPU**: Start Ollama with `OLLAMA_NUM_GPU=1` for much faster responses.
2. **Keep models loaded**: Ollama keeps frequently used models in GPU memory.
3. **Use an appropriate model size**: Smaller models (like llama3.1:8b) respond faster than larger ones.
4. **Monitor GPU usage**: Use `nvidia-smi` to check whether the GPU is being utilized.
## 9. Quick Reference

```bash
# Start Ollama
OLLAMA_NUM_GPU=1 OLLAMA_MODELS=/mnt/data/ollama ollama serve > /tmp/ollama.log 2>&1 &

# Run nanobot (no venv needed)
python3 -m nanobot.cli.commands agent -m "message"

# Or with venv (optional):
# cd /root/code/nanobot && source .venv/bin/activate && python3 -m nanobot.cli.commands agent -m "message"

# Check status
nvidia-smi                            # GPU usage
curl http://localhost:11434/api/tags  # Ollama models
ps aux | grep ollama                  # Ollama process
```
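The raw `/api/tags` response is verbose. To get just the model names, the JSON can be filtered with a few lines of stdlib Python; `list_models` is our own sketch and assumes the response has a top-level `models` array with `name` fields, as current Ollama versions return:

```bash
# Read Ollama's /api/tags JSON on stdin and print one model name per line.
list_models() {
  python3 -c '
import json, sys
for m in json.load(sys.stdin).get("models", []):
    print(m["name"])
'
}

# Example:
# curl -s http://localhost:11434/api/tags | list_models
```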
## 10. Common Commands

```bash
# Stop Ollama
pkill ollama

# Restart Ollama with GPU
pkill ollama && OLLAMA_NUM_GPU=1 OLLAMA_MODELS=/mnt/data/ollama ollama serve > /tmp/ollama.log 2>&1 &

# Follow the Ollama logs
tail -f /tmp/ollama.log

# Test Ollama directly
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model":"llama3.1:8b","messages":[{"role":"user","content":"hello"}],"max_tokens":10}'
```
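The chat completion test above returns OpenAI-style JSON. To print only the assistant's text, a small stdlib filter works; `extract_reply` is our own sketch and assumes the standard `choices[0].message.content` layout:

```bash
# Read an OpenAI-style chat completion JSON on stdin and print the reply text.
extract_reply() {
  python3 -c '
import json, sys
print(json.load(sys.stdin)["choices"][0]["message"]["content"])
'
}

# Example:
# curl -s http://localhost:11434/v1/chat/completions ... | extract_reply
```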
---

**Last Updated**: 2026-02-23  
**Tested with**: Ollama 0.13.5, Python 3.11.2, nanobot 0.1.4