fix-cron-scheduled-tasks #1

Merged
tanyar09 merged 11 commits from fix-cron-scheduled-tasks into feature/cleanup-providers-llama-only 2026-03-04 12:04:57 -05:00
20 changed files with 1415 additions and 94 deletions

View File

@ -0,0 +1,219 @@
---
alwaysApply: true
description: Security rules and restrictions for nanobot to prevent unauthorized access and dangerous operations
---
# Nanobot Security Rules
## CRITICAL: What Nanobot CANNOT Do
### 1. System-Level Restrictions
**NEVER allow nanobot to:**
- Execute destructive system commands (`rm -rf /`, `format`, `mkfs`, `dd`, `shutdown`, `reboot`, `poweroff`)
- Access files outside the configured workspace when `restrict_to_workspace` is enabled
- Modify system configuration files (`/etc/*`, `/root/.ssh/*`, `/root/.bashrc`, `/root/.zshrc`)
- Access or modify files in `~/.nanobot/config.json` or other nanobot configuration files
- Execute commands that could compromise system security (privilege escalation, network scanning, etc.)
- Access sensitive directories like `/etc/passwd`, `/etc/shadow`, `/proc/sys/*`, `/sys/*`
- Modify or delete files in `/usr/bin`, `/usr/local/bin`, `/bin`, `/sbin`, or other system directories
- Install or uninstall system packages without explicit user permission
- Modify firewall rules or network configuration
- Access or modify Docker containers or images without explicit permission
### 2. Network Security Restrictions
**NEVER allow nanobot to:**
- Make outbound network connections to unauthorized endpoints
- Expose internal services to external networks
- Bypass authentication on network services
- Access localhost-only services from external networks
- Modify network routing or firewall rules
### 3. Authentication & Access Control
**MUST enforce:**
- All channels MUST have `allowFrom` lists configured in production
- Empty `allowFrom` lists allow ALL users (security risk in production)
- Authentication failures MUST be logged
- API keys MUST be stored securely (not in code, use `~/.nanobot/config.json` with `chmod 600`)
- Never commit API keys or tokens to version control
### 4. File System Security
**Restrictions:**
- When `restrict_to_workspace` is enabled, all file operations MUST stay within the workspace directory
- Path traversal attempts (`../`, `..\\`) MUST be blocked
- File operations on sensitive paths MUST be blocked:
- `~/.nanobot/config.json` (read-only for configuration, never modify)
- `~/.ssh/*` (SSH keys)
- `/etc/*` (system configuration)
- `/root/.bashrc`, `/root/.zshrc` (shell configuration)
- System binaries in `/usr/bin`, `/bin`, `/sbin`
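The workspace containment rule above reduces to a resolved-path check. A minimal sketch (the function name `is_within_workspace` is illustrative, not nanobot's actual API):

```python
from pathlib import Path

def is_within_workspace(candidate: str, workspace: str) -> bool:
    """Return True only if candidate resolves inside workspace (blocks ../ traversal)."""
    root = Path(workspace).resolve()
    target = (root / candidate).resolve()  # resolve() collapses any ../ segments
    return target == root or root in target.parents
```

Because the check runs after `resolve()`, a path like `../etc/passwd` is compared in its escaped form and rejected, rather than string-matched before traversal.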
### 5. Command Execution Security
**Blocked command patterns (already implemented in [shell.py](mdc:nanobot/agent/tools/shell.py)):**
- `rm -rf`, `rm -r`, `rm -f` (recursive deletion)
- `format`, `mkfs.*` (disk formatting)
- `dd if=` (raw disk writes)
- `shutdown`, `reboot`, `poweroff` (system power control)
- Fork bombs (`:(){ :|:& };:`)
- Commands writing to `/dev/sd*` (raw disk access)
**Additional restrictions to enforce:**
- Commands that modify system packages (`apt install`, `pip install --break-system-packages` without explicit permission)
- Commands that modify system services (`systemctl`, `service`)
- Commands accessing `/proc/sys/*` or `/sys/*` (kernel parameters)
- Commands that could leak sensitive information (`cat /etc/passwd`, `env`, `history`)
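A deny-list of this kind is typically a set of regular expressions checked before execution. A sketch covering a subset of the patterns above (not shell.py's actual pattern list):

```python
import re

# Illustrative subset of the blocked patterns listed above
DENY_PATTERNS = [
    r"\brm\s+-[a-z]*[rf]",             # rm -rf, rm -r, rm -f
    r"\bmkfs(\.\w+)?\b",               # mkfs, mkfs.ext4, ...
    r"\bdd\s+if=",                     # raw disk writes
    r"\b(shutdown|reboot|poweroff)\b", # power control
    r":\(\)\s*\{.*\};\s*:",            # classic fork bomb
]

def is_blocked(command: str) -> bool:
    """Return True if the command matches any deny pattern."""
    return any(re.search(p, command) for p in DENY_PATTERNS)
```

Pattern matching is a guardrail, not a sandbox: it catches the common destructive forms but can be evaded by obfuscation, which is why the workspace and user-account restrictions above still matter.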
### 6. Data Privacy & Confidentiality
**NEVER allow nanobot to:**
- Expose API keys, tokens, or credentials in logs or responses
- Share sensitive user data with external services without explicit permission
- Store sensitive data in plain text (use encryption or secure storage)
- Log sensitive information (passwords, API keys, personal data)
## Security Configuration Requirements
### Production Deployment Checklist
Before deploying nanobot in production, verify:
1. **API Key Security**
```bash
chmod 600 ~/.nanobot/config.json
```
- API keys stored in config file (not hardcoded)
- Config file permissions set to `0600`
- Consider using environment variables or OS keyring for sensitive keys
2. **Channel Access Control**
```json
{
"channels": {
"telegram": {
"enabled": true,
"token": "YOUR_TOKEN",
"allowFrom": ["123456789"] // MUST be configured in production
}
}
}
```
- All channels have `allowFrom` lists configured
- Empty `allowFrom` = ALLOW ALL (security risk)
3. **Workspace Restrictions**
```json
{
"agents": {
"defaults": {
"restrictToWorkspace": true // Recommended for production
}
}
}
```
- Enable `restrictToWorkspace` to limit file operations
- Set workspace to a dedicated directory with proper permissions
4. **User Account**
- Run nanobot as a dedicated non-root user
- Use `sudo useradd -m -s /bin/bash nanobot`
- Never run as root user
5. **File Permissions**
```bash
chmod 700 ~/.nanobot
chmod 600 ~/.nanobot/config.json
chmod 700 ~/.nanobot/whatsapp-auth # if using WhatsApp
```
6. **Network Security**
- WhatsApp bridge binds to `127.0.0.1:3001` (localhost only)
- Set `bridgeToken` in config for shared-secret authentication
- Use firewall to restrict outbound connections if needed
## Security Monitoring
### Log Monitoring
Monitor logs for security events:
```bash
# Check for access denials
grep "Access denied" ~/.nanobot/logs/nanobot.log
# Check for blocked commands
grep "blocked by safety guard" ~/.nanobot/logs/nanobot.log
# Review all tool executions
grep "ExecTool:" ~/.nanobot/logs/nanobot.log
```
### Regular Security Audits
1. Review all tool usage in agent logs
2. Check for unexpected file modifications
3. Monitor API key usage for anomalies
4. Review channel access logs
5. Update dependencies regularly (`pip-audit`, `npm audit`)
## Incident Response
If security breach is suspected:
1. **Immediately revoke compromised API keys**
2. **Review logs for unauthorized access**
3. **Check for unexpected file modifications**
4. **Rotate all credentials**
5. **Update to latest version**
6. **Report to maintainers** (xubinrencs@gmail.com)
## Code Security Guidelines
When modifying nanobot code:
1. **Never remove security checks** from [shell.py](mdc:nanobot/agent/tools/shell.py)
2. **Always validate user input** before processing
3. **Enforce path restrictions** in filesystem tools ([filesystem.py](mdc:nanobot/agent/tools/filesystem.py))
4. **Check `allowFrom` lists** in channel handlers ([base.py](mdc:nanobot/channels/base.py))
5. **Log security events** (access denials, blocked commands)
6. **Never expose sensitive data** in error messages or logs
7. **Use parameterized queries** if adding database functionality
8. **Validate file paths** to prevent path traversal attacks
9. **Sanitize command inputs** before execution
10. **Rate limit** API calls to prevent abuse
## Tool-Specific Security Rules
### ExecTool Security
- Commands MUST be validated against deny patterns
- Timeout MUST be enforced (default 60s, configurable)
- Output MUST be truncated (10KB limit)
- Working directory MUST be restricted when `restrict_to_workspace` is enabled
### Filesystem Tools Security
- Path resolution MUST check against `allowed_dir` when set
- Path traversal (`../`, `..\\`) MUST be blocked
- File operations MUST respect workspace restrictions
- Sensitive file paths MUST be blocked (config files, SSH keys, system files)
### Web Tools Security
- HTTP requests MUST have timeouts (10-30s)
- URLs MUST be validated before fetching
- Content MUST be truncated (50KB limit for web_fetch)
- External API calls MUST use HTTPS
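The three web-tool rules (HTTPS-only, bounded timeout, truncated content) fit in a few lines of standard library code. A sketch under those assumptions, not the actual web tool implementation:

```python
from urllib.parse import urlparse
from urllib.request import urlopen

MAX_CONTENT = 50 * 1024  # 50KB cap for fetched content, per the rule above

def fetch_limited(url: str, timeout: float = 15.0) -> str:
    """Fetch a URL with HTTPS enforcement, a request timeout, and a size cap."""
    if urlparse(url).scheme != "https":
        raise ValueError(f"Refusing non-HTTPS URL: {url}")
    with urlopen(url, timeout=timeout) as resp:  # timeout bounds the request
        return resp.read(MAX_CONTENT).decode("utf-8", errors="replace")
```

Reading at most `MAX_CONTENT` bytes (rather than reading everything and slicing) also protects against a malicious endpoint streaming an unbounded response body.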
### Channel Security
- `is_allowed()` MUST be called before processing messages
- Access denials MUST be logged
- Empty `allowFrom` lists MUST be documented as "allow all"
- Authentication tokens MUST be stored securely
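The `allowFrom` semantics above (empty list = allow all) reduce to a one-line membership check. Sketched here for illustration, not the actual `base.py` implementation:

```python
def is_allowed(sender_id: str, allow_from: list[str]) -> bool:
    """Empty allow_from means ALLOW ALL -- document (and avoid) this in production."""
    return not allow_from or sender_id in allow_from
```

This is exactly why the checklist insists on populating `allowFrom`: the permissive default is a single `or` away from the restrictive one.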
## References
- Security documentation: [SECURITY.md](mdc:SECURITY.md)
- Shell tool implementation: [nanobot/agent/tools/shell.py](mdc:nanobot/agent/tools/shell.py)
- Filesystem tools: [nanobot/agent/tools/filesystem.py](mdc:nanobot/agent/tools/filesystem.py)
- Channel base class: [nanobot/channels/base.py](mdc:nanobot/channels/base.py)
- Configuration schema: [nanobot/config/schema.py](mdc:nanobot/config/schema.py)

SECURITY_CONFIGURATION.md Normal file
View File

@ -0,0 +1,290 @@
# Nanobot Security Configuration Guide
This guide provides step-by-step instructions for securing your nanobot installation.
## Quick Security Setup
### 1. Secure Configuration File
```bash
# Set proper permissions on config file
chmod 600 ~/.nanobot/config.json
# Set proper permissions on nanobot directory
chmod 700 ~/.nanobot
```
### 2. Configure Channel Access Control
**CRITICAL**: Empty `allowFrom` lists allow ALL users. Always configure this in production!
#### Telegram Example
```json
{
"channels": {
"telegram": {
"enabled": true,
"token": "YOUR_BOT_TOKEN",
"allowFrom": ["123456789", "987654321"]
}
}
}
```
To find your Telegram user ID:
1. Message `@userinfobot` on Telegram
2. Copy your user ID
3. Add it to the `allowFrom` list
#### WhatsApp Example
```json
{
"channels": {
"whatsapp": {
"enabled": true,
"allowFrom": ["+1234567890", "+0987654321"]
}
}
}
```
Use full phone numbers with country code (e.g., `+1` for US).
#### Email Example
```json
{
"channels": {
"email": {
"enabled": true,
"allowFrom": ["user@example.com", "admin@example.com"]
}
}
}
```
### 3. Enable Workspace Restrictions
Restrict file operations to a specific directory:
```json
{
"agents": {
"defaults": {
"restrictToWorkspace": true
}
}
}
```
This ensures nanobot can only access files within `~/.nanobot/workspace`.
### 4. Run as Non-Root User
**NEVER run nanobot as root!**
```bash
# Create dedicated user
sudo useradd -m -s /bin/bash nanobot
# Switch to nanobot user
sudo -u nanobot bash
# Run nanobot
python3 -m nanobot.cli.commands agent -m "hello"
```
### 5. Configure Command Timeouts
Limit command execution time:
```json
{
"agents": {
"defaults": {
"execConfig": {
"timeout": 30
}
}
}
}
```
Default is 60 seconds. Reduce for stricter security.
## Advanced Security Configuration
### 1. Custom Command Blocking
You can add custom blocked command patterns by modifying the ExecTool initialization, but this requires code changes. The default patterns block:
- `rm -rf`, `rm -r`, `rm -f`
- `format`, `mkfs.*`
- `dd if=`
- `shutdown`, `reboot`, `poweroff`
- Fork bombs
### 2. Network Security
#### Restrict Outbound Connections
Use a firewall to restrict what nanobot can access:
```bash
# Example: Only allow HTTPS to specific domains
sudo ufw allow out 443/tcp
sudo ufw deny out 80/tcp # Block HTTP
```
#### WhatsApp Bridge Security
The WhatsApp bridge binds to `127.0.0.1:3001` (localhost only) by default. For additional security:
```json
{
"channels": {
"whatsapp": {
"enabled": true,
"bridgeToken": "your-secret-token-here"
}
}
}
```
Set a `bridgeToken` to enable shared-secret authentication between Python and Node.js components.
### 3. Log Monitoring
Set up log monitoring to detect security issues:
```bash
# Monitor access denials
tail -f ~/.nanobot/logs/nanobot.log | grep "Access denied"
# Monitor blocked commands
tail -f ~/.nanobot/logs/nanobot.log | grep "blocked by safety guard"
# Monitor all tool executions
tail -f ~/.nanobot/logs/nanobot.log | grep "ExecTool:"
```
### 4. Regular Security Audits
#### Check Dependencies
```bash
# Python dependencies
pip install pip-audit
pip-audit
# Node.js dependencies (for WhatsApp bridge)
cd bridge
npm audit
npm audit fix
```
#### Review Logs
```bash
# Check for suspicious activity
grep -i "error\|denied\|blocked" ~/.nanobot/logs/nanobot.log | tail -100
# Check file operations
grep "write_file\|edit_file" ~/.nanobot/logs/nanobot.log | tail -100
```
### 5. API Key Rotation
Rotate API keys regularly:
1. Generate new API keys from your provider
2. Update `~/.nanobot/config.json`
3. Restart nanobot
4. Revoke old keys after confirming new ones work
### 6. Environment Isolation
Run nanobot in a container or VM for better isolation:
```bash
# Using Docker (if Dockerfile exists)
docker build -t nanobot .
docker run --rm -it \
-v ~/.nanobot:/root/.nanobot \
-v ~/.nanobot/workspace:/root/.nanobot/workspace \
nanobot
```
## Security Checklist
Before deploying nanobot in production:
- [ ] Config file permissions set to `0600`
- [ ] Nanobot directory permissions set to `700`
- [ ] All channels have `allowFrom` lists configured
- [ ] Running as non-root user
- [ ] `restrictToWorkspace` enabled
- [ ] Command timeout configured
- [ ] API keys stored securely (not in code)
- [ ] Logs monitored for security events
- [ ] Dependencies updated and audited
- [ ] Firewall rules configured (if needed)
- [ ] Backup and disaster recovery plan in place
## What Nanobot Cannot Do (Built-in Protections)
Nanobot has built-in protections that prevent:
1. **Destructive Commands**: `rm -rf /`, `format`, `mkfs`, `dd`, `shutdown`, etc.
2. **Path Traversal**: `../` and `..\\` are blocked when workspace restrictions are enabled
3. **System File Access**: When restricted, cannot access files outside workspace
4. **Unlimited Execution**: Commands timeout after configured limit (default 60s)
5. **Unlimited Output**: Command output truncated at 10KB
6. **Unauthorized Access**: Channels check `allowFrom` lists before processing messages
## Incident Response
If you suspect a security breach:
1. **Immediately revoke compromised API keys**
```bash
# Update config.json with new keys
nano ~/.nanobot/config.json
```
2. **Review logs for unauthorized access**
```bash
grep "Access denied" ~/.nanobot/logs/nanobot.log
```
3. **Check for unexpected file modifications**
```bash
find ~/.nanobot/workspace -type f -mtime -1 -ls
```
4. **Rotate all credentials**
- Update API keys
- Update channel tokens
- Update bridge tokens (if using WhatsApp)
5. **Update to latest version**
```bash
pip install --upgrade nanobot-ai
```
6. **Report the incident**
- Email: xubinrencs@gmail.com
- Include: Description, steps to reproduce, potential impact
## Additional Resources
- [SECURITY.md](SECURITY.md) - Full security policy and best practices
- [SETUP_GUIDE.md](SETUP_GUIDE.md) - Setup and configuration guide
- [README.md](README.md) - General documentation
## Questions?
If you have security concerns or questions:
- Review [SECURITY.md](SECURITY.md)
- Check nanobot logs for errors
- Contact maintainers: xubinrencs@gmail.com

SETUP_GUIDE.md Normal file
View File

@ -0,0 +1,324 @@
# Nanobot Setup Guide
This guide documents how to set up Ollama, the Python virtual environment, and nanobot, and how to run them together.
## Prerequisites
- Python 3.11+
- NVIDIA GPU (for GPU acceleration)
- Ollama installed (`/usr/local/bin/ollama`)
## 1. Running Ollama with GPU Support
Ollama must be started with GPU support to ensure fast responses. The models are stored in `/mnt/data/ollama`.
### Start Ollama with GPU
```bash
# Stop any existing Ollama processes
pkill ollama
# Start Ollama with GPU support and custom models path
OLLAMA_NUM_GPU=1 OLLAMA_MODELS=/mnt/data/ollama ollama serve > /tmp/ollama.log 2>&1 &
```
### Verify Ollama is Running
```bash
# Check if Ollama is responding
curl http://localhost:11434/api/tags
# Check GPU usage (should show Ollama using GPU memory)
nvidia-smi
# Check if models are available
curl http://localhost:11434/api/tags | python3 -m json.tool
```
### Make Ollama Permanent (Systemd Service)
To make Ollama start automatically with GPU support:
```bash
# Edit the systemd service
sudo systemctl edit ollama
# Add this content:
[Service]
Environment="OLLAMA_NUM_GPU=1"
Environment="OLLAMA_MODELS=/mnt/data/ollama"
# Reload and restart
sudo systemctl daemon-reload
sudo systemctl restart ollama
sudo systemctl enable ollama
```
### Troubleshooting Ollama
- **Not using GPU**: Check `nvidia-smi` - if no Ollama process is using GPU memory, restart with `OLLAMA_NUM_GPU=1`
- **Models not found**: Ensure `OLLAMA_MODELS=/mnt/data/ollama` is set
- **Port already in use**: Stop existing Ollama with `pkill ollama` or `sudo systemctl stop ollama`
## 2. Virtual Environment Setup (Optional)
**Note**: Nanobot is installed in system Python and can run without a venv. However, if you prefer isolation or are developing, you can use the venv.
### Option A: Run Without Venv (Recommended)
Nanobot is already installed in system Python:
```bash
# Just run directly
python3 -m nanobot.cli.commands agent -m "your message"
```
### Option B: Use Virtual Environment
If you want to use the venv:
```bash
cd /root/code/nanobot
source .venv/bin/activate
python3 -m nanobot.cli.commands agent -m "your message"
```
### Install/Update Dependencies
If dependencies are missing in system Python:
```bash
pip3 install -e /root/code/nanobot --break-system-packages
```
Or in venv:
```bash
cd /root/code/nanobot
source .venv/bin/activate
pip install -e .
```
## 3. Running Nanobot
### Basic Usage (Without Venv)
```bash
python3 -m nanobot.cli.commands agent -m "your message here"
```
### Basic Usage (With Venv)
```bash
cd /root/code/nanobot
source .venv/bin/activate
python3 -m nanobot.cli.commands agent -m "your message here"
```
### Configuration
Nanobot configuration is stored in `~/.nanobot/config.json`.
Example configuration for Ollama:
```json
{
"providers": {
"custom": {
"apiKey": "no-key",
"apiBase": "http://localhost:11434/v1"
}
},
"agents": {
"defaults": {
"model": "llama3.1:8b"
}
}
}
```
### Quick Start Script
Create an alias for convenience. Add one of these to your `~/.zshrc` or `~/.bashrc`:
**Option 1: Without venv (Recommended - simpler)**
```bash
alias nanobot='python3 -m nanobot.cli.commands'
```
**Option 2: With venv (if you prefer isolation)**
```bash
alias nanobot='cd /root/code/nanobot && source .venv/bin/activate && python3 -m nanobot.cli.commands'
```
**After adding the alias:**
```bash
# Reload your shell configuration
source ~/.zshrc # or source ~/.bashrc
# Now you can use the shorter command:
nanobot agent -m "your message here"
```
**Example usage with alias:**
```bash
# Simple message
nanobot agent -m "hello"
# Analyze Excel file
nanobot agent -m "analyze /root/.nanobot/workspace/bakery_inventory.xlsx file and calculate total inventory value"
# Start new session
nanobot agent -m "/new"
```
### Example: Analyze Excel File
```bash
# Without venv (simpler)
python3 -m nanobot.cli.commands agent -m "analyze /root/.nanobot/workspace/bakery_inventory.xlsx file and calculate total inventory value"
# Or with venv
cd /root/code/nanobot && source .venv/bin/activate && python3 -m nanobot.cli.commands agent -m "analyze /root/.nanobot/workspace/bakery_inventory.xlsx file and calculate total inventory value"
```
## 4. Complete Startup Sequence
Here's the complete sequence to get everything running:
```bash
# 1. Start Ollama with GPU support
OLLAMA_NUM_GPU=1 OLLAMA_MODELS=/mnt/data/ollama ollama serve > /tmp/ollama.log 2>&1 &
# 2. Wait a few seconds for Ollama to start
sleep 3
# 3. Verify Ollama is running
curl http://localhost:11434/api/tags
# 4. Run nanobot (no venv needed)
python3 -m nanobot.cli.commands agent -m "hello"
# Or with venv (optional):
# cd /root/code/nanobot
# source .venv/bin/activate
# python3 -m nanobot.cli.commands agent -m "hello"
```
## 5. Troubleshooting
### Nanobot Hangs or "Thinking Too Long"
- **Check Ollama**: Ensure Ollama is running and responding
```bash
curl http://localhost:11434/api/tags
```
- **Check GPU**: Verify Ollama is using GPU (should show GPU memory usage in `nvidia-smi`)
```bash
nvidia-smi
```
- **Check Timeout**: The CustomProvider has a 120-second timeout. If requests take longer, Ollama may be overloaded.
### Python Command Not Found
If nanobot uses `python` instead of `python3`:
```bash
# Create symlink
sudo ln -sf /usr/bin/python3 /usr/local/bin/python
```
### Pandas/Openpyxl Not Available
If nanobot needs to analyze Excel files:
```bash
# Install in system Python (for exec tool)
pip3 install pandas openpyxl --break-system-packages
# Or ensure python symlink exists (see above)
```
### Virtual Environment Issues
If `.venv` doesn't exist or is corrupted:
```bash
cd /root/code/nanobot
python3 -m venv .venv
source .venv/bin/activate
pip install -e .
```
## 6. File Locations
- **Nanobot code**: `/root/code/nanobot`
- **Nanobot config**: `~/.nanobot/config.json`
- **Nanobot workspace**: `~/.nanobot/workspace`
- **Ollama models**: `/mnt/data/ollama`
- **Ollama logs**: `/tmp/ollama.log`
## 7. Environment Variables
### Ollama
- `OLLAMA_NUM_GPU=1` - Enable GPU support
- `OLLAMA_MODELS=/mnt/data/ollama` - Custom models directory
- `OLLAMA_HOST=http://127.0.0.1:11434` - Server address
### Nanobot
- Uses `~/.nanobot/config.json` for configuration
- Workspace defaults to `~/.nanobot/workspace`
## 8. Performance Tips
1. **Always use GPU**: Start Ollama with `OLLAMA_NUM_GPU=1` for much faster responses
2. **Keep models loaded**: Ollama keeps frequently used models in GPU memory
3. **Use appropriate model size**: Smaller models (like llama3.1:8b) are faster than larger ones
4. **Monitor GPU usage**: Use `nvidia-smi` to check if GPU is being utilized
## 9. Quick Reference
```bash
# Start Ollama
OLLAMA_NUM_GPU=1 OLLAMA_MODELS=/mnt/data/ollama ollama serve > /tmp/ollama.log 2>&1 &
# Run nanobot (no venv needed)
python3 -m nanobot.cli.commands agent -m "message"
# Or with venv (optional):
# cd /root/code/nanobot && source .venv/bin/activate && python3 -m nanobot.cli.commands agent -m "message"
# Check status
nvidia-smi # GPU usage
curl http://localhost:11434/api/tags # Ollama models
ps aux | grep ollama # Ollama process
```
## 10. Common Commands
```bash
# Stop Ollama
pkill ollama
# Restart Ollama with GPU
pkill ollama && OLLAMA_NUM_GPU=1 OLLAMA_MODELS=/mnt/data/ollama ollama serve > /tmp/ollama.log 2>&1 &
# Check Ollama logs
tail -f /tmp/ollama.log
# Test Ollama directly
curl http://localhost:11434/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{"model":"llama3.1:8b","messages":[{"role":"user","content":"hello"}],"max_tokens":10}'
```
---
**Last Updated**: 2026-02-23
**Tested with**: Ollama 0.13.5, Python 3.11.2, nanobot 0.1.4

View File

@ -102,8 +102,10 @@ Your workspace is at: {workspace_path}
- Custom skills: {workspace_path}/skills/{{skill-name}}/SKILL.md
IMPORTANT: When responding to direct questions or conversations, reply directly with your text response.
Only use the 'message' tool when you need to send a message to a specific chat channel (like WhatsApp).
For normal conversation, just respond with text - do not call the message tool.
Only use the 'message' tool when the user explicitly asks you to send a message to someone else or to a different channel.
For normal conversation, acknowledgments (Thanks, OK, etc.), or when the user is talking to YOU, just respond with text - do NOT call the message tool.
For simple acknowledgments like "Thanks", "OK", "You're welcome", "Got it", etc., respond naturally and conversationally - just say "You're welcome!", "No problem!", "Happy to help!", etc. Do not explain your reasoning or mention tools. Just be friendly and brief.
Always be helpful, accurate, and concise. Before calling tools, briefly tell the user what you're about to do (one short sentence in the user's language).
When remembering something important, write to {workspace_path}/memory/MEMORY.md

View File

@ -186,14 +186,26 @@ class AgentLoop:
while iteration < self.max_iterations:
iteration += 1
logger.debug(f"Agent loop iteration {iteration}/{self.max_iterations}, calling LLM provider...")
response = await self.provider.chat(
messages=messages,
tools=self.tools.get_definitions(),
model=self.model,
temperature=self.temperature,
max_tokens=self.max_tokens,
)
try:
response = await asyncio.wait_for(
self.provider.chat(
messages=messages,
tools=self.tools.get_definitions(),
model=self.model,
temperature=self.temperature,
max_tokens=self.max_tokens,
),
timeout=120.0 # 2 minute timeout per LLM call
)
logger.debug(f"LLM provider returned response, has_tool_calls={response.has_tool_calls}")
except asyncio.TimeoutError:
logger.error("LLM provider call timed out after 120 seconds")
return "Error: Request timed out. The LLM provider may be slow or unresponsive.", tools_used
except Exception as e:
logger.error(f"LLM provider error: {e}")
return f"Error calling LLM: {str(e)}", tools_used
if response.has_tool_calls:
if on_progress:
@ -221,13 +233,19 @@ class AgentLoop:
args_str = json.dumps(tool_call.arguments, ensure_ascii=False)
logger.info(f"Tool call: {tool_call.name}({args_str[:200]})")
result = await self.tools.execute(tool_call.name, tool_call.arguments)
logger.info(f"Tool result length: {len(result) if result else 0}, preview: {result[:200] if result else 'None'}")
messages = self.context.add_tool_result(
messages, tool_call.id, tool_call.name, result
)
logger.debug(f"Added tool result to messages. Total messages: {len(messages)}")
else:
final_content = self._strip_think(response.content)
logger.info(f"Final response generated. Content length: {len(final_content) if final_content else 0}")
break
if final_content is None and iteration >= self.max_iterations:
logger.warning(f"Max iterations ({self.max_iterations}) reached without final response. Last tool calls: {tools_used[-3:] if len(tools_used) >= 3 else tools_used}")
return final_content, tools_used
async def run(self) -> None:
@ -318,8 +336,21 @@ class AgentLoop:
return OutboundMessage(channel=msg.channel, chat_id=msg.chat_id,
content="🐈 nanobot commands:\n/new — Start a new conversation\n/help — Show available commands")
if len(session.messages) > self.memory_window:
asyncio.create_task(self._consolidate_memory(session))
# Skip memory consolidation for CLI mode to avoid blocking/hanging
# Memory consolidation can be slow and CLI users want fast responses
if len(session.messages) > self.memory_window and msg.channel != "cli":
# Start memory consolidation in background with timeout protection
async def _consolidate_with_timeout():
try:
await asyncio.wait_for(
self._consolidate_memory(session),
timeout=120.0 # 2 minute timeout for memory consolidation
)
except asyncio.TimeoutError:
logger.warning(f"Memory consolidation timed out for session {session.key}")
except Exception as e:
logger.error(f"Memory consolidation error: {e}")
asyncio.create_task(_consolidate_with_timeout())
self._set_tool_context(msg.channel, msg.chat_id)
initial_messages = self.context.build_messages(
@ -454,12 +485,16 @@ class AgentLoop:
Respond with ONLY valid JSON, no markdown fences."""
try:
response = await self.provider.chat(
messages=[
{"role": "system", "content": "You are a memory consolidation agent. Respond only with valid JSON."},
{"role": "user", "content": prompt},
],
model=self.model,
# Add timeout to memory consolidation LLM call
response = await asyncio.wait_for(
self.provider.chat(
messages=[
{"role": "system", "content": "You are a memory consolidation agent. Respond only with valid JSON."},
{"role": "user", "content": prompt},
],
model=self.model,
),
timeout=120.0 # 2 minute timeout for consolidation LLM call
)
text = (response.content or "").strip()
if not text:
@ -473,8 +508,14 @@ Respond with ONLY valid JSON, no markdown fences."""
return
if entry := result.get("history_entry"):
# Convert to string if LLM returned a non-string (e.g., dict)
if not isinstance(entry, str):
entry = str(entry)
memory.append_history(entry)
if update := result.get("memory_update"):
# Convert to string if LLM returned a non-string (e.g., dict)
if not isinstance(update, str):
update = str(update)
if update != current_memory:
memory.write_long_term(update)

View File

@ -52,6 +52,36 @@ class Tool(ABC):
"""
pass
def coerce_params(self, params: dict[str, Any]) -> dict[str, Any]:
"""Coerce parameter types based on schema before validation."""
schema = self.parameters or {}
if schema.get("type", "object") != "object":
return params
coerced = params.copy()
props = schema.get("properties", {})
for key, value in list(coerced.items()): # Use list() to avoid modification during iteration
if key in props:
prop_schema = props[key]
param_type = prop_schema.get("type")
# Coerce types if value is not already the correct type
if param_type == "integer" and isinstance(value, str):
try:
coerced[key] = int(value)
except (ValueError, TypeError):
pass # Let validation catch the error
elif param_type == "number" and isinstance(value, str):
try:
coerced[key] = float(value)
except (ValueError, TypeError):
pass
elif param_type == "boolean" and isinstance(value, str):
coerced[key] = value.lower() in ("true", "1", "yes", "on")
return coerced
def validate_params(self, params: dict[str, Any]) -> list[str]:
"""Validate tool parameters against JSON schema. Returns error list (empty if valid)."""
schema = self.parameters or {}
@ -61,6 +91,9 @@ class Tool(ABC):
def _validate(self, val: Any, schema: dict[str, Any], path: str) -> list[str]:
t, label = schema.get("type"), path or "parameter"
# Allow None/null for optional parameters (not in required list)
if val is None:
return []
if t in self._TYPE_MAP and not isinstance(val, self._TYPE_MAP[t]):
return [f"{label} should be {t}"]

View File

@ -26,7 +26,7 @@ class CronTool(Tool):
@property
def description(self) -> str:
return "Schedule reminders and recurring tasks. Actions: add, list, remove."
return "Schedule reminders and recurring tasks. REQUIRED: Always include 'action' parameter ('add', 'list', or 'remove'). For reminders, use action='add' with message and timing (in_seconds, at, every_seconds, or cron_expr)."
@property
def parameters(self) -> dict[str, Any]:
@ -36,7 +36,7 @@ class CronTool(Tool):
"action": {
"type": "string",
"enum": ["add", "list", "remove"],
"description": "Action to perform"
"description": "REQUIRED: Action to perform. Use 'add' to create a reminder, 'list' to see all jobs, or 'remove' to delete a job."
},
"message": {
"type": "string",
@ -56,7 +56,15 @@ class CronTool(Tool):
},
"at": {
"type": "string",
"description": "ISO datetime for one-time execution (e.g. '2026-02-12T10:30:00')"
"description": "ISO datetime string for one-time execution. Format: YYYY-MM-DDTHH:MM:SS (e.g. '2026-03-03T12:19:30'). You MUST calculate this from the current time shown in your system prompt plus the requested seconds/minutes, then format as ISO string."
},
"in_seconds": {
"type": "integer",
"description": "Alternative to 'at': Schedule reminder in N seconds from now. Use this instead of calculating 'at' manually. Example: in_seconds=25 for 'remind me in 25 seconds'."
},
"reminder": {
"type": "boolean",
"description": "If true, this is a simple reminder (message sent directly to user). If false or omitted, this is a task (agent executes the message). Use reminder=true for 'remind me to X', reminder=false for 'schedule a task to do X'."
},
"job_id": {
"type": "string",
@ -74,11 +82,18 @@ class CronTool(Tool):
cron_expr: str | None = None,
tz: str | None = None,
at: str | None = None,
in_seconds: int | None = None,
reminder: bool = False,
job_id: str | None = None,
**kwargs: Any
) -> str:
from loguru import logger
logger.debug(f"CronTool.execute: action={action}, message={message[:50] if message else None}, every_seconds={every_seconds}, at={at}, in_seconds={in_seconds}, reminder={reminder}, channel={self._channel}, chat_id={self._chat_id}")
if action == "add":
return self._add_job(message, every_seconds, cron_expr, tz, at)
result = self._add_job(message, every_seconds, cron_expr, tz, at, in_seconds, reminder)
logger.debug(f"CronTool._add_job result: {result}")
return result
elif action == "list":
return self._list_jobs()
elif action == "remove":
@ -92,45 +107,103 @@ class CronTool(Tool):
cron_expr: str | None,
tz: str | None,
at: str | None,
in_seconds: int | None = None,
reminder: bool = False,
) -> str:
if not message:
return "Error: message is required for add"
if not self._channel or not self._chat_id:
return "Error: no session context (channel/chat_id)"
if tz and not cron_expr:
return "Error: tz can only be used with cron_expr"
if tz:
# Use defaults for CLI mode if context not set
channel = self._channel or "cli"
chat_id = self._chat_id or "direct"
# Validate timezone only if used with cron_expr
if tz and cron_expr:
from zoneinfo import ZoneInfo
try:
ZoneInfo(tz)
except Exception:  # ZoneInfoNotFoundError subclasses KeyError, so one clause suffices
return f"Error: unknown timezone '{tz}'"
elif tz and not cron_expr:
# Ignore tz if not used with cron_expr (common mistake)
tz = None
# Build schedule
# Build schedule - prioritize 'in_seconds' for relative time, then 'at' for absolute time
delete_after = False
if every_seconds:
# Handle relative time (in_seconds) - compute datetime automatically
if in_seconds is not None:
from datetime import datetime, timedelta
future_time = datetime.now() + timedelta(seconds=in_seconds)
at = future_time.isoformat()
# Fall through to 'at' handling below
if at:
# One-time reminder at specific time
from datetime import datetime
try:
# Check if agent passed description text, Python code, or other invalid values
if "iso datetime" in at.lower() or "e.g." in at.lower() or "example" in at.lower() or at.startswith("("):
return f"Error: You passed description text '{at}' instead of an actual datetime string. You must: 1) Read current time from system prompt (e.g. '2026-03-03 12:19:04'), 2) Add requested seconds/minutes to it, 3) Format as ISO string like '2026-03-03T12:19:29'. Do NOT use description text or examples."
if "datetime.now()" in at or "timedelta" in at:
return f"Error: You passed Python code '{at}' instead of an actual datetime string. You must compute the datetime value first, then pass the ISO format string (e.g. '2026-03-03T12:19:29')."
dt = datetime.fromisoformat(at)
# Naive datetimes are interpreted as local time; datetime.timestamp()
# already applies the local UTC offset, so no manual conversion is needed
at_ms = int(dt.timestamp() * 1000)
# Validate that the time is in the future (allow 5 second buffer for processing)
from time import time as _time
from datetime import datetime as _dt
now_ms = int(_time() * 1000)
buffer_ms = 5000 # 5 second buffer for processing time
if at_ms <= (now_ms + buffer_ms):
now_str = _dt.now().strftime("%Y-%m-%d %H:%M:%S")
scheduled_str = _dt.fromtimestamp(at_ms / 1000).strftime("%Y-%m-%d %H:%M:%S")
diff_sec = (now_ms - at_ms) / 1000
if diff_sec > 0:
return f"Error: scheduled time ({scheduled_str}) is in the past by {diff_sec:.0f} seconds. Current time is {now_str}. You must ADD the requested seconds to the current time. Example: if current time is 12:21:46 and user wants reminder in 25 seconds, calculate 12:21:46 + 25 seconds = 12:22:11, then pass '2026-03-03T12:22:11'."
else:
return f"Error: scheduled time ({scheduled_str}) is too close to current time ({now_str}). You must ADD the requested seconds to the current time. Example: if current time is 12:21:46 and user wants reminder in 25 seconds, calculate 12:21:46 + 25 seconds = 12:22:11, then pass '2026-03-03T12:22:11'."
schedule = CronSchedule(kind="at", at_ms=at_ms)
delete_after = True
except Exception as e:  # ValueError from fromisoformat, plus anything unexpected
return f"Error: invalid datetime format for 'at': {str(e)}. Expected ISO format like '2026-03-03T12:05:30', not Python code."
elif every_seconds:
# Recurring reminder
schedule = CronSchedule(kind="every", every_ms=every_seconds * 1000)
elif cron_expr:
# Cron expression
schedule = CronSchedule(kind="cron", expr=cron_expr, tz=tz)
elif at:
from datetime import datetime
dt = datetime.fromisoformat(at)
at_ms = int(dt.timestamp() * 1000)
schedule = CronSchedule(kind="at", at_ms=at_ms)
delete_after = True
else:
return "Error: either every_seconds, cron_expr, or at is required"
job = self._cron.add_job(
name=message[:30],
schedule=schedule,
message=message,
deliver=True,
channel=self._channel,
to=self._chat_id,
delete_after_run=delete_after,
)
return f"Created job '{job.name}' (id: {job.id})"
try:
job = self._cron.add_job(
name=message[:30],
schedule=schedule,
message=message,
deliver=True,
channel=channel,
to=chat_id,
delete_after_run=delete_after,
reminder=reminder,
)
return f"Created job '{job.name}' (id: {job.id})"
except Exception as e:
return f"Error creating cron job: {str(e)}"
def _list_jobs(self) -> str:
jobs = self._cron.list_jobs()
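For reference, the `in_seconds` path in `_add_job` reduces to computing a future local-time ISO string that `datetime.fromisoformat` can round-trip. A minimal standalone sketch (the helper name is hypothetical, not part of the codebase):

```python
from datetime import datetime, timedelta

def iso_in_seconds(seconds: int) -> str:
    """Return a local-time ISO-8601 string `seconds` from now, second precision."""
    return (datetime.now() + timedelta(seconds=seconds)).replace(microsecond=0).isoformat()

# A naive ISO string like '2026-03-03T12:22:11' round-trips cleanly,
# which is exactly the shape the 'at' validation above accepts
stamp = iso_in_seconds(25)
parsed = datetime.fromisoformat(stamp)
```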

View File

@ -1,5 +1,7 @@
"""File system tools: read, write, edit."""
import asyncio
import subprocess
from pathlib import Path
from typing import Any
@ -26,7 +28,14 @@ class ReadFileTool(Tool):
@property
def description(self) -> str:
return "Read the contents of a file at the given path."
return """Read the contents of a file at the given path.
ALWAYS use this tool to read files - it supports:
- Text files (plain text, code, markdown, etc.)
- PDF files (automatically extracts text using pdftotext)
- Binary files will return an error
For reading files, use read_file FIRST. Only use exec for complex data processing AFTER reading the file content."""
@property
def parameters(self) -> dict[str, Any]:
@ -49,8 +58,45 @@ class ReadFileTool(Tool):
if not file_path.is_file():
return f"Error: Not a file: {path}"
# Check if file is a PDF and extract text if so
if file_path.suffix.lower() == '.pdf':
try:
# Use -layout flag to preserve table structure (makes quantities, prices, etc. easier to see)
process = await asyncio.create_subprocess_exec(
'pdftotext', '-layout', str(file_path), '-',
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
)
stdout, stderr = await asyncio.wait_for(process.communicate(), timeout=30.0)
if process.returncode == 0 and stdout:
return stdout.decode('utf-8', errors='replace')
# Fall back to reading as binary and checking PDF header
if stderr:
error_msg = stderr.decode('utf-8', errors='replace')
if 'pdftotext' not in error_msg.lower():
return f"Error extracting PDF text: {error_msg}"
except FileNotFoundError:
# pdftotext not available, try to read and detect PDF
pass
except asyncio.TimeoutError:
return "Error: PDF extraction timed out"
except Exception as e:
return f"Error extracting PDF text: {str(e)}"
# For non-PDF files or if PDF extraction failed, read as text
content = file_path.read_text(encoding="utf-8")
return content
except UnicodeDecodeError:
# If UTF-8 fails, try to detect if it's a PDF by reading first bytes
try:
file_path = _resolve_path(path, self._allowed_dir)
with open(file_path, 'rb') as f:
header = f.read(4)
if header == b'%PDF':
return "Error: PDF file detected but text extraction failed. Install 'poppler-utils' (pdftotext) to read PDF files."
except Exception:
pass
return "Error: File appears to be binary or not UTF-8 encoded. Cannot read as text."
except PermissionError as e:
return f"Error: {e}"
except Exception as e:

View File

@ -54,10 +54,12 @@ class ToolRegistry:
return f"Error: Tool '{name}' not found"
try:
errors = tool.validate_params(params)
# Coerce parameter types before validation
coerced_params = tool.coerce_params(params)
errors = tool.validate_params(coerced_params)
if errors:
return f"Error: Invalid parameters for tool '{name}': " + "; ".join(errors)
return await tool.execute(**params)
return await tool.execute(**coerced_params)
except Exception as e:
return f"Error executing {name}: {str(e)}"
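`coerce_params` itself is not shown in this diff; a plausible sketch of the kind of coercion it performs, given that LLMs often emit numbers and booleans as strings (names and behavior assumed, not the actual implementation):

```python
from typing import Any

def coerce_params(schema: dict[str, Any], params: dict[str, Any]) -> dict[str, Any]:
    # Best-effort cast of string values to the JSON-schema types the tool declares
    props = schema.get("properties", {})
    out = dict(params)
    for key, value in params.items():
        expected = props.get(key, {}).get("type")
        if isinstance(value, str):
            try:
                if expected == "integer":
                    out[key] = int(value)
                elif expected == "number":
                    out[key] = float(value)
                elif expected == "boolean":
                    out[key] = value.strip().lower() in ("true", "1", "yes")
            except ValueError:
                pass  # leave the value as-is; validation will report the mismatch
    return out

coerced = coerce_params(
    {"properties": {"in_seconds": {"type": "integer"}, "reminder": {"type": "boolean"}}},
    {"in_seconds": "25", "reminder": "true"},
)
```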

View File

@ -41,7 +41,19 @@ class ExecTool(Tool):
@property
def description(self) -> str:
return "Execute a shell command and return its output. Use with caution."
return """Execute a shell command and return its output. Use with caution.
IMPORTANT:
- For READING files (including PDFs, text files, etc.), ALWAYS use read_file FIRST. Do NOT use exec to read files.
- Only use exec for complex data processing AFTER you have already read the file content using read_file.
For data analysis tasks (Excel, CSV, JSON files), use Python with pandas:
- Excel files: python3 -c "import pandas as pd; df = pd.read_excel('file.xlsx'); result = df['Column Name'].sum(); print(result)"
- CSV files: python3 -c "import pandas as pd; df = pd.read_csv('file.csv'); result = df['Column Name'].sum(); print(result)"
- NEVER invoke pandas/openpyxl as command-line tools (they are Python libraries, not shell commands)
- NEVER use non-existent tools like csvcalc, xlsxcalc, etc.
- For calculations: Use pandas operations like .sum(), .mean(), .max(), etc.
- For total inventory value: (df['Unit Price'] * df['Quantity']).sum()"""
@property
def parameters(self) -> dict[str, Any]:
@ -50,7 +62,7 @@ class ExecTool(Tool):
"properties": {
"command": {
"type": "string",
"description": "The shell command to execute"
"description": "The shell command to execute. For data analysis, use: python3 -c \"import pandas as pd; df = pd.read_csv('file.csv'); print(df['Column'].sum())\""
},
"working_dir": {
"type": "string",
@ -66,6 +78,10 @@ class ExecTool(Tool):
if guard_error:
return guard_error
# DEBUG: Log command details
from loguru import logger
logger.debug(f"ExecTool: command={command[:200]}, cwd={cwd}, working_dir={working_dir}")
try:
process = await asyncio.create_subprocess_shell(
command,
@ -86,18 +102,60 @@ class ExecTool(Tool):
output_parts = []
if stdout:
output_parts.append(stdout.decode("utf-8", errors="replace"))
stdout_text = stdout.decode("utf-8", errors="replace")
output_parts.append(stdout_text)
logger.debug(f"ExecTool stdout: {stdout_text[:200]}")
if stderr:
stderr_text = stderr.decode("utf-8", errors="replace")
if stderr_text.strip():
output_parts.append(f"STDERR:\n{stderr_text}")
logger.debug(f"ExecTool stderr: {stderr_text[:200]}")
if process.returncode != 0:
output_parts.append(f"\nExit code: {process.returncode}")
logger.warning(f"ExecTool: Command failed with exit code {process.returncode}")
result = "\n".join(output_parts) if output_parts else "(no output)"
# DEBUG: For Excel operations, verify file was actually modified
if "to_excel" in command or ".xlsx" in command:
import re
import time
xlsx_matches = re.findall(r"['\"]([^'\"]*\.xlsx)['\"]", command)
if xlsx_matches:
file_path = Path(xlsx_matches[0]).expanduser()
logger.debug(f"ExecTool: Checking Excel file {file_path}")
if file_path.exists():
file_mtime = file_path.stat().st_mtime
time_since_mod = time.time() - file_mtime
logger.debug(f"ExecTool: File mtime={file_mtime}, time_since_mod={time_since_mod}")
if time_since_mod < 5:
result += f"\n✅ Verified: File {file_path} was modified {time_since_mod:.2f}s ago"
else:
result += f"\n⚠️ WARNING: File {file_path} was NOT recently modified (last modified {time_since_mod:.2f}s ago). Command may not have saved changes."
logger.warning(f"ExecTool: Excel file {file_path} was not recently modified!")
else:
logger.warning(f"ExecTool: Excel file {file_path} does not exist!")
result += f"\n⚠️ WARNING: File {file_path} does not exist!"
# Verify file operations for Excel files (common issue: pandas to_excel not saving)
# Check if command mentions Excel file operations
if "to_excel" in command or ".xlsx" in command:
import re
# Try to extract file path from command
xlsx_matches = re.findall(r"['\"]([^'\"]*\.xlsx)['\"]", command)
if xlsx_matches:
file_path = Path(xlsx_matches[0]).expanduser()
if file_path.exists():
# Check if file was recently modified (within last 5 seconds)
import time
file_mtime = file_path.stat().st_mtime
if time.time() - file_mtime < 5:
result += f"\n✅ Verified: File {file_path} was modified"
else:
result += f"\n⚠️ Warning: File {file_path} exists but wasn't recently modified. Command may not have saved changes."
# Truncate very long output
max_len = 10000
if len(result) > max_len:

View File

@ -38,10 +38,12 @@ class ChannelManager:
if self.config.channels.telegram.enabled:
try:
from nanobot.channels.telegram import TelegramChannel
# Get groq API key if configured (optional, used for voice transcription)
groq_api_key = getattr(self.config.providers.groq, "api_key", "") or ""
self.channels["telegram"] = TelegramChannel(
self.config.channels.telegram,
self.bus,
groq_api_key=self.config.providers.groq.api_key,
groq_api_key=groq_api_key,
)
logger.info("Telegram channel enabled")
except ImportError as e:

View File

@ -420,20 +420,34 @@ def gateway(
# Set cron callback (needs agent)
async def on_cron_job(job: CronJob) -> str | None:
"""Execute a cron job through the agent."""
response = await agent.process_direct(
job.payload.message,
session_key=f"cron:{job.id}",
channel=job.payload.channel or "cli",
chat_id=job.payload.to or "direct",
)
if job.payload.deliver and job.payload.to:
from nanobot.bus.events import OutboundMessage
await bus.publish_outbound(OutboundMessage(
# Check if this is a simple reminder or a task
if job.payload.reminder:
# Simple reminder - send message directly without agent processing
if job.payload.deliver and job.payload.to:
from nanobot.bus.events import OutboundMessage
await bus.publish_outbound(OutboundMessage(
channel=job.payload.channel or "cli",
chat_id=job.payload.to,
content=job.payload.message,
metadata={"source": "cron_reminder", "job_id": job.id} # Mark as reminder
))
return job.payload.message
else:
# Task mode - process through agent
response = await agent.process_direct(
job.payload.message,
session_key=f"cron:{job.id}",
channel=job.payload.channel or "cli",
chat_id=job.payload.to,
content=response or ""
))
return response
chat_id=job.payload.to or "direct",
)
if job.payload.deliver and job.payload.to:
from nanobot.bus.events import OutboundMessage
await bus.publish_outbound(OutboundMessage(
channel=job.payload.channel or "cli",
chat_id=job.payload.to,
content=response or ""
))
return response
cron.on_job = on_cron_job
# Create heartbeat service

View File

@ -221,6 +221,7 @@ class ProvidersConfig(Base):
siliconflow: ProviderConfig = Field(default_factory=ProviderConfig) # SiliconFlow (硅基流动) API gateway
openai_codex: ProviderConfig = Field(default_factory=ProviderConfig) # OpenAI Codex (OAuth)
github_copilot: ProviderConfig = Field(default_factory=ProviderConfig) # Github Copilot (OAuth)
groq: ProviderConfig = Field(default_factory=ProviderConfig) # Groq (for voice transcription)
class GatewayConfig(Base):

View File

@ -57,6 +57,7 @@ class CronService:
self.on_job = on_job # Callback to execute job, returns response text
self._store: CronStore | None = None
self._timer_task: asyncio.Task | None = None
self._start_task: asyncio.Task | None = None
self._running = False
def _load_store(self) -> CronStore:
@ -86,6 +87,7 @@ class CronService:
deliver=j["payload"].get("deliver", False),
channel=j["payload"].get("channel"),
to=j["payload"].get("to"),
reminder=j["payload"].get("reminder", False),
),
state=CronJobState(
next_run_at_ms=j.get("state", {}).get("nextRunAtMs"),
@ -133,6 +135,7 @@ class CronService:
"deliver": j.payload.deliver,
"channel": j.payload.channel,
"to": j.payload.to,
"reminder": j.payload.reminder,
},
"state": {
"nextRunAtMs": j.state.next_run_at_ms,
@ -165,6 +168,9 @@ class CronService:
if self._timer_task:
self._timer_task.cancel()
self._timer_task = None
if self._start_task:
self._start_task.cancel()
self._start_task = None
def _recompute_next_runs(self) -> None:
"""Recompute next run times for all enabled jobs."""
@ -189,9 +195,30 @@ class CronService:
self._timer_task.cancel()
next_wake = self._get_next_wake_ms()
if not next_wake or not self._running:
if not next_wake:
return
# Auto-start if not running and there's an event loop
if not self._running:
try:
loop = asyncio.get_running_loop()
# Schedule start in the background (only if not already starting)
if not self._start_task or self._start_task.done():
async def auto_start():
try:
if not self._running:
await self.start()
except Exception as e:
logger.error(f"Failed to auto-start cron service: {e}")
finally:
self._start_task = None
self._start_task = loop.create_task(auto_start())
return # Will be re-armed after start
except RuntimeError:
# No event loop running, can't start
logger.warning("Cron service not started and no event loop available. Timer will not run.")
return
delay_ms = max(0, next_wake - _now_ms())
delay_s = delay_ms / 1000
@ -269,6 +296,7 @@ class CronService:
channel: str | None = None,
to: str | None = None,
delete_after_run: bool = False,
reminder: bool = False,
) -> CronJob:
"""Add a new job."""
store = self._load_store()
@ -285,6 +313,7 @@ class CronService:
deliver=deliver,
channel=channel,
to=to,
reminder=reminder,
),
state=CronJobState(next_run_at_ms=_compute_next_run(schedule, now)),
created_at_ms=now,

View File

@ -27,6 +27,9 @@ class CronPayload:
deliver: bool = False
channel: str | None = None # e.g. "whatsapp"
to: str | None = None # e.g. phone number
# If True, this is a simple reminder (send message directly)
# If False, this is a task (agent executes the message)
reminder: bool = False
@dataclass

View File

@ -15,7 +15,20 @@ class CustomProvider(LLMProvider):
def __init__(self, api_key: str = "no-key", api_base: str = "http://localhost:8000/v1", default_model: str = "default"):
super().__init__(api_key, api_base)
self.default_model = default_model
self._client = AsyncOpenAI(api_key=api_key, base_url=api_base)
# Set longer timeout for Ollama (especially with GPU, first load can be slow)
from openai import Timeout
# Set separate timeouts: connect, read, write, pool
# Ollama can be slow, especially on first request
self._client = AsyncOpenAI(
api_key=api_key,
base_url=api_base,
timeout=Timeout(
connect=60.0, # Connection timeout
read=600.0, # Read timeout (10 min for slow Ollama responses)
write=60.0, # Write timeout
pool=60.0 # Pool timeout
)
)
async def chat(self, messages: list[dict[str, Any]], tools: list[dict[str, Any]] | None = None,
model: str | None = None, max_tokens: int = 4096, temperature: float = 0.7) -> LLMResponse:
@ -24,21 +37,91 @@ class CustomProvider(LLMProvider):
if tools:
kwargs.update(tools=tools, tool_choice="auto")
try:
return self._parse(await self._client.chat.completions.create(**kwargs))
import asyncio
# Add explicit timeout wrapper (longer for Ollama)
return self._parse(await asyncio.wait_for(
self._client.chat.completions.create(**kwargs),
timeout=310.0 # Overall cap on the request; fires before the client's 600s read timeout above
))
except asyncio.TimeoutError:
return LLMResponse(content="Error: Request timed out after 310 seconds", finish_reason="error")
except Exception as e:
return LLMResponse(content=f"Error: {e}", finish_reason="error")
def _parse(self, response: Any) -> LLMResponse:
choice = response.choices[0]
msg = choice.message
# First, try to get structured tool calls
tool_calls = [
ToolCallRequest(id=tc.id, name=tc.function.name,
arguments=json_repair.loads(tc.function.arguments) if isinstance(tc.function.arguments, str) else tc.function.arguments)
for tc in (msg.tool_calls or [])
]
# If no structured tool calls, try to parse from content (Ollama sometimes returns JSON in content)
# Only parse if content looks like it contains a tool call JSON (to avoid false positives)
content = msg.content or ""
if not tool_calls and content and '"name"' in content and '"parameters"' in content:
import re
# Look for JSON tool call patterns: {"name": "exec", "parameters": {...}}
# Find complete JSON objects by matching braces
pattern = r'\{\s*"name"\s*:\s*"(\w+)"'
start_pos = 0
max_iterations = 5 # Safety limit
iteration = 0
while iteration < max_iterations:
iteration += 1
match = re.search(pattern, content[start_pos:])
if not match:
break
json_start = start_pos + match.start()
name = match.group(1)
# Find the matching closing brace by counting braces
brace_count = 0
json_end = json_start
found_end = False
for i, char in enumerate(content[json_start:], json_start):
if char == '{':
brace_count += 1
elif char == '}':
brace_count -= 1
if brace_count == 0:
json_end = i + 1
found_end = True
break
if found_end:
# Try to parse the complete JSON object
try:
json_str = content[json_start:json_end]
tool_obj = json_repair.loads(json_str)
# Only accept if it has both name and parameters, and name is a valid tool name
valid_tools = ["exec", "read_file", "write_file", "list_dir", "web_search"]
if (isinstance(tool_obj, dict) and
"name" in tool_obj and
"parameters" in tool_obj and
isinstance(tool_obj["name"], str) and
tool_obj["name"] in valid_tools):
tool_calls.append(ToolCallRequest(
id=f"call_{len(tool_calls)}",
name=tool_obj["name"],
arguments=tool_obj["parameters"] if isinstance(tool_obj["parameters"], dict) else {"raw": str(tool_obj["parameters"])}
))
# Remove the tool call from content
content = content[:json_start] + content[json_end:].strip()
start_pos = json_start # Stay at same position since we removed text
continue
except Exception:
pass # If parsing fails, skip this match
start_pos = json_start + 1 # Move past this match
u = response.usage
return LLMResponse(
content=msg.content, tool_calls=tool_calls, finish_reason=choice.finish_reason or "stop",
content=content, tool_calls=tool_calls, finish_reason=choice.finish_reason or "stop",
usage={"prompt_tokens": u.prompt_tokens, "completion_tokens": u.completion_tokens, "total_tokens": u.total_tokens} if u else {},
reasoning_content=getattr(msg, "reasoning_content", None),
)

View File

@ -3,11 +3,37 @@
import json
import json_repair
import os
import sys
from typing import Any
# Workaround for litellm's os.getcwd() issue during import
# litellm/proxy/proxy_cli.py does sys.path.append(os.getcwd()) which can fail
# if the current directory was deleted. Patch os.getcwd() to handle this gracefully.
_original_getcwd = os.getcwd
def _safe_getcwd():
try:
cwd = _original_getcwd()
# Verify the directory actually exists
if not os.path.exists(cwd):
raise FileNotFoundError(f"Current directory does not exist: {cwd}")
return cwd
except (FileNotFoundError, OSError):
# Return a safe fallback directory (home directory)
fallback = os.path.expanduser("~")
# Ensure fallback exists
if not os.path.exists(fallback):
fallback = "/tmp"
return fallback
# Patch os.getcwd before importing litellm
os.getcwd = _safe_getcwd
import litellm
from litellm import acompletion
# Restore original getcwd after import
os.getcwd = _original_getcwd
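The patch can be sanity-checked without actually deleting a directory by stubbing the underlying call. A small self-contained reproduction of the same pattern (standalone, not importing nanobot or litellm):

```python
import os

def make_safe_getcwd(real_getcwd):
    def _safe_getcwd():
        try:
            cwd = real_getcwd()
            if not os.path.exists(cwd):
                raise FileNotFoundError(cwd)
            return cwd
        except (FileNotFoundError, OSError):
            # Same fallback chain as the patch: home directory, then /tmp
            fallback = os.path.expanduser("~")
            return fallback if os.path.exists(fallback) else "/tmp"
    return _safe_getcwd

# Simulate a deleted working directory
recovered = make_safe_getcwd(lambda: "/nonexistent/deleted-dir")()

# Normal case passes through unchanged
normal = make_safe_getcwd(os.getcwd)()
```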
from nanobot.providers.base import LLMProvider, LLMResponse, ToolCallRequest
from nanobot.providers.registry import find_by_model, find_gateway

View File

@ -45,12 +45,28 @@ cron(action="remove", job_id="abc123")
| User says | Parameters |
|-----------|------------|
| remind me in 20 seconds | **in_seconds: 20** (RECOMMENDED - tool computes datetime automatically) |
| remind me in 5 minutes | **in_seconds: 300** (5 minutes = 300 seconds) |
| remind me in 1 hour | **in_seconds: 3600** (1 hour = 3600 seconds) |
| every 20 minutes | every_seconds: 1200 |
| every hour | every_seconds: 3600 |
| every day at 8am | cron_expr: "0 8 * * *" |
| weekdays at 5pm | cron_expr: "0 17 * * 1-5" |
| 9am Vancouver time daily | cron_expr: "0 9 * * *", tz: "America/Vancouver" |
| at a specific time | at: ISO datetime string (compute from current time) |
| at a specific time | at: ISO datetime string (e.g. "2026-03-03T14:30:00") |
**IMPORTANT**: For "remind me in X seconds/minutes", use `in_seconds` parameter instead of calculating `at` manually!
**Examples:**
- "remind me in 25 seconds" → `cron(action="add", message="...", in_seconds=25)`
- "remind me in 5 minutes" → `cron(action="add", message="...", in_seconds=300)` (5 * 60 = 300)
- "remind me in 1 hour" → `cron(action="add", message="...", in_seconds=3600)` (60 * 60 = 3600)
The `in_seconds` parameter automatically computes the correct future datetime - you don't need to calculate it yourself!
**Only use `at` when:**
- User specifies an exact time like "at 3pm" or "at 2026-03-03 14:30"
- You need to schedule for a specific absolute datetime
## Timezone

View File

@ -9,6 +9,30 @@ You are a helpful AI assistant. Be concise, accurate, and friendly.
- Use tools to help accomplish tasks
- Remember important information in your memory files
## When NOT to Use Tools
**For simple acknowledgments, respond naturally and conversationally - no tools needed.**
When the user says things like:
- "Thanks", "Thank you", "Thanks!"
- "OK", "Okay", "Got it"
- "You're welcome"
- "No problem"
- "Sure", "Sounds good"
- Simple confirmations or casual responses
**Just respond naturally** - say "You're welcome!", "No problem!", "Happy to help!", etc. Be brief, friendly, and conversational. Do not explain your reasoning, mention tools, or add meta-commentary. Just respond as a normal person would.
**Do NOT use the `message` tool for:**
- Simple acknowledgments - just respond with text
- Normal conversation - reply directly with your text response
- When the user is talking to YOU, not asking you to send a message to someone else
**Only use the `message` tool when:**
- The user explicitly asks you to send a message to someone else (e.g., "send a message to John")
- You need to send a message to a different chat channel (like WhatsApp) that the user isn't currently using
- The user explicitly requests messaging functionality
## Tools Available
You have access to:
@ -17,21 +41,36 @@ You have access to:
- Web access (search, fetch)
- Messaging (message)
- Background tasks (spawn)
- Scheduled tasks (cron) - for reminders and delayed actions
## Memory
- `memory/MEMORY.md` — long-term facts (preferences, context, relationships)
- `memory/HISTORY.md` — append-only event log, search with grep to recall past events
## Scheduled Reminders
## Scheduled Tasks and Reminders
When user asks for a reminder at a specific time, use `exec` to run:
```
nanobot cron add --name "reminder" --message "Your message" --at "YYYY-MM-DDTHH:MM:SS" --deliver --to "USER_ID" --channel "CHANNEL"
```
Get USER_ID and CHANNEL from the current session (e.g., `8281248569` and `telegram` from `telegram:8281248569`).
Use the `cron` tool to schedule tasks and reminders. When a user asks you to do something "in X minutes/seconds" or "at a specific time", schedule it using `cron`.
**Do NOT just write reminders to MEMORY.md** — that won't trigger actual notifications.
**Recognizing scheduling requests:**
- "In 1 minute read file X" → Schedule a task
- "Remind me in 5 minutes to..." → Schedule a reminder
- "At 3pm, check..." → Schedule a task
- "Every hour, do..." → Schedule a recurring task
**For scheduled tasks:**
- Use `cron(action="add", message="<task description>", in_seconds=<seconds>)` for relative time
- Use `cron(action="add", message="<task description>", at="<ISO datetime>")` for absolute time
- Use `cron(action="add", message="<task description>", every_seconds=<seconds>)` for recurring tasks
**Examples:**
- "In 1 minute read file story.txt and tell me its content" → `cron(action="add", message="Read story.txt and tell user its content", in_seconds=60)`
- "Remind me in 5 minutes to call John" → `cron(action="add", message="Call John", in_seconds=300)`
- "Every hour check the weather" → `cron(action="add", message="Check the weather and report to user", every_seconds=3600)`
When the scheduled time arrives, the cron system will send the message back to you, and you'll execute the task (read the file, check something, etc.) and respond to the user.
**Do NOT just write reminders to MEMORY.md** — that won't trigger actual notifications. Use the `cron` tool.
## Heartbeat Tasks

View File

@ -83,28 +83,48 @@ Use for complex or time-consuming tasks that can run independently. The subagent
## Scheduled Reminders (Cron)
Use the `exec` tool to create scheduled reminders with `nanobot cron add`:
### Set a recurring reminder
```bash
# Every day at 9am
nanobot cron add --name "morning" --message "Good morning! ☀️" --cron "0 9 * * *"
# Every 2 hours
nanobot cron add --name "water" --message "Drink water! 💧" --every 7200
### cron
Schedule reminders and recurring tasks. **REQUIRED: Always include 'action' parameter.**
```
cron(action: str, message: str = None, in_seconds: int = None, at: str = None, every_seconds: int = None, cron_expr: str = None, tz: str = None, job_id: str = None) -> str
```
### Set a one-time reminder
```bash
# At a specific time (ISO format)
nanobot cron add --name "meeting" --message "Meeting starts now!" --at "2025-01-31T15:00:00"
**Actions:**
- `action="add"` - Create a new reminder or recurring task
- `action="list"` - List all scheduled jobs
- `action="remove"` - Remove a job by ID
**Examples:**
Reminder in N seconds (recommended for relative time):
```
cron(action="add", message="Send a text to your son", in_seconds=25)
cron(action="add", message="Take a break", in_seconds=300) # 5 minutes
```
### Manage reminders
```bash
nanobot cron list # List all jobs
nanobot cron remove <job_id> # Remove a job
One-time reminder at specific time:
```
cron(action="add", message="Meeting starts now!", at="2025-01-31T15:00:00")
```
Recurring reminder:
```
cron(action="add", message="Drink water! 💧", every_seconds=7200) # Every 2 hours
```
Scheduled task with cron expression:
```
cron(action="add", message="Good morning! ☀️", cron_expr="0 9 * * *") # Daily at 9am
cron(action="add", message="Standup", cron_expr="0 9 * * 1-5", tz="America/Vancouver") # Weekdays 9am Vancouver time
```
List or remove:
```
cron(action="list")
cron(action="remove", job_id="abc123")
```
**Important:** Always include `action` parameter. For "remind me in X seconds/minutes", use `in_seconds` instead of calculating `at` manually.
## Heartbeat Task Management