Merge origin/main into feature branch
- Merged latest 166 commits from origin/main
- Resolved conflicts in .gitignore, commands.py, schema.py, providers/__init__.py, and registry.py
- Kept both local providers (Ollama, AirLLM) and new providers from main
- Preserved transformers 4.39.3 compatibility fixes
- Combined error handling improvements with new features
Commit: e6b5ead3fd
.gitignore (vendored): 3 lines changed
@@ -15,8 +15,9 @@ docs/
 *.pyc
 .venv/
+vllm-env/
 venv/
 __pycache__/
 poetry.lock
 .pytest_cache/
-tests/
 botpy.log
+tests/
README.md: 245 lines changed
@@ -16,20 +16,33 @@
 
 ⚡️ Delivers core agent functionality in just **~4,000** lines of code — **99% smaller** than Clawdbot's 430k+ lines.
 
-📏 Real-time line count: **3,510 lines** (run `bash core_agent_lines.sh` to verify anytime)
+📏 Real-time line count: **3,761 lines** (run `bash core_agent_lines.sh` to verify anytime)
 
 ## 📢 News
 
-- **2026-02-10** 🎉 Released v0.1.3.post6 with improvements! Check the updates [notes](https://github.com/HKUDS/nanobot/releases/tag/v0.1.3.post6) and our [roadmap](https://github.com/HKUDS/nanobot/discussions/431).
+- **2026-02-17** 🎉 Released **v0.1.4** — MCP support, progress streaming, new providers, and multiple channel improvements. Please see the [release notes](https://github.com/HKUDS/nanobot/releases/tag/v0.1.4) for details.
+- **2026-02-16** 🦞 nanobot now integrates a [ClawHub](https://clawhub.ai) skill — search and install public agent skills.
+- **2026-02-15** 🔑 nanobot now supports the OpenAI Codex provider with OAuth login support.
+- **2026-02-14** 🔌 nanobot now supports MCP! See the [MCP section](#mcp-model-context-protocol) for details.
+- **2026-02-13** 🎉 Released **v0.1.3.post7** — includes security hardening and multiple improvements. **Please upgrade to the latest version to address security issues.** See the [release notes](https://github.com/HKUDS/nanobot/releases/tag/v0.1.3.post7) for more details.
+- **2026-02-12** 🧠 Redesigned memory system — less code, more reliable. Join the [discussion](https://github.com/HKUDS/nanobot/discussions/566) about it!
+- **2026-02-11** ✨ Enhanced CLI experience and added MiniMax support!
+- **2026-02-10** 🎉 Released **v0.1.3.post6** with improvements! Check the update [notes](https://github.com/HKUDS/nanobot/releases/tag/v0.1.3.post6) and our [roadmap](https://github.com/HKUDS/nanobot/discussions/431).
 - **2026-02-09** 💬 Added Slack, Email, and QQ support — nanobot now supports multiple chat platforms!
 - **2026-02-08** 🔧 Refactored providers — adding a new LLM provider now takes just 2 simple steps! Check [here](#providers).
-- **2026-02-07** 🚀 Released v0.1.3.post5 with Qwen support & several key improvements! Check [here](https://github.com/HKUDS/nanobot/releases/tag/v0.1.3.post5) for details.
 
 <details>
 <summary>Earlier news</summary>
 
+- **2026-02-07** 🚀 Released **v0.1.3.post5** with Qwen support & several key improvements! Check [here](https://github.com/HKUDS/nanobot/releases/tag/v0.1.3.post5) for details.
 - **2026-02-06** ✨ Added Moonshot/Kimi provider, Discord integration, and enhanced security hardening!
 - **2026-02-05** ✨ Added Feishu channel, DeepSeek provider, and enhanced scheduled tasks support!
-- **2026-02-04** 🚀 Released v0.1.3.post4 with multi-provider & Docker support! Check [here](https://github.com/HKUDS/nanobot/releases/tag/v0.1.3.post4) for details.
+- **2026-02-04** 🚀 Released **v0.1.3.post4** with multi-provider & Docker support! Check [here](https://github.com/HKUDS/nanobot/releases/tag/v0.1.3.post4) for details.
 - **2026-02-03** ⚡ Integrated vLLM for local LLM support and improved natural language task scheduling!
 - **2026-02-02** 🎉 nanobot officially launched! Welcome to try 🐈 nanobot!
 
 </details>
 
 ## Key Features of nanobot:
 
 🪶 **Ultra-Lightweight**: Just ~4,000 lines of core agent code — 99% smaller than Clawdbot.
@@ -105,14 +118,22 @@ nanobot onboard
 
 **2. Configure** (`~/.nanobot/config.json`)
 
-For OpenRouter - recommended for global users:
+Add or merge these **two parts** into your config (other options have defaults).
 
+*Set your API key* (e.g. OpenRouter, recommended for global users):
 ```json
 {
   "providers": {
     "openrouter": {
       "apiKey": "sk-or-v1-xxx"
     }
   }
 }
 ```
 
+*Set your model*:
 ```json
 {
   "agents": {
     "defaults": {
       "model": "anthropic/claude-opus-4-5"
@@ -124,63 +145,26 @@ For OpenRouter - recommended for global users:
 **3. Chat**
 
 ```bash
 nanobot agent -m "What is 2+2?"
 nanobot agent
 ```
 
 That's it! You have a working AI assistant in 2 minutes.
 
-## 🖥️ Local Models (vLLM)
-
-Run nanobot with your own local models using vLLM or any OpenAI-compatible server.
-
-**1. Start your vLLM server**
-
-```bash
-vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000
-```
-
-**2. Configure** (`~/.nanobot/config.json`)
-
-```json
-{
-  "providers": {
-    "vllm": {
-      "apiKey": "dummy",
-      "apiBase": "http://localhost:8000/v1"
-    }
-  },
-  "agents": {
-    "defaults": {
-      "model": "meta-llama/Llama-3.1-8B-Instruct"
-    }
-  }
-}
-```
-
-**3. Chat**
-
-```bash
-nanobot agent -m "Hello from my local LLM!"
-```
-
-> [!TIP]
-> The `apiKey` can be any non-empty string for local servers that don't require authentication.
-
 ## 💬 Chat Apps
 
-Talk to your nanobot through Telegram, Discord, WhatsApp, Feishu, Mochat, DingTalk, Slack, Email, or QQ — anytime, anywhere.
+Connect nanobot to your favorite chat platform.
 
-| Channel | Setup |
-|---------|-------|
-| **Telegram** | Easy (just a token) |
-| **Discord** | Easy (bot token + intents) |
-| **WhatsApp** | Medium (scan QR) |
-| **Feishu** | Medium (app credentials) |
-| **Mochat** | Medium (claw token + websocket) |
-| **DingTalk** | Medium (app credentials) |
-| **Slack** | Medium (bot + app tokens) |
-| **Email** | Medium (IMAP/SMTP credentials) |
-| **QQ** | Easy (app credentials) |
+| Channel | What you need |
+|---------|---------------|
+| **Telegram** | Bot token from @BotFather |
+| **Discord** | Bot token + Message Content intent |
+| **WhatsApp** | QR code scan |
+| **Feishu** | App ID + App Secret |
+| **Mochat** | Claw token (auto-setup available) |
+| **DingTalk** | App Key + App Secret |
+| **Slack** | Bot token + App-Level token |
+| **Email** | IMAP/SMTP credentials |
+| **QQ** | App ID + App Secret |
 
 <details>
 <summary><b>Telegram</b> (Recommended)</summary>
@@ -597,6 +581,7 @@ Config file: `~/.nanobot/config.json`
 
 | Provider | Purpose | Get API Key |
 |----------|---------|-------------|
+| `custom` | Any OpenAI-compatible endpoint (direct, no LiteLLM) | — |
 | `openrouter` | LLM (recommended, access to all models) | [openrouter.ai](https://openrouter.ai) |
 | `anthropic` | LLM (Claude direct) | [console.anthropic.com](https://console.anthropic.com) |
 | `openai` | LLM (GPT direct) | [platform.openai.com](https://platform.openai.com) |
@@ -605,10 +590,105 @@ Config file: `~/.nanobot/config.json`
 | `gemini` | LLM (Gemini direct) | [aistudio.google.com](https://aistudio.google.com) |
 | `minimax` | LLM (MiniMax direct) | [platform.minimax.io](https://platform.minimax.io) |
 | `aihubmix` | LLM (API gateway, access to all models) | [aihubmix.com](https://aihubmix.com) |
+| `siliconflow` | LLM (SiliconFlow/硅基流动, API gateway) | [siliconflow.cn](https://siliconflow.cn) |
 | `dashscope` | LLM (Qwen) | [dashscope.console.aliyun.com](https://dashscope.console.aliyun.com) |
 | `moonshot` | LLM (Moonshot/Kimi) | [platform.moonshot.cn](https://platform.moonshot.cn) |
 | `zhipu` | LLM (Zhipu GLM) | [open.bigmodel.cn](https://open.bigmodel.cn) |
 | `vllm` | LLM (local, any OpenAI-compatible server) | — |
+| `openai_codex` | LLM (Codex, OAuth) | `nanobot provider login openai-codex` |
+| `github_copilot` | LLM (GitHub Copilot, OAuth) | `nanobot provider login github-copilot` |
 
+<details>
+<summary><b>OpenAI Codex (OAuth)</b></summary>
+
+Codex uses OAuth instead of API keys. Requires a ChatGPT Plus or Pro account.
+
+**1. Login:**
+```bash
+nanobot provider login openai-codex
+```
+
+**2. Set model** (merge into `~/.nanobot/config.json`):
+```json
+{
+  "agents": {
+    "defaults": {
+      "model": "openai-codex/gpt-5.1-codex"
+    }
+  }
+}
+```
+
+**3. Chat:**
+```bash
+nanobot agent -m "Hello!"
+```
+
+> Docker users: use `docker run -it` for interactive OAuth login.
+
+</details>
+
+<details>
+<summary><b>Custom Provider (Any OpenAI-compatible API)</b></summary>
+
+Connects directly to any OpenAI-compatible endpoint — LM Studio, llama.cpp, Together AI, Fireworks, Azure OpenAI, or any self-hosted server. Bypasses LiteLLM; the model name is passed through as-is.
+
+```json
+{
+  "providers": {
+    "custom": {
+      "apiKey": "your-api-key",
+      "apiBase": "https://api.your-provider.com/v1"
+    }
+  },
+  "agents": {
+    "defaults": {
+      "model": "your-model-name"
+    }
+  }
+}
+```
+
+> For local servers that don't require a key, set `apiKey` to any non-empty string (e.g. `"no-key"`).
+
+</details>
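As a concrete instance of that pattern (not part of this diff): pointing the `custom` provider at a local LM Studio server, whose default port is 1234. The model name is hypothetical and must match what the server reports:

```json
{
  "providers": {
    "custom": {
      "apiKey": "no-key",
      "apiBase": "http://localhost:1234/v1"
    }
  },
  "agents": {
    "defaults": {
      "model": "your-local-model-name"
    }
  }
}
```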
 
+<details>
+<summary><b>vLLM (local / OpenAI-compatible)</b></summary>
+
+Run your own model with vLLM or any OpenAI-compatible server, then add it to your config:
+
+**1. Start the server** (example):
+```bash
+vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000
+```
+
+**2. Add to config** (partial — merge into `~/.nanobot/config.json`):
+
+*Provider (the key can be any non-empty string for local servers):*
+```json
+{
+  "providers": {
+    "vllm": {
+      "apiKey": "dummy",
+      "apiBase": "http://localhost:8000/v1"
+    }
+  }
+}
+```
+
+*Model:*
+```json
+{
+  "agents": {
+    "defaults": {
+      "model": "meta-llama/Llama-3.1-8B-Instruct"
+    }
+  }
+}
+```
+
+</details>
 
 <details>
 <summary><b>Adding a New Provider (Developer Guide)</b></summary>
@@ -655,8 +735,43 @@ That's it! Environment variables, model prefixing, config matching, and `nanobot
 </details>
 
 
+### MCP (Model Context Protocol)
+
+> [!TIP]
+> The config format is compatible with Claude Desktop / Cursor. You can copy MCP server configs directly from any MCP server's README.
+
+nanobot supports [MCP](https://modelcontextprotocol.io/) — connect external tool servers and use them as native agent tools.
+
+Add MCP servers to your `config.json`:
+
+```json
+{
+  "tools": {
+    "mcpServers": {
+      "filesystem": {
+        "command": "npx",
+        "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/dir"]
+      }
+    }
+  }
+}
+```
+
+Two transport modes are supported:
+
+| Mode | Config | Example |
+|------|--------|---------|
+| **Stdio** | `command` + `args` | Local process via `npx` / `uvx` |
+| **HTTP** | `url` | Remote endpoint (`https://mcp.example.com/sse`) |
+
+MCP tools are automatically discovered and registered on startup. The LLM can use them alongside built-in tools — no extra configuration needed.
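For the HTTP mode, the server entry carries a `url` instead of `command` + `args`. A minimal sketch (not part of this diff; the endpoint is a placeholder):

```json
{
  "tools": {
    "mcpServers": {
      "remote": {
        "url": "https://mcp.example.com/sse"
      }
    }
  }
}
```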
### Security

> [!TIP]
> For production deployments, set `"restrictToWorkspace": true` in your config to sandbox the agent.
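As a sketch, the flag sits under the `tools` section (the placement is inferred from the `config.tools.restrict_to_workspace` accessor visible in the debug script later in this diff):

```json
{
  "tools": {
    "restrictToWorkspace": true
  }
}
```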
| Option | Default | Description |

@@ -676,6 +791,7 @@ That's it! Environment variables, model prefixing, config matching, and `nanobot
 | `nanobot agent --logs` | Show runtime logs during chat |
 | `nanobot gateway` | Start the gateway |
 | `nanobot status` | Show status |
+| `nanobot provider login openai-codex` | OAuth login for providers |
 | `nanobot channels login` | Link WhatsApp (scan QR) |
 | `nanobot channels status` | Show channel status |
@@ -703,7 +819,21 @@ nanobot cron remove <job_id>
 
 > [!TIP]
 > The `-v ~/.nanobot:/root/.nanobot` flag mounts your local config directory into the container, so your config and workspace persist across container restarts.
 
-Build and run nanobot in a container:
+### Docker Compose
+
+```bash
+docker compose run --rm nanobot-cli onboard   # first-time setup
+vim ~/.nanobot/config.json                    # add API keys
+docker compose up -d nanobot-gateway          # start gateway
+```
+
+```bash
+docker compose run --rm nanobot-cli agent -m "Hello!"   # run CLI
+docker compose logs -f nanobot-gateway                  # view logs
+docker compose down                                     # stop
+```
+
+### Docker
 
 ```bash
 # Build the image
@@ -751,7 +881,6 @@ PRs welcome! The codebase is intentionally small and readable. 🤗
 
 **Roadmap** — Pick an item and [open a PR](https://github.com/HKUDS/nanobot/pulls)!
 
-- [x] **Voice Transcription** — Support for Groq Whisper (Issue #13)
 - [ ] **Multi-modal** — See and hear (images, voice, video)
 - [ ] **Long-term memory** — Never forget important context
 - [ ] **Better reasoning** — Multi-step planning and reflection
SECURITY.md

@@ -5,7 +5,7 @@
 If you discover a security vulnerability in nanobot, please report it by:
 
 1. **DO NOT** open a public GitHub issue
-2. Create a private security advisory on GitHub or contact the repository maintainers
+2. Create a private security advisory on GitHub or contact the repository maintainers (xubinrencs@gmail.com)
 3. Include:
    - Description of the vulnerability
    - Steps to reproduce
@@ -95,8 +95,8 @@ File operations have path traversal protection, but:
 - Consider using a firewall to restrict outbound connections if needed
 
 **WhatsApp Bridge:**
-- The bridge runs on `localhost:3001` by default
-- If exposing to network, use proper authentication and TLS
+- The bridge binds to `127.0.0.1:3001` (localhost only, not accessible from external network)
+- Set `bridgeToken` in config to enable shared-secret authentication between Python and Node.js
 - Keep authentication data in `~/.nanobot/whatsapp-auth` secure (mode 0700)
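As a sketch of what that might look like (the exact location of `bridgeToken` in `config.json` is an assumption; only the key name is documented in this file):

```json
{
  "channels": {
    "whatsapp": {
      "bridgeToken": "a-long-random-shared-secret"
    }
  }
}
```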
 
 ### 6. Dependency Security
@@ -224,7 +224,7 @@ If you suspect a security breach:
 ✅ **Secure Communication**
 - HTTPS for all external API calls
 - TLS for Telegram API
-- WebSocket security for WhatsApp bridge
+- WhatsApp bridge: localhost-only binding + optional token auth
 
 ## Known Limitations
SETUP_LLAMA.md (new file, 105 lines)
# Setting Up Llama Models with AirLLM

This guide will help you configure nanobot to use Llama models with AirLLM.

## Quick Setup

Run the setup script:

```bash
python3 setup_llama_airllm.py
```

The script will:
1. Create/update your `~/.nanobot/config.json` file
2. Configure Llama-3.2-3B-Instruct as the default model
3. Guide you through getting a Hugging Face token

## Manual Setup

### Step 1: Get a Hugging Face Token

Llama models are "gated" (they require license acceptance), so you need a Hugging Face token:

1. Go to: https://huggingface.co/settings/tokens
2. Click **"New token"**
3. Give it a name (e.g., "nanobot")
4. Select **"Read"** permission
5. Click **"Generate token"**
6. **Copy the token** (it starts with `hf_...`)

### Step 2: Accept the Llama License

1. Go to: https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct
2. Click **"Agree and access repository"**
3. Accept the license terms

### Step 3: Configure nanobot

Edit `~/.nanobot/config.json`:

```json
{
  "providers": {
    "airllm": {
      "apiKey": "meta-llama/Llama-3.2-3B-Instruct",
      "extraHeaders": {
        "hf_token": "hf_YOUR_TOKEN_HERE"
      }
    }
  },
  "agents": {
    "defaults": {
      "model": "meta-llama/Llama-3.2-3B-Instruct"
    }
  }
}
```

Replace `hf_YOUR_TOKEN_HERE` with your actual Hugging Face token.
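If you want to confirm the token and license acceptance before starting nanobot, one option (an optional sketch; requires `huggingface_hub` to be installed) is to fetch a single file from the gated repo:

```bash
huggingface-cli download meta-llama/Llama-3.2-3B-Instruct config.json \
  --token hf_YOUR_TOKEN_HERE --local-dir /tmp/llama-license-check
```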

### Step 4: Test It

```bash
nanobot agent -m "Hello, what is 2+5?"
```

## Recommended Llama Models

### Small Models (Faster, Less Memory)
- **Llama-3.2-3B-Instruct** (recommended: fast, minimal memory)
  - Model: `meta-llama/Llama-3.2-3B-Instruct`
  - Best for limited GPU memory

- **Llama-3.1-8B-Instruct**
  - Model: `meta-llama/Llama-3.1-8B-Instruct`
  - Good balance of performance and speed

## Why Llama with AirLLM?

- **Excellent AirLLM compatibility**: Llama models work very well with AirLLM's chunking mechanism
- **Proven stability**: Llama models have been tested extensively with AirLLM
- **Good performance**: Llama models provide excellent quality while running efficiently under AirLLM

## Troubleshooting

### "Model not found" error
- Make sure you've accepted the Llama license on Hugging Face
- Verify your HF token has read permission
- Check that the token is correctly set in `extraHeaders.hf_token`

### "Out of memory" error
- Try a smaller model (Llama-3.2-3B-Instruct)
- Use compression: set `apiBase` to `"4bit"` or `"8bit"` in the airllm config (see the sketch below)
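For example, the compression setting slots into the same provider block used above (a sketch; 4-bit compression in AirLLM typically also requires the optional bitsandbytes package):

```json
{
  "providers": {
    "airllm": {
      "apiKey": "meta-llama/Llama-3.2-3B-Instruct",
      "apiBase": "4bit",
      "extraHeaders": {
        "hf_token": "hf_YOUR_TOKEN_HERE"
      }
    }
  }
}
```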

### Still having issues?
- Check that the config file is valid JSON
- Verify file permissions: `chmod 600 ~/.nanobot/config.json`
- Check the logs for detailed error messages

## Config File Location

- **Path**: `~/.nanobot/config.json`
- **Permissions**: should be `600` (read/write for owner only)
- **Backup**: always back up before editing!
WHY_HUGGINGFACE.md (new file, 148 lines)
# Why Hugging Face? Can We Avoid It?

## Short Answer

**You don't HAVE to use Hugging Face**, but it's the easiest way. Here's why it's commonly used and what alternatives exist.

## Why Hugging Face is Used

### 1. **Model Distribution Platform**
- Hugging Face Hub is where most open-source models (Llama, etc.) are hosted
- When you specify `"meta-llama/Llama-3.1-8B-Instruct"`, AirLLM automatically downloads it from Hugging Face
- It's the standard repository that everyone uses

### 2. **Gated Models (Like Llama)**
- Llama models are "gated"; they require:
  - Accepting Meta's license terms
  - A Hugging Face account
  - A token to authenticate
- This is **Meta's requirement**, not Hugging Face's
- The token proves you've accepted the license

### 3. **Convenience**
- Automatic downloads
- Version management
- Easy model discovery

## Alternatives: How to Avoid Hugging Face

### Option 1: Use Local Model Files (No HF Token Needed!)

If you already have the model downloaded locally, you can use it directly:

**1. Download the model manually** (one-time; you can use `git lfs` or `huggingface-cli`):
```bash
# Using huggingface-cli (still needs a token, but only once)
huggingface-cli download meta-llama/Llama-3.1-8B-Instruct --local-dir ~/models/llama-3.1-8b

# Or using git lfs
git lfs clone https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct ~/models/llama-3.1-8b
```

**2. Use the local path in your config**:
```json
{
  "providers": {
    "airllm": {
      "apiKey": "/home/youruser/models/llama-3.1-8b"
    }
  },
  "agents": {
    "defaults": {
      "model": "/home/youruser/models/llama-3.1-8b"
    }
  }
}
```

**Note**: AirLLM's `AutoModel.from_pretrained()` accepts local paths! Just use the full path instead of the model ID.
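To illustrate that note, here is a minimal sketch of loading a local copy with AirLLM directly (the path is hypothetical, and the call pattern follows AirLLM's published examples, which may vary between versions):

```python
from airllm import AutoModel

# A local directory works in place of a Hub model ID, so no HF token is needed.
model = AutoModel.from_pretrained("/home/youruser/models/llama-3.1-8b")

input_tokens = model.tokenizer(
    ["What is 2+5?"],
    return_tensors="pt",
    truncation=True,
    max_length=128,
)

# AirLLM loads one layer at a time from disk, trading speed for memory.
output = model.generate(
    input_tokens["input_ids"],
    max_new_tokens=20,
    use_cache=True,
    return_dict_in_generate=True,
)
print(model.tokenizer.decode(output.sequences[0]))
```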

### Option 2: Use Ollama (No HF at All!)

Ollama manages models for you and doesn't require Hugging Face:

**1. Install Ollama**: https://ollama.ai

**2. Pull a model**:
```bash
ollama pull llama3.1:8b
```

**3. Configure nanobot**:
```json
{
  "providers": {
    "ollama": {
      "apiKey": "dummy",
      "apiBase": "http://localhost:11434/v1"
    }
  },
  "agents": {
    "defaults": {
      "model": "llama3.1:8b"
    }
  }
}
```
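To sanity-check the endpoint before wiring nanobot to it, you can hit Ollama's OpenAI-compatible API directly (assumes the model was pulled as above):

```bash
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3.1:8b", "messages": [{"role": "user", "content": "What is 2+5?"}]}'
```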

### Option 3: Use vLLM (Local Server)

**1. Download the model once** (with or without an HF token):
```bash
# With an HF token
huggingface-cli download meta-llama/Llama-3.1-8B-Instruct --local-dir ~/models/llama-3.1-8b

# Or download manually from other sources
```

**2. Start the vLLM server**:
```bash
vllm serve ~/models/llama-3.1-8b --port 8000
```

**3. Configure nanobot**:
```json
{
  "providers": {
    "vllm": {
      "apiKey": "dummy",
      "apiBase": "http://localhost:8000/v1"
    }
  },
  "agents": {
    "defaults": {
      "model": "llama-3.1-8b"
    }
  }
}
```
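One caveat with this option: when vLLM serves a local directory, the model ID it registers defaults to that path, so the short name in the config above may not match. vLLM's `--served-model-name` flag pins the name explicitly:

```bash
vllm serve ~/models/llama-3.1-8b --served-model-name llama-3.1-8b --port 8000
```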

## Why You Might Still Need an HF Token

Even if you want to avoid Hugging Face long-term, you might need it **once** to:
- Download the model initially
- Accept the license for gated models (Llama)

After that, you can use the local files and never touch Hugging Face again!

## Recommendation

**For Llama models specifically:**
1. **Get an HF token once** (5 minutes), just to download and accept the license
2. **Download the model locally** with `huggingface-cli` or `git lfs`
3. **Use the local path**: configure nanobot to use the local directory
4. **Never need HF again**: the model runs completely offline

This gives you:
- ✅ No ongoing dependency on Hugging Face
- ✅ Faster startup (no downloads)
- ✅ Works offline
- ✅ Full control

## Summary

- **Hugging Face is required** for: downloading models initially, accessing gated models
- **Hugging Face is NOT required** for: running models after download, using local files, using Ollama/vLLM
- **Best approach**: download once with an HF token, then use the local files forever
@@ -25,11 +25,12 @@ import { join } from 'path';
 
 const PORT = parseInt(process.env.BRIDGE_PORT || '3001', 10);
 const AUTH_DIR = process.env.AUTH_DIR || join(homedir(), '.nanobot', 'whatsapp-auth');
+const TOKEN = process.env.BRIDGE_TOKEN || undefined;
 
 console.log('🐈 nanobot WhatsApp Bridge');
 console.log('========================\n');
 
-const server = new BridgeServer(PORT, AUTH_DIR);
+const server = new BridgeServer(PORT, AUTH_DIR, TOKEN);
 
 // Handle graceful shutdown
 process.on('SIGINT', async () => {
@@ -1,5 +1,6 @@
 /**
  * WebSocket server for Python-Node.js bridge communication.
+ * Security: binds to 127.0.0.1 only; optional BRIDGE_TOKEN auth.
  */
 
 import { WebSocketServer, WebSocket } from 'ws';
@@ -21,12 +22,13 @@ export class BridgeServer {
   private wa: WhatsAppClient | null = null;
   private clients: Set<WebSocket> = new Set();
 
-  constructor(private port: number, private authDir: string) {}
+  constructor(private port: number, private authDir: string, private token?: string) {}
 
   async start(): Promise<void> {
-    // Create WebSocket server
-    this.wss = new WebSocketServer({ port: this.port });
-    console.log(`🌉 Bridge server listening on ws://localhost:${this.port}`);
+    // Bind to localhost only — never expose to external network
+    this.wss = new WebSocketServer({ host: '127.0.0.1', port: this.port });
+    console.log(`🌉 Bridge server listening on ws://127.0.0.1:${this.port}`);
+    if (this.token) console.log('🔒 Token authentication enabled');
 
     // Initialize WhatsApp client
     this.wa = new WhatsAppClient({
@@ -38,35 +40,58 @@ export class BridgeServer {
 
     // Handle WebSocket connections
     this.wss.on('connection', (ws) => {
-      console.log('🔗 Python client connected');
-      this.clients.add(ws);
-
-      ws.on('message', async (data) => {
-        try {
-          const cmd = JSON.parse(data.toString()) as SendCommand;
-          await this.handleCommand(cmd);
-          ws.send(JSON.stringify({ type: 'sent', to: cmd.to }));
-        } catch (error) {
-          console.error('Error handling command:', error);
-          ws.send(JSON.stringify({ type: 'error', error: String(error) }));
-        }
-      });
-
-      ws.on('close', () => {
-        console.log('🔌 Python client disconnected');
-        this.clients.delete(ws);
-      });
-
-      ws.on('error', (error) => {
-        console.error('WebSocket error:', error);
-        this.clients.delete(ws);
-      });
+      if (this.token) {
+        // Require auth handshake as first message
+        const timeout = setTimeout(() => ws.close(4001, 'Auth timeout'), 5000);
+        ws.once('message', (data) => {
+          clearTimeout(timeout);
+          try {
+            const msg = JSON.parse(data.toString());
+            if (msg.type === 'auth' && msg.token === this.token) {
+              console.log('🔗 Python client authenticated');
+              this.setupClient(ws);
+            } else {
+              ws.close(4003, 'Invalid token');
+            }
+          } catch {
+            ws.close(4003, 'Invalid auth message');
+          }
+        });
+      } else {
+        console.log('🔗 Python client connected');
+        this.setupClient(ws);
+      }
     });
 
     // Connect to WhatsApp
     await this.wa.connect();
   }
 
+  private setupClient(ws: WebSocket): void {
+    this.clients.add(ws);
+
+    ws.on('message', async (data) => {
+      try {
+        const cmd = JSON.parse(data.toString()) as SendCommand;
+        await this.handleCommand(cmd);
+        ws.send(JSON.stringify({ type: 'sent', to: cmd.to }));
+      } catch (error) {
+        console.error('Error handling command:', error);
+        ws.send(JSON.stringify({ type: 'error', error: String(error) }));
+      }
+    });
+
+    ws.on('close', () => {
+      console.log('🔌 Python client disconnected');
+      this.clients.delete(ws);
+    });
+
+    ws.on('error', (error) => {
+      console.error('WebSocket error:', error);
+      this.clients.delete(ws);
+    });
+  }
+
   private async handleCommand(cmd: SendCommand): Promise<void> {
     if (cmd.type === 'send' && this.wa) {
       await this.wa.sendMessage(cmd.to, cmd.text);
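For reference, the matching handshake from the Python side might look like this (an illustrative sketch; the real nanobot client is not shown in this diff, and the choice of WebSocket library here is an assumption):

```python
import asyncio
import json

import websockets  # assumption: any WebSocket client works; this sketch uses 'websockets'


async def send_via_bridge(token: str | None, to: str, text: str) -> None:
    async with websockets.connect("ws://127.0.0.1:3001") as ws:
        if token:
            # When BRIDGE_TOKEN is set, the first message must be the auth handshake.
            await ws.send(json.dumps({"type": "auth", "token": token}))
        await ws.send(json.dumps({"type": "send", "to": to, "text": text}))
        # Server replies {"type": "sent", ...} on success or {"type": "error", ...}.
        print(json.loads(await ws.recv()))


asyncio.run(send_via_bridge("a-long-random-shared-secret", "12345@s.whatsapp.net", "hi"))
```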
configure_llama3.2_local.sh (new file, 102 lines)
#!/bin/bash
# Configure llama3.2 with AirLLM using a local path (no tokens after download)

CONFIG_FILE="$HOME/.nanobot/config.json"
MODEL_DIR="$HOME/.local/models/llama3.2-3b-instruct"
MODEL_NAME="meta-llama/Llama-3.2-3B-Instruct"

echo "======================================================================"
echo "LLAMA3.2 + AIRLLM LOCAL SETUP (NO TOKENS AFTER DOWNLOAD)"
echo "======================================================================"
echo ""

# Create config directory if it doesn't exist
mkdir -p "$(dirname "$CONFIG_FILE")"

# Back up the existing config, if any
if [ -f "$CONFIG_FILE" ]; then
    echo "Found existing config at: $CONFIG_FILE"
    cp "$CONFIG_FILE" "$CONFIG_FILE.backup"
    echo "✓ Backup created: $CONFIG_FILE.backup"
else
    echo "Creating new config at: $CONFIG_FILE"
fi

# Use Python to update the JSON config
python3 << EOF
import json
import os
from pathlib import Path

config_file = Path("$CONFIG_FILE")
model_dir = "$MODEL_DIR"

# Load config; start fresh if it is missing or not valid JSON
try:
    with open(config_file) as f:
        config = json.load(f)
except (FileNotFoundError, json.JSONDecodeError):
    config = {}

# Ensure structure
if "providers" not in config:
    config["providers"] = {}
if "agents" not in config:
    config["agents"] = {}
if "defaults" not in config["agents"]:
    config["agents"]["defaults"] = {}

# Configure airllm with the local path
config["providers"]["airllm"] = {
    "apiKey": model_dir,  # Local path - no tokens needed!
    "apiBase": None,
    "extraHeaders": {}
}

# Set the default model to the local path
config["agents"]["defaults"]["model"] = model_dir

# Save config
config_file.parent.mkdir(parents=True, exist_ok=True)
with open(config_file, 'w') as f:
    json.dump(config, f, indent=2)

os.chmod(config_file, 0o600)

print("✓ Configuration updated!")
print(f"  Model path: {model_dir}")
print(f"  Config file: {config_file}")
EOF

echo ""
echo "======================================================================"
echo "CONFIGURATION COMPLETE!"
echo "======================================================================"
echo ""
echo "✓ Config updated to use local model path: $MODEL_DIR"
echo "✓ No tokens needed - will use the local model!"
echo ""

# Check if the model exists
if [ -d "$MODEL_DIR" ] && [ -f "$MODEL_DIR/config.json" ]; then
    echo "✓ Model found at: $MODEL_DIR"
    echo ""
    echo "You're all set! Test it with:"
    echo "  nanobot agent -m 'Hello, what is 2+5?'"
else
    echo "⚠ Model not found at: $MODEL_DIR"
    echo ""
    echo "To download the model (one-time, requires an HF token):"
    echo "  1. Get a Hugging Face token: https://huggingface.co/settings/tokens"
    echo "  2. Accept the Llama license: https://huggingface.co/$MODEL_NAME"
    echo "  3. Download the model:"
    echo "     huggingface-cli download $MODEL_NAME --local-dir $MODEL_DIR"
    echo ""
    echo "After download, no tokens will be needed!"
fi
echo ""
debug_nanobot.py (new executable file, 102 lines)
#!/usr/bin/env python3
"""Debug script to test the nanobot AirLLM setup."""
import sys
import asyncio
import traceback

# Ensure output is not buffered
sys.stdout.reconfigure(line_buffering=True)
sys.stderr.reconfigure(line_buffering=True)

def log(msg):
    print(msg, flush=True)
    sys.stderr.write(f"{msg}\n")
    sys.stderr.flush()

log("=" * 60)
log("NANOBOT AIRLLM DEBUG SCRIPT")
log("=" * 60)

try:
    log("\n[1/6] Adding current directory to path...")
    import os
    sys.path.insert(0, os.getcwd())

    log("[2/6] Importing nanobot modules...")
    from nanobot.config.loader import load_config
    from nanobot.bus.queue import MessageBus
    from nanobot.agent.loop import AgentLoop
    from nanobot.cli.commands import _make_provider

    log("[3/6] Loading configuration...")
    config = load_config()
    log(f"  Provider name: {config.get_provider_name()}")
    log(f"  Default model: {config.agents.defaults.model}")

    log("[4/6] Creating provider...")
    provider = _make_provider(config)
    log(f"  Provider type: {type(provider).__name__}")
    log(f"  Default model: {provider.get_default_model()}")

    log("[5/6] Creating agent loop...")
    bus = MessageBus()
    agent_loop = AgentLoop(
        bus=bus,
        provider=provider,
        workspace=config.workspace_path,
        brave_api_key=config.tools.web.search.api_key or None,
        exec_config=config.tools.exec,
        restrict_to_workspace=config.tools.restrict_to_workspace,
    )
    log("  Agent loop created successfully")

    log("[6/6] Processing test message...")
    log("  Message: 'Hello, what is 2+5?'")

    async def process():
        try:
            response = await agent_loop.process_direct("Hello, what is 2+5?", "cli:debug")
            log(f"\n{'='*60}")
            log("RESPONSE RECEIVED")
            log(f"{'='*60}")
            log(f"Response object: {response}")
            log(f"Response type: {type(response)}")
            if response:
                log(f"Response content: {repr(response.content)}")
                log(f"Content length: {len(response.content) if response.content else 0}")
                log(f"\n{'='*60}")
                log("FINAL OUTPUT:")
                log(f"{'='*60}")
                print(response.content or "(empty response)")
            else:
                log("ERROR: Response is None!")
                return 1
        except Exception as e:
            log(f"\n{'='*60}")
            log("ERROR DURING PROCESSING")
            log(f"{'='*60}")
            log(f"Exception: {e}")
            log(f"Type: {type(e).__name__}")
            traceback.print_exc()
            return 1
        return 0

    exit_code = asyncio.run(process())
    sys.exit(exit_code)

except ImportError as e:
    log(f"\n{'='*60}")
    log("IMPORT ERROR")
    log(f"{'='*60}")
    log(f"Failed to import: {e}")
    traceback.print_exc()
    sys.exit(1)
except Exception as e:
    log(f"\n{'='*60}")
    log("UNEXPECTED ERROR")
    log(f"{'='*60}")
    log(f"Exception: {e}")
    log(f"Type: {type(e).__name__}")
    traceback.print_exc()
    sys.exit(1)
docker-compose.yml (new file, 31 lines)
x-common-config: &common-config
  build:
    context: .
    dockerfile: Dockerfile
  volumes:
    - ~/.nanobot:/root/.nanobot

services:
  nanobot-gateway:
    container_name: nanobot-gateway
    <<: *common-config
    command: ["gateway"]
    restart: unless-stopped
    ports:
      - 18790:18790
    deploy:
      resources:
        limits:
          cpus: '1'
          memory: 1G
        reservations:
          cpus: '0.25'
          memory: 256M

  nanobot-cli:
    <<: *common-config
    profiles:
      - cli
    command: ["status"]
    stdin_open: true
    tty: true
download_llama3.2.py (new file, 123 lines)
#!/usr/bin/env python3
"""
Download llama3.2 using a Hugging Face token - easier to use than the shell script
"""

import os
import sys
from pathlib import Path

MODEL_NAME = "meta-llama/Llama-3.2-3B-Instruct"
MODEL_DIR = Path.home() / ".local" / "models" / "llama3.2-3b-instruct"

def main():
    print("="*70)
    print("DOWNLOADING LLAMA3.2 FOR AIRLLM")
    print("="*70)
    print()
    print(f"This will download {MODEL_NAME} to:")
    print(f"  {MODEL_DIR}")
    print()
    print("After download, no tokens will be needed!")
    print()

    # Check if the model already exists
    if MODEL_DIR.exists() and (MODEL_DIR / "config.json").exists():
        print(f"✓ Model already exists at: {MODEL_DIR}")
        print("  You're all set! No download needed.")
        return 0

    # Check if huggingface_hub is installed
    try:
        from huggingface_hub import snapshot_download
    except ImportError:
        print("Installing huggingface_hub...")
        os.system("pip install -q huggingface_hub")
        try:
            from huggingface_hub import snapshot_download
        except ImportError:
            print("⚠ Error: Could not install huggingface_hub")
            print("Try: pip install huggingface_hub")
            return 1

    # Get the token - try multiple methods
    hf_token = None

    # Method 1: Command line argument
    if len(sys.argv) > 1:
        hf_token = sys.argv[1]
        print("Using token from command line argument")
    # Method 2: Environment variable
    elif os.environ.get("HF_TOKEN"):
        hf_token = os.environ.get("HF_TOKEN")
        print("Using token from HF_TOKEN environment variable")
    # Method 3: Interactive input
    else:
        print("Enter your Hugging Face token (starts with 'hf_'):")
        print("(You can also pass it as: python3 download_llama3.2.py YOUR_TOKEN)")
        print("(Or set an environment variable: export HF_TOKEN=YOUR_TOKEN)")
        print()
        hf_token = input("Token: ").strip()

    if not hf_token:
        print("⚠ Error: Token is required")
        return 1

    if not hf_token.startswith("hf_"):
        print("⚠ Warning: Token should start with 'hf_'")
        confirm = input("Continue anyway? (y/n): ").strip().lower()
        if confirm != 'y':
            return 1

    print()
    print("Downloading model (this may take a while depending on your connection)...")
    print("Model size: ~2GB")
    print()

    # Create directory
    MODEL_DIR.parent.mkdir(parents=True, exist_ok=True)

    try:
        # Download using huggingface_hub
        snapshot_download(
            repo_id=MODEL_NAME,
            local_dir=str(MODEL_DIR),
            token=hf_token,
            local_dir_use_symlinks=False
        )

        print()
        print("="*70)
        print("✓ DOWNLOAD COMPLETE!")
        print("="*70)
        print()
        print(f"Model downloaded to: {MODEL_DIR}")
        print()
        print("🎉 No tokens needed anymore - using the local model!")
        print()
        print("Your config is already set up. Test it with:")
        print("  nanobot agent -m 'Hello, what is 2+5?'")
        print()
        print("You can now delete your Hugging Face token from the config")
        print("since the model is stored locally.")

    except Exception as e:
        print()
        print("⚠ Download failed!")
        print(f"Error: {e}")
        print()
        print("Common issues:")
        print("  1. Make sure you accepted the Llama license:")
        print(f"     https://huggingface.co/{MODEL_NAME}")
        print("  2. Check your token is valid")
        print("  3. Check your internet connection")
        print()
        print("Try again with:")
        print("  python3 download_llama3.2.py YOUR_TOKEN")
        return 1

    return 0

if __name__ == "__main__":
    sys.exit(main())
download_llama3.2.sh (new file, 88 lines)
#!/bin/bash
# Download llama3.2 using your Hugging Face token

MODEL_NAME="meta-llama/Llama-3.2-3B-Instruct"
MODEL_DIR="$HOME/.local/models/llama3.2-3b-instruct"

echo "======================================================================"
echo "DOWNLOADING LLAMA3.2 FOR AIRLLM"
echo "======================================================================"
echo ""
echo "This will download $MODEL_NAME to:"
echo "  $MODEL_DIR"
echo ""
echo "After download, no tokens will be needed!"
echo ""

# Check if the model already exists
if [ -d "$MODEL_DIR" ] && [ -f "$MODEL_DIR/config.json" ]; then
    echo "✓ Model already exists at: $MODEL_DIR"
    echo "  You're all set! No download needed."
    exit 0
fi

# Check if huggingface-cli is available
if ! command -v huggingface-cli &> /dev/null; then
    echo "Installing huggingface_hub..."
    pip install -q huggingface_hub
fi

# Get the token
echo "Enter your Hugging Face token (starts with 'hf_'):"
read -s HF_TOKEN
echo ""

if [ -z "$HF_TOKEN" ]; then
    echo "⚠ Error: Token is required"
    exit 1
fi

if [[ ! "$HF_TOKEN" =~ ^hf_ ]]; then
    echo "⚠ Warning: Token should start with 'hf_'"
    read -p "Continue anyway? (y/n): " confirm
    if [ "$confirm" != "y" ]; then
        exit 1
    fi
fi

echo ""
echo "Downloading model (this may take a while depending on your connection)..."
echo "Model size: ~2GB"
echo ""

# Create directory
mkdir -p "$MODEL_DIR"

# Download using huggingface-cli
huggingface-cli download "$MODEL_NAME" \
    --local-dir "$MODEL_DIR" \
    --token "$HF_TOKEN" \
    --local-dir-use-symlinks False

if [ $? -eq 0 ]; then
    echo ""
    echo "======================================================================"
    echo "✓ DOWNLOAD COMPLETE!"
    echo "======================================================================"
    echo ""
    echo "Model downloaded to: $MODEL_DIR"
    echo ""
    echo "🎉 No tokens needed anymore - using the local model!"
    echo ""
    echo "Your config is already set up. Test it with:"
    echo "  nanobot agent -m 'Hello, what is 2+5?'"
    echo ""
    echo "You can now delete your Hugging Face token from the config"
    echo "since the model is stored locally."
else
    echo ""
    echo "⚠ Download failed. Common issues:"
    echo "  1. Make sure you accepted the Llama license:"
    echo "     https://huggingface.co/$MODEL_NAME"
    echo "  2. Check your token is valid"
    echo "  3. Check your internet connection"
    echo ""
    echo "Try again with:"
    echo "  huggingface-cli download $MODEL_NAME --local-dir $MODEL_DIR --token YOUR_TOKEN"
fi
download_llama3.2_local.sh (new file, 66 lines)
#!/bin/bash
# Download llama3.2 in Hugging Face format to a local directory (one-time token needed)

MODEL_NAME="meta-llama/Llama-3.2-3B-Instruct"
MODEL_DIR="$HOME/.local/models/llama3.2-3b-instruct"

echo "======================================================================"
echo "DOWNLOAD LLAMA3.2 FOR AIRLLM (ONE-TIME TOKEN NEEDED)"
echo "======================================================================"
echo ""
echo "This will download $MODEL_NAME to:"
echo "  $MODEL_DIR"
echo ""
echo "After download, no tokens will be needed!"
echo ""

# Check if the model already exists
if [ -d "$MODEL_DIR" ] && [ -f "$MODEL_DIR/config.json" ]; then
    echo "✓ Model already exists at: $MODEL_DIR"
    echo "  You're all set! No download needed."
    exit 0
fi

# Check if huggingface-cli is available
if ! command -v huggingface-cli &> /dev/null; then
    echo "⚠ huggingface-cli not found. Installing..."
    pip install -q huggingface_hub
fi

echo "You'll need a Hugging Face token (one-time only):"
echo "  1. Get a token: https://huggingface.co/settings/tokens"
echo "  2. Accept the license: https://huggingface.co/$MODEL_NAME"
echo ""
read -p "Enter your Hugging Face token (or press Enter to skip): " HF_TOKEN

if [ -z "$HF_TOKEN" ]; then
    echo ""
    echo "Skipping download. To download later, run:"
    echo "  huggingface-cli download $MODEL_NAME --local-dir $MODEL_DIR"
    exit 0
fi

echo ""
echo "Downloading model (this may take a while)..."
mkdir -p "$MODEL_DIR"

huggingface-cli download "$MODEL_NAME" \
    --local-dir "$MODEL_DIR" \
    --token "$HF_TOKEN" \
    --local-dir-use-symlinks False

if [ $? -eq 0 ]; then
    echo ""
    echo "✓ Model downloaded successfully!"
    echo "  Location: $MODEL_DIR"
    echo ""
    echo "🎉 No tokens needed anymore - using the local model!"
    echo ""
    echo "Test it with:"
    echo "  nanobot agent -m 'Hello, what is 2+5?'"
else
    echo ""
    echo "⚠ Download failed. You can try again with:"
    echo "  huggingface-cli download $MODEL_NAME --local-dir $MODEL_DIR --token YOUR_TOKEN"
fi
find_llama_model_files.py (new file, 77 lines)
#!/usr/bin/env python3
"""
Check if we can find llama3.2 model files that could work with AirLLM.
Looks in various locations where models might be stored.
"""

import os
from pathlib import Path

def check_directory(path, description):
    """Check if a directory exists and contains model files."""
    path_obj = Path(path)
    if not path_obj.exists():
        return False, f"{description}: Not found"

    # Look for common model files
    model_files = ['config.json', 'tokenizer.json', 'model.safetensors', 'pytorch_model.bin']
    found_files = [f for f in model_files if (path_obj / f).exists()]

    if found_files:
        return True, f"{description}: Found {len(found_files)} model files: {', '.join(found_files)}"
    else:
        # Check subdirectories
        subdirs = [d for d in path_obj.iterdir() if d.is_dir()]
        if subdirs:
            return True, f"{description}: Found {len(subdirs)} subdirectories (might contain model files)"
        return False, f"{description}: No model files found"

print("="*70)
print("SEARCHING FOR LLAMA3.2 MODEL FILES")
print("="*70)
print()

# Check common locations
locations = [
    ("~/.ollama/models", "Ollama models directory"),
    ("~/.cache/huggingface/hub", "Hugging Face cache"),
    ("~/.local/share/ollama", "Ollama data directory"),
    ("~/models", "User models directory"),
    ("/usr/local/share/ollama", "System Ollama directory"),
]

found_any = False
for path, desc in locations:
    expanded = os.path.expanduser(path)
    exists, message = check_directory(expanded, desc)
    print(f"  {message}")
    if exists:
        found_any = True
        print(f"    Path: {expanded}")

print()
print("="*70)
if found_any:
    print("OPTIONS:")
    print("="*70)
    print()
    print("1. If you found model files in Hugging Face format:")
    print("   - Use that path directly in your config (no token needed!)")
    print()
    print("2. If you only have Ollama format:")
    print("   - Ollama uses a different format and can't be used directly")
    print("   - You'd need to get the model in Hugging Face format")
    print()
    print("3. Alternative: Get model files from someone else")
    print("   - If someone has downloaded llama3.2 in HF format,")
    print("     you can copy their files and use them directly")
    print()
else:
    print("No model files found in common locations.")
    print()
    print("To use AirLLM with llama3.2 without a Hugging Face account:")
    print("  1. Get the model files from someone else (in HF format)")
    print("  2. Place them in: ~/.local/models/llama3.2-3b-instruct")
    print("  3. Your config is already set to use that path!")
    print()
nanobot/__init__.py

@@ -2,5 +2,5 @@
 nanobot - A lightweight AI agent framework
 """
 
-__version__ = "0.1.0"
+__version__ = "0.1.4"
 __logo__ = "🐈"
@@ -73,7 +73,9 @@ Skills with available="false" need dependencies installed first - you can try in
     def _get_identity(self) -> str:
         """Get the core identity section."""
         from datetime import datetime
+        import time as _time
         now = datetime.now().strftime("%Y-%m-%d %H:%M (%A)")
+        tz = _time.strftime("%Z") or "UTC"
         workspace_path = str(self.workspace.expanduser().resolve())
         system = platform.system()
         runtime = f"{'macOS' if system == 'Darwin' else system} {platform.machine()}, Python {platform.python_version()}"

@@ -88,23 +90,24 @@ You are nanobot, a helpful AI assistant. You have access to tools that allow you
 - Spawn subagents for complex background tasks
 
 ## Current Time
-{now}
+{now} ({tz})
 
 ## Runtime
 {runtime}
 
 ## Workspace
 Your workspace is at: {workspace_path}
-- Memory files: {workspace_path}/memory/MEMORY.md
 - Daily notes: {workspace_path}/memory/YYYY-MM-DD.md
+- Long-term memory: {workspace_path}/memory/MEMORY.md
+- History log: {workspace_path}/memory/HISTORY.md (grep-searchable)
 - Custom skills: {workspace_path}/skills/{{skill-name}}/SKILL.md
 
 IMPORTANT: When responding to direct questions or conversations, reply directly with your text response.
 Only use the 'message' tool when you need to send a message to a specific chat channel (like WhatsApp).
 For normal conversation, just respond with text - do not call the message tool.
 
-Always be helpful, accurate, and concise. When using tools, explain what you're doing.
-When remembering something, write to {workspace_path}/memory/MEMORY.md"""
+Always be helpful, accurate, and concise. Before calling tools, briefly tell the user what you're about to do (one short sentence in the user's language).
+When remembering something important, write to {workspace_path}/memory/MEMORY.md
+To recall past events, grep {workspace_path}/memory/HISTORY.md"""
 
     def _load_bootstrap_files(self) -> str:
         """Load all bootstrap files from workspace."""
@@ -222,14 +225,18 @@ When remembering something, write to {workspace_path}/memory/MEMORY.md"""
         Returns:
             Updated message list.
         """
-        msg: dict[str, Any] = {"role": "assistant", "content": content or ""}
+        msg: dict[str, Any] = {"role": "assistant"}
+
+        # Omit empty content — some backends reject empty text blocks
+        if content:
+            msg["content"] = content
 
         if tool_calls:
             msg["tool_calls"] = tool_calls
 
-        # Thinking models reject history without this
+        # Include reasoning content when provided (required by some thinking models)
         if reasoning_content:
             msg["reasoning_content"] = reasoning_content
 
         messages.append(msg)
         return messages
@@ -1,9 +1,12 @@
 """Agent loop: the core processing engine."""
 
 import asyncio
+from contextlib import AsyncExitStack
 import json
+import json_repair
 from pathlib import Path
-from typing import Any
+import re
+from typing import Any, Awaitable, Callable
 
 from loguru import logger
@@ -18,14 +21,15 @@ from nanobot.agent.tools.web import WebSearchTool, WebFetchTool
 from nanobot.agent.tools.message import MessageTool
 from nanobot.agent.tools.spawn import SpawnTool
 from nanobot.agent.tools.cron import CronTool
 from nanobot.agent.memory import MemoryStore
 from nanobot.agent.subagent import SubagentManager
-from nanobot.session.manager import SessionManager
+from nanobot.session.manager import Session, SessionManager
 
 
 class AgentLoop:
     """
     The agent loop is the core processing engine.
 
     It:
     1. Receives messages from the bus
     2. Builds context with history, memory, skills
@@ -33,7 +37,7 @@ class AgentLoop:
     4. Executes tool calls
     5. Sends responses back
     """
 
     def __init__(
         self,
         bus: MessageBus,
@@ -41,11 +45,15 @@
         workspace: Path,
         model: str | None = None,
         max_iterations: int = 20,
+        temperature: float = 0.7,
+        max_tokens: int = 4096,
         memory_window: int = 50,
         brave_api_key: str | None = None,
         exec_config: "ExecToolConfig | None" = None,
         cron_service: "CronService | None" = None,
         restrict_to_workspace: bool = False,
+        session_manager: SessionManager | None = None,
+        mcp_servers: dict | None = None,
     ):
         from nanobot.config.schema import ExecToolConfig
         from nanobot.cron.service import CronService
@@ -54,11 +62,14 @@
         self.workspace = workspace
         self.model = model or provider.get_default_model()
         self.max_iterations = max_iterations
+        self.temperature = temperature
+        self.max_tokens = max_tokens
         self.memory_window = memory_window
         self.brave_api_key = brave_api_key
         self.exec_config = exec_config or ExecToolConfig()
         self.cron_service = cron_service
         self.restrict_to_workspace = restrict_to_workspace
 
         self.context = ContextBuilder(workspace)
+        self.sessions = session_manager or SessionManager(workspace)
         self.tools = ToolRegistry()
@@ -67,12 +78,17 @@
             workspace=workspace,
             bus=bus,
             model=self.model,
+            temperature=self.temperature,
+            max_tokens=self.max_tokens,
             brave_api_key=brave_api_key,
             exec_config=self.exec_config,
             restrict_to_workspace=restrict_to_workspace,
         )
 
         self._running = False
+        self._mcp_servers = mcp_servers or {}
+        self._mcp_stack: AsyncExitStack | None = None
+        self._mcp_connected = False
         self._register_default_tools()
 
     def _register_default_tools(self) -> None:
@@ -107,107 +123,90 @@ class AgentLoop:
         if self.cron_service:
             self.tools.register(CronTool(self.cron_service))
 
-    async def run(self) -> None:
-        """Run the agent loop, processing messages from the bus."""
-        self._running = True
-        logger.info("Agent loop started")
-
-        while self._running:
-            try:
-                # Wait for next message
-                msg = await asyncio.wait_for(
-                    self.bus.consume_inbound(),
-                    timeout=1.0
-                )
-
-                # Process it
-                try:
-                    response = await self._process_message(msg)
-                    if response:
-                        await self.bus.publish_outbound(response)
-                except Exception as e:
-                    logger.error(f"Error processing message: {e}")
-                    # Send error response
-                    await self.bus.publish_outbound(OutboundMessage(
-                        channel=msg.channel,
-                        chat_id=msg.chat_id,
-                        content=f"Sorry, I encountered an error: {str(e)}"
-                    ))
-            except asyncio.TimeoutError:
-                continue
-
-    def stop(self) -> None:
-        """Stop the agent loop."""
-        self._running = False
-        logger.info("Agent loop stopping")
-
-    async def _process_message(self, msg: InboundMessage) -> OutboundMessage | None:
+    async def _connect_mcp(self) -> None:
+        """Connect to configured MCP servers (one-time, lazy)."""
+        if self._mcp_connected or not self._mcp_servers:
+            return
+        self._mcp_connected = True
+        from nanobot.agent.tools.mcp import connect_mcp_servers
+        self._mcp_stack = AsyncExitStack()
+        await self._mcp_stack.__aenter__()
+        await connect_mcp_servers(self._mcp_servers, self.tools, self._mcp_stack)
+
+    def _set_tool_context(self, channel: str, chat_id: str) -> None:
+        """Update context for all tools that need routing info."""
+        if message_tool := self.tools.get("message"):
+            if isinstance(message_tool, MessageTool):
+                message_tool.set_context(channel, chat_id)
+
+        if spawn_tool := self.tools.get("spawn"):
+            if isinstance(spawn_tool, SpawnTool):
+                spawn_tool.set_context(channel, chat_id)
+
+        if cron_tool := self.tools.get("cron"):
+            if isinstance(cron_tool, CronTool):
+                cron_tool.set_context(channel, chat_id)
+
+    @staticmethod
+    def _strip_think(text: str | None) -> str | None:
+        """Remove <think>…</think> blocks that some models embed in content."""
+        if not text:
+            return None
+        return re.sub(r"<think>[\s\S]*?</think>", "", text).strip() or None
+
+    @staticmethod
+    def _tool_hint(tool_calls: list) -> str:
+        """Format tool calls as a concise hint, e.g. 'web_search("query")'."""
+        def _fmt(tc):
+            val = next(iter(tc.arguments.values()), None) if tc.arguments else None
+            if not isinstance(val, str):
+                return tc.name
+            return f'{tc.name}("{val[:40]}…")' if len(val) > 40 else f'{tc.name}("{val}")'
+        return ", ".join(_fmt(tc) for tc in tool_calls)
+
+    async def _run_agent_loop(
+        self,
+        initial_messages: list[dict],
+        on_progress: Callable[[str], Awaitable[None]] | None = None,
+    ) -> tuple[str | None, list[str]]:
         """
-        Process a single inbound message.
+        Run the agent iteration loop.
 
         Args:
-            msg: The inbound message to process.
+            initial_messages: Starting messages for the LLM conversation.
+            on_progress: Optional callback to push intermediate content to the user.
 
         Returns:
-            The response message, or None if no response needed.
+            Tuple of (final_content, list_of_tools_used).
         """
-        # Handle system messages (subagent announces)
-        # The chat_id contains the original "channel:chat_id" to route back to
-        if msg.channel == "system":
-            return await self._process_system_message(msg)
-
-        preview = msg.content[:80] + "..." if len(msg.content) > 80 else msg.content
-        logger.info(f"Processing message from {msg.channel}:{msg.sender_id}: {preview}")
-
-        # Get or create session
-        session = self.sessions.get_or_create(msg.session_key)
-
-        # Update tool contexts
-        message_tool = self.tools.get("message")
-        if isinstance(message_tool, MessageTool):
-            message_tool.set_context(msg.channel, msg.chat_id)
-
-        spawn_tool = self.tools.get("spawn")
-        if isinstance(spawn_tool, SpawnTool):
-            spawn_tool.set_context(msg.channel, msg.chat_id)
-
-        cron_tool = self.tools.get("cron")
-        if isinstance(cron_tool, CronTool):
-            cron_tool.set_context(msg.channel, msg.chat_id)
-
-        # Build initial messages (use get_history for LLM-formatted messages)
-        messages = self.context.build_messages(
-            history=session.get_history(),
-            current_message=msg.content,
-            media=msg.media if msg.media else None,
-            channel=msg.channel,
-            chat_id=msg.chat_id,
-        )
-
-        # Agent loop
+        messages = initial_messages
         iteration = 0
         final_content = None
+        tools_used: list[str] = []
 
         while iteration < self.max_iterations:
            iteration += 1
 
             # Call LLM
             response = await self.provider.chat(
                 messages=messages,
                 tools=self.tools.get_definitions(),
|
||||
model=self.model
|
||||
model=self.model,
|
||||
temperature=self.temperature,
|
||||
max_tokens=self.max_tokens,
|
||||
)
|
||||
|
||||
# Handle tool calls
|
||||
|
||||
if response.has_tool_calls:
|
||||
# Add assistant message with tool calls
|
||||
if on_progress:
|
||||
clean = self._strip_think(response.content)
|
||||
await on_progress(clean or self._tool_hint(response.tool_calls))
|
||||
|
||||
tool_call_dicts = [
|
||||
{
|
||||
"id": tc.id,
|
||||
"type": "function",
|
||||
"function": {
|
||||
"name": tc.name,
|
||||
"arguments": json.dumps(tc.arguments) # Must be JSON string
|
||||
"arguments": json.dumps(tc.arguments)
|
||||
}
|
||||
}
|
||||
for tc in response.tool_calls
|
||||
@ -216,9 +215,9 @@ class AgentLoop:
|
||||
messages, response.content, tool_call_dicts,
|
||||
reasoning_content=response.reasoning_content,
|
||||
)
|
||||
|
||||
# Execute tools
|
||||
|
||||
for tool_call in response.tool_calls:
|
||||
tools_used.append(tool_call.name)
|
||||
args_str = json.dumps(tool_call.arguments, ensure_ascii=False)
|
||||
logger.info(f"Tool call: {tool_call.name}({args_str[:200]})")
|
||||
result = await self.tools.execute(tool_call.name, tool_call.arguments)
|
||||
@ -226,20 +225,130 @@ class AgentLoop:
|
||||
messages, tool_call.id, tool_call.name, result
|
||||
)
|
||||
else:
|
||||
# No tool calls, we're done
|
||||
final_content = response.content
|
||||
final_content = self._strip_think(response.content)
|
||||
break
|
||||
|
||||
return final_content, tools_used
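Note on the tool-call plumbing above: `arguments` in each entry of `tool_call_dicts` must be a JSON string, not a dict, which is why the loop runs `json.dumps` on every call's arguments. A minimal sketch of the resulting OpenAI-style entry (the id and argument values are illustrative):

```python
import json

# One entry of tool_call_dicts, as attached to the assistant message:
tool_call = {
    "id": "call_0",  # illustrative id
    "type": "function",
    "function": {
        "name": "web_search",
        "arguments": json.dumps({"query": "nanobot v0.1.4 release notes"}),
    },
}
assert isinstance(tool_call["function"]["arguments"], str)  # JSON string, not a dict
```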

    async def run(self) -> None:
        """Run the agent loop, processing messages from the bus."""
        self._running = True
        await self._connect_mcp()
        logger.info("Agent loop started")

        while self._running:
            try:
                msg = await asyncio.wait_for(
                    self.bus.consume_inbound(),
                    timeout=1.0
                )
                try:
                    response = await self._process_message(msg)
                    if response:
                        await self.bus.publish_outbound(response)
                except Exception as e:
                    logger.error(f"Error processing message: {e}")
                    await self.bus.publish_outbound(OutboundMessage(
                        channel=msg.channel,
                        chat_id=msg.chat_id,
                        content=f"Sorry, I encountered an error: {str(e)}"
                    ))
            except asyncio.TimeoutError:
                continue

    async def close_mcp(self) -> None:
        """Close MCP connections."""
        if self._mcp_stack:
            try:
                await self._mcp_stack.aclose()
            except (RuntimeError, BaseExceptionGroup):
                pass  # MCP SDK cancel scope cleanup is noisy but harmless
            self._mcp_stack = None

    def stop(self) -> None:
        """Stop the agent loop."""
        self._running = False
        logger.info("Agent loop stopping")

    async def _process_message(
        self,
        msg: InboundMessage,
        session_key: str | None = None,
        on_progress: Callable[[str], Awaitable[None]] | None = None,
    ) -> OutboundMessage | None:
        """
        Process a single inbound message.

        Args:
            msg: The inbound message to process.
            session_key: Override session key (used by process_direct).
            on_progress: Optional callback for intermediate output (defaults to bus publish).

        Returns:
            The response message, or None if no response needed.
        """
        # System messages route back via chat_id ("channel:chat_id")
        if msg.channel == "system":
            return await self._process_system_message(msg)

        preview = msg.content[:80] + "..." if len(msg.content) > 80 else msg.content
        logger.info(f"Processing message from {msg.channel}:{msg.sender_id}: {preview}")

        key = session_key or msg.session_key
        session = self.sessions.get_or_create(key)

        # Handle slash commands
        cmd = msg.content.strip().lower()
        if cmd == "/new":
            # Capture messages before clearing (avoid race condition with background task)
            messages_to_archive = session.messages.copy()
            session.clear()
            self.sessions.save(session)
            self.sessions.invalidate(session.key)

            async def _consolidate_and_cleanup():
                temp_session = Session(key=session.key)
                temp_session.messages = messages_to_archive
                await self._consolidate_memory(temp_session, archive_all=True)

            asyncio.create_task(_consolidate_and_cleanup())
            return OutboundMessage(channel=msg.channel, chat_id=msg.chat_id,
                                   content="New session started. Memory consolidation in progress.")
        if cmd == "/help":
            return OutboundMessage(channel=msg.channel, chat_id=msg.chat_id,
                                   content="🐈 nanobot commands:\n/new — Start a new conversation\n/help — Show available commands")

        if len(session.messages) > self.memory_window:
            asyncio.create_task(self._consolidate_memory(session))

        self._set_tool_context(msg.channel, msg.chat_id)
        initial_messages = self.context.build_messages(
            history=session.get_history(max_messages=self.memory_window),
            current_message=msg.content,
            media=msg.media if msg.media else None,
            channel=msg.channel,
            chat_id=msg.chat_id,
        )

        async def _bus_progress(content: str) -> None:
            await self.bus.publish_outbound(OutboundMessage(
                channel=msg.channel, chat_id=msg.chat_id, content=content,
                metadata=msg.metadata or {},
            ))

        final_content, tools_used = await self._run_agent_loop(
            initial_messages, on_progress=on_progress or _bus_progress,
        )

        if final_content is None:
            final_content = "I've completed processing but have no response to give."

        # Log response preview
        preview = final_content[:120] + "..." if len(final_content) > 120 else final_content
        logger.info(f"Response to {msg.channel}:{msg.sender_id}: {preview}")

        # Save to session
        session.add_message("user", msg.content)
        session.add_message("assistant", final_content)
        session.add_message("assistant", final_content,
                            tools_used=tools_used if tools_used else None)
        self.sessions.save(session)

        return OutboundMessage(
@@ -268,76 +377,20 @@ class AgentLoop:
        origin_channel = "cli"
        origin_chat_id = msg.chat_id

        # Use the origin session for context
        session_key = f"{origin_channel}:{origin_chat_id}"
        session = self.sessions.get_or_create(session_key)

        # Update tool contexts
        message_tool = self.tools.get("message")
        if isinstance(message_tool, MessageTool):
            message_tool.set_context(origin_channel, origin_chat_id)

        spawn_tool = self.tools.get("spawn")
        if isinstance(spawn_tool, SpawnTool):
            spawn_tool.set_context(origin_channel, origin_chat_id)

        cron_tool = self.tools.get("cron")
        if isinstance(cron_tool, CronTool):
            cron_tool.set_context(origin_channel, origin_chat_id)

        # Build messages with the announce content
        messages = self.context.build_messages(
            history=session.get_history(),
        self._set_tool_context(origin_channel, origin_chat_id)
        initial_messages = self.context.build_messages(
            history=session.get_history(max_messages=self.memory_window),
            current_message=msg.content,
            channel=origin_channel,
            chat_id=origin_chat_id,
        )

        # Agent loop (limited for announce handling)
        iteration = 0
        final_content = None

        while iteration < self.max_iterations:
            iteration += 1

            response = await self.provider.chat(
                messages=messages,
                tools=self.tools.get_definitions(),
                model=self.model
            )

            if response.has_tool_calls:
                tool_call_dicts = [
                    {
                        "id": tc.id,
                        "type": "function",
                        "function": {
                            "name": tc.name,
                            "arguments": json.dumps(tc.arguments)
                        }
                    }
                    for tc in response.tool_calls
                ]
                messages = self.context.add_assistant_message(
                    messages, response.content, tool_call_dicts,
                    reasoning_content=response.reasoning_content,
                )

                for tool_call in response.tool_calls:
                    args_str = json.dumps(tool_call.arguments, ensure_ascii=False)
                    logger.info(f"Tool call: {tool_call.name}({args_str[:200]})")
                    result = await self.tools.execute(tool_call.name, tool_call.arguments)
                    messages = self.context.add_tool_result(
                        messages, tool_call.id, tool_call.name, result
                    )
            else:
                final_content = response.content
                break

        final_content, _ = await self._run_agent_loop(initial_messages)

        if final_content is None:
            final_content = "Background task completed."

        # Save to session (mark as system message in history)
        session.add_message("user", f"[System: {msg.sender_id}] {msg.content}")
        session.add_message("assistant", final_content)
        self.sessions.save(session)
@@ -348,25 +401,113 @@ class AgentLoop:
            content=final_content
        )

    async def _consolidate_memory(self, session, archive_all: bool = False) -> None:
        """Consolidate old messages into MEMORY.md + HISTORY.md.

        Args:
            archive_all: If True, clear all messages and reset session (for /new command).
                If False, only write to files without modifying session.
        """
        memory = MemoryStore(self.workspace)

        if archive_all:
            old_messages = session.messages
            keep_count = 0
            logger.info(f"Memory consolidation (archive_all): {len(session.messages)} total messages archived")
        else:
            keep_count = self.memory_window // 2
            if len(session.messages) <= keep_count:
                logger.debug(f"Session {session.key}: No consolidation needed (messages={len(session.messages)}, keep={keep_count})")
                return

            messages_to_process = len(session.messages) - session.last_consolidated
            if messages_to_process <= 0:
                logger.debug(f"Session {session.key}: No new messages to consolidate (last_consolidated={session.last_consolidated}, total={len(session.messages)})")
                return

            old_messages = session.messages[session.last_consolidated:-keep_count]
            if not old_messages:
                return
            logger.info(f"Memory consolidation started: {len(session.messages)} total, {len(old_messages)} new to consolidate, {keep_count} keep")

        lines = []
        for m in old_messages:
            if not m.get("content"):
                continue
            tools = f" [tools: {', '.join(m['tools_used'])}]" if m.get("tools_used") else ""
            lines.append(f"[{m.get('timestamp', '?')[:16]}] {m['role'].upper()}{tools}: {m['content']}")
        conversation = "\n".join(lines)
        current_memory = memory.read_long_term()

        prompt = f"""You are a memory consolidation agent. Process this conversation and return a JSON object with exactly two keys:

1. "history_entry": A paragraph (2-5 sentences) summarizing the key events/decisions/topics. Start with a timestamp like [YYYY-MM-DD HH:MM]. Include enough detail to be useful when found by grep search later.

2. "memory_update": The updated long-term memory content. Add any new facts: user location, preferences, personal info, habits, project context, technical decisions, tools/services used. If nothing new, return the existing content unchanged.

## Current Long-term Memory
{current_memory or "(empty)"}

## Conversation to Process
{conversation}

Respond with ONLY valid JSON, no markdown fences."""

        try:
            response = await self.provider.chat(
                messages=[
                    {"role": "system", "content": "You are a memory consolidation agent. Respond only with valid JSON."},
                    {"role": "user", "content": prompt},
                ],
                model=self.model,
            )
            text = (response.content or "").strip()
            if not text:
                logger.warning("Memory consolidation: LLM returned empty response, skipping")
                return
            if text.startswith("```"):
                text = text.split("\n", 1)[-1].rsplit("```", 1)[0].strip()
            result = json_repair.loads(text)
            if not isinstance(result, dict):
                logger.warning(f"Memory consolidation: unexpected response type, skipping. Response: {text[:200]}")
                return

            if entry := result.get("history_entry"):
                memory.append_history(entry)
            if update := result.get("memory_update"):
                if update != current_memory:
                    memory.write_long_term(update)

            if archive_all:
                session.last_consolidated = 0
            else:
                session.last_consolidated = len(session.messages) - keep_count
            logger.info(f"Memory consolidation done: {len(session.messages)} messages, last_consolidated={session.last_consolidated}")
        except Exception as e:
            logger.error(f"Memory consolidation failed: {e}")

    async def process_direct(
        self,
        content: str,
        session_key: str = "cli:direct",
        channel: str = "cli",
        chat_id: str = "direct",
        on_progress: Callable[[str], Awaitable[None]] | None = None,
    ) -> str:
        """
        Process a message directly (for CLI or cron usage).

        Args:
            content: The message content.
            session_key: Session identifier.
            channel: Source channel (for context).
            chat_id: Source chat ID (for context).
            session_key: Session identifier (overrides channel:chat_id for session lookup).
            channel: Source channel (for tool context routing).
            chat_id: Source chat ID (for tool context routing).
            on_progress: Optional callback for intermediate output.

        Returns:
            The agent's response.
        """
        await self._connect_mcp()
        msg = InboundMessage(
            channel=channel,
            sender_id="user",
@@ -374,5 +515,5 @@ class AgentLoop:
            content=content
        )

        response = await self._process_message(msg)
        response = await self._process_message(msg, session_key=session_key, on_progress=on_progress)
        return response.content if response else ""
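A minimal driver for `process_direct`, assuming an already-constructed `AgentLoop` instance (provider and workspace wiring elided):

```python
async def demo(loop) -> None:  # loop: an AgentLoop built elsewhere
    async def show(chunk: str) -> None:  # on_progress must be awaitable
        print(f"[progress] {chunk}")

    reply = await loop.process_direct(
        "Summarize yesterday's activity",
        session_key="cli:demo",  # overrides the default "cli:direct"
        on_progress=show,
    )
    print(reply)
```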

@@ -1,109 +1,30 @@
"""Memory system for persistent agent memory."""

from pathlib import Path
from datetime import datetime

from nanobot.utils.helpers import ensure_dir, today_date
from nanobot.utils.helpers import ensure_dir


class MemoryStore:
    """
    Memory system for the agent.

    Supports daily notes (memory/YYYY-MM-DD.md) and long-term memory (MEMORY.md).
    """

    """Two-layer memory: MEMORY.md (long-term facts) + HISTORY.md (grep-searchable log)."""

    def __init__(self, workspace: Path):
        self.workspace = workspace
        self.memory_dir = ensure_dir(workspace / "memory")
        self.memory_file = self.memory_dir / "MEMORY.md"

    def get_today_file(self) -> Path:
        """Get path to today's memory file."""
        return self.memory_dir / f"{today_date()}.md"

    def read_today(self) -> str:
        """Read today's memory notes."""
        today_file = self.get_today_file()
        if today_file.exists():
            return today_file.read_text(encoding="utf-8")
        return ""

    def append_today(self, content: str) -> None:
        """Append content to today's memory notes."""
        today_file = self.get_today_file()

        if today_file.exists():
            existing = today_file.read_text(encoding="utf-8")
            content = existing + "\n" + content
        else:
            # Add header for new day
            header = f"# {today_date()}\n\n"
            content = header + content

        today_file.write_text(content, encoding="utf-8")

        self.history_file = self.memory_dir / "HISTORY.md"

    def read_long_term(self) -> str:
        """Read long-term memory (MEMORY.md)."""
        if self.memory_file.exists():
            return self.memory_file.read_text(encoding="utf-8")
        return ""


    def write_long_term(self, content: str) -> None:
        """Write to long-term memory (MEMORY.md)."""
        self.memory_file.write_text(content, encoding="utf-8")

    def get_recent_memories(self, days: int = 7) -> str:
        """
        Get memories from the last N days.

        Args:
            days: Number of days to look back.

        Returns:
            Combined memory content.
        """
        from datetime import timedelta

        memories = []
        today = datetime.now().date()

        for i in range(days):
            date = today - timedelta(days=i)
            date_str = date.strftime("%Y-%m-%d")
            file_path = self.memory_dir / f"{date_str}.md"

            if file_path.exists():
                content = file_path.read_text(encoding="utf-8")
                memories.append(content)

        return "\n\n---\n\n".join(memories)

    def list_memory_files(self) -> list[Path]:
        """List all memory files sorted by date (newest first)."""
        if not self.memory_dir.exists():
            return []

        files = list(self.memory_dir.glob("????-??-??.md"))
        return sorted(files, reverse=True)


    def append_history(self, entry: str) -> None:
        with open(self.history_file, "a", encoding="utf-8") as f:
            f.write(entry.rstrip() + "\n\n")

    def get_memory_context(self) -> str:
        """
        Get memory context for the agent.

        Returns:
            Formatted memory context including long-term and recent memories.
        """
        parts = []

        # Long-term memory
        long_term = self.read_long_term()
        if long_term:
            parts.append("## Long-term Memory\n" + long_term)

        # Today's notes
        today = self.read_today()
        if today:
            parts.append("## Today's Notes\n" + today)

        return "\n\n".join(parts) if parts else ""
        return f"## Long-term Memory\n{long_term}" if long_term else ""

@@ -167,10 +167,10 @@ class SkillsLoader:
        return content

    def _parse_nanobot_metadata(self, raw: str) -> dict:
        """Parse nanobot metadata JSON from frontmatter."""
        """Parse skill metadata JSON from frontmatter (supports nanobot and openclaw keys)."""
        try:
            data = json.loads(raw)
            return data.get("nanobot", {}) if isinstance(data, dict) else {}
            return data.get("nanobot", data.get("openclaw", {})) if isinstance(data, dict) else {}
        except (json.JSONDecodeError, TypeError):
            return {}


@@ -12,7 +12,7 @@ from nanobot.bus.events import InboundMessage
from nanobot.bus.queue import MessageBus
from nanobot.providers.base import LLMProvider
from nanobot.agent.tools.registry import ToolRegistry
from nanobot.agent.tools.filesystem import ReadFileTool, WriteFileTool, ListDirTool
from nanobot.agent.tools.filesystem import ReadFileTool, WriteFileTool, EditFileTool, ListDirTool
from nanobot.agent.tools.shell import ExecTool
from nanobot.agent.tools.web import WebSearchTool, WebFetchTool

@@ -32,6 +32,8 @@ class SubagentManager:
        workspace: Path,
        bus: MessageBus,
        model: str | None = None,
        temperature: float = 0.7,
        max_tokens: int = 4096,
        brave_api_key: str | None = None,
        exec_config: "ExecToolConfig | None" = None,
        restrict_to_workspace: bool = False,
@@ -41,6 +43,8 @@ class SubagentManager:
        self.workspace = workspace
        self.bus = bus
        self.model = model or provider.get_default_model()
        self.temperature = temperature
        self.max_tokens = max_tokens
        self.brave_api_key = brave_api_key
        self.exec_config = exec_config or ExecToolConfig()
        self.restrict_to_workspace = restrict_to_workspace
@@ -101,6 +105,7 @@ class SubagentManager:
        allowed_dir = self.workspace if self.restrict_to_workspace else None
        tools.register(ReadFileTool(allowed_dir=allowed_dir))
        tools.register(WriteFileTool(allowed_dir=allowed_dir))
        tools.register(EditFileTool(allowed_dir=allowed_dir))
        tools.register(ListDirTool(allowed_dir=allowed_dir))
        tools.register(ExecTool(
            working_dir=str(self.workspace),
@@ -129,6 +134,8 @@ class SubagentManager:
            messages=messages,
            tools=tools.get_definitions(),
            model=self.model,
            temperature=self.temperature,
            max_tokens=self.max_tokens,
        )

        if response.has_tool_calls:
@@ -210,12 +217,17 @@ Summarize this naturally for the user. Keep it brief (1-2 sentences). Do not men

    def _build_subagent_prompt(self, task: str) -> str:
        """Build a focused system prompt for the subagent."""
        from datetime import datetime
        import time as _time
        now = datetime.now().strftime("%Y-%m-%d %H:%M (%A)")
        tz = _time.strftime("%Z") or "UTC"

        return f"""# Subagent

You are a subagent spawned by the main agent to complete a specific task.
## Current Time
{now} ({tz})

## Your Task
{task}
You are a subagent spawned by the main agent to complete a specific task.

## Rules
1. Stay focused - complete only the assigned task, nothing else
@@ -236,6 +248,7 @@ You are a subagent spawned by the main agent to complete a specific task.

## Workspace
Your workspace is at: {self.workspace}
Skills are available at: {self.workspace}/skills/ (read SKILL.md files as needed)

When you have completed the task, provide a clear summary of your findings or actions."""


@@ -50,6 +50,14 @@ class CronTool(Tool):
                    "type": "string",
                    "description": "Cron expression like '0 9 * * *' (for scheduled tasks)"
                },
                "tz": {
                    "type": "string",
                    "description": "IANA timezone for cron expressions (e.g. 'America/Vancouver')"
                },
                "at": {
                    "type": "string",
                    "description": "ISO datetime for one-time execution (e.g. '2026-02-12T10:30:00')"
                },
                "job_id": {
                    "type": "string",
                    "description": "Job ID (for remove)"
@@ -64,30 +72,54 @@ class CronTool(Tool):
        message: str = "",
        every_seconds: int | None = None,
        cron_expr: str | None = None,
        tz: str | None = None,
        at: str | None = None,
        job_id: str | None = None,
        **kwargs: Any
    ) -> str:
        if action == "add":
            return self._add_job(message, every_seconds, cron_expr)
            return self._add_job(message, every_seconds, cron_expr, tz, at)
        elif action == "list":
            return self._list_jobs()
        elif action == "remove":
            return self._remove_job(job_id)
        return f"Unknown action: {action}"

    def _add_job(self, message: str, every_seconds: int | None, cron_expr: str | None) -> str:
    def _add_job(
        self,
        message: str,
        every_seconds: int | None,
        cron_expr: str | None,
        tz: str | None,
        at: str | None,
    ) -> str:
        if not message:
            return "Error: message is required for add"
        if not self._channel or not self._chat_id:
            return "Error: no session context (channel/chat_id)"
        if tz and not cron_expr:
            return "Error: tz can only be used with cron_expr"
        if tz:
            from zoneinfo import ZoneInfo
            try:
                ZoneInfo(tz)
            except (KeyError, Exception):
                return f"Error: unknown timezone '{tz}'"

        # Build schedule
        delete_after = False
        if every_seconds:
            schedule = CronSchedule(kind="every", every_ms=every_seconds * 1000)
        elif cron_expr:
            schedule = CronSchedule(kind="cron", expr=cron_expr)
            schedule = CronSchedule(kind="cron", expr=cron_expr, tz=tz)
        elif at:
            from datetime import datetime
            dt = datetime.fromisoformat(at)
            at_ms = int(dt.timestamp() * 1000)
            schedule = CronSchedule(kind="at", at_ms=at_ms)
            delete_after = True
        else:
            return "Error: either every_seconds or cron_expr is required"
            return "Error: either every_seconds, cron_expr, or at is required"

        job = self._cron.add_job(
            name=message[:30],
@@ -96,6 +128,7 @@ class CronTool(Tool):
            deliver=True,
            channel=self._channel,
            to=self._chat_id,
            delete_after_run=delete_after,
        )
        return f"Created job '{job.name}' (id: {job.id})"

80
nanobot/agent/tools/mcp.py
Normal file
@@ -0,0 +1,80 @@
"""MCP client: connects to MCP servers and wraps their tools as native nanobot tools."""

from contextlib import AsyncExitStack
from typing import Any

from loguru import logger

from nanobot.agent.tools.base import Tool
from nanobot.agent.tools.registry import ToolRegistry


class MCPToolWrapper(Tool):
    """Wraps a single MCP server tool as a nanobot Tool."""

    def __init__(self, session, server_name: str, tool_def):
        self._session = session
        self._original_name = tool_def.name
        self._name = f"mcp_{server_name}_{tool_def.name}"
        self._description = tool_def.description or tool_def.name
        self._parameters = tool_def.inputSchema or {"type": "object", "properties": {}}

    @property
    def name(self) -> str:
        return self._name

    @property
    def description(self) -> str:
        return self._description

    @property
    def parameters(self) -> dict[str, Any]:
        return self._parameters

    async def execute(self, **kwargs: Any) -> str:
        from mcp import types
        result = await self._session.call_tool(self._original_name, arguments=kwargs)
        parts = []
        for block in result.content:
            if isinstance(block, types.TextContent):
                parts.append(block.text)
            else:
                parts.append(str(block))
        return "\n".join(parts) or "(no output)"


async def connect_mcp_servers(
    mcp_servers: dict, registry: ToolRegistry, stack: AsyncExitStack
) -> None:
    """Connect to configured MCP servers and register their tools."""
    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    for name, cfg in mcp_servers.items():
        try:
            if cfg.command:
                params = StdioServerParameters(
                    command=cfg.command, args=cfg.args, env=cfg.env or None
                )
                read, write = await stack.enter_async_context(stdio_client(params))
            elif cfg.url:
                from mcp.client.streamable_http import streamable_http_client
                read, write, _ = await stack.enter_async_context(
                    streamable_http_client(cfg.url)
                )
            else:
                logger.warning(f"MCP server '{name}': no command or url configured, skipping")
                continue

            session = await stack.enter_async_context(ClientSession(read, write))
            await session.initialize()

            tools = await session.list_tools()
            for tool_def in tools.tools:
                wrapper = MCPToolWrapper(session, name, tool_def)
                registry.register(wrapper)
                logger.debug(f"MCP: registered tool '{wrapper.name}' from server '{name}'")

            logger.info(f"MCP server '{name}': connected, {len(tools.tools)} tools registered")
        except Exception as e:
            logger.error(f"MCP server '{name}': failed to connect: {e}")

@@ -52,6 +52,11 @@ class MessageTool(Tool):
                "chat_id": {
                    "type": "string",
                    "description": "Optional: target chat/user ID"
                },
                "media": {
                    "type": "array",
                    "items": {"type": "string"},
                    "description": "Optional: list of file paths to attach (images, audio, documents)"
                }
            },
            "required": ["content"]
@@ -62,6 +67,7 @@ class MessageTool(Tool):
        content: str,
        channel: str | None = None,
        chat_id: str | None = None,
        media: list[str] | None = None,
        **kwargs: Any
    ) -> str:
        channel = channel or self._default_channel
@@ -76,11 +82,13 @@ class MessageTool(Tool):
        msg = OutboundMessage(
            channel=channel,
            chat_id=chat_id,
            content=content
            content=content,
            media=media or []
        )

        try:
            await self._send_callback(msg)
            return f"Message sent to {channel}:{chat_id}"
            media_info = f" with {len(media)} attachments" if media else ""
            return f"Message sent to {channel}:{chat_id}{media_info}"
        except Exception as e:
            return f"Error sending message: {str(e)}"

@@ -137,8 +137,15 @@ class DingTalkChannel(BaseChannel):

            logger.info("DingTalk bot started with Stream Mode")

            # client.start() is an async infinite loop handling the websocket connection
            await self._client.start()
            # Reconnect loop: restart stream if SDK exits or crashes
            while self._running:
                try:
                    await self._client.start()
                except Exception as e:
                    logger.warning(f"DingTalk stream error: {e}")
                if self._running:
                    logger.info("Reconnecting DingTalk stream in 5 seconds...")
                    await asyncio.sleep(5)

        except Exception as e:
            logger.exception(f"Failed to start DingTalk channel: {e}")

@@ -39,6 +39,53 @@ MSG_TYPE_MAP = {
}


def _extract_post_text(content_json: dict) -> str:
    """Extract plain text from Feishu post (rich text) message content.

    Supports two formats:
    1. Direct format: {"title": "...", "content": [...]}
    2. Localized format: {"zh_cn": {"title": "...", "content": [...]}}
    """
    def extract_from_lang(lang_content: dict) -> str | None:
        if not isinstance(lang_content, dict):
            return None
        title = lang_content.get("title", "")
        content_blocks = lang_content.get("content", [])
        if not isinstance(content_blocks, list):
            return None
        text_parts = []
        if title:
            text_parts.append(title)
        for block in content_blocks:
            if not isinstance(block, list):
                continue
            for element in block:
                if isinstance(element, dict):
                    tag = element.get("tag")
                    if tag == "text":
                        text_parts.append(element.get("text", ""))
                    elif tag == "a":
                        text_parts.append(element.get("text", ""))
                    elif tag == "at":
                        text_parts.append(f"@{element.get('user_name', 'user')}")
        return " ".join(text_parts).strip() if text_parts else None

    # Try direct format first
    if "content" in content_json:
        result = extract_from_lang(content_json)
        if result:
            return result

    # Try localized format
    for lang_key in ("zh_cn", "en_us", "ja_jp"):
        lang_content = content_json.get(lang_key)
        result = extract_from_lang(lang_content)
        if result:
            return result

    return ""


class FeishuChannel(BaseChannel):
    """
    Feishu/Lark channel using WebSocket long connection.
@@ -98,12 +145,15 @@ class FeishuChannel(BaseChannel):
            log_level=lark.LogLevel.INFO
        )

        # Start WebSocket client in a separate thread
        # Start WebSocket client in a separate thread with reconnect loop
        def run_ws():
            try:
                self._ws_client.start()
            except Exception as e:
                logger.error(f"Feishu WebSocket error: {e}")
            while self._running:
                try:
                    self._ws_client.start()
                except Exception as e:
                    logger.warning(f"Feishu WebSocket error: {e}")
                    if self._running:
                        import time; time.sleep(5)

        self._ws_thread = threading.Thread(target=run_ws, daemon=True)
        self._ws_thread.start()
@@ -163,6 +213,10 @@ class FeishuChannel(BaseChannel):
        re.MULTILINE,
    )

    _HEADING_RE = re.compile(r"^(#{1,6})\s+(.+)$", re.MULTILINE)

    _CODE_BLOCK_RE = re.compile(r"(```[\s\S]*?```)", re.MULTILINE)

    @staticmethod
    def _parse_md_table(table_text: str) -> dict | None:
        """Parse a markdown table into a Feishu table element."""
@@ -182,17 +236,52 @@ class FeishuChannel(BaseChannel):
        }

    def _build_card_elements(self, content: str) -> list[dict]:
        """Split content into markdown + table elements for Feishu card."""
        """Split content into div/markdown + table elements for Feishu card."""
        elements, last_end = [], 0
        for m in self._TABLE_RE.finditer(content):
            before = content[last_end:m.start()].strip()
            if before:
                elements.append({"tag": "markdown", "content": before})
            before = content[last_end:m.start()]
            if before.strip():
                elements.extend(self._split_headings(before))
            elements.append(self._parse_md_table(m.group(1)) or {"tag": "markdown", "content": m.group(1)})
            last_end = m.end()
        remaining = content[last_end:].strip()
        remaining = content[last_end:]
        if remaining.strip():
            elements.extend(self._split_headings(remaining))
        return elements or [{"tag": "markdown", "content": content}]

    def _split_headings(self, content: str) -> list[dict]:
        """Split content by headings, converting headings to div elements."""
        protected = content
        code_blocks = []
        for m in self._CODE_BLOCK_RE.finditer(content):
            code_blocks.append(m.group(1))
            protected = protected.replace(m.group(1), f"\x00CODE{len(code_blocks)-1}\x00", 1)

        elements = []
        last_end = 0
        for m in self._HEADING_RE.finditer(protected):
            before = protected[last_end:m.start()].strip()
            if before:
                elements.append({"tag": "markdown", "content": before})
            level = len(m.group(1))
            text = m.group(2).strip()
            elements.append({
                "tag": "div",
                "text": {
                    "tag": "lark_md",
                    "content": f"**{text}**",
                },
            })
            last_end = m.end()
        remaining = protected[last_end:].strip()
        if remaining:
            elements.append({"tag": "markdown", "content": remaining})

        for i, cb in enumerate(code_blocks):
            for el in elements:
                if el.get("tag") == "markdown":
                    el["content"] = el["content"].replace(f"\x00CODE{i}\x00", cb)

        return elements or [{"tag": "markdown", "content": content}]

    async def send(self, msg: OutboundMessage) -> None:
@@ -284,6 +373,12 @@ class FeishuChannel(BaseChannel):
                content = json.loads(message.content).get("text", "")
            except json.JSONDecodeError:
                content = message.content or ""
        elif msg_type == "post":
            try:
                content_json = json.loads(message.content)
                content = _extract_post_text(content_json)
            except (json.JSONDecodeError, TypeError):
                content = message.content or ""
        else:
            content = MSG_TYPE_MAP.get(msg_type, f"[{msg_type}]")


@@ -3,7 +3,7 @@
from __future__ import annotations

import asyncio
from typing import Any, TYPE_CHECKING
from typing import Any

from loguru import logger

@@ -12,9 +12,6 @@ from nanobot.bus.queue import MessageBus
from nanobot.channels.base import BaseChannel
from nanobot.config.schema import Config

if TYPE_CHECKING:
    from nanobot.session.manager import SessionManager


class ChannelManager:
    """
@@ -26,10 +23,9 @@ class ChannelManager:
    - Route outbound messages
    """

    def __init__(self, config: Config, bus: MessageBus, session_manager: "SessionManager | None" = None):
    def __init__(self, config: Config, bus: MessageBus):
        self.config = config
        self.bus = bus
        self.session_manager = session_manager
        self.channels: dict[str, BaseChannel] = {}
        self._dispatch_task: asyncio.Task | None = None

@@ -46,7 +42,6 @@ class ChannelManager:
                    self.config.channels.telegram,
                    self.bus,
                    groq_api_key=self.config.providers.groq.api_key,
                    session_manager=self.session_manager,
                )
                logger.info("Telegram channel enabled")
            except ImportError as e:

@@ -75,12 +75,15 @@ class QQChannel(BaseChannel):
        logger.info("QQ bot started (C2C private message)")

    async def _run_bot(self) -> None:
        """Run the bot connection."""
        try:
            await self._client.start(appid=self.config.app_id, secret=self.config.secret)
        except Exception as e:
            logger.error(f"QQ auth failed, check AppID/Secret at q.qq.com: {e}")
            self._running = False
        """Run the bot connection with auto-reconnect."""
        while self._running:
            try:
                await self._client.start(appid=self.config.app_id, secret=self.config.secret)
            except Exception as e:
                logger.warning(f"QQ bot error: {e}")
                if self._running:
                    logger.info("Reconnecting QQ bot in 5 seconds...")
                    await asyncio.sleep(5)

    async def stop(self) -> None:
        """Stop the QQ bot."""

@@ -10,6 +10,8 @@ from slack_sdk.socket_mode.request import SocketModeRequest
from slack_sdk.socket_mode.response import SocketModeResponse
from slack_sdk.web.async_client import AsyncWebClient

from slackify_markdown import slackify_markdown

from nanobot.bus.events import OutboundMessage
from nanobot.bus.queue import MessageBus
from nanobot.channels.base import BaseChannel
@@ -84,7 +86,7 @@ class SlackChannel(BaseChannel):
            use_thread = thread_ts and channel_type != "im"
            await self._web_client.chat_postMessage(
                channel=msg.chat_id,
                text=msg.content or "",
                text=self._to_mrkdwn(msg.content),
                thread_ts=thread_ts if use_thread else None,
            )
        except Exception as e:
@@ -150,13 +152,15 @@ class SlackChannel(BaseChannel):

        text = self._strip_bot_mention(text)

        thread_ts = event.get("thread_ts") or event.get("ts")
        thread_ts = event.get("thread_ts")
        if self.config.reply_in_thread and not thread_ts:
            thread_ts = event.get("ts")
        # Add :eyes: reaction to the triggering message (best-effort)
        try:
            if self._web_client and event.get("ts"):
                await self._web_client.reactions_add(
                    channel=chat_id,
                    name="eyes",
                    name=self.config.react_emoji,
                    timestamp=event.get("ts"),
                )
        except Exception as e:
@@ -203,3 +207,31 @@ class SlackChannel(BaseChannel):
        if not text or not self._bot_user_id:
            return text
        return re.sub(rf"<@{re.escape(self._bot_user_id)}>\s*", "", text).strip()

    _TABLE_RE = re.compile(r"(?m)^\|.*\|$(?:\n\|[\s:|-]*\|$)(?:\n\|.*\|$)*")

    @classmethod
    def _to_mrkdwn(cls, text: str) -> str:
        """Convert Markdown to Slack mrkdwn, including tables."""
        if not text:
            return ""
        text = cls._TABLE_RE.sub(cls._convert_table, text)
        return slackify_markdown(text)

    @staticmethod
    def _convert_table(match: re.Match) -> str:
        """Convert a Markdown table to a Slack-readable list."""
        lines = [ln.strip() for ln in match.group(0).strip().splitlines() if ln.strip()]
        if len(lines) < 2:
            return match.group(0)
        headers = [h.strip() for h in lines[0].strip("|").split("|")]
        start = 2 if re.fullmatch(r"[|\s:\-]+", lines[1]) else 1
        rows: list[str] = []
        for line in lines[start:]:
            cells = [c.strip() for c in line.strip("|").split("|")]
            cells = (cells + [""] * len(headers))[: len(headers)]
            parts = [f"**{headers[i]}**: {cells[i]}" for i in range(len(headers)) if cells[i]]
            if parts:
                rows.append(" · ".join(parts))
        return "\n".join(rows)


@@ -4,20 +4,16 @@ from __future__ import annotations

import asyncio
import re
from typing import TYPE_CHECKING

from loguru import logger
from telegram import BotCommand, Update
from telegram.ext import Application, CommandHandler, MessageHandler, filters, ContextTypes
from telegram.request import HTTPXRequest

from nanobot.bus.events import OutboundMessage
from nanobot.bus.queue import MessageBus
from nanobot.channels.base import BaseChannel
from nanobot.config.schema import TelegramConfig

if TYPE_CHECKING:
    from nanobot.session.manager import SessionManager


def _markdown_to_telegram_html(text: str) -> str:
    """
@@ -82,6 +78,26 @@ def _markdown_to_telegram_html(text: str) -> str:
    return text


def _split_message(content: str, max_len: int = 4000) -> list[str]:
    """Split content into chunks within max_len, preferring line breaks."""
    if len(content) <= max_len:
        return [content]
    chunks: list[str] = []
    while content:
        if len(content) <= max_len:
            chunks.append(content)
            break
        cut = content[:max_len]
        pos = cut.rfind('\n')
        if pos == -1:
            pos = cut.rfind(' ')
        if pos == -1:
            pos = max_len
        chunks.append(content[:pos])
        content = content[pos:].lstrip()
    return chunks
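`_split_message` keeps every chunk under Telegram's ~4096-character message limit, cutting at newlines (then spaces) where possible; the `lstrip` drops the whitespace at each cut. A quick check:

```python
text = "line\n" * 3000  # ~15,000 characters
chunks = _split_message(text)
assert all(len(c) <= 4000 for c in chunks)
assert "".join(chunks).replace("\n", "") == text.replace("\n", "")  # content preserved
```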
|
||||
|
||||
|
||||
class TelegramChannel(BaseChannel):
|
||||
"""
|
||||
Telegram channel using long polling.
|
||||
@ -94,7 +110,7 @@ class TelegramChannel(BaseChannel):
|
||||
# Commands registered with Telegram's command menu
|
||||
BOT_COMMANDS = [
|
||||
BotCommand("start", "Start the bot"),
|
||||
BotCommand("reset", "Reset conversation history"),
|
||||
BotCommand("new", "Start a new conversation"),
|
||||
BotCommand("help", "Show available commands"),
|
||||
]
|
||||
|
||||
@ -103,12 +119,10 @@ class TelegramChannel(BaseChannel):
|
||||
config: TelegramConfig,
|
||||
bus: MessageBus,
|
||||
groq_api_key: str = "",
|
||||
session_manager: SessionManager | None = None,
|
||||
):
|
||||
super().__init__(config, bus)
|
||||
self.config: TelegramConfig = config
|
||||
self.groq_api_key = groq_api_key
|
||||
self.session_manager = session_manager
|
||||
self._app: Application | None = None
|
||||
self._chat_ids: dict[str, int] = {} # Map sender_id to chat_id for replies
|
||||
self._typing_tasks: dict[str, asyncio.Task] = {} # chat_id -> typing loop task
|
||||
@ -121,16 +135,18 @@ class TelegramChannel(BaseChannel):
|
||||
|
||||
self._running = True
|
||||
|
||||
# Build the application
|
||||
builder = Application.builder().token(self.config.token)
|
||||
# Build the application with larger connection pool to avoid pool-timeout on long runs
|
||||
req = HTTPXRequest(connection_pool_size=16, pool_timeout=5.0, connect_timeout=30.0, read_timeout=30.0)
|
||||
builder = Application.builder().token(self.config.token).request(req).get_updates_request(req)
|
||||
if self.config.proxy:
|
||||
builder = builder.proxy(self.config.proxy).get_updates_proxy(self.config.proxy)
|
||||
self._app = builder.build()
|
||||
self._app.add_error_handler(self._on_error)
|
||||
|
||||
# Add command handlers
|
||||
self._app.add_handler(CommandHandler("start", self._on_start))
|
||||
self._app.add_handler(CommandHandler("reset", self._on_reset))
|
||||
self._app.add_handler(CommandHandler("help", self._on_help))
|
||||
self._app.add_handler(CommandHandler("new", self._forward_command))
|
||||
self._app.add_handler(CommandHandler("help", self._forward_command))
|
||||
|
||||
# Add message handler for text, photos, voice, documents
|
||||
self._app.add_handler(
|
||||
@ -182,37 +198,61 @@ class TelegramChannel(BaseChannel):
|
||||
await self._app.shutdown()
|
||||
self._app = None
|
||||
|
||||
@staticmethod
|
||||
def _get_media_type(path: str) -> str:
|
||||
"""Guess media type from file extension."""
|
||||
ext = path.rsplit(".", 1)[-1].lower() if "." in path else ""
|
||||
if ext in ("jpg", "jpeg", "png", "gif", "webp"):
|
||||
return "photo"
|
||||
if ext == "ogg":
|
||||
return "voice"
|
||||
if ext in ("mp3", "m4a", "wav", "aac"):
|
||||
return "audio"
|
||||
return "document"
|
||||
|
||||
async def send(self, msg: OutboundMessage) -> None:
|
||||
"""Send a message through Telegram."""
|
||||
if not self._app:
|
||||
logger.warning("Telegram bot not running")
|
||||
return
|
||||
|
||||
# Stop typing indicator for this chat
|
||||
|
||||
self._stop_typing(msg.chat_id)
|
||||
|
||||
|
||||
try:
|
||||
# chat_id should be the Telegram chat ID (integer)
|
||||
chat_id = int(msg.chat_id)
|
||||
# Convert markdown to Telegram HTML
|
||||
html_content = _markdown_to_telegram_html(msg.content)
|
||||
await self._app.bot.send_message(
|
||||
chat_id=chat_id,
|
||||
text=html_content,
|
||||
parse_mode="HTML"
|
||||
)
|
||||
except ValueError:
|
||||
logger.error(f"Invalid chat_id: {msg.chat_id}")
|
||||
except Exception as e:
|
||||
# Fallback to plain text if HTML parsing fails
|
||||
logger.warning(f"HTML parse failed, falling back to plain text: {e}")
|
||||
return
|
||||
|
||||
# Send media files
|
||||
for media_path in (msg.media or []):
|
||||
try:
|
||||
await self._app.bot.send_message(
|
||||
chat_id=int(msg.chat_id),
|
||||
text=msg.content
|
||||
)
|
||||
except Exception as e2:
|
||||
logger.error(f"Error sending Telegram message: {e2}")
|
||||
media_type = self._get_media_type(media_path)
|
||||
sender = {
|
||||
"photo": self._app.bot.send_photo,
|
||||
"voice": self._app.bot.send_voice,
|
||||
"audio": self._app.bot.send_audio,
|
||||
}.get(media_type, self._app.bot.send_document)
|
||||
param = "photo" if media_type == "photo" else media_type if media_type in ("voice", "audio") else "document"
|
||||
with open(media_path, 'rb') as f:
|
||||
await sender(chat_id=chat_id, **{param: f})
|
||||
except Exception as e:
|
||||
filename = media_path.rsplit("/", 1)[-1]
|
||||
logger.error(f"Failed to send media {media_path}: {e}")
|
||||
await self._app.bot.send_message(chat_id=chat_id, text=f"[Failed to send: {filename}]")
|
||||
|
||||
# Send text content
|
||||
if msg.content and msg.content != "[empty message]":
|
||||
for chunk in _split_message(msg.content):
|
||||
try:
|
||||
html = _markdown_to_telegram_html(chunk)
|
||||
await self._app.bot.send_message(chat_id=chat_id, text=html, parse_mode="HTML")
|
||||
except Exception as e:
|
||||
logger.warning(f"HTML parse failed, falling back to plain text: {e}")
|
||||
try:
|
||||
await self._app.bot.send_message(chat_id=chat_id, text=chunk)
|
||||
except Exception as e2:
|
||||
logger.error(f"Error sending Telegram message: {e2}")
|
||||
|
||||
async def _on_start(self, update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
|
||||
"""Handle /start command."""
|
||||
@ -226,40 +266,21 @@ class TelegramChannel(BaseChannel):
|
||||
"Type /help to see available commands."
|
||||
)
|
||||
|
||||
async def _on_reset(self, update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
|
||||
"""Handle /reset command — clear conversation history."""
|
||||
@staticmethod
|
||||
def _sender_id(user) -> str:
|
||||
"""Build sender_id with username for allowlist matching."""
|
||||
sid = str(user.id)
|
||||
return f"{sid}|{user.username}" if user.username else sid
|
||||
|
||||
async def _forward_command(self, update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
|
||||
"""Forward slash commands to the bus for unified handling in AgentLoop."""
|
||||
if not update.message or not update.effective_user:
|
||||
return
|
||||
|
||||
chat_id = str(update.message.chat_id)
|
||||
session_key = f"{self.name}:{chat_id}"
|
||||
|
||||
if self.session_manager is None:
|
||||
logger.warning("/reset called but session_manager is not available")
|
||||
await update.message.reply_text("⚠️ Session management is not available.")
|
||||
return
|
||||
|
||||
session = self.session_manager.get_or_create(session_key)
|
||||
msg_count = len(session.messages)
|
||||
session.clear()
|
||||
self.session_manager.save(session)
|
||||
|
||||
logger.info(f"Session reset for {session_key} (cleared {msg_count} messages)")
|
||||
await update.message.reply_text("🔄 Conversation history cleared. Let's start fresh!")
|
||||
|
||||
async def _on_help(self, update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
|
||||
"""Handle /help command — show available commands."""
|
||||
if not update.message:
|
||||
return
|
||||
|
||||
help_text = (
|
||||
"🐈 <b>nanobot commands</b>\n\n"
|
||||
"/start — Start the bot\n"
|
||||
"/reset — Reset conversation history\n"
|
||||
"/help — Show this help message\n\n"
|
||||
"Just send me a text message to chat!"
|
||||
await self._handle_message(
|
||||
sender_id=self._sender_id(update.effective_user),
|
||||
chat_id=str(update.message.chat_id),
|
||||
content=update.message.text,
|
||||
)
|
||||
await update.message.reply_text(help_text, parse_mode="HTML")
|
||||
|
||||
async def _on_message(self, update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
|
||||
"""Handle incoming messages (text, photos, voice, documents)."""
|
||||
@ -269,11 +290,7 @@ class TelegramChannel(BaseChannel):
|
||||
message = update.message
|
||||
user = update.effective_user
|
||||
chat_id = message.chat_id
|
||||
|
||||
# Use stable numeric ID, but keep username for allowlist compatibility
|
||||
sender_id = str(user.id)
|
||||
if user.username:
|
||||
sender_id = f"{sender_id}|{user.username}"
|
||||
sender_id = self._sender_id(user)
|
||||
|
||||
# Store chat_id for replies
|
||||
self._chat_ids[sender_id] = chat_id
|
||||
@ -386,6 +403,10 @@ class TelegramChannel(BaseChannel):
except Exception as e:
logger.debug(f"Typing indicator stopped for {chat_id}: {e}")

async def _on_error(self, update: object, context: ContextTypes.DEFAULT_TYPE) -> None:
"""Log polling / handler errors instead of silently swallowing them."""
logger.error(f"Telegram error: {context.error}")

def _get_extension(self, media_type: str, mime_type: str | None) -> str:
"""Get file extension based on media type."""
if mime_type:

@ -42,6 +42,9 @@ class WhatsAppChannel(BaseChannel):
try:
async with websockets.connect(bridge_url) as ws:
self._ws = ws
# Send auth token if configured
if self.config.bridge_token:
await ws.send(json.dumps({"type": "auth", "token": self.config.bridge_token}))
self._connected = True
logger.info("Connected to WhatsApp bridge")

@ -19,6 +19,7 @@ from prompt_toolkit.history import FileHistory
from prompt_toolkit.patch_stdout import patch_stdout

from nanobot import __version__, __logo__
from nanobot.config.schema import Config

app = typer.Typer(
name="nanobot",

@ -155,7 +156,7 @@ def main(
@app.command()
def onboard():
"""Initialize nanobot configuration and workspace."""
from nanobot.config.loader import get_config_path, save_config
from nanobot.config.loader import get_config_path, load_config, save_config
from nanobot.config.schema import Config
from nanobot.utils.helpers import get_workspace_path

@ -163,17 +164,26 @@ def onboard():

if config_path.exists():
console.print(f"[yellow]Config already exists at {config_path}[/yellow]")
if not typer.confirm("Overwrite?"):
raise typer.Exit()

# Create default config
config = Config()
save_config(config)
console.print(f"[green]✓[/green] Created config at {config_path}")
console.print(" [bold]y[/bold] = overwrite with defaults (existing values will be lost)")
console.print(" [bold]N[/bold] = refresh config, keeping existing values and adding new fields")
if typer.confirm("Overwrite?"):
config = Config()
save_config(config)
console.print(f"[green]✓[/green] Config reset to defaults at {config_path}")
else:
config = load_config()
save_config(config)
console.print(f"[green]✓[/green] Config refreshed at {config_path} (existing values preserved)")
else:
save_config(Config())
console.print(f"[green]✓[/green] Created config at {config_path}")

# Create workspace
workspace = get_workspace_path()
console.print(f"[green]✓[/green] Created workspace at {workspace}")

if not workspace.exists():
workspace.mkdir(parents=True, exist_ok=True)
console.print(f"[green]✓[/green] Created workspace at {workspace}")

# Create default bootstrap files
_create_workspace_templates(workspace)

@ -200,7 +210,7 @@ You are a helpful AI assistant. Be concise, accurate, and friendly.
- Always explain what you're doing before taking actions
- Ask for clarification when the request is ambiguous
- Use tools to help accomplish tasks
- Remember important information in your memory files
- Remember important information in memory/MEMORY.md; past events are logged in memory/HISTORY.md
""",
"SOUL.md": """# Soul

@ -258,18 +268,39 @@ This file stores important information that should persist across sessions.
(Things to remember)
""")
console.print(" [dim]Created memory/MEMORY.md[/dim]")

history_file = memory_dir / "HISTORY.md"
if not history_file.exists():
history_file.write_text("")
console.print(" [dim]Created memory/HISTORY.md[/dim]")

# Create skills directory for custom user skills
skills_dir = workspace / "skills"
skills_dir.mkdir(exist_ok=True)


def _make_provider(config):
"""Create LLM provider from config. Supports LiteLLMProvider and AirLLMProvider."""
provider_name = config.get_provider_name()
p = config.get_provider()
def _make_provider(config: Config):
"""Create the appropriate LLM provider from config."""
from nanobot.providers.litellm_provider import LiteLLMProvider
from nanobot.providers.openai_codex_provider import OpenAICodexProvider
from nanobot.providers.custom_provider import CustomProvider

model = config.agents.defaults.model

provider_name = config.get_provider_name(model)
p = config.get_provider(model)

# OpenAI Codex (OAuth)
if provider_name == "openai_codex" or model.startswith("openai-codex/"):
return OpenAICodexProvider(default_model=model)

# Custom: direct OpenAI-compatible endpoint, bypasses LiteLLM
if provider_name == "custom":
return CustomProvider(
api_key=p.api_key if p else "no-key",
api_base=config.get_api_base(model) or "http://localhost:8000/v1",
default_model=model,
)

# Check if AirLLM provider is requested
if provider_name == "airllm":
try:

@ -316,16 +347,17 @@ def _make_provider(config):
console.print(f"[red]Error: AirLLM provider not available: {e}[/red]")
console.print("Please ensure airllm_ollama_wrapper.py is in the Python path.")
raise typer.Exit(1)

# Default to LiteLLMProvider
from nanobot.providers.litellm_provider import LiteLLMProvider
if not (p and p.api_key) and not model.startswith("bedrock/"):

from nanobot.providers.registry import find_by_name
spec = find_by_name(provider_name)
if not model.startswith("bedrock/") and not (p and p.api_key) and not (spec and spec.is_oauth):
console.print("[red]Error: No API key configured.[/red]")
console.print("Set one in ~/.nanobot/config.json under providers section")
raise typer.Exit(1)

return LiteLLMProvider(
api_key=p.api_key if p else None,
api_base=config.get_api_base(),
api_base=config.get_api_base(model),
default_model=model,
extra_headers=p.extra_headers if p else None,
provider_name=provider_name,

@ -373,12 +405,16 @@ def gateway(
provider=provider,
workspace=config.workspace_path,
model=config.agents.defaults.model,
temperature=config.agents.defaults.temperature,
max_tokens=config.agents.defaults.max_tokens,
max_iterations=config.agents.defaults.max_tool_iterations,
memory_window=config.agents.defaults.memory_window,
brave_api_key=config.tools.web.search.api_key or None,
exec_config=config.tools.exec,
cron_service=cron,
restrict_to_workspace=config.tools.restrict_to_workspace,
session_manager=session_manager,
mcp_servers=config.tools.mcp_servers,
)

# Set cron callback (needs agent)

@ -413,7 +449,7 @@ def gateway(
)

# Create channel manager
channels = ChannelManager(config, bus, session_manager=session_manager)
channels = ChannelManager(config, bus)

if channels.enabled_channels:
console.print(f"[green]✓[/green] Channels enabled: {', '.join(channels.enabled_channels)}")

@ -436,6 +472,8 @@ def gateway(
)
except KeyboardInterrupt:
console.print("\nShutting down...")
finally:
await agent.close_mcp()
heartbeat.stop()
cron.stop()
agent.stop()

@ -454,14 +492,15 @@ def gateway(
@app.command()
def agent(
message: str = typer.Option(None, "--message", "-m", help="Message to send to the agent"),
session_id: str = typer.Option("cli:default", "--session", "-s", help="Session ID"),
session_id: str = typer.Option("cli:direct", "--session", "-s", help="Session ID"),
markdown: bool = typer.Option(True, "--markdown/--no-markdown", help="Render assistant output as Markdown"),
logs: bool = typer.Option(False, "--logs/--no-logs", help="Show nanobot runtime logs during chat"),
):
"""Interact with the agent directly."""
from nanobot.config.loader import load_config
from nanobot.config.loader import load_config, get_data_dir
from nanobot.bus.queue import MessageBus
from nanobot.agent.loop import AgentLoop
from nanobot.cron.service import CronService
from loguru import logger

config = load_config()

@ -469,6 +508,10 @@ def agent(
bus = MessageBus()
provider = _make_provider(config)

# Create cron service for tool usage (no callback needed for CLI unless running)
cron_store_path = get_data_dir() / "cron" / "jobs.json"
cron = CronService(cron_store_path)

if logs:
logger.enable("nanobot")
else:

@ -478,9 +521,16 @@ def agent(
bus=bus,
provider=provider,
workspace=config.workspace_path,
model=config.agents.defaults.model,
temperature=config.agents.defaults.temperature,
max_tokens=config.agents.defaults.max_tokens,
max_iterations=config.agents.defaults.max_tool_iterations,
memory_window=config.agents.defaults.memory_window,
brave_api_key=config.tools.web.search.api_key or None,
exec_config=config.tools.exec,
cron_service=cron,
restrict_to_workspace=config.tools.restrict_to_workspace,
mcp_servers=config.tools.mcp_servers,
)

# Show spinner when logs are off (no output to miss); skip when logs are on

@ -491,14 +541,18 @@ def agent(
# Animated spinner is safe to use with prompt_toolkit input handling
return console.status("[dim]nanobot is thinking...[/dim]", spinner="dots")

async def _cli_progress(content: str) -> None:
console.print(f" [dim]↳ {content}[/dim]")

if message:
# Single message mode
async def run_once():
try:
with _thinking_ctx():
response = await agent_loop.process_direct(message, session_id)
response = await agent_loop.process_direct(message, session_id, on_progress=_cli_progress)
# response is a string (content) from process_direct
_print_agent_response(response or "", render_markdown=markdown)
await agent_loop.close_mcp()
except Exception as e:
import traceback
console.print(f"[red]Error: {e}[/red]")

@ -519,30 +573,33 @@ def agent(
signal.signal(signal.SIGINT, _exit_on_sigint)

async def run_interactive():
while True:
try:
_flush_pending_tty_input()
user_input = await _read_interactive_input_async()
command = user_input.strip()
if not command:
continue
try:
while True:
try:
_flush_pending_tty_input()
user_input = await _read_interactive_input_async()
command = user_input.strip()
if not command:
continue

if _is_exit_command(command):
if _is_exit_command(command):
_restore_terminal()
console.print("\nGoodbye!")
break

with _thinking_ctx():
response = await agent_loop.process_direct(user_input, session_id, on_progress=_cli_progress)
_print_agent_response(response, render_markdown=markdown)
except KeyboardInterrupt:
_restore_terminal()
console.print("\nGoodbye!")
break

with _thinking_ctx():
response = await agent_loop.process_direct(user_input, session_id)
_print_agent_response(response, render_markdown=markdown)
except KeyboardInterrupt:
_restore_terminal()
console.print("\nGoodbye!")
break
except EOFError:
_restore_terminal()
console.print("\nGoodbye!")
break
except EOFError:
_restore_terminal()
console.print("\nGoodbye!")
break
finally:
await agent_loop.close_mcp()

asyncio.run(run_interactive())

@ -684,14 +741,20 @@ def _get_bridge_dir() -> Path:
def channels_login():
"""Link device via QR code."""
import subprocess
from nanobot.config.loader import load_config

config = load_config()
bridge_dir = _get_bridge_dir()

console.print(f"{__logo__} Starting bridge...")
console.print("Scan the QR code to connect.\n")

env = {**os.environ}
if config.channels.whatsapp.bridge_token:
env["BRIDGE_TOKEN"] = config.channels.whatsapp.bridge_token

try:
subprocess.run(["npm", "start"], cwd=bridge_dir, check=True)
subprocess.run(["npm", "start"], cwd=bridge_dir, check=True, env=env)
except subprocess.CalledProcessError as e:
console.print(f"[red]Bridge failed: {e}[/red]")
except FileNotFoundError:

@ -731,20 +794,26 @@ def cron_list(
table.add_column("Next Run")

import time
from datetime import datetime as _dt
from zoneinfo import ZoneInfo
for job in jobs:
# Format schedule
if job.schedule.kind == "every":
sched = f"every {(job.schedule.every_ms or 0) // 1000}s"
elif job.schedule.kind == "cron":
sched = job.schedule.expr or ""
sched = f"{job.schedule.expr or ''} ({job.schedule.tz})" if job.schedule.tz else (job.schedule.expr or "")
else:
sched = "one-time"

# Format next run
next_run = ""
if job.state.next_run_at_ms:
next_time = time.strftime("%Y-%m-%d %H:%M", time.localtime(job.state.next_run_at_ms / 1000))
next_run = next_time
ts = job.state.next_run_at_ms / 1000
try:
tz = ZoneInfo(job.schedule.tz) if job.schedule.tz else None
next_run = _dt.fromtimestamp(ts, tz).strftime("%Y-%m-%d %H:%M")
except Exception:
next_run = time.strftime("%Y-%m-%d %H:%M", time.localtime(ts))

status = "[green]enabled[/green]" if job.enabled else "[dim]disabled[/dim]"

@ -759,6 +828,7 @@ def cron_add(
message: str = typer.Option(..., "--message", "-m", help="Message for agent"),
every: int = typer.Option(None, "--every", "-e", help="Run every N seconds"),
cron_expr: str = typer.Option(None, "--cron", "-c", help="Cron expression (e.g. '0 9 * * *')"),
tz: str | None = typer.Option(None, "--tz", help="IANA timezone for cron (e.g. 'America/Vancouver')"),
at: str = typer.Option(None, "--at", help="Run once at time (ISO format)"),
deliver: bool = typer.Option(False, "--deliver", "-d", help="Deliver response to channel"),
to: str = typer.Option(None, "--to", help="Recipient for delivery"),

@ -769,11 +839,15 @@ def cron_add(
from nanobot.cron.service import CronService
from nanobot.cron.types import CronSchedule

if tz and not cron_expr:
console.print("[red]Error: --tz can only be used with --cron[/red]")
raise typer.Exit(1)

# Determine schedule type
if every:
schedule = CronSchedule(kind="every", every_ms=every * 1000)
elif cron_expr:
schedule = CronSchedule(kind="cron", expr=cron_expr)
schedule = CronSchedule(kind="cron", expr=cron_expr, tz=tz)
elif at:
import datetime
dt = datetime.datetime.fromisoformat(at)

@ -884,7 +958,9 @@ def status():
p = getattr(config.providers, spec.name, None)
if p is None:
continue
if spec.is_local:
if spec.is_oauth:
console.print(f"{spec.label}: [green]✓ (OAuth)[/green]")
elif spec.is_local:
# Local deployments show api_base instead of api_key
if p.api_base:
console.print(f"{spec.label}: [green]✓ {p.api_base}[/green]")

@ -895,5 +971,88 @@ def status():
console.print(f"{spec.label}: {'[green]✓[/green]' if has_key else '[dim]not set[/dim]'}")


# ============================================================================
# OAuth Login
# ============================================================================

provider_app = typer.Typer(help="Manage providers")
app.add_typer(provider_app, name="provider")


_LOGIN_HANDLERS: dict[str, callable] = {}


def _register_login(name: str):
def decorator(fn):
_LOGIN_HANDLERS[name] = fn
return fn
return decorator


@provider_app.command("login")
def provider_login(
provider: str = typer.Argument(..., help="OAuth provider (e.g. 'openai-codex', 'github-copilot')"),
):
"""Authenticate with an OAuth provider."""
from nanobot.providers.registry import PROVIDERS

key = provider.replace("-", "_")
spec = next((s for s in PROVIDERS if s.name == key and s.is_oauth), None)
if not spec:
names = ", ".join(s.name.replace("_", "-") for s in PROVIDERS if s.is_oauth)
console.print(f"[red]Unknown OAuth provider: {provider}[/red] Supported: {names}")
raise typer.Exit(1)

handler = _LOGIN_HANDLERS.get(spec.name)
if not handler:
console.print(f"[red]Login not implemented for {spec.label}[/red]")
raise typer.Exit(1)

console.print(f"{__logo__} OAuth Login - {spec.label}\n")
handler()


@_register_login("openai_codex")
def _login_openai_codex() -> None:
try:
from oauth_cli_kit import get_token, login_oauth_interactive
token = None
try:
token = get_token()
except Exception:
pass
if not (token and token.access):
console.print("[cyan]Starting interactive OAuth login...[/cyan]\n")
token = login_oauth_interactive(
print_fn=lambda s: console.print(s),
prompt_fn=lambda s: typer.prompt(s),
)
if not (token and token.access):
console.print("[red]✗ Authentication failed[/red]")
raise typer.Exit(1)
console.print(f"[green]✓ Authenticated with OpenAI Codex[/green] [dim]{token.account_id}[/dim]")
except ImportError:
console.print("[red]oauth_cli_kit not installed. Run: pip install oauth-cli-kit[/red]")
raise typer.Exit(1)


@_register_login("github_copilot")
def _login_github_copilot() -> None:
import asyncio

console.print("[cyan]Starting GitHub Copilot device flow...[/cyan]\n")

async def _trigger():
from litellm import acompletion
await acompletion(model="github_copilot/gpt-4o", messages=[{"role": "user", "content": "hi"}], max_tokens=1)

try:
asyncio.run(_trigger())
console.print("[green]✓ Authenticated with GitHub Copilot[/green]")
except Exception as e:
console.print(f"[red]Authentication error: {e}[/red]")
raise typer.Exit(1)


if __name__ == "__main__":
app()
@ -2,7 +2,6 @@

import json
from pathlib import Path
from typing import Any

from nanobot.config.schema import Config

@ -21,43 +20,41 @@ def get_data_dir() -> Path:
def load_config(config_path: Path | None = None) -> Config:
"""
Load configuration from file or create default.

Args:
config_path: Optional path to config file. Uses default if not provided.

Returns:
Loaded configuration object.
"""
path = config_path or get_config_path()

if path.exists():
try:
with open(path) as f:
data = json.load(f)
data = _migrate_config(data)
return Config.model_validate(convert_keys(data))
return Config.model_validate(data)
except (json.JSONDecodeError, ValueError) as e:
print(f"Warning: Failed to load config from {path}: {e}")
print("Using default configuration.")

return Config()


def save_config(config: Config, config_path: Path | None = None) -> None:
"""
Save configuration to file.

Args:
config: Configuration to save.
config_path: Optional path to save to. Uses default if not provided.
"""
path = config_path or get_config_path()
path.parent.mkdir(parents=True, exist_ok=True)

# Convert to camelCase format
data = config.model_dump()
data = convert_to_camel(data)

data = config.model_dump(by_alias=True)

with open(path, "w") as f:
json.dump(data, f, indent=2)

@ -70,37 +67,3 @@ def _migrate_config(data: dict) -> dict:
if "restrictToWorkspace" in exec_cfg and "restrictToWorkspace" not in tools:
tools["restrictToWorkspace"] = exec_cfg.pop("restrictToWorkspace")
return data


def convert_keys(data: Any) -> Any:
"""Convert camelCase keys to snake_case for Pydantic."""
if isinstance(data, dict):
return {camel_to_snake(k): convert_keys(v) for k, v in data.items()}
if isinstance(data, list):
return [convert_keys(item) for item in data]
return data


def convert_to_camel(data: Any) -> Any:
"""Convert snake_case keys to camelCase."""
if isinstance(data, dict):
return {snake_to_camel(k): convert_to_camel(v) for k, v in data.items()}
if isinstance(data, list):
return [convert_to_camel(item) for item in data]
return data


def camel_to_snake(name: str) -> str:
"""Convert camelCase to snake_case."""
result = []
for i, char in enumerate(name):
if char.isupper() and i > 0:
result.append("_")
result.append(char.lower())
return "".join(result)


def snake_to_camel(name: str) -> str:
"""Convert snake_case to camelCase."""
components = name.split("_")
return components[0] + "".join(x.title() for x in components[1:])
@ -2,26 +2,37 @@

from pathlib import Path
from pydantic import BaseModel, Field, ConfigDict
from pydantic.alias_generators import to_camel
from pydantic_settings import BaseSettings


class WhatsAppConfig(BaseModel):
class Base(BaseModel):
"""Base model that accepts both camelCase and snake_case keys."""

model_config = ConfigDict(alias_generator=to_camel, populate_by_name=True)


class WhatsAppConfig(Base):
"""WhatsApp channel configuration."""

enabled: bool = False
bridge_url: str = "ws://localhost:3001"
bridge_token: str = "" # Shared token for bridge auth (optional, recommended)
allow_from: list[str] = Field(default_factory=list) # Allowed phone numbers
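The `Base` model above is what lets the loader drop its hand-rolled key converters: `alias_generator=to_camel` gives every field a camelCase alias, `populate_by_name=True` keeps snake_case working, and `save_config` can emit camelCase with `model_dump(by_alias=True)`. A minimal sketch with an illustrative field name:

```python
from pydantic import BaseModel, ConfigDict
from pydantic.alias_generators import to_camel

class Demo(BaseModel):
    model_config = ConfigDict(alias_generator=to_camel, populate_by_name=True)
    bridge_url: str = ""

# Both key styles validate:
assert Demo.model_validate({"bridgeUrl": "ws://x"}).bridge_url == "ws://x"
assert Demo.model_validate({"bridge_url": "ws://x"}).bridge_url == "ws://x"
# by_alias=True writes camelCase, matching the on-disk config format:
assert Demo(bridge_url="ws://x").model_dump(by_alias=True) == {"bridgeUrl": "ws://x"}
```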
class TelegramConfig(BaseModel):
class TelegramConfig(Base):
"""Telegram channel configuration."""

enabled: bool = False
token: str = "" # Bot token from @BotFather
allow_from: list[str] = Field(default_factory=list) # Allowed user IDs or usernames
proxy: str | None = None # HTTP/SOCKS5 proxy URL, e.g. "http://127.0.0.1:7890" or "socks5://127.0.0.1:1080"


class FeishuConfig(BaseModel):
class FeishuConfig(Base):
"""Feishu/Lark channel configuration using WebSocket long connection."""

enabled: bool = False
app_id: str = "" # App ID from Feishu Open Platform
app_secret: str = "" # App Secret from Feishu Open Platform

@ -30,24 +41,28 @@ class FeishuConfig(BaseModel):
allow_from: list[str] = Field(default_factory=list) # Allowed user open_ids


class DingTalkConfig(BaseModel):
class DingTalkConfig(Base):
"""DingTalk channel configuration using Stream mode."""

enabled: bool = False
client_id: str = "" # AppKey
client_secret: str = "" # AppSecret
allow_from: list[str] = Field(default_factory=list) # Allowed staff_ids


class DiscordConfig(BaseModel):
class DiscordConfig(Base):
"""Discord channel configuration."""

enabled: bool = False
token: str = "" # Bot token from Discord Developer Portal
allow_from: list[str] = Field(default_factory=list) # Allowed user IDs
gateway_url: str = "wss://gateway.discord.gg/?v=10&encoding=json"
intents: int = 37377 # GUILDS + GUILD_MESSAGES + DIRECT_MESSAGES + MESSAGE_CONTENT

class EmailConfig(BaseModel):

class EmailConfig(Base):
"""Email channel configuration (IMAP inbound + SMTP outbound)."""

enabled: bool = False
consent_granted: bool = False # Explicit owner permission to access mailbox data

@ -77,18 +92,21 @@ class EmailConfig(BaseModel):
allow_from: list[str] = Field(default_factory=list) # Allowed sender email addresses


class MochatMentionConfig(BaseModel):
class MochatMentionConfig(Base):
"""Mochat mention behavior configuration."""

require_in_groups: bool = False


class MochatGroupRule(BaseModel):
class MochatGroupRule(Base):
"""Mochat per-group mention requirement."""

require_mention: bool = False


class MochatConfig(BaseModel):
class MochatConfig(Base):
"""Mochat channel configuration."""

enabled: bool = False
base_url: str = "https://mochat.io"
socket_url: str = ""

@ -113,36 +131,42 @@ class MochatConfig(BaseModel):
reply_delay_ms: int = 120000


class SlackDMConfig(BaseModel):
class SlackDMConfig(Base):
"""Slack DM policy configuration."""

enabled: bool = True
policy: str = "open" # "open" or "allowlist"
allow_from: list[str] = Field(default_factory=list) # Allowed Slack user IDs


class SlackConfig(BaseModel):
class SlackConfig(Base):
"""Slack channel configuration."""

enabled: bool = False
mode: str = "socket" # "socket" supported
webhook_path: str = "/slack/events"
bot_token: str = "" # xoxb-...
app_token: str = "" # xapp-...
user_token_read_only: bool = True
reply_in_thread: bool = True
react_emoji: str = "eyes"
group_policy: str = "mention" # "mention", "open", "allowlist"
group_allow_from: list[str] = Field(default_factory=list) # Allowed channel IDs if allowlist
dm: SlackDMConfig = Field(default_factory=SlackDMConfig)


class QQConfig(BaseModel):
class QQConfig(Base):
"""QQ channel configuration using botpy SDK."""

enabled: bool = False
app_id: str = "" # 机器人 ID (AppID) from q.qq.com
secret: str = "" # 机器人密钥 (AppSecret) from q.qq.com
allow_from: list[str] = Field(default_factory=list) # Allowed user openids (empty = public access)


class ChannelsConfig(BaseModel):
class ChannelsConfig(Base):
"""Configuration for chat channels."""

whatsapp: WhatsAppConfig = Field(default_factory=WhatsAppConfig)
telegram: TelegramConfig = Field(default_factory=TelegramConfig)
discord: DiscordConfig = Field(default_factory=DiscordConfig)

@ -154,80 +178,113 @@ class ChannelsConfig(BaseModel):
qq: QQConfig = Field(default_factory=QQConfig)


class AgentDefaults(BaseModel):
class AgentDefaults(Base):
"""Default agent configuration."""

workspace: str = "~/.nanobot/workspace"
model: str = "anthropic/claude-opus-4-5"
max_tokens: int = 8192
temperature: float = 0.7
max_tool_iterations: int = 20
memory_window: int = 50


class AgentsConfig(BaseModel):
class AgentsConfig(Base):
"""Agent configuration."""

defaults: AgentDefaults = Field(default_factory=AgentDefaults)


class ProviderConfig(BaseModel):
class ProviderConfig(Base):
"""LLM provider configuration."""

api_key: str = ""
api_base: str | None = None
extra_headers: dict[str, str] | None = None # Custom headers (e.g. APP-Code for AiHubMix)


class ProvidersConfig(BaseModel):
class ProvidersConfig(Base):
"""Configuration for LLM providers."""

custom: ProviderConfig = Field(default_factory=ProviderConfig) # Any OpenAI-compatible endpoint
anthropic: ProviderConfig = Field(default_factory=ProviderConfig)
openai: ProviderConfig = Field(default_factory=ProviderConfig)
openrouter: ProviderConfig = Field(default_factory=ProviderConfig)
deepseek: ProviderConfig = Field(default_factory=ProviderConfig)
vllm: ProviderConfig = Field(default_factory=ProviderConfig)
ollama: ProviderConfig = Field(default_factory=ProviderConfig)
airllm: ProviderConfig = Field(default_factory=ProviderConfig)
gemini: ProviderConfig = Field(default_factory=ProviderConfig)
moonshot: ProviderConfig = Field(default_factory=ProviderConfig)
minimax: ProviderConfig = Field(default_factory=ProviderConfig)
aihubmix: ProviderConfig = Field(default_factory=ProviderConfig) # AiHubMix API gateway
siliconflow: ProviderConfig = Field(default_factory=ProviderConfig) # SiliconFlow (硅基流动) API gateway
openai_codex: ProviderConfig = Field(default_factory=ProviderConfig) # OpenAI Codex (OAuth)
github_copilot: ProviderConfig = Field(default_factory=ProviderConfig) # GitHub Copilot (OAuth)


class GatewayConfig(BaseModel):
class GatewayConfig(Base):
"""Gateway/server configuration."""

host: str = "0.0.0.0"
port: int = 18790


class WebSearchConfig(BaseModel):
class WebSearchConfig(Base):
"""Web search tool configuration."""

api_key: str = "" # Brave Search API key
max_results: int = 5


class WebToolsConfig(BaseModel):
class WebToolsConfig(Base):
"""Web tools configuration."""

search: WebSearchConfig = Field(default_factory=WebSearchConfig)


class ExecToolConfig(BaseModel):
class ExecToolConfig(Base):
"""Shell exec tool configuration."""

timeout: int = 60


class ToolsConfig(BaseModel):
class MCPServerConfig(Base):
"""MCP server connection configuration (stdio or HTTP)."""

command: str = "" # Stdio: command to run (e.g. "npx")
args: list[str] = Field(default_factory=list) # Stdio: command arguments
env: dict[str, str] = Field(default_factory=dict) # Stdio: extra env vars
url: str = "" # HTTP: streamable HTTP endpoint URL


class ToolsConfig(Base):
"""Tools configuration."""

web: WebToolsConfig = Field(default_factory=WebToolsConfig)
exec: ExecToolConfig = Field(default_factory=ExecToolConfig)
restrict_to_workspace: bool = False # If true, restrict all tool access to workspace directory
mcp_servers: dict[str, MCPServerConfig] = Field(default_factory=dict)
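For reference, wiring MCP servers into `ToolsConfig.mcp_servers` might look like the sketch below; the server names, the npx package, and the URL are illustrative, not part of this diff:

```python
from nanobot.config.schema import Config, MCPServerConfig

config = Config()
config.tools.mcp_servers = {
    # Stdio transport: nanobot spawns the command and speaks MCP over pipes
    "filesystem": MCPServerConfig(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
    ),
    # Streamable HTTP transport: connect to an already-running endpoint
    "remote": MCPServerConfig(url="http://localhost:8080/mcp"),
}
```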
class Config(BaseSettings):
"""Root configuration for nanobot."""

agents: AgentsConfig = Field(default_factory=AgentsConfig)
channels: ChannelsConfig = Field(default_factory=ChannelsConfig)
providers: ProvidersConfig = Field(default_factory=ProvidersConfig)
gateway: GatewayConfig = Field(default_factory=GatewayConfig)
tools: ToolsConfig = Field(default_factory=ToolsConfig)

@property
def workspace_path(self) -> Path:
"""Get expanded workspace path."""
return Path(self.agents.defaults.workspace).expanduser()

def _match_provider(self, model: str | None = None) -> tuple["ProviderConfig | None", str | None]:
"""Match provider config and its registry name. Returns (config, spec_name)."""
from nanobot.providers.registry import PROVIDERS

model_lower = (model or self.agents.defaults.model).lower()

# Match by keyword (order follows PROVIDERS registry)

@ -235,12 +292,13 @@ class Config(BaseSettings):
p = getattr(self.providers, spec.name, None)
if p and any(kw in model_lower for kw in spec.keywords):
# For local providers (Ollama, AirLLM), allow empty api_key or "dummy"
# For OAuth providers, no API key needed
# For other providers, require api_key
if spec.is_local:
# Local providers can work with empty/dummy api_key
if p.api_key or p.api_base or spec.name == "airllm":
return p, spec.name
elif p.api_key:
elif spec.is_oauth or p.api_key:
return p, spec.name

# Check local providers by api_base detection (for explicit config)

@ -256,7 +314,10 @@ class Config(BaseSettings):
return p, spec.name

# Fallback: gateways first, then others (follows registry order)
# OAuth providers are NOT valid fallbacks — they require explicit model selection
for spec in PROVIDERS:
if spec.is_oauth:
continue
p = getattr(self.providers, spec.name, None)
if p:
# For local providers, allow empty/dummy api_key

@ -280,10 +341,11 @@ class Config(BaseSettings):
"""Get API key for the given model. Falls back to first available key."""
p = self.get_provider(model)
return p.api_key if p else None

def get_api_base(self, model: str | None = None) -> str | None:
"""Get API base URL for the given model. Applies default URLs for known gateways."""
from nanobot.providers.registry import find_by_name

p, name = self._match_provider(model)
if p and p.api_base:
return p.api_base

@ -295,8 +357,5 @@ class Config(BaseSettings):
if spec and spec.is_gateway and spec.default_api_base:
return spec.default_api_base
return None

model_config = ConfigDict(
env_prefix="NANOBOT_",
env_nested_delimiter="__"
)

model_config = ConfigDict(env_prefix="NANOBOT_", env_nested_delimiter="__")

@ -4,6 +4,7 @@ import asyncio
import json
import time
import uuid
from datetime import datetime
from pathlib import Path
from typing import Any, Callable, Coroutine

@ -30,9 +31,14 @@ def _compute_next_run(schedule: CronSchedule, now_ms: int) -> int | None:
if schedule.kind == "cron" and schedule.expr:
try:
from croniter import croniter
cron = croniter(schedule.expr, time.time())
next_time = cron.get_next()
return int(next_time * 1000)
from zoneinfo import ZoneInfo
# Use caller-provided reference time for deterministic scheduling
base_time = now_ms / 1000
tz = ZoneInfo(schedule.tz) if schedule.tz else datetime.now().astimezone().tzinfo
base_dt = datetime.fromtimestamp(base_time, tz=tz)
cron = croniter(schedule.expr, base_dt)
next_dt = cron.get_next(datetime)
return int(next_dt.timestamp() * 1000)
except Exception:
return None
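The rewrite fixes two things at once: the next run is computed from the caller-supplied `now_ms` rather than wall-clock `time.time()` (deterministic under test and replay), and the cron expression is evaluated in the job's IANA timezone. A standalone sketch of the same croniter pattern, with illustrative values:

```python
from datetime import datetime
from zoneinfo import ZoneInfo
from croniter import croniter

now_ms = 1_760_000_000_000  # caller-supplied reference time in milliseconds
tz = ZoneInfo("America/Vancouver")
base_dt = datetime.fromtimestamp(now_ms / 1000, tz=tz)

# "0 9 * * *" = 09:00 daily, evaluated in the job's timezone (DST-aware)
next_dt = croniter("0 9 * * *", base_dt).get_next(datetime)
next_run_at_ms = int(next_dt.timestamp() * 1000)
```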
@ -2,9 +2,10 @@

from nanobot.providers.base import LLMProvider, LLMResponse
from nanobot.providers.litellm_provider import LiteLLMProvider
from nanobot.providers.openai_codex_provider import OpenAICodexProvider

try:
from nanobot.providers.airllm_provider import AirLLMProvider
__all__ = ["LLMProvider", "LLMResponse", "LiteLLMProvider", "AirLLMProvider"]
__all__ = ["LLMProvider", "LLMResponse", "LiteLLMProvider", "AirLLMProvider", "OpenAICodexProvider"]
except ImportError:
__all__ = ["LLMProvider", "LLMResponse", "LiteLLMProvider"]
__all__ = ["LLMProvider", "LLMResponse", "LiteLLMProvider", "OpenAICodexProvider"]

47
nanobot/providers/custom_provider.py
Normal file
@ -0,0 +1,47 @@
"""Direct OpenAI-compatible provider — bypasses LiteLLM."""

from __future__ import annotations

from typing import Any

import json_repair
from openai import AsyncOpenAI

from nanobot.providers.base import LLMProvider, LLMResponse, ToolCallRequest


class CustomProvider(LLMProvider):

def __init__(self, api_key: str = "no-key", api_base: str = "http://localhost:8000/v1", default_model: str = "default"):
super().__init__(api_key, api_base)
self.default_model = default_model
self._client = AsyncOpenAI(api_key=api_key, base_url=api_base)

async def chat(self, messages: list[dict[str, Any]], tools: list[dict[str, Any]] | None = None,
model: str | None = None, max_tokens: int = 4096, temperature: float = 0.7) -> LLMResponse:
kwargs: dict[str, Any] = {"model": model or self.default_model, "messages": messages,
"max_tokens": max(1, max_tokens), "temperature": temperature}
if tools:
kwargs.update(tools=tools, tool_choice="auto")
try:
return self._parse(await self._client.chat.completions.create(**kwargs))
except Exception as e:
return LLMResponse(content=f"Error: {e}", finish_reason="error")

def _parse(self, response: Any) -> LLMResponse:
choice = response.choices[0]
msg = choice.message
tool_calls = [
ToolCallRequest(id=tc.id, name=tc.function.name,
arguments=json_repair.loads(tc.function.arguments) if isinstance(tc.function.arguments, str) else tc.function.arguments)
for tc in (msg.tool_calls or [])
]
u = response.usage
return LLMResponse(
content=msg.content, tool_calls=tool_calls, finish_reason=choice.finish_reason or "stop",
usage={"prompt_tokens": u.prompt_tokens, "completion_tokens": u.completion_tokens, "total_tokens": u.total_tokens} if u else {},
reasoning_content=getattr(msg, "reasoning_content", None),
)

def get_default_model(self) -> str:
return self.default_model
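Usage is direct against any OpenAI-compatible server (vLLM, llama.cpp, LM Studio, and the like); the base URL and model name below are placeholders:

```python
import asyncio

from nanobot.providers.custom_provider import CustomProvider

provider = CustomProvider(
    api_key="no-key",  # many local servers ignore the key entirely
    api_base="http://localhost:8000/v1",
    default_model="my-local-model",
)
response = asyncio.run(provider.chat([{"role": "user", "content": "ping"}]))
print(response.content)
```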
@ -1,6 +1,7 @@
"""LiteLLM provider implementation for multi-provider support."""

import json
import json_repair
import os
from typing import Any

@ -54,6 +55,9 @@ class LiteLLMProvider(LLMProvider):
spec = self._gateway or find_by_model(model)
if not spec:
return
if not spec.env_key:
# OAuth/provider-only specs (for example: openai_codex)
return

# Gateway/local overrides existing env; standard provider doesn't
if self._gateway:

@ -122,6 +126,10 @@ class LiteLLMProvider(LLMProvider):
"""
model = self._resolve_model(model or self.default_model)

# Clamp max_tokens to at least 1 — negative or zero values cause
# LiteLLM to reject the request with "max_tokens must be at least 1".
max_tokens = max(1, max_tokens)

kwargs: dict[str, Any] = {
"model": model,
"messages": messages,

@ -175,10 +183,7 @@ class LiteLLMProvider(LLMProvider):
# Parse arguments from JSON string if needed
args = tc.function.arguments
if isinstance(args, str):
try:
args = json.loads(args)
except json.JSONDecodeError:
args = {"raw": args}
args = json_repair.loads(args)

tool_calls.append(ToolCallRequest(
id=tc.id,

312
nanobot/providers/openai_codex_provider.py
Normal file
@ -0,0 +1,312 @@
"""OpenAI Codex Responses Provider."""

from __future__ import annotations

import asyncio
import hashlib
import json
from typing import Any, AsyncGenerator

import httpx
from loguru import logger

from oauth_cli_kit import get_token as get_codex_token
from nanobot.providers.base import LLMProvider, LLMResponse, ToolCallRequest

DEFAULT_CODEX_URL = "https://chatgpt.com/backend-api/codex/responses"
DEFAULT_ORIGINATOR = "nanobot"


class OpenAICodexProvider(LLMProvider):
"""Use Codex OAuth to call the Responses API."""

def __init__(self, default_model: str = "openai-codex/gpt-5.1-codex"):
super().__init__(api_key=None, api_base=None)
self.default_model = default_model

async def chat(
self,
messages: list[dict[str, Any]],
tools: list[dict[str, Any]] | None = None,
model: str | None = None,
max_tokens: int = 4096,
temperature: float = 0.7,
) -> LLMResponse:
model = model or self.default_model
system_prompt, input_items = _convert_messages(messages)

token = await asyncio.to_thread(get_codex_token)
headers = _build_headers(token.account_id, token.access)

body: dict[str, Any] = {
"model": _strip_model_prefix(model),
"store": False,
"stream": True,
"instructions": system_prompt,
"input": input_items,
"text": {"verbosity": "medium"},
"include": ["reasoning.encrypted_content"],
"prompt_cache_key": _prompt_cache_key(messages),
"tool_choice": "auto",
"parallel_tool_calls": True,
}

if tools:
body["tools"] = _convert_tools(tools)

url = DEFAULT_CODEX_URL

try:
try:
content, tool_calls, finish_reason = await _request_codex(url, headers, body, verify=True)
except Exception as e:
if "CERTIFICATE_VERIFY_FAILED" not in str(e):
raise
logger.warning("SSL certificate verification failed for Codex API; retrying with verify=False")
content, tool_calls, finish_reason = await _request_codex(url, headers, body, verify=False)
return LLMResponse(
content=content,
tool_calls=tool_calls,
finish_reason=finish_reason,
)
except Exception as e:
return LLMResponse(
content=f"Error calling Codex: {str(e)}",
finish_reason="error",
)

def get_default_model(self) -> str:
return self.default_model


def _strip_model_prefix(model: str) -> str:
if model.startswith("openai-codex/"):
return model.split("/", 1)[1]
return model


def _build_headers(account_id: str, token: str) -> dict[str, str]:
return {
"Authorization": f"Bearer {token}",
"chatgpt-account-id": account_id,
"OpenAI-Beta": "responses=experimental",
"originator": DEFAULT_ORIGINATOR,
"User-Agent": "nanobot (python)",
"accept": "text/event-stream",
"content-type": "application/json",
}


async def _request_codex(
url: str,
headers: dict[str, str],
body: dict[str, Any],
verify: bool,
) -> tuple[str, list[ToolCallRequest], str]:
async with httpx.AsyncClient(timeout=60.0, verify=verify) as client:
async with client.stream("POST", url, headers=headers, json=body) as response:
if response.status_code != 200:
text = await response.aread()
raise RuntimeError(_friendly_error(response.status_code, text.decode("utf-8", "ignore")))
return await _consume_sse(response)


def _convert_tools(tools: list[dict[str, Any]]) -> list[dict[str, Any]]:
"""Convert OpenAI function-calling schema to Codex flat format."""
converted: list[dict[str, Any]] = []
for tool in tools:
fn = (tool.get("function") or {}) if tool.get("type") == "function" else tool
name = fn.get("name")
if not name:
continue
params = fn.get("parameters") or {}
converted.append({
"type": "function",
"name": name,
"description": fn.get("description") or "",
"parameters": params if isinstance(params, dict) else {},
})
return converted
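To make the flattening concrete: the nested OpenAI shape below becomes a single top-level Codex entry (values illustrative):

```python
openai_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather",
        "parameters": {"type": "object", "properties": {"city": {"type": "string"}}},
    },
}

# _convert_tools([openai_tool]) yields the flat shape the Codex API expects:
# [{"type": "function", "name": "get_weather",
#   "description": "Look up current weather",
#   "parameters": {"type": "object", "properties": {"city": {"type": "string"}}}}]
```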
def _convert_messages(messages: list[dict[str, Any]]) -> tuple[str, list[dict[str, Any]]]:
system_prompt = ""
input_items: list[dict[str, Any]] = []

for idx, msg in enumerate(messages):
role = msg.get("role")
content = msg.get("content")

if role == "system":
system_prompt = content if isinstance(content, str) else ""
continue

if role == "user":
input_items.append(_convert_user_message(content))
continue

if role == "assistant":
# Handle text first.
if isinstance(content, str) and content:
input_items.append(
{
"type": "message",
"role": "assistant",
"content": [{"type": "output_text", "text": content}],
"status": "completed",
"id": f"msg_{idx}",
}
)
# Then handle tool calls.
for tool_call in msg.get("tool_calls", []) or []:
fn = tool_call.get("function") or {}
call_id, item_id = _split_tool_call_id(tool_call.get("id"))
call_id = call_id or f"call_{idx}"
item_id = item_id or f"fc_{idx}"
input_items.append(
{
"type": "function_call",
"id": item_id,
"call_id": call_id,
"name": fn.get("name"),
"arguments": fn.get("arguments") or "{}",
}
)
continue

if role == "tool":
call_id, _ = _split_tool_call_id(msg.get("tool_call_id"))
output_text = content if isinstance(content, str) else json.dumps(content)
input_items.append(
{
"type": "function_call_output",
"call_id": call_id,
"output": output_text,
}
)
continue

return system_prompt, input_items


def _convert_user_message(content: Any) -> dict[str, Any]:
if isinstance(content, str):
return {"role": "user", "content": [{"type": "input_text", "text": content}]}
if isinstance(content, list):
converted: list[dict[str, Any]] = []
for item in content:
if not isinstance(item, dict):
continue
if item.get("type") == "text":
converted.append({"type": "input_text", "text": item.get("text", "")})
elif item.get("type") == "image_url":
url = (item.get("image_url") or {}).get("url")
if url:
converted.append({"type": "input_image", "image_url": url, "detail": "auto"})
if converted:
return {"role": "user", "content": converted}
return {"role": "user", "content": [{"type": "input_text", "text": ""}]}


def _split_tool_call_id(tool_call_id: Any) -> tuple[str, str | None]:
if isinstance(tool_call_id, str) and tool_call_id:
if "|" in tool_call_id:
call_id, item_id = tool_call_id.split("|", 1)
return call_id, item_id or None
return tool_call_id, None
return "call_0", None
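The composite `call_id|item_id` string lets the Responses API's two separate identifiers survive a round trip through the OpenAI-style message format, which only carries one ID per tool call. Expected behavior, per the code above:

```python
assert _split_tool_call_id("call_abc|fc_123") == ("call_abc", "fc_123")
assert _split_tool_call_id("call_abc") == ("call_abc", None)   # no item id encoded
assert _split_tool_call_id(None) == ("call_0", None)           # fallback for missing ids
```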
def _prompt_cache_key(messages: list[dict[str, Any]]) -> str:
raw = json.dumps(messages, ensure_ascii=True, sort_keys=True)
return hashlib.sha256(raw.encode("utf-8")).hexdigest()


async def _iter_sse(response: httpx.Response) -> AsyncGenerator[dict[str, Any], None]:
buffer: list[str] = []
async for line in response.aiter_lines():
if line == "":
if buffer:
data_lines = [l[5:].strip() for l in buffer if l.startswith("data:")]
buffer = []
if not data_lines:
continue
data = "\n".join(data_lines).strip()
if not data or data == "[DONE]":
continue
try:
yield json.loads(data)
except Exception:
continue
continue
buffer.append(line)


async def _consume_sse(response: httpx.Response) -> tuple[str, list[ToolCallRequest], str]:
content = ""
tool_calls: list[ToolCallRequest] = []
tool_call_buffers: dict[str, dict[str, Any]] = {}
finish_reason = "stop"

async for event in _iter_sse(response):
event_type = event.get("type")
if event_type == "response.output_item.added":
item = event.get("item") or {}
if item.get("type") == "function_call":
call_id = item.get("call_id")
if not call_id:
continue
tool_call_buffers[call_id] = {
"id": item.get("id") or "fc_0",
"name": item.get("name"),
"arguments": item.get("arguments") or "",
}
elif event_type == "response.output_text.delta":
content += event.get("delta") or ""
elif event_type == "response.function_call_arguments.delta":
call_id = event.get("call_id")
if call_id and call_id in tool_call_buffers:
tool_call_buffers[call_id]["arguments"] += event.get("delta") or ""
elif event_type == "response.function_call_arguments.done":
call_id = event.get("call_id")
if call_id and call_id in tool_call_buffers:
tool_call_buffers[call_id]["arguments"] = event.get("arguments") or ""
elif event_type == "response.output_item.done":
item = event.get("item") or {}
if item.get("type") == "function_call":
call_id = item.get("call_id")
if not call_id:
continue
buf = tool_call_buffers.get(call_id) or {}
args_raw = buf.get("arguments") or item.get("arguments") or "{}"
try:
args = json.loads(args_raw)
except Exception:
args = {"raw": args_raw}
tool_calls.append(
ToolCallRequest(
id=f"{call_id}|{buf.get('id') or item.get('id') or 'fc_0'}",
name=buf.get("name") or item.get("name"),
arguments=args,
)
)
elif event_type == "response.completed":
status = (event.get("response") or {}).get("status")
finish_reason = _map_finish_reason(status)
elif event_type in {"error", "response.failed"}:
raise RuntimeError("Codex response failed")

return content, tool_calls, finish_reason


_FINISH_REASON_MAP = {"completed": "stop", "incomplete": "length", "failed": "error", "cancelled": "error"}


def _map_finish_reason(status: str | None) -> str:
return _FINISH_REASON_MAP.get(status or "completed", "stop")


def _friendly_error(status_code: int, raw: str) -> str:
if status_code == 429:
return "ChatGPT usage quota exceeded or rate limit triggered. Please try again later."
return f"HTTP {status_code}: {raw}"

@ -51,6 +51,12 @@ class ProviderSpec:
# per-model param overrides, e.g. (("kimi-k2.5", {"temperature": 1.0}),)
model_overrides: tuple[tuple[str, dict[str, Any]], ...] = ()

# OAuth-based providers (e.g., OpenAI Codex) don't use API keys
is_oauth: bool = False # if True, uses OAuth flow instead of API key

# Direct providers bypass LiteLLM entirely (e.g., CustomProvider)
is_direct: bool = False

@property
def label(self) -> str:
return self.display_name or self.name.title()

@ -62,8 +68,151 @@ class ProviderSpec:

PROVIDERS: tuple[ProviderSpec, ...] = (

# === Custom (direct OpenAI-compatible endpoint, bypasses LiteLLM) ======
ProviderSpec(
name="custom",
keywords=(),
env_key="",
display_name="Custom",
litellm_prefix="",
is_direct=True,
),

# === Gateways (detected by api_key / api_base, not model name) =========
# Gateways can route any model, so they win in fallback.

# OpenRouter: global gateway, keys start with "sk-or-"
ProviderSpec(
name="openrouter",
keywords=("openrouter",),
env_key="OPENROUTER_API_KEY",
display_name="OpenRouter",
litellm_prefix="openrouter", # claude-3 → openrouter/claude-3
skip_prefixes=(),
env_extras=(),
is_gateway=True,
is_local=False,
detect_by_key_prefix="sk-or-",
detect_by_base_keyword="openrouter",
default_api_base="https://openrouter.ai/api/v1",
strip_model_prefix=False,
model_overrides=(),
),

# AiHubMix: global gateway, OpenAI-compatible interface.
# strip_model_prefix=True: it doesn't understand "anthropic/claude-3",
# so we strip to bare "claude-3" then re-prefix as "openai/claude-3".
ProviderSpec(
name="aihubmix",
keywords=("aihubmix",),
env_key="OPENAI_API_KEY", # OpenAI-compatible
display_name="AiHubMix",
litellm_prefix="openai", # → openai/{model}
skip_prefixes=(),
env_extras=(),
is_gateway=True,
is_local=False,
detect_by_key_prefix="",
detect_by_base_keyword="aihubmix",
default_api_base="https://aihubmix.com/v1",
strip_model_prefix=True, # anthropic/claude-3 → claude-3 → openai/claude-3
model_overrides=(),
),

# SiliconFlow (硅基流动): OpenAI-compatible gateway, model names keep org prefix
ProviderSpec(
name="siliconflow",
keywords=("siliconflow",),
env_key="OPENAI_API_KEY",
display_name="SiliconFlow",
litellm_prefix="openai",
skip_prefixes=(),
env_extras=(),
is_gateway=True,
is_local=False,
detect_by_key_prefix="",
detect_by_base_keyword="siliconflow",
default_api_base="https://api.siliconflow.cn/v1",
strip_model_prefix=False,
model_overrides=(),
),

# === Standard providers (matched by model-name keywords) ===============

# Anthropic: LiteLLM recognizes "claude-*" natively, no prefix needed.
ProviderSpec(
name="anthropic",
keywords=("anthropic", "claude"),
env_key="ANTHROPIC_API_KEY",
display_name="Anthropic",
litellm_prefix="",
skip_prefixes=(),
env_extras=(),
is_gateway=False,
is_local=False,
detect_by_key_prefix="",
detect_by_base_keyword="",
default_api_base="",
strip_model_prefix=False,
model_overrides=(),
),

# OpenAI: LiteLLM recognizes "gpt-*" natively, no prefix needed.
ProviderSpec(
name="openai",
keywords=("openai", "gpt"),
env_key="OPENAI_API_KEY",
display_name="OpenAI",
litellm_prefix="",
skip_prefixes=(),
env_extras=(),
is_gateway=False,
is_local=False,
detect_by_key_prefix="",
detect_by_base_keyword="",
default_api_base="",
strip_model_prefix=False,
model_overrides=(),
),

# OpenAI Codex: uses OAuth, not API key.
ProviderSpec(
name="openai_codex",
keywords=("openai-codex", "codex"),
env_key="", # OAuth-based, no API key
display_name="OpenAI Codex",
litellm_prefix="", # Not routed through LiteLLM
skip_prefixes=(),
env_extras=(),
is_gateway=False,
is_local=False,
detect_by_key_prefix="",
detect_by_base_keyword="codex",
default_api_base="https://chatgpt.com/backend-api",
strip_model_prefix=False,
model_overrides=(),
is_oauth=True, # OAuth-based authentication
),

# GitHub Copilot: uses OAuth, not API key.
ProviderSpec(
name="github_copilot",
keywords=("github_copilot", "copilot"),
env_key="", # OAuth-based, no API key
display_name="Github Copilot",
litellm_prefix="github_copilot", # github_copilot/model → github_copilot/model
skip_prefixes=("github_copilot/",),
env_extras=(),
is_gateway=False,
is_local=False,
detect_by_key_prefix="",
detect_by_base_keyword="",
default_api_base="",
strip_model_prefix=False,
model_overrides=(),
is_oauth=True, # OAuth-based authentication
),

# DeepSeek: needs "deepseek/" prefix for LiteLLM routing.
# Can be used with local models or API.
ProviderSpec(

@ -15,15 +15,20 @@ from nanobot.utils.helpers import ensure_dir, safe_filename
class Session:
"""
A conversation session.

Stores messages in JSONL format for easy reading and persistence.

Important: Messages are append-only for LLM cache efficiency.
The consolidation process writes summaries to MEMORY.md/HISTORY.md
but does NOT modify the messages list or get_history() output.
"""

key: str # channel:chat_id
messages: list[dict[str, Any]] = field(default_factory=list)
created_at: datetime = field(default_factory=datetime.now)
updated_at: datetime = field(default_factory=datetime.now)
metadata: dict[str, Any] = field(default_factory=dict)
last_consolidated: int = 0 # Number of messages already consolidated to files

def add_message(self, role: str, content: str, **kwargs: Any) -> None:
"""Add a message to the session."""

@ -36,44 +41,46 @@ class Session:
self.messages.append(msg)
self.updated_at = datetime.now()

def get_history(self, max_messages: int = 50) -> list[dict[str, Any]]:
"""
Get message history for LLM context.

Args:
max_messages: Maximum messages to return.

Returns:
List of messages in LLM format.
"""
# Get recent messages
recent = self.messages[-max_messages:] if len(self.messages) > max_messages else self.messages

# Convert to LLM format (just role and content)
return [{"role": m["role"], "content": m["content"]} for m in recent]
def get_history(self, max_messages: int = 500) -> list[dict[str, Any]]:
"""Get recent messages in LLM format, preserving tool metadata."""
out: list[dict[str, Any]] = []
for m in self.messages[-max_messages:]:
entry: dict[str, Any] = {"role": m["role"], "content": m.get("content", "")}
for k in ("tool_calls", "tool_call_id", "name"):
if k in m:
entry[k] = m[k]
out.append(entry)
return out
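Preserving `tool_calls`, `tool_call_id`, and `name` matters because an assistant tool call and its tool result must round-trip intact for the model to resume correctly. A sketch, assuming `add_message` folds its `**kwargs` into the stored message as the diff suggests:

```python
session = Session(key="cli:demo")
session.add_message("assistant", "", tool_calls=[
    {"id": "call_1", "type": "function",
     "function": {"name": "calc", "arguments": "{}"}},
])
session.add_message("tool", "42", tool_call_id="call_1", name="calc")

history = session.get_history()
assert history[-1]["tool_call_id"] == "call_1"  # linkage preserved
```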
def clear(self) -> None:
|
||||
"""Clear all messages in the session."""
|
||||
"""Clear all messages and reset session to initial state."""
|
||||
self.messages = []
|
||||
self.last_consolidated = 0
|
||||
self.updated_at = datetime.now()
|
||||
|
||||
|
||||
class SessionManager:
|
||||
"""
|
||||
Manages conversation sessions.
|
||||
|
||||
|
||||
Sessions are stored as JSONL files in the sessions directory.
|
||||
"""
|
||||
|
||||
|
||||
def __init__(self, workspace: Path):
|
||||
self.workspace = workspace
|
||||
self.sessions_dir = ensure_dir(Path.home() / ".nanobot" / "sessions")
|
||||
self.sessions_dir = ensure_dir(self.workspace / "sessions")
|
||||
self.legacy_sessions_dir = Path.home() / ".nanobot" / "sessions"
|
||||
self._cache: dict[str, Session] = {}
|
||||
|
||||
def _get_session_path(self, key: str) -> Path:
|
||||
"""Get the file path for a session."""
|
||||
safe_key = safe_filename(key.replace(":", "_"))
|
||||
return self.sessions_dir / f"{safe_key}.jsonl"
|
||||
|
||||
def _get_legacy_session_path(self, key: str) -> Path:
|
||||
"""Legacy global session path (~/.nanobot/sessions/)."""
|
||||
safe_key = safe_filename(key.replace(":", "_"))
|
||||
return self.legacy_sessions_dir / f"{safe_key}.jsonl"
|
||||
|
||||
def get_or_create(self, key: str) -> Session:
|
||||
"""
|
||||
@ -85,11 +92,9 @@ class SessionManager:
|
||||
Returns:
|
||||
The session.
|
||||
"""
|
||||
# Check cache
|
||||
if key in self._cache:
|
||||
return self._cache[key]
|
||||
|
||||
# Try to load from disk
|
||||
session = self._load(key)
|
||||
if session is None:
|
||||
session = Session(key=key)
|
||||
@ -100,34 +105,43 @@ class SessionManager:
|
||||
def _load(self, key: str) -> Session | None:
|
||||
"""Load a session from disk."""
|
||||
path = self._get_session_path(key)
|
||||
|
||||
if not path.exists():
|
||||
legacy_path = self._get_legacy_session_path(key)
|
||||
if legacy_path.exists():
|
||||
import shutil
|
||||
shutil.move(str(legacy_path), str(path))
|
||||
logger.info(f"Migrated session {key} from legacy path")
|
||||
|
||||
if not path.exists():
|
||||
return None
|
||||
|
||||
|
||||
try:
|
||||
messages = []
|
||||
metadata = {}
|
||||
created_at = None
|
||||
|
||||
last_consolidated = 0
|
||||
|
||||
with open(path) as f:
|
||||
for line in f:
|
||||
line = line.strip()
|
||||
if not line:
|
||||
continue
|
||||
|
||||
|
||||
data = json.loads(line)
|
||||
|
||||
|
||||
if data.get("_type") == "metadata":
|
||||
metadata = data.get("metadata", {})
|
||||
created_at = datetime.fromisoformat(data["created_at"]) if data.get("created_at") else None
|
||||
last_consolidated = data.get("last_consolidated", 0)
|
||||
else:
|
||||
messages.append(data)
|
||||
|
||||
|
||||
return Session(
|
||||
key=key,
|
||||
messages=messages,
|
||||
created_at=created_at or datetime.now(),
|
||||
metadata=metadata
|
||||
metadata=metadata,
|
||||
last_consolidated=last_consolidated
|
||||
)
|
||||
except Exception as e:
|
||||
logger.warning(f"Failed to load session {key}: {e}")
|
||||
@ -136,42 +150,24 @@ class SessionManager:
|
||||
def save(self, session: Session) -> None:
|
||||
"""Save a session to disk."""
|
||||
path = self._get_session_path(session.key)
|
||||
|
||||
|
||||
with open(path, "w") as f:
|
||||
# Write metadata first
|
||||
metadata_line = {
|
||||
"_type": "metadata",
|
||||
"created_at": session.created_at.isoformat(),
|
||||
"updated_at": session.updated_at.isoformat(),
|
||||
"metadata": session.metadata
|
||||
"metadata": session.metadata,
|
||||
"last_consolidated": session.last_consolidated
|
||||
}
|
||||
f.write(json.dumps(metadata_line) + "\n")
|
||||
|
||||
# Write messages
|
||||
for msg in session.messages:
|
||||
f.write(json.dumps(msg) + "\n")
|
||||
|
||||
|
||||
self._cache[session.key] = session
|
||||
|
||||
def delete(self, key: str) -> bool:
|
||||
"""
|
||||
Delete a session.
|
||||
|
||||
Args:
|
||||
key: Session key.
|
||||
|
||||
Returns:
|
||||
True if deleted, False if not found.
|
||||
"""
|
||||
# Remove from cache
|
||||
def invalidate(self, key: str) -> None:
|
||||
"""Remove a session from the in-memory cache."""
|
||||
self._cache.pop(key, None)
|
||||
|
||||
# Remove file
|
||||
path = self._get_session_path(key)
|
||||
if path.exists():
|
||||
path.unlink()
|
||||
return True
|
||||
return False
|
||||
|
||||
def list_sessions(self) -> list[dict[str, Any]]:
|
||||
"""
|
||||
|
||||
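Since the new `get_history()` passes `tool_calls`, `tool_call_id`, and `name` through to the LLM payload, a tool-use turn now survives a round trip through the session. A minimal sketch (field values are illustrative, and it assumes `add_message` stores its keyword arguments on the message dict, as the `get_history` loop above expects):

```python
# Uses the Session class from the diff above; values are illustrative.
session = Session(key="cli:demo")
session.add_message("user", "What's 2+5?")
session.add_message(
    "assistant", "",
    tool_calls=[{"id": "call_1", "type": "function",
                 "function": {"name": "calc", "arguments": '{"expr": "2+5"}'}}],
)
session.add_message("tool", "7", tool_call_id="call_1", name="calc")

history = session.get_history()
assert history[1]["tool_calls"][0]["id"] == "call_1"
assert history[2] == {"role": "tool", "content": "7",
                      "tool_call_id": "call_1", "name": "calc"}
```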
@ -21,4 +21,5 @@ The skill format and metadata structure follow OpenClaw's conventions to maintai
| `weather` | Get weather info using wttr.in and Open-Meteo |
| `summarize` | Summarize URLs, files, and YouTube videos |
| `tmux` | Remote-control tmux sessions |
| `clawhub` | Search and install skills from ClawHub registry |
| `skill-creator` | Create new skills |
53 nanobot/skills/clawhub/SKILL.md Normal file
@ -0,0 +1,53 @@
---
name: clawhub
description: Search and install agent skills from ClawHub, the public skill registry.
homepage: https://clawhub.ai
metadata: {"nanobot":{"emoji":"🦞"}}
---

# ClawHub

Public skill registry for AI agents. Search by natural language (vector search).

## When to use

Use this skill when the user asks any of:
- "find a skill for …"
- "search for skills"
- "install a skill"
- "what skills are available?"
- "update my skills"

## Search

```bash
npx --yes clawhub@latest search "web scraping" --limit 5
```

## Install

```bash
npx --yes clawhub@latest install <slug> --workdir ~/.nanobot/workspace
```

Replace `<slug>` with the skill name from search results. This places the skill into `~/.nanobot/workspace/skills/`, where nanobot loads workspace skills from. Always include `--workdir`.

## Update

```bash
npx --yes clawhub@latest update --all --workdir ~/.nanobot/workspace
```

## List installed

```bash
npx --yes clawhub@latest list --workdir ~/.nanobot/workspace
```

## Notes

- Requires Node.js (`npx` comes with it).
- No API key needed for search and install.
- Login (`npx --yes clawhub@latest login`) is only required for publishing.
- `--workdir ~/.nanobot/workspace` is critical — without it, skills install to the current directory instead of the nanobot workspace.
- After install, remind the user to start a new session to load the skill.
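Putting the commands together, a typical end-to-end flow looks like this (the `pdf-summarizer` slug is purely illustrative; use whatever slug your search actually returns):

```bash
# Search for candidate skills, pick a slug from the results,
# then install it into the nanobot workspace and verify.
npx --yes clawhub@latest search "summarize PDFs" --limit 5
npx --yes clawhub@latest install pdf-summarizer --workdir ~/.nanobot/workspace
npx --yes clawhub@latest list --workdir ~/.nanobot/workspace
```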
@ -7,10 +7,11 @@ description: Schedule reminders and recurring tasks.

Use the `cron` tool to schedule reminders or recurring tasks.

## Two Modes
## Three Modes

1. **Reminder** - message is sent directly to user
2. **Task** - message is a task description, agent executes and sends result
3. **One-time** - runs once at a specific time, then auto-deletes

## Examples

@ -24,6 +25,16 @@ Dynamic task (agent executes each time):
cron(action="add", message="Check HKUDS/nanobot GitHub stars and report", every_seconds=600)
```

One-time scheduled task (compute ISO datetime from current time):
```
cron(action="add", message="Remind me about the meeting", at="<ISO datetime>")
```

Timezone-aware cron:
```
cron(action="add", message="Morning standup", cron_expr="0 9 * * 1-5", tz="America/Vancouver")
```

List/remove:
```
cron(action="list")
@ -38,3 +49,9 @@ cron(action="remove", job_id="abc123")
| every hour | every_seconds: 3600 |
| every day at 8am | cron_expr: "0 8 * * *" |
| weekdays at 5pm | cron_expr: "0 17 * * 1-5" |
| 9am Vancouver time daily | cron_expr: "0 9 * * *", tz: "America/Vancouver" |
| at a specific time | at: ISO datetime string (compute from current time) |

## Timezone

Use `tz` with `cron_expr` to schedule in a specific IANA timezone. Without `tz`, the server's local timezone is used.
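Since `at=` expects a concrete ISO datetime rather than a relative phrase, the agent first has to compute one from the current time. A minimal sketch of that computation (the two-hour offset is just an example):

```python
from datetime import datetime, timedelta

# "Remind me in two hours" -> concrete ISO datetime for at=
when = (datetime.now() + timedelta(hours=2)).replace(microsecond=0)
print(when.isoformat())  # e.g. 2026-02-17T15:30:00
```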
31 nanobot/skills/memory/SKILL.md Normal file
@ -0,0 +1,31 @@
---
name: memory
description: Two-layer memory system with grep-based recall.
always: true
---

# Memory

## Structure

- `memory/MEMORY.md` — Long-term facts (preferences, project context, relationships). Always loaded into your context.
- `memory/HISTORY.md` — Append-only event log. NOT loaded into context. Search it with grep.

## Search Past Events

```bash
grep -i "keyword" memory/HISTORY.md
```

Use the `exec` tool to run grep. Combine patterns: `grep -iE "meeting|deadline" memory/HISTORY.md`

## When to Update MEMORY.md

Write important facts immediately using `edit_file` or `write_file`:
- User preferences ("I prefer dark mode")
- Project context ("The API uses OAuth2")
- Relationships ("Alice is the project lead")

## Auto-consolidation

Old conversations are automatically summarized and appended to HISTORY.md when the session grows large. Long-term facts are extracted to MEMORY.md. You don't need to manage this.
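Under the hood, consolidation tracks how far it has already read with a `last_consolidated` offset rather than rewriting the session. A minimal sketch of the slice logic (the constants mirror the `MEMORY_WINDOW`/`KEEP_COUNT` values exercised by the tests later in this commit):

```python
MEMORY_WINDOW = 50               # consolidate once a session exceeds this
KEEP_COUNT = MEMORY_WINDOW // 2  # the 25 most recent messages stay verbatim

def messages_to_consolidate(messages: list[dict], last_consolidated: int) -> list[dict]:
    # Everything between the already-consolidated prefix and the recent
    # window; an empty slice means there is nothing new to summarize.
    return messages[last_consolidated:-KEEP_COUNT]
```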
@ -37,23 +37,12 @@ def get_sessions_path() -> Path:
    return ensure_dir(get_data_path() / "sessions")


def get_memory_path(workspace: Path | None = None) -> Path:
    """Get the memory directory within the workspace."""
    ws = workspace or get_workspace_path()
    return ensure_dir(ws / "memory")


def get_skills_path(workspace: Path | None = None) -> Path:
    """Get the skills directory within the workspace."""
    ws = workspace or get_workspace_path()
    return ensure_dir(ws / "skills")


def today_date() -> str:
    """Get today's date in YYYY-MM-DD format."""
    return datetime.now().strftime("%Y-%m-%d")


def timestamp() -> str:
    """Get current timestamp in ISO format."""
    return datetime.now().isoformat()
@ -1,6 +1,6 @@
[project]
name = "nanobot-ai"
version = "0.1.3.post6"
version = "0.1.4"
description = "A lightweight personal AI assistant framework"
requires-python = ">=3.11"
license = {text = "MIT"}
@ -23,7 +23,8 @@ dependencies = [
    "pydantic-settings>=2.0.0",
    "websockets>=12.0",
    "websocket-client>=1.6.0",
    "httpx[socks]>=0.25.0",
    "httpx>=0.25.0",
    "oauth-cli-kit>=0.1.1",
    "loguru>=0.7.0",
    "readability-lxml>=0.8.0",
    "rich>=13.0.0",
@ -35,9 +36,12 @@ dependencies = [
    "python-socketio>=5.11.0",
    "msgpack>=1.0.8",
    "slack-sdk>=3.26.0",
    "slackify-markdown>=0.2.0",
    "qq-botpy>=1.0.0",
    "python-socks[asyncio]>=2.4.0",
    "prompt-toolkit>=3.0.0",
    "mcp>=1.0.0",
    "json-repair>=0.30.0",
]

[project.optional-dependencies]
196 setup_llama3.2_local.py Normal file
@ -0,0 +1,196 @@
#!/usr/bin/env python3
"""
Setup script to configure llama3.2 with AirLLM using local model path (no tokens after initial download).

This script will:
1. Download llama3.2 to a local directory (one-time token needed)
2. Configure nanobot to use the local path (no tokens needed after)
"""

import json
import os
import sys
from pathlib import Path

CONFIG_PATH = Path.home() / ".nanobot" / "config.json"
MODEL_DIR = Path.home() / ".local" / "models" / "llama3.2-3b-instruct"
MODEL_NAME = "meta-llama/Llama-3.2-3B-Instruct"

def load_existing_config():
    """Load existing config or return default."""
    if CONFIG_PATH.exists():
        try:
            with open(CONFIG_PATH) as f:
                return json.load(f)
        except Exception as e:
            print(f"Warning: Could not read existing config: {e}")
            return {}
    return {}

def download_model_with_token():
    """Download model using Hugging Face token."""
    print("\n" + "="*70)
    print("DOWNLOADING LLAMA3.2 MODEL")
    print("="*70)
    print(f"\nThis will download {MODEL_NAME} to:")
    print(f"  {MODEL_DIR}")
    print("\nYou'll need a Hugging Face token (one-time only).")
    print("After download, no tokens will be needed!\n")

    has_token = input("Do you have a Hugging Face token? (y/n): ").strip().lower()

    if has_token != 'y':
        print("\n" + "="*70)
        print("GETTING A HUGGING FACE TOKEN")
        print("="*70)
        print("\n1. Go to: https://huggingface.co/settings/tokens")
        print("2. Click 'New token'")
        print("3. Give it a name (e.g., 'nanobot')")
        print("4. Select 'Read' permission")
        print("5. Click 'Generate token'")
        print("6. Copy the token (starts with 'hf_...')")
        print("\nThen accept the Llama license:")
        print(f"1. Go to: https://huggingface.co/{MODEL_NAME}")
        print("2. Click 'Agree and access repository'")
        print("3. Accept the license terms")
        print("\nRun this script again after getting your token.")
        return False

    hf_token = input("\nEnter your Hugging Face token (starts with 'hf_'): ").strip()
    if not hf_token or not hf_token.startswith('hf_'):
        print("⚠ Error: Token must start with 'hf_'")
        return False

    print(f"\nDownloading {MODEL_NAME}...")
    print("This may take a while depending on your internet connection...")

    try:
        from huggingface_hub import snapshot_download
        import os

        # Set token as environment variable
        os.environ['HF_TOKEN'] = hf_token

        # Download to local directory
        MODEL_DIR.parent.mkdir(parents=True, exist_ok=True)

        snapshot_download(
            repo_id=MODEL_NAME,
            local_dir=str(MODEL_DIR),
            token=hf_token,
            local_dir_use_symlinks=False
        )

        print(f"\n✓ Model downloaded successfully to: {MODEL_DIR}")
        return True

    except ImportError:
        print("\n⚠ Error: huggingface_hub not installed.")
        print("Install it with: pip install huggingface_hub")
        return False
    except Exception as e:
        print(f"\n⚠ Error downloading model: {e}")
        print("\nYou can try downloading manually:")
        print(f"  huggingface-cli download {MODEL_NAME} --local-dir {MODEL_DIR} --token {hf_token[:10]}...")
        return False

def check_model_exists():
    """Check if model is already downloaded locally."""
    # Check for common model files
    required_files = ['config.json', 'tokenizer.json']
    if MODEL_DIR.exists():
        has_files = all((MODEL_DIR / f).exists() for f in required_files)
        if has_files:
            print(f"✓ Found existing model at: {MODEL_DIR}")
            return True
    return False

def configure_for_local_path(config):
    """Configure nanobot to use local model path."""
    # Ensure providers section exists
    if "providers" not in config:
        config["providers"] = {}

    # Ensure agents section exists
    if "agents" not in config:
        config["agents"] = {}
    if "defaults" not in config["agents"]:
        config["agents"]["defaults"] = {}

    # Set up AirLLM provider with local path
    config["providers"]["airllm"] = {
        "apiKey": str(MODEL_DIR),  # Local path - no tokens needed!
        "apiBase": None,
        "extraHeaders": {}  # No hf_token needed for local paths
    }

    # Set default model to local path
    config["agents"]["defaults"]["model"] = str(MODEL_DIR)

    return config

def save_config(config):
    """Save config to file."""
    CONFIG_PATH.parent.mkdir(parents=True, exist_ok=True)
    with open(CONFIG_PATH, 'w') as f:
        json.dump(config, f, indent=2)

    # Set secure permissions
    os.chmod(CONFIG_PATH, 0o600)
    print(f"\n✓ Configuration saved to: {CONFIG_PATH}")

def main():
    """Main setup function."""
    print("\n" + "="*70)
    print("LLAMA3.2 + AIRLLM LOCAL SETUP (NO TOKENS AFTER DOWNLOAD)")
    print("="*70)

    # Check if model already exists
    if check_model_exists():
        print("\nModel already downloaded! Configuring...")
        config = load_existing_config()
        config = configure_for_local_path(config)
        save_config(config)
        print("\n✓ Configuration complete!")
        print(f"  Model path: {MODEL_DIR}")
        print("  No tokens needed - using local model!")
        return

    # Check if user wants to download
    print(f"\nModel not found at: {MODEL_DIR}")
    download = input("\nDownload model now? (y/n): ").strip().lower()

    if download == 'y':
        if download_model_with_token():
            # Configure after successful download
            config = load_existing_config()
            config = configure_for_local_path(config)
            save_config(config)
            print("\n" + "="*70)
            print("SETUP COMPLETE!")
            print("="*70)
            print(f"\n✓ Model downloaded to: {MODEL_DIR}")
            print("✓ Configuration updated to use local path")
            print("\n🎉 No tokens needed anymore - using local model!")
            print("\nTest it with:")
            print("  nanobot agent -m 'Hello, what is 2+5?'")
        else:
            print("\n⚠ Download failed. You can:")
            print("  1. Run this script again")
            print("  2. Download manually and point config to the path")
    else:
        # Just configure for local path (user will provide model)
        print(f"\nConfiguring for local path: {MODEL_DIR}")
        print("Make sure the model is downloaded to this location.")
        config = load_existing_config()
        config = configure_for_local_path(config)
        save_config(config)
        print("\n✓ Configuration saved!")
        print("\nTo download the model manually:")
        print(f"  huggingface-cli download {MODEL_NAME} --local-dir {MODEL_DIR}")
        print("\nOr place your model files in:")
        print(f"  {MODEL_DIR}")

if __name__ == "__main__":
    main()
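For reference, a successful run of `configure_for_local_path` leaves a fragment like this in `~/.nanobot/config.json` (a sketch derived from the function above; the exact model path depends on your home directory):

```json
{
  "providers": {
    "airllm": {
      "apiKey": "/home/you/.local/models/llama3.2-3b-instruct",
      "apiBase": null,
      "extraHeaders": {}
    }
  },
  "agents": {
    "defaults": {
      "model": "/home/you/.local/models/llama3.2-3b-instruct"
    }
  }
}
```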
26 test_airllm.py Normal file
@ -0,0 +1,26 @@
#!/usr/bin/env python3
"""Test script for AirLLM with Nanobot"""
import sys
import traceback

print("Starting test...", file=sys.stderr)
print("Starting test...", file=sys.stdout)

try:
    from nanobot.providers.airllm_wrapper import create_ollama_client
    print("✓ Imported create_ollama_client", file=sys.stderr)

    print("Creating client with model path...", file=sys.stderr)
    client = create_ollama_client('/home/ladmin/.local/models/llama3.2-3b-instruct')
    print("✓ Client created", file=sys.stderr)

    print("Testing generate...", file=sys.stderr)
    result = client.generate('Hello, what is 2+5?', max_tokens=20)
    print(f"✓ Result: {result}", file=sys.stderr)
    print(result)

except Exception as e:
    print(f"✗ ERROR: {e}", file=sys.stderr)
    traceback.print_exc(file=sys.stderr)
    sys.exit(1)
61 test_airllm_fix.py Normal file
@ -0,0 +1,61 @@
#!/usr/bin/env python3
"""Direct test of AirLLM fix with Llama3.2"""
import sys
import os

# Add paths
sys.path.insert(0, '/home/ladmin/code/nanobot/nanobot')
sys.path.insert(0, '/home/ladmin/code/airllm/airllm/air_llm')

# Inject BetterTransformer before importing
import importlib.util
class DummyBetterTransformer:
    @staticmethod
    def transform(model):
        return model
if "optimum.bettertransformer" not in sys.modules:
    spec = importlib.util.spec_from_loader("optimum.bettertransformer", None)
    dummy_module = importlib.util.module_from_spec(spec)
    dummy_module.BetterTransformer = DummyBetterTransformer
    sys.modules["optimum.bettertransformer"] = dummy_module

print("=" * 60)
print("TESTING AIRLLM FIX WITH LLAMA3.2")
print("=" * 60)

try:
    from airllm import AutoModel
    print("✓ AirLLM imported")

    print("\nLoading model...")
    model = AutoModel.from_pretrained("/home/ladmin/.local/models/llama3.2-3b-instruct")
    print("✓ Model loaded")

    print("\nTesting generation...")
    prompt = "Hello, what is 2+5?"
    print(f"Prompt: {prompt}")

    # Tokenize
    input_ids = model.tokenizer(prompt, return_tensors="pt")['input_ids'].to('cuda' if os.environ.get('CUDA_VISIBLE_DEVICES') else 'cpu')

    # Generate
    print("Generating (this may take a minute)...")
    output = model.generate(input_ids, max_new_tokens=20, temperature=0.7)

    # Decode
    response = model.tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True)
    print(f"\n{'='*60}")
    print("SUCCESS! Response:")
    print(f"{'='*60}")
    print(response)
    print(f"{'='*60}")

except Exception as e:
    print(f"\n{'='*60}")
    print("ERROR")
    print(f"{'='*60}")
    import traceback
    print(f"{e}")
    traceback.print_exc()
    sys.exit(1)
46 test_nanobot_direct.py Normal file
@ -0,0 +1,46 @@
#!/usr/bin/env python3
"""Direct test of nanobot agent"""
import asyncio
import sys
sys.path.insert(0, '.')

from nanobot.config.loader import load_config
from nanobot.bus.queue import MessageBus
from nanobot.agent.loop import AgentLoop

async def test():
    print("Loading config...", file=sys.stderr)
    config = load_config()
    print(f"Config loaded. Provider: {config.providers}", file=sys.stderr)

    print("Creating bus and provider...", file=sys.stderr)
    bus = MessageBus()
    from nanobot.cli.commands import _make_provider
    provider = _make_provider(config)
    print(f"Provider created: {type(provider)}", file=sys.stderr)

    print("Creating agent loop...", file=sys.stderr)
    agent_loop = AgentLoop(
        bus=bus,
        provider=provider,
        workspace=config.workspace_path,
        brave_api_key=config.tools.web.search.api_key or None,
        exec_config=config.tools.exec,
        restrict_to_workspace=config.tools.restrict_to_workspace,
    )
    print("Agent loop created", file=sys.stderr)

    print("Processing message...", file=sys.stderr)
    response = await agent_loop.process_direct("Hello, what is 2+5?", "cli:test")
    print(f"Response received: {response}", file=sys.stderr)
    print(f"Response content: {response.content if response else 'None'}", file=sys.stderr)

    if response:
        print("\n=== RESPONSE ===")
        print(response.content or "(empty)")
    else:
        print("\n=== NO RESPONSE ===")

if __name__ == "__main__":
    asyncio.run(test())
66 test_nanobot_file.py Normal file
@ -0,0 +1,66 @@
#!/usr/bin/env python3
"""Direct test of nanobot agent - write to file"""
import asyncio
import sys
sys.path.insert(0, '.')

log_file = open('/tmp/nanobot_debug.log', 'w')

def log(msg):
    log_file.write(f"{msg}\n")
    log_file.flush()
    print(msg, file=sys.stderr)
    sys.stderr.flush()

try:
    log("Starting test...")

    log("Loading config...")
    from nanobot.config.loader import load_config
    config = load_config()
    log(f"Config loaded. Provider type: {type(config.providers)}")

    log("Creating bus and provider...")
    from nanobot.bus.queue import MessageBus
    bus = MessageBus()
    from nanobot.cli.commands import _make_provider
    provider = _make_provider(config)
    log(f"Provider created: {type(provider)}")

    log("Creating agent loop...")
    from nanobot.agent.loop import AgentLoop
    agent_loop = AgentLoop(
        bus=bus,
        provider=provider,
        workspace=config.workspace_path,
        brave_api_key=config.tools.web.search.api_key or None,
        exec_config=config.tools.exec,
        restrict_to_workspace=config.tools.restrict_to_workspace,
    )
    log("Agent loop created")

    log("Processing message...")
    async def run():
        response = await agent_loop.process_direct("Hello, what is 2+5?", "cli:test")
        log(f"Response received: {response}")
        log(f"Response type: {type(response)}")
        if response:
            log(f"Response content: {response.content}")
            return response.content or "(empty)"
        else:
            log("No response")
            return "(no response)"

    result = asyncio.run(run())
    log(f"Final result: {result}")
    print("\n=== RESULT ===")
    print(result)

except Exception as e:
    import traceback
    error_msg = f"ERROR: {e}\n{traceback.format_exc()}"
    log(error_msg)
    print(error_msg, file=sys.stderr)
finally:
    log_file.close()
40 test_pos_emb.py Normal file
@ -0,0 +1,40 @@
#!/usr/bin/env python3
"""Test get_pos_emb_args directly"""
import sys
import os
sys.path.insert(0, '/home/ladmin/code/airllm/airllm/air_llm')

# Inject BetterTransformer
import importlib.util
class DummyBetterTransformer:
    @staticmethod
    def transform(model):
        return model
if "optimum.bettertransformer" not in sys.modules:
    spec = importlib.util.spec_from_loader("optimum.bettertransformer", None)
    dummy_module = importlib.util.module_from_spec(spec)
    dummy_module.BetterTransformer = DummyBetterTransformer
    sys.modules["optimum.bettertransformer"] = dummy_module

from airllm import AutoModel

print("Loading model...")
model = AutoModel.from_pretrained("/home/ladmin/.local/models/llama3.2-3b-instruct")
print("Model loaded")

print("\nTesting get_pos_emb_args...")
result = model.get_pos_emb_args(0, 128)
print(f"Result type: {type(result)}")
print(f"Result keys: {result.keys() if isinstance(result, dict) else 'not a dict'}")
if isinstance(result, dict) and "position_embeddings" in result:
    pos_emb = result["position_embeddings"]
    print(f"position_embeddings type: {type(pos_emb)}")
    if isinstance(pos_emb, tuple) and len(pos_emb) == 2:
        cos, sin = pos_emb
        print(f"✓ cos shape: {cos.shape}, sin shape: {sin.shape}")
        print("✓ SUCCESS: position_embeddings created correctly")
    else:
        print(f"✗ position_embeddings is not a 2-tuple: {pos_emb}")
else:
    print(f"✗ position_embeddings not in result: {result}")
62 test_provider_direct.py Normal file
@ -0,0 +1,62 @@
#!/usr/bin/env python3
"""Direct test of AirLLM provider"""
import asyncio
import sys
import traceback

# Force output
sys.stdout.reconfigure(line_buffering=True)
sys.stderr.reconfigure(line_buffering=True)

print("=== STARTING TEST ===", flush=True)

try:
    print("1. Importing...", flush=True)
    from nanobot.config.loader import load_config
    from nanobot.bus.queue import MessageBus
    from nanobot.agent.loop import AgentLoop
    from nanobot.cli.commands import _make_provider

    print("2. Loading config...", flush=True)
    config = load_config()
    print(f"   Config loaded. Provider name: {config.get_provider_name()}", flush=True)

    print("3. Creating provider...", flush=True)
    provider = _make_provider(config)
    print(f"   Provider created: {type(provider)}", flush=True)

    print("4. Creating agent loop...", flush=True)
    bus = MessageBus()
    agent_loop = AgentLoop(
        bus=bus,
        provider=provider,
        workspace=config.workspace_path,
        brave_api_key=config.tools.web.search.api_key or None,
        exec_config=config.tools.exec,
        restrict_to_workspace=config.tools.restrict_to_workspace,
    )
    print("   Agent loop created", flush=True)

    print("5. Processing message...", flush=True)

    async def test():
        response = await agent_loop.process_direct("Hello, what is 2+5?", "cli:test")
        print(f"6. Response: {response}", flush=True)
        print(f"   Response type: {type(response)}", flush=True)
        if response:
            print(f"   Response content: {response.content}", flush=True)
            print("\n=== FINAL RESPONSE ===", flush=True)
            print(response.content or "(empty)", flush=True)
        else:
            print("   NO RESPONSE", flush=True)

    asyncio.run(test())

except Exception as e:
    print("\n=== ERROR ===", flush=True)
    print(f"{e}", flush=True)
    traceback.print_exc(file=sys.stderr)
    sys.exit(1)

print("\n=== TEST COMPLETE ===", flush=True)
@ -12,7 +12,8 @@ def mock_prompt_session():
    """Mock the global prompt session."""
    mock_session = MagicMock()
    mock_session.prompt_async = AsyncMock()
    with patch("nanobot.cli.commands._PROMPT_SESSION", mock_session):
    with patch("nanobot.cli.commands._PROMPT_SESSION", mock_session), \
         patch("nanobot.cli.commands.patch_stdout"):
        yield mock_session
92 tests/test_commands.py Normal file
@ -0,0 +1,92 @@
import shutil
from pathlib import Path
from unittest.mock import patch

import pytest
from typer.testing import CliRunner

from nanobot.cli.commands import app

runner = CliRunner()


@pytest.fixture
def mock_paths():
    """Mock config/workspace paths for test isolation."""
    with patch("nanobot.config.loader.get_config_path") as mock_cp, \
         patch("nanobot.config.loader.save_config") as mock_sc, \
         patch("nanobot.config.loader.load_config") as mock_lc, \
         patch("nanobot.utils.helpers.get_workspace_path") as mock_ws:

        base_dir = Path("./test_onboard_data")
        if base_dir.exists():
            shutil.rmtree(base_dir)
        base_dir.mkdir()

        config_file = base_dir / "config.json"
        workspace_dir = base_dir / "workspace"

        mock_cp.return_value = config_file
        mock_ws.return_value = workspace_dir
        mock_sc.side_effect = lambda config: config_file.write_text("{}")

        yield config_file, workspace_dir

        if base_dir.exists():
            shutil.rmtree(base_dir)


def test_onboard_fresh_install(mock_paths):
    """No existing config — should create from scratch."""
    config_file, workspace_dir = mock_paths

    result = runner.invoke(app, ["onboard"])

    assert result.exit_code == 0
    assert "Created config" in result.stdout
    assert "Created workspace" in result.stdout
    assert "nanobot is ready" in result.stdout
    assert config_file.exists()
    assert (workspace_dir / "AGENTS.md").exists()
    assert (workspace_dir / "memory" / "MEMORY.md").exists()


def test_onboard_existing_config_refresh(mock_paths):
    """Config exists, user declines overwrite — should refresh (load-merge-save)."""
    config_file, workspace_dir = mock_paths
    config_file.write_text('{"existing": true}')

    result = runner.invoke(app, ["onboard"], input="n\n")

    assert result.exit_code == 0
    assert "Config already exists" in result.stdout
    assert "existing values preserved" in result.stdout
    assert workspace_dir.exists()
    assert (workspace_dir / "AGENTS.md").exists()


def test_onboard_existing_config_overwrite(mock_paths):
    """Config exists, user confirms overwrite — should reset to defaults."""
    config_file, workspace_dir = mock_paths
    config_file.write_text('{"existing": true}')

    result = runner.invoke(app, ["onboard"], input="y\n")

    assert result.exit_code == 0
    assert "Config already exists" in result.stdout
    assert "Config reset to defaults" in result.stdout
    assert workspace_dir.exists()


def test_onboard_existing_workspace_safe_create(mock_paths):
    """Workspace exists — should not recreate, but still add missing templates."""
    config_file, workspace_dir = mock_paths
    workspace_dir.mkdir(parents=True)
    config_file.write_text("{}")

    result = runner.invoke(app, ["onboard"], input="n\n")

    assert result.exit_code == 0
    assert "Created workspace" not in result.stdout
    assert "Created AGENTS.md" in result.stdout
    assert (workspace_dir / "AGENTS.md").exists()
477 tests/test_consolidate_offset.py Normal file
@ -0,0 +1,477 @@
"""Test session management with cache-friendly message handling."""

import pytest
from pathlib import Path
from nanobot.session.manager import Session, SessionManager

# Test constants
MEMORY_WINDOW = 50
KEEP_COUNT = MEMORY_WINDOW // 2  # 25


def create_session_with_messages(key: str, count: int, role: str = "user") -> Session:
    """Create a session and add the specified number of messages.

    Args:
        key: Session identifier
        count: Number of messages to add
        role: Message role (default: "user")

    Returns:
        Session with the specified messages
    """
    session = Session(key=key)
    for i in range(count):
        session.add_message(role, f"msg{i}")
    return session


def assert_messages_content(messages: list, start_index: int, end_index: int) -> None:
    """Assert that messages contain expected content from start to end index.

    Args:
        messages: List of message dictionaries
        start_index: Expected first message index
        end_index: Expected last message index
    """
    assert len(messages) > 0
    assert messages[0]["content"] == f"msg{start_index}"
    assert messages[-1]["content"] == f"msg{end_index}"


def get_old_messages(session: Session, last_consolidated: int, keep_count: int) -> list:
    """Extract messages that would be consolidated using the standard slice logic.

    Args:
        session: The session containing messages
        last_consolidated: Index of last consolidated message
        keep_count: Number of recent messages to keep

    Returns:
        List of messages that would be consolidated
    """
    return session.messages[last_consolidated:-keep_count]


class TestSessionLastConsolidated:
    """Test last_consolidated tracking to avoid duplicate processing."""

    def test_initial_last_consolidated_zero(self) -> None:
        """Test that new session starts with last_consolidated=0."""
        session = Session(key="test:initial")
        assert session.last_consolidated == 0

    def test_last_consolidated_persistence(self, tmp_path) -> None:
        """Test that last_consolidated persists across save/load."""
        manager = SessionManager(Path(tmp_path))
        session1 = create_session_with_messages("test:persist", 20)
        session1.last_consolidated = 15
        manager.save(session1)

        session2 = manager.get_or_create("test:persist")
        assert session2.last_consolidated == 15
        assert len(session2.messages) == 20

    def test_clear_resets_last_consolidated(self) -> None:
        """Test that clear() resets last_consolidated to 0."""
        session = create_session_with_messages("test:clear", 10)
        session.last_consolidated = 5

        session.clear()
        assert len(session.messages) == 0
        assert session.last_consolidated == 0


class TestSessionImmutableHistory:
    """Test Session message immutability for cache efficiency."""

    def test_initial_state(self) -> None:
        """Test that new session has empty messages list."""
        session = Session(key="test:initial")
        assert len(session.messages) == 0

    def test_add_messages_appends_only(self) -> None:
        """Test that adding messages only appends, never modifies."""
        session = Session(key="test:preserve")
        session.add_message("user", "msg1")
        session.add_message("assistant", "resp1")
        session.add_message("user", "msg2")
        assert len(session.messages) == 3
        assert session.messages[0]["content"] == "msg1"

    def test_get_history_returns_most_recent(self) -> None:
        """Test get_history returns the most recent messages."""
        session = Session(key="test:history")
        for i in range(10):
            session.add_message("user", f"msg{i}")
            session.add_message("assistant", f"resp{i}")

        history = session.get_history(max_messages=6)
        assert len(history) == 6
        assert history[0]["content"] == "msg7"
        assert history[-1]["content"] == "resp9"

    def test_get_history_with_all_messages(self) -> None:
        """Test get_history with max_messages larger than actual."""
        session = create_session_with_messages("test:all", 5)
        history = session.get_history(max_messages=100)
        assert len(history) == 5
        assert history[0]["content"] == "msg0"

    def test_get_history_stable_for_same_session(self) -> None:
        """Test that get_history returns same content for same max_messages."""
        session = create_session_with_messages("test:stable", 20)
        history1 = session.get_history(max_messages=10)
        history2 = session.get_history(max_messages=10)
        assert history1 == history2

    def test_messages_list_never_modified(self) -> None:
        """Test that messages list is never modified after creation."""
        session = create_session_with_messages("test:immutable", 5)
        original_len = len(session.messages)

        session.get_history(max_messages=2)
        assert len(session.messages) == original_len

        for _ in range(10):
            session.get_history(max_messages=3)
        assert len(session.messages) == original_len


class TestSessionPersistence:
    """Test Session persistence and reload."""

    @pytest.fixture
    def temp_manager(self, tmp_path):
        return SessionManager(Path(tmp_path))

    def test_persistence_roundtrip(self, temp_manager):
        """Test that messages persist across save/load."""
        session1 = create_session_with_messages("test:persistence", 20)
        temp_manager.save(session1)

        session2 = temp_manager.get_or_create("test:persistence")
        assert len(session2.messages) == 20
        assert session2.messages[0]["content"] == "msg0"
        assert session2.messages[-1]["content"] == "msg19"

    def test_get_history_after_reload(self, temp_manager):
        """Test that get_history works correctly after reload."""
        session1 = create_session_with_messages("test:reload", 30)
        temp_manager.save(session1)

        session2 = temp_manager.get_or_create("test:reload")
        history = session2.get_history(max_messages=10)
        assert len(history) == 10
        assert history[0]["content"] == "msg20"
        assert history[-1]["content"] == "msg29"

    def test_clear_resets_session(self, temp_manager):
        """Test that clear() properly resets session."""
        session = create_session_with_messages("test:clear", 10)
        assert len(session.messages) == 10

        session.clear()
        assert len(session.messages) == 0


class TestConsolidationTriggerConditions:
    """Test consolidation trigger conditions and logic."""

    def test_consolidation_needed_when_messages_exceed_window(self):
        """Test consolidation logic: should trigger when messages > memory_window."""
        session = create_session_with_messages("test:trigger", 60)

        total_messages = len(session.messages)
        messages_to_process = total_messages - session.last_consolidated

        assert total_messages > MEMORY_WINDOW
        assert messages_to_process > 0

        expected_consolidate_count = total_messages - KEEP_COUNT
        assert expected_consolidate_count == 35

    def test_consolidation_skipped_when_within_keep_count(self):
        """Test consolidation skipped when total messages <= keep_count."""
        session = create_session_with_messages("test:skip", 20)

        total_messages = len(session.messages)
        assert total_messages <= KEEP_COUNT

        old_messages = get_old_messages(session, session.last_consolidated, KEEP_COUNT)
        assert len(old_messages) == 0

    def test_consolidation_skipped_when_no_new_messages(self):
        """Test consolidation skipped when messages_to_process <= 0."""
        session = create_session_with_messages("test:already_consolidated", 40)
        session.last_consolidated = len(session.messages) - KEEP_COUNT  # 15

        # Add a few more messages
        for i in range(40, 42):
            session.add_message("user", f"msg{i}")

        total_messages = len(session.messages)
        messages_to_process = total_messages - session.last_consolidated
        assert messages_to_process > 0

        # Simulate last_consolidated catching up
        session.last_consolidated = total_messages - KEEP_COUNT
        old_messages = get_old_messages(session, session.last_consolidated, KEEP_COUNT)
        assert len(old_messages) == 0


class TestLastConsolidatedEdgeCases:
    """Test last_consolidated edge cases and data corruption scenarios."""

    def test_last_consolidated_exceeds_message_count(self):
        """Test behavior when last_consolidated > len(messages) (data corruption)."""
        session = create_session_with_messages("test:corruption", 10)
        session.last_consolidated = 20

        total_messages = len(session.messages)
        messages_to_process = total_messages - session.last_consolidated
        assert messages_to_process <= 0

        old_messages = get_old_messages(session, session.last_consolidated, 5)
        assert len(old_messages) == 0

    def test_last_consolidated_negative_value(self):
        """Test behavior with negative last_consolidated (invalid state)."""
        session = create_session_with_messages("test:negative", 10)
        session.last_consolidated = -5

        keep_count = 3
        old_messages = get_old_messages(session, session.last_consolidated, keep_count)

        # messages[-5:-3] with 10 messages gives indices 5,6
        assert len(old_messages) == 2
        assert old_messages[0]["content"] == "msg5"
        assert old_messages[-1]["content"] == "msg6"

    def test_messages_added_after_consolidation(self):
        """Test correct behavior when new messages arrive after consolidation."""
        session = create_session_with_messages("test:new_messages", 40)
        session.last_consolidated = len(session.messages) - KEEP_COUNT  # 15

        # Add new messages after consolidation
        for i in range(40, 50):
            session.add_message("user", f"msg{i}")

        total_messages = len(session.messages)
        old_messages = get_old_messages(session, session.last_consolidated, KEEP_COUNT)
        expected_consolidate_count = total_messages - KEEP_COUNT - session.last_consolidated

        assert len(old_messages) == expected_consolidate_count
        assert_messages_content(old_messages, 15, 24)

    def test_slice_behavior_when_indices_overlap(self):
        """Test slice behavior when last_consolidated >= total - keep_count."""
        session = create_session_with_messages("test:overlap", 30)
        session.last_consolidated = 12

        old_messages = get_old_messages(session, session.last_consolidated, 20)
        assert len(old_messages) == 0


class TestArchiveAllMode:
    """Test archive_all mode (used by /new command)."""

    def test_archive_all_consolidates_everything(self):
        """Test archive_all=True consolidates all messages."""
        session = create_session_with_messages("test:archive_all", 50)

        archive_all = True
        if archive_all:
            old_messages = session.messages
            assert len(old_messages) == 50

        assert session.last_consolidated == 0

    def test_archive_all_resets_last_consolidated(self):
        """Test that archive_all mode resets last_consolidated to 0."""
        session = create_session_with_messages("test:reset", 40)
        session.last_consolidated = 15

        archive_all = True
        if archive_all:
            session.last_consolidated = 0

        assert session.last_consolidated == 0
        assert len(session.messages) == 40

    def test_archive_all_vs_normal_consolidation(self):
        """Test difference between archive_all and normal consolidation."""
        # Normal consolidation
        session1 = create_session_with_messages("test:normal", 60)
        session1.last_consolidated = len(session1.messages) - KEEP_COUNT

        # archive_all mode
        session2 = create_session_with_messages("test:all", 60)
        session2.last_consolidated = 0

        assert session1.last_consolidated == 35
        assert len(session1.messages) == 60
        assert session2.last_consolidated == 0
        assert len(session2.messages) == 60


class TestCacheImmutability:
    """Test that consolidation doesn't modify session.messages (cache safety)."""

    def test_consolidation_does_not_modify_messages_list(self):
        """Test that consolidation leaves messages list unchanged."""
        session = create_session_with_messages("test:immutable", 50)

        original_messages = session.messages.copy()
        original_len = len(session.messages)
        session.last_consolidated = original_len - KEEP_COUNT

        assert len(session.messages) == original_len
        assert session.messages == original_messages

    def test_get_history_does_not_modify_messages(self):
        """Test that get_history doesn't modify messages list."""
        session = create_session_with_messages("test:history_immutable", 40)
        original_messages = [m.copy() for m in session.messages]

        for _ in range(5):
            history = session.get_history(max_messages=10)
            assert len(history) == 10

        assert len(session.messages) == 40
        for i, msg in enumerate(session.messages):
            assert msg["content"] == original_messages[i]["content"]

    def test_consolidation_only_updates_last_consolidated(self):
        """Test that consolidation only updates last_consolidated field."""
        session = create_session_with_messages("test:field_only", 60)

        original_messages = session.messages.copy()
        original_key = session.key
        original_metadata = session.metadata.copy()

        session.last_consolidated = len(session.messages) - KEEP_COUNT

        assert session.messages == original_messages
        assert session.key == original_key
        assert session.metadata == original_metadata
        assert session.last_consolidated == 35


class TestSliceLogic:
    """Test the slice logic: messages[last_consolidated:-keep_count]."""

    def test_slice_extracts_correct_range(self):
        """Test that slice extracts the correct message range."""
        session = create_session_with_messages("test:slice", 60)

        old_messages = get_old_messages(session, 0, KEEP_COUNT)

        assert len(old_messages) == 35
        assert_messages_content(old_messages, 0, 34)

        remaining = session.messages[-KEEP_COUNT:]
        assert len(remaining) == 25
        assert_messages_content(remaining, 35, 59)

    def test_slice_with_partial_consolidation(self):
        """Test slice when some messages already consolidated."""
        session = create_session_with_messages("test:partial", 70)

        last_consolidated = 30
        old_messages = get_old_messages(session, last_consolidated, KEEP_COUNT)

        assert len(old_messages) == 15
        assert_messages_content(old_messages, 30, 44)

    def test_slice_with_various_keep_counts(self):
        """Test slice behavior with different keep_count values."""
        session = create_session_with_messages("test:keep_counts", 50)

        test_cases = [(10, 40), (20, 30), (30, 20), (40, 10)]

        for keep_count, expected_count in test_cases:
            old_messages = session.messages[0:-keep_count]
            assert len(old_messages) == expected_count

    def test_slice_when_keep_count_exceeds_messages(self):
        """Test slice when keep_count > len(messages)."""
        session = create_session_with_messages("test:exceed", 10)

        old_messages = session.messages[0:-20]
        assert len(old_messages) == 0


class TestEmptyAndBoundarySessions:
    """Test empty sessions and boundary conditions."""

    def test_empty_session_consolidation(self):
        """Test consolidation behavior with empty session."""
        session = Session(key="test:empty")

        assert len(session.messages) == 0
        assert session.last_consolidated == 0

        messages_to_process = len(session.messages) - session.last_consolidated
        assert messages_to_process == 0

        old_messages = get_old_messages(session, session.last_consolidated, KEEP_COUNT)
        assert len(old_messages) == 0

    def test_single_message_session(self):
        """Test consolidation with single message."""
        session = Session(key="test:single")
        session.add_message("user", "only message")

        assert len(session.messages) == 1

        old_messages = get_old_messages(session, session.last_consolidated, KEEP_COUNT)
        assert len(old_messages) == 0

    def test_exactly_keep_count_messages(self):
        """Test session with exactly keep_count messages."""
        session = create_session_with_messages("test:exact", KEEP_COUNT)

        assert len(session.messages) == KEEP_COUNT

        old_messages = get_old_messages(session, session.last_consolidated, KEEP_COUNT)
        assert len(old_messages) == 0

    def test_just_over_keep_count(self):
        """Test session with one message over keep_count."""
        session = create_session_with_messages("test:over", KEEP_COUNT + 1)

        assert len(session.messages) == 26

        old_messages = get_old_messages(session, session.last_consolidated, KEEP_COUNT)
        assert len(old_messages) == 1
        assert old_messages[0]["content"] == "msg0"

    def test_very_large_session(self):
        """Test consolidation with very large message count."""
        session = create_session_with_messages("test:large", 1000)

        assert len(session.messages) == 1000

        old_messages = get_old_messages(session, session.last_consolidated, KEEP_COUNT)
        assert len(old_messages) == 975
        assert_messages_content(old_messages, 0, 974)

        remaining = session.messages[-KEEP_COUNT:]
        assert len(remaining) == 25
        assert_messages_content(remaining, 975, 999)

    def test_session_with_gaps_in_consolidation(self):
        """Test session with potential gaps in consolidation history."""
        session = create_session_with_messages("test:gaps", 50)
        session.last_consolidated = 10

        # Add more messages
        for i in range(50, 60):
            session.add_message("user", f"msg{i}")

        old_messages = get_old_messages(session, session.last_consolidated, KEEP_COUNT)

        expected_count = 60 - KEEP_COUNT - 10
        assert len(old_messages) == expected_count
        assert_messages_content(old_messages, 10, 34)
@ -20,8 +20,8 @@ You have access to:

## Memory

- Use `memory/` directory for daily notes
- Use `MEMORY.md` for long-term information
- `memory/MEMORY.md` — long-term facts (preferences, context, relationships)
- `memory/HISTORY.md` — append-only event log, search with grep to recall past events

## Scheduled Reminders