Merge branch 'main' into pr-713
commit 5c61f30546

README.md

@@ -16,12 +16,16 @@
⚡️ Delivers core agent functionality in just **~4,000** lines of code — **99% smaller** than Clawdbot's 430k+ lines.

📏 Real-time line count: **3,696 lines** (run `bash core_agent_lines.sh` to verify anytime)
## 📢 News

- **2026-02-16** 🦞 nanobot now integrates a [ClawHub](https://clawhub.ai) skill — search and install public agent skills.
- **2026-02-15** 🔑 nanobot now supports the OpenAI Codex provider with OAuth login.
- **2026-02-14** 🔌 nanobot now supports MCP! See the [MCP section](#mcp-model-context-protocol) for details.
- **2026-02-13** 🎉 Released v0.1.3.post7 — security hardening and multiple improvements; all users are recommended to upgrade. See the [release notes](https://github.com/HKUDS/nanobot/releases/tag/v0.1.3.post7) for details.
- **2026-02-12** 🧠 Redesigned memory system — less code, more reliable. Join the [discussion](https://github.com/HKUDS/nanobot/discussions/566)!
- **2026-02-11** ✨ Enhanced CLI experience and added MiniMax support!
- **2026-02-10** 🎉 Released v0.1.3.post6 with improvements! Check the update [notes](https://github.com/HKUDS/nanobot/releases/tag/v0.1.3.post6) and our [roadmap](https://github.com/HKUDS/nanobot/discussions/431).
- **2026-02-09** 💬 Added Slack, Email, and QQ support — nanobot now works across multiple chat platforms!
- **2026-02-08** 🔧 Refactored providers — adding a new LLM provider now takes just two simple steps! See [here](#providers).
@@ -107,14 +111,22 @@ nanobot onboard

**2. Configure** (`~/.nanobot/config.json`)

Add or merge these **two parts** into your config (other options have defaults).

*Set your API key* (e.g. OpenRouter, recommended for global users):
```json
{
  "providers": {
    "openrouter": {
      "apiKey": "sk-or-v1-xxx"
    }
  }
}
```
*Set your model*:

```json
{
  "agents": {
    "defaults": {
      "model": "anthropic/claude-opus-4-5"
    }
  }
}
```

@@ -126,63 +138,26 @@ For OpenRouter - recommended for global users:
**3. Chat**

```bash
nanobot agent
```

That's it! You have a working AI assistant in 2 minutes.
## 💬 Chat Apps

Connect nanobot to your favorite chat platform.

| Channel | What you need |
|---------|---------------|
| **Telegram** | Bot token from @BotFather |
| **Discord** | Bot token + Message Content intent |
| **WhatsApp** | QR code scan |
| **Feishu** | App ID + App Secret |
| **Mochat** | Claw token (auto-setup available) |
| **DingTalk** | App Key + App Secret |
| **Slack** | Bot token + App-Level token |
| **Email** | IMAP/SMTP credentials |
| **QQ** | App ID + App Secret |

<details>
<summary><b>Telegram</b> (Recommended)</summary>

@@ -599,6 +574,7 @@ Config file: `~/.nanobot/config.json`
| Provider | Purpose | Get API Key |
|----------|---------|-------------|
| `custom` | Any OpenAI-compatible endpoint (direct, no LiteLLM) | — |
| `openrouter` | LLM (recommended, access to all models) | [openrouter.ai](https://openrouter.ai) |
| `anthropic` | LLM (Claude direct) | [console.anthropic.com](https://console.anthropic.com) |
| `openai` | LLM (GPT direct) | [platform.openai.com](https://platform.openai.com) |

@@ -607,10 +583,105 @@ Config file: `~/.nanobot/config.json`

| `gemini` | LLM (Gemini direct) | [aistudio.google.com](https://aistudio.google.com) |
| `minimax` | LLM (MiniMax direct) | [platform.minimax.io](https://platform.minimax.io) |
| `aihubmix` | LLM (API gateway, access to all models) | [aihubmix.com](https://aihubmix.com) |
| `siliconflow` | LLM (SiliconFlow/硅基流动, API gateway) | [siliconflow.cn](https://siliconflow.cn) |
| `dashscope` | LLM (Qwen) | [dashscope.console.aliyun.com](https://dashscope.console.aliyun.com) |
| `moonshot` | LLM (Moonshot/Kimi) | [platform.moonshot.cn](https://platform.moonshot.cn) |
| `zhipu` | LLM (Zhipu GLM) | [open.bigmodel.cn](https://open.bigmodel.cn) |
| `vllm` | LLM (local, any OpenAI-compatible server) | — |
| `openai_codex` | LLM (Codex, OAuth) | `nanobot provider login openai-codex` |
| `github_copilot` | LLM (GitHub Copilot, OAuth) | `nanobot provider login github-copilot` |
<details>
<summary><b>OpenAI Codex (OAuth)</b></summary>

Codex uses OAuth instead of API keys. Requires a ChatGPT Plus or Pro account.

**1. Login:**

```bash
nanobot provider login openai-codex
```

**2. Set model** (merge into `~/.nanobot/config.json`):

```json
{
  "agents": {
    "defaults": {
      "model": "openai-codex/gpt-5.1-codex"
    }
  }
}
```

**3. Chat:**

```bash
nanobot agent -m "Hello!"
```

> Docker users: use `docker run -it` for interactive OAuth login.

</details>
<details>
<summary><b>Custom Provider (Any OpenAI-compatible API)</b></summary>

Connects directly to any OpenAI-compatible endpoint — LM Studio, llama.cpp, Together AI, Fireworks, Azure OpenAI, or any self-hosted server. Bypasses LiteLLM; the model name is passed as-is.

```json
{
  "providers": {
    "custom": {
      "apiKey": "your-api-key",
      "apiBase": "https://api.your-provider.com/v1"
    }
  },
  "agents": {
    "defaults": {
      "model": "your-model-name"
    }
  }
}
```

> For local servers that don't require a key, set `apiKey` to any non-empty string (e.g. `"no-key"`).

</details>
<details>
<summary><b>vLLM (local / OpenAI-compatible)</b></summary>

Run your own model with vLLM or any OpenAI-compatible server, then add it to your config:

**1. Start the server** (example):

```bash
vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000
```

**2. Add to config** (partial — merge into `~/.nanobot/config.json`):

*Provider (key can be any non-empty string for local servers):*

```json
{
  "providers": {
    "vllm": {
      "apiKey": "dummy",
      "apiBase": "http://localhost:8000/v1"
    }
  }
}
```

*Model:*

```json
{
  "agents": {
    "defaults": {
      "model": "meta-llama/Llama-3.1-8B-Instruct"
    }
  }
}
```

</details>
<details>
<summary><b>Adding a New Provider (Developer Guide)</b></summary>

@@ -657,8 +728,43 @@ That's it! Environment variables, model prefixing, config matching, and `nanobot

</details>
### MCP (Model Context Protocol)

> [!TIP]
> The config format is compatible with Claude Desktop / Cursor. You can copy MCP server configs directly from any MCP server's README.

nanobot supports [MCP](https://modelcontextprotocol.io/) — connect external tool servers and use them as native agent tools.

Add MCP servers to your `config.json`:

```json
{
  "tools": {
    "mcpServers": {
      "filesystem": {
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/dir"]
      }
    }
  }
}
```

Two transport modes are supported:

| Mode | Config | Example |
|------|--------|---------|
| **Stdio** | `command` + `args` | Local process via `npx` / `uvx` |
| **HTTP** | `url` | Remote endpoint (`https://mcp.example.com/sse`) |

MCP tools are automatically discovered and registered on startup. The LLM can use them alongside built-in tools — no extra configuration needed.
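For the HTTP transport, a server entry uses `url` in place of `command`/`args`. A sketch (the server name and endpoint URL are placeholders, not a real service):

```json
{
  "tools": {
    "mcpServers": {
      "remote": {
        "url": "https://mcp.example.com/sse"
      }
    }
  }
}
```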
### Security

> [!TIP]
> For production deployments, set `"restrictToWorkspace": true` in your config to sandbox the agent.

| Option | Default | Description |

@@ -678,6 +784,7 @@ That's it! Environment variables, model prefixing, config matching, and `nanobot
| `nanobot agent --logs` | Show runtime logs during chat |
| `nanobot gateway` | Start the gateway |
| `nanobot status` | Show status |
| `nanobot provider login openai-codex` | OAuth login for providers |
| `nanobot channels login` | Link WhatsApp (scan QR) |
| `nanobot channels status` | Show channel status |

@@ -705,7 +812,21 @@ nanobot cron remove <job_id>
> [!TIP]
> The `-v ~/.nanobot:/root/.nanobot` flag mounts your local config directory into the container, so your config and workspace persist across container restarts.

### Docker Compose

```bash
docker compose run --rm nanobot-cli onboard   # first-time setup
vim ~/.nanobot/config.json                    # add API keys
docker compose up -d nanobot-gateway          # start gateway
```

```bash
docker compose run --rm nanobot-cli agent -m "Hello!"   # run CLI
docker compose logs -f nanobot-gateway                  # view logs
docker compose down                                     # stop
```

### Docker

```bash
# Build the image

@@ -753,7 +874,6 @@ PRs welcome! The codebase is intentionally small and readable. 🤗
**Roadmap** — Pick an item and [open a PR](https://github.com/HKUDS/nanobot/pulls)!

- [x] **Voice Transcription** — Support for Groq Whisper (Issue #13)
- [ ] **Multi-modal** — See and hear (images, voice, video)
- [ ] **Long-term memory** — Never forget important context
- [ ] **Better reasoning** — Multi-step planning and reflection
@@ -5,7 +5,7 @@

If you discover a security vulnerability in nanobot, please report it by:

1. **DO NOT** open a public GitHub issue
2. Create a private security advisory on GitHub or contact the repository maintainers (xubinrencs@gmail.com)
3. Include:
   - Description of the vulnerability
   - Steps to reproduce
docker-compose.yml (new file, 31 lines)

@@ -0,0 +1,31 @@

x-common-config: &common-config
  build:
    context: .
    dockerfile: Dockerfile
  volumes:
    - ~/.nanobot:/root/.nanobot

services:
  nanobot-gateway:
    container_name: nanobot-gateway
    <<: *common-config
    command: ["gateway"]
    restart: unless-stopped
    ports:
      - 18790:18790
    deploy:
      resources:
        limits:
          cpus: '1'
          memory: 1G
        reservations:
          cpus: '0.25'
          memory: 256M

  nanobot-cli:
    <<: *common-config
    profiles:
      - cli
    command: ["status"]
    stdin_open: true
    tty: true
@@ -225,14 +225,18 @@ To recall past events, grep {workspace_path}/memory/HISTORY.md"""

    Returns:
        Updated message list.
    """
    msg: dict[str, Any] = {"role": "assistant"}

    # Omit empty content — some backends reject empty text blocks
    if content:
        msg["content"] = content

    if tool_calls:
        msg["tool_calls"] = tool_calls

    # Include reasoning content when provided (required by some thinking models)
    if reasoning_content:
        msg["reasoning_content"] = reasoning_content

    messages.append(msg)
    return messages
@@ -1,7 +1,9 @@

"""Agent loop: the core processing engine."""

import asyncio
from contextlib import AsyncExitStack
import json
import json_repair
from pathlib import Path
from typing import Any
@@ -50,6 +52,7 @@ class AgentLoop:

        cron_service: "CronService | None" = None,
        restrict_to_workspace: bool = False,
        session_manager: SessionManager | None = None,
        mcp_servers: dict | None = None,
    ):
        from nanobot.config.schema import ExecToolConfig
        from nanobot.cron.service import CronService
@@ -82,6 +85,9 @@ class AgentLoop:

        )

        self._running = False
        self._mcp_servers = mcp_servers or {}
        self._mcp_stack: AsyncExitStack | None = None
        self._mcp_connected = False
        self._register_default_tools()

    def _register_default_tools(self) -> None:
@@ -116,6 +122,16 @@ class AgentLoop:

        if self.cron_service:
            self.tools.register(CronTool(self.cron_service))

    async def _connect_mcp(self) -> None:
        """Connect to configured MCP servers (one-time, lazy)."""
        if self._mcp_connected or not self._mcp_servers:
            return
        self._mcp_connected = True
        from nanobot.agent.tools.mcp import connect_mcp_servers
        self._mcp_stack = AsyncExitStack()
        await self._mcp_stack.__aenter__()
        await connect_mcp_servers(self._mcp_servers, self.tools, self._mcp_stack)

    def _set_tool_context(self, channel: str, chat_id: str) -> None:
        """Update context for all tools that need routing info."""
        if message_tool := self.tools.get("message"):
@@ -191,6 +207,7 @@ class AgentLoop:

    async def run(self) -> None:
        """Run the agent loop, processing messages from the bus."""
        self._running = True
        await self._connect_mcp()
        logger.info("Agent loop started")

        while self._running:
@@ -213,6 +230,15 @@ class AgentLoop:

            except asyncio.TimeoutError:
                continue

    async def close_mcp(self) -> None:
        """Close MCP connections."""
        if self._mcp_stack:
            try:
                await self._mcp_stack.aclose()
            except (RuntimeError, BaseExceptionGroup):
                pass  # MCP SDK cancel scope cleanup is noisy but harmless
            self._mcp_stack = None

    def stop(self) -> None:
        """Stop the agent loop."""
        self._running = False
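The `_connect_mcp`/`close_mcp` pair relies on `contextlib.AsyncExitStack` to own any number of server connections and unwind them in reverse order on close. A self-contained sketch of that pattern, with a fake async context manager standing in for a real MCP connection:

```python
import asyncio
from contextlib import AsyncExitStack, asynccontextmanager

events: list[str] = []

@asynccontextmanager
async def fake_server(name: str):
    # Stand-in for an MCP client connection
    events.append(f"open:{name}")
    try:
        yield name
    finally:
        events.append(f"close:{name}")

async def main() -> None:
    stack = AsyncExitStack()
    await stack.__aenter__()  # opened manually, as in _connect_mcp
    await stack.enter_async_context(fake_server("a"))
    await stack.enter_async_context(fake_server("b"))
    await stack.aclose()      # unwinds in reverse order, as in close_mcp

asyncio.run(main())
```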
@@ -395,9 +421,15 @@ Respond with ONLY valid JSON, no markdown fences."""

            model=self.model,
        )
        text = (response.content or "").strip()
        if not text:
            logger.warning("Memory consolidation: LLM returned empty response, skipping")
            return
        if text.startswith("```"):
            text = text.split("\n", 1)[-1].rsplit("```", 1)[0].strip()
        result = json_repair.loads(text)
        if not isinstance(result, dict):
            logger.warning(f"Memory consolidation: unexpected response type, skipping. Response: {text[:200]}")
            return

        if entry := result.get("history_entry"):
            memory.append_history(entry)
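The fence-stripping step above removes a leading ` ```json ` line and the trailing ` ``` ` before parsing. Isolated as a plain function (a sketch, not the project's exact code):

```python
def strip_fences(text: str) -> str:
    # Remove a surrounding markdown code fence, if present
    text = text.strip()
    if text.startswith("```"):
        # Drop the opening fence line, then cut everything after the last fence
        text = text.split("\n", 1)[-1].rsplit("```", 1)[0].strip()
    return text
```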
@@ -432,6 +464,7 @@ Respond with ONLY valid JSON, no markdown fences."""

        Returns:
            The agent's response.
        """
        await self._connect_mcp()
        msg = InboundMessage(
            channel=channel,
            sender_id="user",
@@ -167,10 +167,10 @@ class SkillsLoader:

        return content

    def _parse_nanobot_metadata(self, raw: str) -> dict:
        """Parse skill metadata JSON from frontmatter (supports nanobot and openclaw keys)."""
        try:
            data = json.loads(raw)
            return data.get("nanobot", data.get("openclaw", {})) if isinstance(data, dict) else {}
        except (json.JSONDecodeError, TypeError):
            return {}
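The changed line prefers the `nanobot` frontmatter key and falls back to `openclaw`, returning an empty dict on malformed input. The same logic as a stand-alone function (name is illustrative):

```python
import json

def parse_metadata(raw: str) -> dict:
    # Prefer the "nanobot" key, fall back to "openclaw"; empty dict on bad JSON
    try:
        data = json.loads(raw)
        return data.get("nanobot", data.get("openclaw", {})) if isinstance(data, dict) else {}
    except (json.JSONDecodeError, TypeError):
        return {}
```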
@@ -50,6 +50,10 @@ class CronTool(Tool):

                "type": "string",
                "description": "Cron expression like '0 9 * * *' (for scheduled tasks)"
            },
            "tz": {
                "type": "string",
                "description": "IANA timezone for cron expressions (e.g. 'America/Vancouver')"
            },
            "at": {
                "type": "string",
                "description": "ISO datetime for one-time execution (e.g. '2026-02-12T10:30:00')"
@@ -68,30 +72,46 @@ class CronTool(Tool):

        message: str = "",
        every_seconds: int | None = None,
        cron_expr: str | None = None,
        tz: str | None = None,
        at: str | None = None,
        job_id: str | None = None,
        **kwargs: Any
    ) -> str:
        if action == "add":
            return self._add_job(message, every_seconds, cron_expr, tz, at)
        elif action == "list":
            return self._list_jobs()
        elif action == "remove":
            return self._remove_job(job_id)
        return f"Unknown action: {action}"

    def _add_job(
        self,
        message: str,
        every_seconds: int | None,
        cron_expr: str | None,
        tz: str | None,
        at: str | None,
    ) -> str:
        if not message:
            return "Error: message is required for add"
        if not self._channel or not self._chat_id:
            return "Error: no session context (channel/chat_id)"
        if tz and not cron_expr:
            return "Error: tz can only be used with cron_expr"
        if tz:
            from zoneinfo import ZoneInfo
            try:
                ZoneInfo(tz)
            except Exception:
                return f"Error: unknown timezone '{tz}'"

        # Build schedule
        delete_after = False
        if every_seconds:
            schedule = CronSchedule(kind="every", every_ms=every_seconds * 1000)
        elif cron_expr:
            schedule = CronSchedule(kind="cron", expr=cron_expr, tz=tz)
        elif at:
            from datetime import datetime
            dt = datetime.fromisoformat(at)
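The timezone check above works because `zoneinfo.ZoneInfo` raises for unknown IANA names. Extracted into a small validator (a sketch; the project returns an error string instead of a bool):

```python
from zoneinfo import ZoneInfo

def validate_tz(tz: str) -> bool:
    # ZoneInfo raises (ZoneInfoNotFoundError / KeyError) for unknown IANA names
    try:
        ZoneInfo(tz)
        return True
    except Exception:
        return False
```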
nanobot/agent/tools/mcp.py (new file, 80 lines)

@@ -0,0 +1,80 @@
"""MCP client: connects to MCP servers and wraps their tools as native nanobot tools."""

from contextlib import AsyncExitStack
from typing import Any

from loguru import logger

from nanobot.agent.tools.base import Tool
from nanobot.agent.tools.registry import ToolRegistry


class MCPToolWrapper(Tool):
    """Wraps a single MCP server tool as a nanobot Tool."""

    def __init__(self, session, server_name: str, tool_def):
        self._session = session
        self._original_name = tool_def.name
        self._name = f"mcp_{server_name}_{tool_def.name}"
        self._description = tool_def.description or tool_def.name
        self._parameters = tool_def.inputSchema or {"type": "object", "properties": {}}

    @property
    def name(self) -> str:
        return self._name

    @property
    def description(self) -> str:
        return self._description

    @property
    def parameters(self) -> dict[str, Any]:
        return self._parameters

    async def execute(self, **kwargs: Any) -> str:
        from mcp import types

        result = await self._session.call_tool(self._original_name, arguments=kwargs)
        parts = []
        for block in result.content:
            if isinstance(block, types.TextContent):
                parts.append(block.text)
            else:
                parts.append(str(block))
        return "\n".join(parts) or "(no output)"


async def connect_mcp_servers(
    mcp_servers: dict, registry: ToolRegistry, stack: AsyncExitStack
) -> None:
    """Connect to configured MCP servers and register their tools."""
    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    for name, cfg in mcp_servers.items():
        try:
            if cfg.command:
                params = StdioServerParameters(
                    command=cfg.command, args=cfg.args, env=cfg.env or None
                )
                read, write = await stack.enter_async_context(stdio_client(params))
            elif cfg.url:
                from mcp.client.streamable_http import streamable_http_client

                read, write, _ = await stack.enter_async_context(
                    streamable_http_client(cfg.url)
                )
            else:
                logger.warning(f"MCP server '{name}': no command or url configured, skipping")
                continue

            session = await stack.enter_async_context(ClientSession(read, write))
            await session.initialize()

            tools = await session.list_tools()
            for tool_def in tools.tools:
                wrapper = MCPToolWrapper(session, name, tool_def)
                registry.register(wrapper)
                logger.debug(f"MCP: registered tool '{wrapper.name}' from server '{name}'")

            logger.info(f"MCP server '{name}': connected, {len(tools.tools)} tools registered")
        except Exception as e:
            logger.error(f"MCP server '{name}': failed to connect: {e}")
@@ -52,6 +52,11 @@ class MessageTool(Tool):

            "chat_id": {
                "type": "string",
                "description": "Optional: target chat/user ID"
            },
            "media": {
                "type": "array",
                "items": {"type": "string"},
                "description": "Optional: list of file paths to attach (images, audio, documents)"
            }
        },
        "required": ["content"]
@@ -62,6 +67,7 @@ class MessageTool(Tool):
         content: str,
         channel: str | None = None,
         chat_id: str | None = None,
+        media: list[str] | None = None,
         **kwargs: Any
     ) -> str:
         channel = channel or self._default_channel
@@ -76,11 +82,13 @@ class MessageTool(Tool):
         msg = OutboundMessage(
             channel=channel,
             chat_id=chat_id,
-            content=content
+            content=content,
+            media=media or []
         )

         try:
             await self._send_callback(msg)
-            return f"Message sent to {channel}:{chat_id}"
+            media_info = f" with {len(media)} attachments" if media else ""
+            return f"Message sent to {channel}:{chat_id}{media_info}"
         except Exception as e:
             return f"Error sending message: {str(e)}"
@@ -10,6 +10,8 @@ from slack_sdk.socket_mode.request import SocketModeRequest
 from slack_sdk.socket_mode.response import SocketModeResponse
 from slack_sdk.web.async_client import AsyncWebClient

+from slackify_markdown import slackify_markdown
+
 from nanobot.bus.events import OutboundMessage
 from nanobot.bus.queue import MessageBus
 from nanobot.channels.base import BaseChannel
@@ -84,7 +86,7 @@ class SlackChannel(BaseChannel):
             use_thread = thread_ts and channel_type != "im"
             await self._web_client.chat_postMessage(
                 channel=msg.chat_id,
-                text=msg.content or "",
+                text=self._to_mrkdwn(msg.content),
                 thread_ts=thread_ts if use_thread else None,
             )
         except Exception as e:
@@ -150,13 +152,15 @@ class SlackChannel(BaseChannel):

         text = self._strip_bot_mention(text)

-        thread_ts = event.get("thread_ts") or event.get("ts")
+        thread_ts = event.get("thread_ts")
+        if self.config.reply_in_thread and not thread_ts:
+            thread_ts = event.get("ts")
         # Add :eyes: reaction to the triggering message (best-effort)
         try:
             if self._web_client and event.get("ts"):
                 await self._web_client.reactions_add(
                     channel=chat_id,
-                    name="eyes",
+                    name=self.config.react_emoji,
                     timestamp=event.get("ts"),
                 )
         except Exception as e:
@@ -203,3 +207,31 @@ class SlackChannel(BaseChannel):
         if not text or not self._bot_user_id:
             return text
         return re.sub(rf"<@{re.escape(self._bot_user_id)}>\s*", "", text).strip()
+
+    _TABLE_RE = re.compile(r"(?m)^\|.*\|$(?:\n\|[\s:|-]*\|$)(?:\n\|.*\|$)*")
+
+    @classmethod
+    def _to_mrkdwn(cls, text: str) -> str:
+        """Convert Markdown to Slack mrkdwn, including tables."""
+        if not text:
+            return ""
+        text = cls._TABLE_RE.sub(cls._convert_table, text)
+        return slackify_markdown(text)
+
+    @staticmethod
+    def _convert_table(match: re.Match) -> str:
+        """Convert a Markdown table to a Slack-readable list."""
+        lines = [ln.strip() for ln in match.group(0).strip().splitlines() if ln.strip()]
+        if len(lines) < 2:
+            return match.group(0)
+        headers = [h.strip() for h in lines[0].strip("|").split("|")]
+        start = 2 if re.fullmatch(r"[|\s:\-]+", lines[1]) else 1
+        rows: list[str] = []
+        for line in lines[start:]:
+            cells = [c.strip() for c in line.strip("|").split("|")]
+            cells = (cells + [""] * len(headers))[: len(headers)]
+            parts = [f"**{headers[i]}**: {cells[i]}" for i in range(len(headers)) if cells[i]]
+            if parts:
+                rows.append(" · ".join(parts))
+        return "\n".join(rows)
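The table handling above flattens each Markdown table row into one `Header: cell` line, since Slack's mrkdwn has no table syntax. A minimal standalone sketch of the same idea (a simplified module-level `convert_table`, not the exact class method):

```python
import re

def convert_table(md_table: str) -> str:
    """Flatten a Markdown table into one 'Header: cell' line per row."""
    lines = [ln.strip() for ln in md_table.strip().splitlines() if ln.strip()]
    headers = [h.strip() for h in lines[0].strip("|").split("|")]
    # Skip the |---|---| separator row if present
    start = 2 if re.fullmatch(r"[|\s:\-]+", lines[1]) else 1
    rows = []
    for line in lines[start:]:
        cells = [c.strip() for c in line.strip("|").split("|")]
        cells = (cells + [""] * len(headers))[: len(headers)]
        parts = [f"{headers[i]}: {cells[i]}" for i in range(len(headers)) if cells[i]]
        rows.append(" · ".join(parts))
    return "\n".join(rows)

table = "| Name | Role |\n|------|------|\n| Ada | Engineer |"
print(convert_table(table))  # → Name: Ada · Role: Engineer
```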
@@ -78,6 +78,26 @@ def _markdown_to_telegram_html(text: str) -> str:
     return text


+def _split_message(content: str, max_len: int = 4000) -> list[str]:
+    """Split content into chunks within max_len, preferring line breaks."""
+    if len(content) <= max_len:
+        return [content]
+    chunks: list[str] = []
+    while content:
+        if len(content) <= max_len:
+            chunks.append(content)
+            break
+        cut = content[:max_len]
+        pos = cut.rfind('\n')
+        if pos == -1:
+            pos = cut.rfind(' ')
+        if pos == -1:
+            pos = max_len
+        chunks.append(content[:pos])
+        content = content[pos:].lstrip()
+    return chunks
+
+
 class TelegramChannel(BaseChannel):
     """
     Telegram channel using long polling.
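The splitter added for Telegram's message-length limit prefers a newline boundary, then a space, then a hard cut. A standalone copy with a small usage example (renamed `split_message` here for illustration):

```python
def split_message(content: str, max_len: int = 4000) -> list[str]:
    """Split content into chunks within max_len, preferring line breaks."""
    if len(content) <= max_len:
        return [content]
    chunks: list[str] = []
    while content:
        if len(content) <= max_len:
            chunks.append(content)
            break
        cut = content[:max_len]
        pos = cut.rfind('\n')    # prefer a newline boundary
        if pos == -1:
            pos = cut.rfind(' ')  # else break at a space
        if pos == -1:
            pos = max_len         # else hard cut
        chunks.append(content[:pos])
        content = content[pos:].lstrip()
    return chunks

print(split_message("hello world foo", max_len=7))  # → ['hello', 'world', 'foo']
```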
@@ -178,37 +198,61 @@ class TelegramChannel(BaseChannel):
         await self._app.shutdown()
         self._app = None

+    @staticmethod
+    def _get_media_type(path: str) -> str:
+        """Guess media type from file extension."""
+        ext = path.rsplit(".", 1)[-1].lower() if "." in path else ""
+        if ext in ("jpg", "jpeg", "png", "gif", "webp"):
+            return "photo"
+        if ext == "ogg":
+            return "voice"
+        if ext in ("mp3", "m4a", "wav", "aac"):
+            return "audio"
+        return "document"
+
     async def send(self, msg: OutboundMessage) -> None:
         """Send a message through Telegram."""
         if not self._app:
             logger.warning("Telegram bot not running")
             return

-        # Stop typing indicator for this chat
         self._stop_typing(msg.chat_id)

         try:
-            # chat_id should be the Telegram chat ID (integer)
             chat_id = int(msg.chat_id)
-            # Convert markdown to Telegram HTML
-            html_content = _markdown_to_telegram_html(msg.content)
-            await self._app.bot.send_message(
-                chat_id=chat_id,
-                text=html_content,
-                parse_mode="HTML"
-            )
         except ValueError:
             logger.error(f"Invalid chat_id: {msg.chat_id}")
-        except Exception as e:
-            # Fallback to plain text if HTML parsing fails
-            logger.warning(f"HTML parse failed, falling back to plain text: {e}")
-            try:
-                await self._app.bot.send_message(
-                    chat_id=int(msg.chat_id),
-                    text=msg.content
-                )
-            except Exception as e2:
-                logger.error(f"Error sending Telegram message: {e2}")
+            return
+
+        # Send media files
+        for media_path in (msg.media or []):
+            try:
+                media_type = self._get_media_type(media_path)
+                sender = {
+                    "photo": self._app.bot.send_photo,
+                    "voice": self._app.bot.send_voice,
+                    "audio": self._app.bot.send_audio,
+                }.get(media_type, self._app.bot.send_document)
+                param = "photo" if media_type == "photo" else media_type if media_type in ("voice", "audio") else "document"
+                with open(media_path, 'rb') as f:
+                    await sender(chat_id=chat_id, **{param: f})
+            except Exception as e:
+                filename = media_path.rsplit("/", 1)[-1]
+                logger.error(f"Failed to send media {media_path}: {e}")
+                await self._app.bot.send_message(chat_id=chat_id, text=f"[Failed to send: {filename}]")
+
+        # Send text content
+        if msg.content and msg.content != "[empty message]":
+            for chunk in _split_message(msg.content):
+                try:
+                    html = _markdown_to_telegram_html(chunk)
+                    await self._app.bot.send_message(chat_id=chat_id, text=html, parse_mode="HTML")
+                except Exception as e:
+                    logger.warning(f"HTML parse failed, falling back to plain text: {e}")
+                    try:
+                        await self._app.bot.send_message(chat_id=chat_id, text=chunk)
+                    except Exception as e2:
+                        logger.error(f"Error sending Telegram message: {e2}")

     async def _on_start(self, update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
         """Handle /start command."""
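The media dispatch in `send` keys off `_get_media_type`, a pure extension lookup. A standalone sketch of that mapping (module-level function for illustration):

```python
def get_media_type(path: str) -> str:
    """Guess a Telegram media type from the file extension."""
    ext = path.rsplit(".", 1)[-1].lower() if "." in path else ""
    if ext in ("jpg", "jpeg", "png", "gif", "webp"):
        return "photo"
    if ext == "ogg":
        return "voice"   # Telegram treats .ogg uploads as voice notes
    if ext in ("mp3", "m4a", "wav", "aac"):
        return "audio"
    return "document"    # safe fallback for anything else

print(get_media_type("/tmp/pic.PNG"))  # → photo (extension check is case-insensitive)
```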
@@ -222,12 +266,18 @@ class TelegramChannel(BaseChannel):
             "Type /help to see available commands."
         )

+    @staticmethod
+    def _sender_id(user) -> str:
+        """Build sender_id with username for allowlist matching."""
+        sid = str(user.id)
+        return f"{sid}|{user.username}" if user.username else sid
+
     async def _forward_command(self, update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
         """Forward slash commands to the bus for unified handling in AgentLoop."""
         if not update.message or not update.effective_user:
             return
         await self._handle_message(
-            sender_id=str(update.effective_user.id),
+            sender_id=self._sender_id(update.effective_user),
             chat_id=str(update.message.chat_id),
             content=update.message.text,
         )
@@ -240,11 +290,7 @@ class TelegramChannel(BaseChannel):
         message = update.message
         user = update.effective_user
         chat_id = message.chat_id
-
-        # Use stable numeric ID, but keep username for allowlist compatibility
-        sender_id = str(user.id)
-        if user.username:
-            sender_id = f"{sender_id}|{user.username}"
+        sender_id = self._sender_id(user)

         # Store chat_id for replies
         self._chat_ids[sender_id] = chat_id
@@ -19,6 +19,7 @@ from prompt_toolkit.history import FileHistory
 from prompt_toolkit.patch_stdout import patch_stdout

 from nanobot import __version__, __logo__
+from nanobot.config.schema import Config

 app = typer.Typer(
     name="nanobot",
@@ -278,21 +279,41 @@ This file stores important information that should persist across sessions.
     skills_dir.mkdir(exist_ok=True)


-def _make_provider(config):
-    """Create LiteLLMProvider from config. Exits if no API key found."""
+def _make_provider(config: Config):
+    """Create the appropriate LLM provider from config."""
     from nanobot.providers.litellm_provider import LiteLLMProvider
+    from nanobot.providers.openai_codex_provider import OpenAICodexProvider
+    from nanobot.providers.custom_provider import CustomProvider

-    p = config.get_provider()
     model = config.agents.defaults.model
-    if not (p and p.api_key) and not model.startswith("bedrock/"):
+    provider_name = config.get_provider_name(model)
+    p = config.get_provider(model)
+
+    # OpenAI Codex (OAuth)
+    if provider_name == "openai_codex" or model.startswith("openai-codex/"):
+        return OpenAICodexProvider(default_model=model)
+
+    # Custom: direct OpenAI-compatible endpoint, bypasses LiteLLM
+    if provider_name == "custom":
+        return CustomProvider(
+            api_key=p.api_key if p else "no-key",
+            api_base=config.get_api_base(model) or "http://localhost:8000/v1",
+            default_model=model,
+        )
+
+    from nanobot.providers.registry import find_by_name
+    spec = find_by_name(provider_name)
+    if not model.startswith("bedrock/") and not (p and p.api_key) and not (spec and spec.is_oauth):
         console.print("[red]Error: No API key configured.[/red]")
         console.print("Set one in ~/.nanobot/config.json under providers section")
         raise typer.Exit(1)

     return LiteLLMProvider(
         api_key=p.api_key if p else None,
-        api_base=config.get_api_base(),
+        api_base=config.get_api_base(model),
         default_model=model,
         extra_headers=p.extra_headers if p else None,
-        provider_name=config.get_provider_name(),
+        provider_name=provider_name,
     )
@@ -346,6 +367,7 @@ def gateway(
         cron_service=cron,
         restrict_to_workspace=config.tools.restrict_to_workspace,
         session_manager=session_manager,
+        mcp_servers=config.tools.mcp_servers,
     )

     # Set cron callback (needs agent)
@@ -403,6 +425,8 @@ def gateway(
         )
     except KeyboardInterrupt:
         console.print("\nShutting down...")
+    finally:
+        await agent.close_mcp()
     heartbeat.stop()
     cron.stop()
     agent.stop()
@@ -426,9 +450,10 @@ def agent(
     logs: bool = typer.Option(False, "--logs/--no-logs", help="Show nanobot runtime logs during chat"),
 ):
     """Interact with the agent directly."""
-    from nanobot.config.loader import load_config
+    from nanobot.config.loader import load_config, get_data_dir
     from nanobot.bus.queue import MessageBus
     from nanobot.agent.loop import AgentLoop
+    from nanobot.cron.service import CronService
     from loguru import logger

     config = load_config()
@@ -436,6 +461,10 @@ def agent(
     bus = MessageBus()
     provider = _make_provider(config)

+    # Create cron service for tool usage (no callback needed for CLI unless running)
+    cron_store_path = get_data_dir() / "cron" / "jobs.json"
+    cron = CronService(cron_store_path)
+
     if logs:
         logger.enable("nanobot")
     else:
@@ -452,7 +481,9 @@ def agent(
         memory_window=config.agents.defaults.memory_window,
         brave_api_key=config.tools.web.search.api_key or None,
         exec_config=config.tools.exec,
+        cron_service=cron,
         restrict_to_workspace=config.tools.restrict_to_workspace,
+        mcp_servers=config.tools.mcp_servers,
     )

     # Show spinner when logs are off (no output to miss); skip when logs are on
@@ -469,6 +500,7 @@ def agent(
             with _thinking_ctx():
                 response = await agent_loop.process_direct(message, session_id)
             _print_agent_response(response, render_markdown=markdown)
+            await agent_loop.close_mcp()

         asyncio.run(run_once())
     else:
@@ -484,30 +516,33 @@ def agent(
         signal.signal(signal.SIGINT, _exit_on_sigint)

         async def run_interactive():
-            while True:
-                try:
-                    _flush_pending_tty_input()
-                    user_input = await _read_interactive_input_async()
-                    command = user_input.strip()
-                    if not command:
-                        continue
-
-                    if _is_exit_command(command):
-                        _restore_terminal()
-                        console.print("\nGoodbye!")
-                        break
-
-                    with _thinking_ctx():
-                        response = await agent_loop.process_direct(user_input, session_id)
-                    _print_agent_response(response, render_markdown=markdown)
-                except KeyboardInterrupt:
-                    _restore_terminal()
-                    console.print("\nGoodbye!")
-                    break
-                except EOFError:
-                    _restore_terminal()
-                    console.print("\nGoodbye!")
-                    break
+            try:
+                while True:
+                    try:
+                        _flush_pending_tty_input()
+                        user_input = await _read_interactive_input_async()
+                        command = user_input.strip()
+                        if not command:
+                            continue
+
+                        if _is_exit_command(command):
+                            _restore_terminal()
+                            console.print("\nGoodbye!")
+                            break
+
+                        with _thinking_ctx():
+                            response = await agent_loop.process_direct(user_input, session_id)
+                        _print_agent_response(response, render_markdown=markdown)
+                    except KeyboardInterrupt:
+                        _restore_terminal()
+                        console.print("\nGoodbye!")
+                        break
+                    except EOFError:
+                        _restore_terminal()
+                        console.print("\nGoodbye!")
+                        break
+            finally:
+                await agent_loop.close_mcp()

         asyncio.run(run_interactive())
@@ -702,20 +737,26 @@ def cron_list(
     table.add_column("Next Run")

     import time
+    from datetime import datetime as _dt
+    from zoneinfo import ZoneInfo
     for job in jobs:
         # Format schedule
         if job.schedule.kind == "every":
             sched = f"every {(job.schedule.every_ms or 0) // 1000}s"
         elif job.schedule.kind == "cron":
-            sched = job.schedule.expr or ""
+            sched = f"{job.schedule.expr or ''} ({job.schedule.tz})" if job.schedule.tz else (job.schedule.expr or "")
         else:
             sched = "one-time"

         # Format next run
         next_run = ""
         if job.state.next_run_at_ms:
-            next_time = time.strftime("%Y-%m-%d %H:%M", time.localtime(job.state.next_run_at_ms / 1000))
-            next_run = next_time
+            ts = job.state.next_run_at_ms / 1000
+            try:
+                tz = ZoneInfo(job.schedule.tz) if job.schedule.tz else None
+                next_run = _dt.fromtimestamp(ts, tz).strftime("%Y-%m-%d %H:%M")
+            except Exception:
+                next_run = time.strftime("%Y-%m-%d %H:%M", time.localtime(ts))

         status = "[green]enabled[/green]" if job.enabled else "[dim]disabled[/dim]"
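The timezone-aware "Next Run" rendering reduces to converting epoch milliseconds in a given zone (local time when no zone is set). A minimal sketch, with a hypothetical `format_next_run` helper and a fixed UTC zone standing in for `job.schedule.tz`:

```python
from datetime import datetime, timezone

def format_next_run(ts_ms: int, tz=None) -> str:
    """Render an epoch-milliseconds timestamp in the given zone (local if None)."""
    return datetime.fromtimestamp(ts_ms / 1000, tz).strftime("%Y-%m-%d %H:%M")

print(format_next_run(0, timezone.utc))  # → 1970-01-01 00:00
```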
@@ -730,6 +771,7 @@ def cron_add(
     message: str = typer.Option(..., "--message", "-m", help="Message for agent"),
     every: int = typer.Option(None, "--every", "-e", help="Run every N seconds"),
     cron_expr: str = typer.Option(None, "--cron", "-c", help="Cron expression (e.g. '0 9 * * *')"),
+    tz: str | None = typer.Option(None, "--tz", help="IANA timezone for cron (e.g. 'America/Vancouver')"),
     at: str = typer.Option(None, "--at", help="Run once at time (ISO format)"),
     deliver: bool = typer.Option(False, "--deliver", "-d", help="Deliver response to channel"),
     to: str = typer.Option(None, "--to", help="Recipient for delivery"),
@@ -740,11 +782,15 @@ def cron_add(
     from nanobot.cron.service import CronService
     from nanobot.cron.types import CronSchedule

+    if tz and not cron_expr:
+        console.print("[red]Error: --tz can only be used with --cron[/red]")
+        raise typer.Exit(1)
+
     # Determine schedule type
     if every:
         schedule = CronSchedule(kind="every", every_ms=every * 1000)
     elif cron_expr:
-        schedule = CronSchedule(kind="cron", expr=cron_expr)
+        schedule = CronSchedule(kind="cron", expr=cron_expr, tz=tz)
     elif at:
         import datetime
         dt = datetime.datetime.fromisoformat(at)
@@ -855,7 +901,9 @@ def status():
         p = getattr(config.providers, spec.name, None)
         if p is None:
             continue
-        if spec.is_local:
+        if spec.is_oauth:
+            console.print(f"{spec.label}: [green]✓ (OAuth)[/green]")
+        elif spec.is_local:
             # Local deployments show api_base instead of api_key
             if p.api_base:
                 console.print(f"{spec.label}: [green]✓ {p.api_base}[/green]")
@@ -866,5 +914,88 @@ def status():
         console.print(f"{spec.label}: {'[green]✓[/green]' if has_key else '[dim]not set[/dim]'}")


+# ============================================================================
+# OAuth Login
+# ============================================================================
+
+provider_app = typer.Typer(help="Manage providers")
+app.add_typer(provider_app, name="provider")
+
+
+_LOGIN_HANDLERS: dict[str, callable] = {}
+
+
+def _register_login(name: str):
+    def decorator(fn):
+        _LOGIN_HANDLERS[name] = fn
+        return fn
+    return decorator
+
+
+@provider_app.command("login")
+def provider_login(
+    provider: str = typer.Argument(..., help="OAuth provider (e.g. 'openai-codex', 'github-copilot')"),
+):
+    """Authenticate with an OAuth provider."""
+    from nanobot.providers.registry import PROVIDERS
+
+    key = provider.replace("-", "_")
+    spec = next((s for s in PROVIDERS if s.name == key and s.is_oauth), None)
+    if not spec:
+        names = ", ".join(s.name.replace("_", "-") for s in PROVIDERS if s.is_oauth)
+        console.print(f"[red]Unknown OAuth provider: {provider}[/red] Supported: {names}")
+        raise typer.Exit(1)
+
+    handler = _LOGIN_HANDLERS.get(spec.name)
+    if not handler:
+        console.print(f"[red]Login not implemented for {spec.label}[/red]")
+        raise typer.Exit(1)
+
+    console.print(f"{__logo__} OAuth Login - {spec.label}\n")
+    handler()
+
+
+@_register_login("openai_codex")
+def _login_openai_codex() -> None:
+    try:
+        from oauth_cli_kit import get_token, login_oauth_interactive
+        token = None
+        try:
+            token = get_token()
+        except Exception:
+            pass
+        if not (token and token.access):
+            console.print("[cyan]Starting interactive OAuth login...[/cyan]\n")
+            token = login_oauth_interactive(
+                print_fn=lambda s: console.print(s),
+                prompt_fn=lambda s: typer.prompt(s),
+            )
+        if not (token and token.access):
+            console.print("[red]✗ Authentication failed[/red]")
+            raise typer.Exit(1)
+        console.print(f"[green]✓ Authenticated with OpenAI Codex[/green] [dim]{token.account_id}[/dim]")
+    except ImportError:
+        console.print("[red]oauth_cli_kit not installed. Run: pip install oauth-cli-kit[/red]")
+        raise typer.Exit(1)
+
+
+@_register_login("github_copilot")
+def _login_github_copilot() -> None:
+    import asyncio
+
+    console.print("[cyan]Starting GitHub Copilot device flow...[/cyan]\n")
+
+    async def _trigger():
+        from litellm import acompletion
+        await acompletion(model="github_copilot/gpt-4o", messages=[{"role": "user", "content": "hi"}], max_tokens=1)
+
+    try:
+        asyncio.run(_trigger())
+        console.print("[green]✓ Authenticated with GitHub Copilot[/green]")
+    except Exception as e:
+        console.print(f"[red]Authentication error: {e}[/red]")
+        raise typer.Exit(1)
+
+
 if __name__ == "__main__":
     app()
@@ -2,7 +2,6 @@

 import json
 from pathlib import Path
-from typing import Any

 from nanobot.config.schema import Config

@@ -21,43 +20,41 @@ def get_data_dir() -> Path:
 def load_config(config_path: Path | None = None) -> Config:
     """
     Load configuration from file or create default.

     Args:
         config_path: Optional path to config file. Uses default if not provided.

     Returns:
         Loaded configuration object.
     """
     path = config_path or get_config_path()

     if path.exists():
         try:
             with open(path) as f:
                 data = json.load(f)
             data = _migrate_config(data)
-            return Config.model_validate(convert_keys(data))
+            return Config.model_validate(data)
         except (json.JSONDecodeError, ValueError) as e:
             print(f"Warning: Failed to load config from {path}: {e}")
             print("Using default configuration.")

     return Config()


 def save_config(config: Config, config_path: Path | None = None) -> None:
     """
     Save configuration to file.

     Args:
         config: Configuration to save.
         config_path: Optional path to save to. Uses default if not provided.
     """
     path = config_path or get_config_path()
     path.parent.mkdir(parents=True, exist_ok=True)

-    # Convert to camelCase format
-    data = config.model_dump()
-    data = convert_to_camel(data)
+    data = config.model_dump(by_alias=True)

     with open(path, "w") as f:
         json.dump(data, f, indent=2)

@@ -70,37 +67,3 @@ def _migrate_config(data: dict) -> dict:
     if "restrictToWorkspace" in exec_cfg and "restrictToWorkspace" not in tools:
         tools["restrictToWorkspace"] = exec_cfg.pop("restrictToWorkspace")
     return data
-
-
-def convert_keys(data: Any) -> Any:
-    """Convert camelCase keys to snake_case for Pydantic."""
-    if isinstance(data, dict):
-        return {camel_to_snake(k): convert_keys(v) for k, v in data.items()}
-    if isinstance(data, list):
-        return [convert_keys(item) for item in data]
-    return data
-
-
-def convert_to_camel(data: Any) -> Any:
-    """Convert snake_case keys to camelCase."""
-    if isinstance(data, dict):
-        return {snake_to_camel(k): convert_to_camel(v) for k, v in data.items()}
-    if isinstance(data, list):
-        return [convert_to_camel(item) for item in data]
-    return data
-
-
-def camel_to_snake(name: str) -> str:
-    """Convert camelCase to snake_case."""
-    result = []
-    for i, char in enumerate(name):
-        if char.isupper() and i > 0:
-            result.append("_")
-        result.append(char.lower())
-    return "".join(result)
-
-
-def snake_to_camel(name: str) -> str:
-    """Convert snake_case to camelCase."""
-    components = name.split("_")
-    return components[0] + "".join(x.title() for x in components[1:])
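A quick sanity check (a sketch, assuming pydantic v2 is installed) that pydantic's built-in alias generators cover what the deleted helpers did for typical config keys:

```python
from pydantic.alias_generators import to_camel, to_snake

# Typical config keys convert the same way the removed helpers did.
assert to_camel("bridge_url") == "bridgeUrl"
assert to_snake("bridgeUrl") == "bridge_url"
print(to_camel("restrict_to_workspace"))  # restrictToWorkspace
```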
@@ -2,27 +2,37 @@
 
 from pathlib import Path
 from pydantic import BaseModel, Field, ConfigDict
+from pydantic.alias_generators import to_camel
 from pydantic_settings import BaseSettings
 
 
-class WhatsAppConfig(BaseModel):
+class Base(BaseModel):
+    """Base model that accepts both camelCase and snake_case keys."""
+
+    model_config = ConfigDict(alias_generator=to_camel, populate_by_name=True)
+
+
+class WhatsAppConfig(Base):
     """WhatsApp channel configuration."""
 
     enabled: bool = False
     bridge_url: str = "ws://localhost:3001"
     bridge_token: str = ""  # Shared token for bridge auth (optional, recommended)
     allow_from: list[str] = Field(default_factory=list)  # Allowed phone numbers
 
 
-class TelegramConfig(BaseModel):
+class TelegramConfig(Base):
     """Telegram channel configuration."""
 
     enabled: bool = False
     token: str = ""  # Bot token from @BotFather
     allow_from: list[str] = Field(default_factory=list)  # Allowed user IDs or usernames
     proxy: str | None = None  # HTTP/SOCKS5 proxy URL, e.g. "http://127.0.0.1:7890" or "socks5://127.0.0.1:1080"
 
 
-class FeishuConfig(BaseModel):
+class FeishuConfig(Base):
     """Feishu/Lark channel configuration using WebSocket long connection."""
 
     enabled: bool = False
     app_id: str = ""  # App ID from Feishu Open Platform
     app_secret: str = ""  # App Secret from Feishu Open Platform
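The new `Base` model can be sketched in isolation (a minimal reproduction assuming pydantic v2; `TelegramConfig` fields abbreviated for the example):

```python
from pydantic import BaseModel, ConfigDict, Field
from pydantic.alias_generators import to_camel


class Base(BaseModel):
    """Accept both camelCase and snake_case keys."""

    model_config = ConfigDict(alias_generator=to_camel, populate_by_name=True)


class TelegramConfig(Base):
    enabled: bool = False
    allow_from: list[str] = Field(default_factory=list)


# Both spellings validate to the same model...
a = TelegramConfig.model_validate({"allowFrom": ["42"]})
b = TelegramConfig.model_validate({"allow_from": ["42"]})
assert a == b

# ...and by_alias=True dumps camelCase, which is what lets save_config
# drop the old convert_to_camel() helper.
print(a.model_dump(by_alias=True))  # {'enabled': False, 'allowFrom': ['42']}
```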
@@ -31,24 +41,28 @@ class FeishuConfig(BaseModel):
     allow_from: list[str] = Field(default_factory=list)  # Allowed user open_ids
 
 
-class DingTalkConfig(BaseModel):
+class DingTalkConfig(Base):
     """DingTalk channel configuration using Stream mode."""
 
     enabled: bool = False
     client_id: str = ""  # AppKey
     client_secret: str = ""  # AppSecret
     allow_from: list[str] = Field(default_factory=list)  # Allowed staff_ids
 
 
-class DiscordConfig(BaseModel):
+class DiscordConfig(Base):
     """Discord channel configuration."""
 
     enabled: bool = False
     token: str = ""  # Bot token from Discord Developer Portal
     allow_from: list[str] = Field(default_factory=list)  # Allowed user IDs
     gateway_url: str = "wss://gateway.discord.gg/?v=10&encoding=json"
     intents: int = 37377  # GUILDS + GUILD_MESSAGES + DIRECT_MESSAGES + MESSAGE_CONTENT
 
-class EmailConfig(BaseModel):
+
+class EmailConfig(Base):
     """Email channel configuration (IMAP inbound + SMTP outbound)."""
 
     enabled: bool = False
     consent_granted: bool = False  # Explicit owner permission to access mailbox data
 
|
|||||||
allow_from: list[str] = Field(default_factory=list) # Allowed sender email addresses
|
allow_from: list[str] = Field(default_factory=list) # Allowed sender email addresses
|
||||||
|
|
||||||
|
|
||||||
class MochatMentionConfig(BaseModel):
|
class MochatMentionConfig(Base):
|
||||||
"""Mochat mention behavior configuration."""
|
"""Mochat mention behavior configuration."""
|
||||||
|
|
||||||
require_in_groups: bool = False
|
require_in_groups: bool = False
|
||||||
|
|
||||||
|
|
||||||
class MochatGroupRule(BaseModel):
|
class MochatGroupRule(Base):
|
||||||
"""Mochat per-group mention requirement."""
|
"""Mochat per-group mention requirement."""
|
||||||
|
|
||||||
require_mention: bool = False
|
require_mention: bool = False
|
||||||
|
|
||||||
|
|
||||||
class MochatConfig(BaseModel):
|
class MochatConfig(Base):
|
||||||
"""Mochat channel configuration."""
|
"""Mochat channel configuration."""
|
||||||
|
|
||||||
enabled: bool = False
|
enabled: bool = False
|
||||||
base_url: str = "https://mochat.io"
|
base_url: str = "https://mochat.io"
|
||||||
socket_url: str = ""
|
socket_url: str = ""
|
||||||
@ -114,36 +131,42 @@ class MochatConfig(BaseModel):
|
|||||||
reply_delay_ms: int = 120000
|
reply_delay_ms: int = 120000
|
||||||
|
|
||||||
|
|
||||||
class SlackDMConfig(BaseModel):
|
class SlackDMConfig(Base):
|
||||||
"""Slack DM policy configuration."""
|
"""Slack DM policy configuration."""
|
||||||
|
|
||||||
enabled: bool = True
|
enabled: bool = True
|
||||||
policy: str = "open" # "open" or "allowlist"
|
policy: str = "open" # "open" or "allowlist"
|
||||||
allow_from: list[str] = Field(default_factory=list) # Allowed Slack user IDs
|
allow_from: list[str] = Field(default_factory=list) # Allowed Slack user IDs
|
||||||
|
|
||||||
|
|
||||||
class SlackConfig(BaseModel):
|
class SlackConfig(Base):
|
||||||
"""Slack channel configuration."""
|
"""Slack channel configuration."""
|
||||||
|
|
||||||
enabled: bool = False
|
enabled: bool = False
|
||||||
mode: str = "socket" # "socket" supported
|
mode: str = "socket" # "socket" supported
|
||||||
webhook_path: str = "/slack/events"
|
webhook_path: str = "/slack/events"
|
||||||
bot_token: str = "" # xoxb-...
|
bot_token: str = "" # xoxb-...
|
||||||
app_token: str = "" # xapp-...
|
app_token: str = "" # xapp-...
|
||||||
user_token_read_only: bool = True
|
user_token_read_only: bool = True
|
||||||
|
reply_in_thread: bool = True
|
||||||
|
react_emoji: str = "eyes"
|
||||||
group_policy: str = "mention" # "mention", "open", "allowlist"
|
group_policy: str = "mention" # "mention", "open", "allowlist"
|
||||||
group_allow_from: list[str] = Field(default_factory=list) # Allowed channel IDs if allowlist
|
group_allow_from: list[str] = Field(default_factory=list) # Allowed channel IDs if allowlist
|
||||||
dm: SlackDMConfig = Field(default_factory=SlackDMConfig)
|
dm: SlackDMConfig = Field(default_factory=SlackDMConfig)
|
||||||
|
|
||||||
|
|
||||||
class QQConfig(BaseModel):
|
class QQConfig(Base):
|
||||||
"""QQ channel configuration using botpy SDK."""
|
"""QQ channel configuration using botpy SDK."""
|
||||||
|
|
||||||
enabled: bool = False
|
enabled: bool = False
|
||||||
app_id: str = "" # 机器人 ID (AppID) from q.qq.com
|
app_id: str = "" # 机器人 ID (AppID) from q.qq.com
|
||||||
secret: str = "" # 机器人密钥 (AppSecret) from q.qq.com
|
secret: str = "" # 机器人密钥 (AppSecret) from q.qq.com
|
||||||
allow_from: list[str] = Field(default_factory=list) # Allowed user openids (empty = public access)
|
allow_from: list[str] = Field(default_factory=list) # Allowed user openids (empty = public access)
|
||||||
|
|
||||||
|
|
||||||
class ChannelsConfig(BaseModel):
|
class ChannelsConfig(Base):
|
||||||
"""Configuration for chat channels."""
|
"""Configuration for chat channels."""
|
||||||
|
|
||||||
whatsapp: WhatsAppConfig = Field(default_factory=WhatsAppConfig)
|
whatsapp: WhatsAppConfig = Field(default_factory=WhatsAppConfig)
|
||||||
telegram: TelegramConfig = Field(default_factory=TelegramConfig)
|
telegram: TelegramConfig = Field(default_factory=TelegramConfig)
|
||||||
discord: DiscordConfig = Field(default_factory=DiscordConfig)
|
discord: DiscordConfig = Field(default_factory=DiscordConfig)
|
||||||
@ -155,8 +178,9 @@ class ChannelsConfig(BaseModel):
|
|||||||
qq: QQConfig = Field(default_factory=QQConfig)
|
qq: QQConfig = Field(default_factory=QQConfig)
|
||||||
|
|
||||||
|
|
||||||
class AgentDefaults(BaseModel):
|
class AgentDefaults(Base):
|
||||||
"""Default agent configuration."""
|
"""Default agent configuration."""
|
||||||
|
|
||||||
workspace: str = "~/.nanobot/workspace"
|
workspace: str = "~/.nanobot/workspace"
|
||||||
model: str = "anthropic/claude-opus-4-5"
|
model: str = "anthropic/claude-opus-4-5"
|
||||||
max_tokens: int = 8192
|
max_tokens: int = 8192
|
||||||
@ -165,20 +189,23 @@ class AgentDefaults(BaseModel):
|
|||||||
memory_window: int = 50
|
memory_window: int = 50
|
||||||
|
|
||||||
|
|
||||||
class AgentsConfig(BaseModel):
|
class AgentsConfig(Base):
|
||||||
"""Agent configuration."""
|
"""Agent configuration."""
|
||||||
|
|
||||||
defaults: AgentDefaults = Field(default_factory=AgentDefaults)
|
defaults: AgentDefaults = Field(default_factory=AgentDefaults)
|
||||||
|
|
||||||
|
|
||||||
class ProviderConfig(BaseModel):
|
class ProviderConfig(Base):
|
||||||
"""LLM provider configuration."""
|
"""LLM provider configuration."""
|
||||||
|
|
||||||
api_key: str = ""
|
api_key: str = ""
|
||||||
api_base: str | None = None
|
api_base: str | None = None
|
||||||
extra_headers: dict[str, str] | None = None # Custom headers (e.g. APP-Code for AiHubMix)
|
extra_headers: dict[str, str] | None = None # Custom headers (e.g. APP-Code for AiHubMix)
|
||||||
|
|
||||||
|
|
||||||
class ProvidersConfig(BaseModel):
|
class ProvidersConfig(Base):
|
||||||
"""Configuration for LLM providers."""
|
"""Configuration for LLM providers."""
|
||||||
|
|
||||||
custom: ProviderConfig = Field(default_factory=ProviderConfig) # Any OpenAI-compatible endpoint
|
custom: ProviderConfig = Field(default_factory=ProviderConfig) # Any OpenAI-compatible endpoint
|
||||||
anthropic: ProviderConfig = Field(default_factory=ProviderConfig)
|
anthropic: ProviderConfig = Field(default_factory=ProviderConfig)
|
||||||
openai: ProviderConfig = Field(default_factory=ProviderConfig)
|
openai: ProviderConfig = Field(default_factory=ProviderConfig)
|
||||||
@ -192,63 +219,87 @@ class ProvidersConfig(BaseModel):
|
|||||||
moonshot: ProviderConfig = Field(default_factory=ProviderConfig)
|
moonshot: ProviderConfig = Field(default_factory=ProviderConfig)
|
||||||
minimax: ProviderConfig = Field(default_factory=ProviderConfig)
|
minimax: ProviderConfig = Field(default_factory=ProviderConfig)
|
||||||
aihubmix: ProviderConfig = Field(default_factory=ProviderConfig) # AiHubMix API gateway
|
aihubmix: ProviderConfig = Field(default_factory=ProviderConfig) # AiHubMix API gateway
|
||||||
|
siliconflow: ProviderConfig = Field(default_factory=ProviderConfig) # SiliconFlow (硅基流动) API gateway
|
||||||
|
openai_codex: ProviderConfig = Field(default_factory=ProviderConfig) # OpenAI Codex (OAuth)
|
||||||
|
github_copilot: ProviderConfig = Field(default_factory=ProviderConfig) # Github Copilot (OAuth)
|
||||||
|
|
||||||
|
|
||||||
class GatewayConfig(BaseModel):
|
class GatewayConfig(Base):
|
||||||
"""Gateway/server configuration."""
|
"""Gateway/server configuration."""
|
||||||
|
|
||||||
host: str = "0.0.0.0"
|
host: str = "0.0.0.0"
|
||||||
port: int = 18790
|
port: int = 18790
|
||||||
|
|
||||||
|
|
||||||
class WebSearchConfig(BaseModel):
|
class WebSearchConfig(Base):
|
||||||
"""Web search tool configuration."""
|
"""Web search tool configuration."""
|
||||||
|
|
||||||
api_key: str = "" # Brave Search API key
|
api_key: str = "" # Brave Search API key
|
||||||
max_results: int = 5
|
max_results: int = 5
|
||||||
|
|
||||||
|
|
||||||
class WebToolsConfig(BaseModel):
|
class WebToolsConfig(Base):
|
||||||
"""Web tools configuration."""
|
"""Web tools configuration."""
|
||||||
|
|
||||||
search: WebSearchConfig = Field(default_factory=WebSearchConfig)
|
search: WebSearchConfig = Field(default_factory=WebSearchConfig)
|
||||||
|
|
||||||
|
|
||||||
class ExecToolConfig(BaseModel):
|
class ExecToolConfig(Base):
|
||||||
"""Shell exec tool configuration."""
|
"""Shell exec tool configuration."""
|
||||||
|
|
||||||
timeout: int = 60
|
timeout: int = 60
|
||||||
|
|
||||||
|
|
||||||
class ToolsConfig(BaseModel):
|
class MCPServerConfig(Base):
|
||||||
|
"""MCP server connection configuration (stdio or HTTP)."""
|
||||||
|
|
||||||
|
command: str = "" # Stdio: command to run (e.g. "npx")
|
||||||
|
args: list[str] = Field(default_factory=list) # Stdio: command arguments
|
||||||
|
env: dict[str, str] = Field(default_factory=dict) # Stdio: extra env vars
|
||||||
|
url: str = "" # HTTP: streamable HTTP endpoint URL
|
||||||
|
|
||||||
|
|
||||||
|
class ToolsConfig(Base):
|
||||||
"""Tools configuration."""
|
"""Tools configuration."""
|
||||||
|
|
||||||
web: WebToolsConfig = Field(default_factory=WebToolsConfig)
|
web: WebToolsConfig = Field(default_factory=WebToolsConfig)
|
||||||
exec: ExecToolConfig = Field(default_factory=ExecToolConfig)
|
exec: ExecToolConfig = Field(default_factory=ExecToolConfig)
|
||||||
restrict_to_workspace: bool = False # If true, restrict all tool access to workspace directory
|
restrict_to_workspace: bool = False # If true, restrict all tool access to workspace directory
|
||||||
|
mcp_servers: dict[str, MCPServerConfig] = Field(default_factory=dict)
|
||||||
|
|
||||||
|
|
||||||
class Config(BaseSettings):
|
class Config(BaseSettings):
|
||||||
"""Root configuration for nanobot."""
|
"""Root configuration for nanobot."""
|
||||||
|
|
||||||
agents: AgentsConfig = Field(default_factory=AgentsConfig)
|
agents: AgentsConfig = Field(default_factory=AgentsConfig)
|
||||||
channels: ChannelsConfig = Field(default_factory=ChannelsConfig)
|
channels: ChannelsConfig = Field(default_factory=ChannelsConfig)
|
||||||
providers: ProvidersConfig = Field(default_factory=ProvidersConfig)
|
providers: ProvidersConfig = Field(default_factory=ProvidersConfig)
|
||||||
gateway: GatewayConfig = Field(default_factory=GatewayConfig)
|
gateway: GatewayConfig = Field(default_factory=GatewayConfig)
|
||||||
tools: ToolsConfig = Field(default_factory=ToolsConfig)
|
tools: ToolsConfig = Field(default_factory=ToolsConfig)
|
||||||
|
|
||||||
@property
|
@property
|
||||||
def workspace_path(self) -> Path:
|
def workspace_path(self) -> Path:
|
||||||
"""Get expanded workspace path."""
|
"""Get expanded workspace path."""
|
||||||
return Path(self.agents.defaults.workspace).expanduser()
|
return Path(self.agents.defaults.workspace).expanduser()
|
||||||
|
|
||||||
def _match_provider(self, model: str | None = None) -> tuple["ProviderConfig | None", str | None]:
|
def _match_provider(self, model: str | None = None) -> tuple["ProviderConfig | None", str | None]:
|
||||||
"""Match provider config and its registry name. Returns (config, spec_name)."""
|
"""Match provider config and its registry name. Returns (config, spec_name)."""
|
||||||
from nanobot.providers.registry import PROVIDERS
|
from nanobot.providers.registry import PROVIDERS
|
||||||
|
|
||||||
model_lower = (model or self.agents.defaults.model).lower()
|
model_lower = (model or self.agents.defaults.model).lower()
|
||||||
|
|
||||||
# Match by keyword (order follows PROVIDERS registry)
|
# Match by keyword (order follows PROVIDERS registry)
|
||||||
for spec in PROVIDERS:
|
for spec in PROVIDERS:
|
||||||
p = getattr(self.providers, spec.name, None)
|
p = getattr(self.providers, spec.name, None)
|
||||||
if p and any(kw in model_lower for kw in spec.keywords) and p.api_key:
|
if p and any(kw in model_lower for kw in spec.keywords):
|
||||||
return p, spec.name
|
if spec.is_oauth or p.api_key:
|
||||||
|
return p, spec.name
|
||||||
|
|
||||||
# Fallback: gateways first, then others (follows registry order)
|
# Fallback: gateways first, then others (follows registry order)
|
||||||
|
# OAuth providers are NOT valid fallbacks — they require explicit model selection
|
||||||
for spec in PROVIDERS:
|
for spec in PROVIDERS:
|
||||||
|
if spec.is_oauth:
|
||||||
|
continue
|
||||||
p = getattr(self.providers, spec.name, None)
|
p = getattr(self.providers, spec.name, None)
|
||||||
if p and p.api_key:
|
if p and p.api_key:
|
||||||
return p, spec.name
|
return p, spec.name
|
||||||
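The new matching rules can be illustrated with a toy registry (a sketch only; the `Spec` fields and sample entries are invented, not nanobot's real `PROVIDERS`): OAuth providers match on keyword even without an `api_key`, but are skipped as fallbacks.

```python
from dataclasses import dataclass


@dataclass
class Spec:
    name: str
    keywords: list[str]
    is_oauth: bool = False


PROVIDERS = [
    Spec("openai_codex", ["codex"], is_oauth=True),
    Spec("openai", ["gpt"]),
]
providers = {"openai_codex": {"api_key": ""}, "openai": {"api_key": "sk-test"}}


def match(model):
    model = model.lower()
    for spec in PROVIDERS:  # keyword match; OAuth needs no key
        p = providers.get(spec.name)
        if p and any(kw in model for kw in spec.keywords):
            if spec.is_oauth or p["api_key"]:
                return spec.name
    for spec in PROVIDERS:  # fallback skips OAuth providers
        if spec.is_oauth:
            continue
        p = providers.get(spec.name)
        if p and p["api_key"]:
            return spec.name
    return None


print(match("openai-codex/gpt-5.1-codex"))  # openai_codex
print(match("unknown-model"))               # openai
```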
@@ -268,10 +319,11 @@ class Config(BaseSettings):
         """Get API key for the given model. Falls back to first available key."""
         p = self.get_provider(model)
         return p.api_key if p else None
 
     def get_api_base(self, model: str | None = None) -> str | None:
         """Get API base URL for the given model. Applies default URLs for known gateways."""
         from nanobot.providers.registry import find_by_name
 
         p, name = self._match_provider(model)
         if p and p.api_base:
             return p.api_base
@@ -283,8 +335,5 @@ class Config(BaseSettings):
         if spec and spec.is_gateway and spec.default_api_base:
             return spec.default_api_base
         return None
 
-    model_config = ConfigDict(
-        env_prefix="NANOBOT_",
-        env_nested_delimiter="__"
-    )
+    model_config = ConfigDict(env_prefix="NANOBOT_", env_nested_delimiter="__")
@@ -4,6 +4,7 @@ import asyncio
 import json
 import time
 import uuid
+from datetime import datetime
 from pathlib import Path
 from typing import Any, Callable, Coroutine
@@ -30,9 +31,14 @@ def _compute_next_run(schedule: CronSchedule, now_ms: int) -> int | None:
     if schedule.kind == "cron" and schedule.expr:
         try:
             from croniter import croniter
-            cron = croniter(schedule.expr, time.time())
-            next_time = cron.get_next()
-            return int(next_time * 1000)
+            from zoneinfo import ZoneInfo
+
+            # Use caller-provided reference time for deterministic scheduling
+            base_time = now_ms / 1000
+            tz = ZoneInfo(schedule.tz) if schedule.tz else datetime.now().astimezone().tzinfo
+            base_dt = datetime.fromtimestamp(base_time, tz=tz)
+            cron = croniter(schedule.expr, base_dt)
+            next_dt = cron.get_next(datetime)
+            return int(next_dt.timestamp() * 1000)
         except Exception:
             return None
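The point of the change above is that the old code resolved the cron expression against the wall clock with no timezone, while the new code anchors it to the caller's `now_ms` in the schedule's zone. A stdlib-only illustration of that anchoring (croniter not required; the `"0 9 * * *"` resolution is hand-rolled for the sketch):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

now_ms = 1_700_000_000_000  # caller-provided reference time (2023-11-14 22:13:20 UTC)
tz = ZoneInfo("America/New_York")
base_dt = datetime.fromtimestamp(now_ms / 1000, tz=tz)

# "0 9 * * *" means 09:00 *in tz*; resolving against the aware base and
# converting back to epoch ms keeps the result deterministic.
next_dt = base_dt.replace(hour=9, minute=0, second=0, microsecond=0)
if next_dt <= base_dt:
    next_dt += timedelta(days=1)

print(int(next_dt.timestamp() * 1000))  # 1700056800000 (Nov 15, 09:00 EST)
```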
@@ -2,5 +2,6 @@
 
 from nanobot.providers.base import LLMProvider, LLMResponse
 from nanobot.providers.litellm_provider import LiteLLMProvider
+from nanobot.providers.openai_codex_provider import OpenAICodexProvider
 
-__all__ = ["LLMProvider", "LLMResponse", "LiteLLMProvider"]
+__all__ = ["LLMProvider", "LLMResponse", "LiteLLMProvider", "OpenAICodexProvider"]
47  nanobot/providers/custom_provider.py  (new file)
@@ -0,0 +1,47 @@
+"""Direct OpenAI-compatible provider — bypasses LiteLLM."""
+
+from __future__ import annotations
+
+from typing import Any
+
+import json_repair
+from openai import AsyncOpenAI
+
+from nanobot.providers.base import LLMProvider, LLMResponse, ToolCallRequest
+
+
+class CustomProvider(LLMProvider):
+
+    def __init__(self, api_key: str = "no-key", api_base: str = "http://localhost:8000/v1", default_model: str = "default"):
+        super().__init__(api_key, api_base)
+        self.default_model = default_model
+        self._client = AsyncOpenAI(api_key=api_key, base_url=api_base)
+
+    async def chat(self, messages: list[dict[str, Any]], tools: list[dict[str, Any]] | None = None,
+                   model: str | None = None, max_tokens: int = 4096, temperature: float = 0.7) -> LLMResponse:
+        kwargs: dict[str, Any] = {"model": model or self.default_model, "messages": messages,
+                                  "max_tokens": max(1, max_tokens), "temperature": temperature}
+        if tools:
+            kwargs.update(tools=tools, tool_choice="auto")
+        try:
+            return self._parse(await self._client.chat.completions.create(**kwargs))
+        except Exception as e:
+            return LLMResponse(content=f"Error: {e}", finish_reason="error")
+
+    def _parse(self, response: Any) -> LLMResponse:
+        choice = response.choices[0]
+        msg = choice.message
+        tool_calls = [
+            ToolCallRequest(id=tc.id, name=tc.function.name,
+                            arguments=json_repair.loads(tc.function.arguments) if isinstance(tc.function.arguments, str) else tc.function.arguments)
+            for tc in (msg.tool_calls or [])
+        ]
+        u = response.usage
+        return LLMResponse(
+            content=msg.content, tool_calls=tool_calls, finish_reason=choice.finish_reason or "stop",
+            usage={"prompt_tokens": u.prompt_tokens, "completion_tokens": u.completion_tokens, "total_tokens": u.total_tokens} if u else {},
+            reasoning_content=getattr(msg, "reasoning_content", None),
+        )
+
+    def get_default_model(self) -> str:
+        return self.default_model
@@ -1,6 +1,7 @@
 """LiteLLM provider implementation for multi-provider support."""
 
 import json
+import json_repair
 import os
 from typing import Any
@@ -54,6 +55,9 @@ class LiteLLMProvider(LLMProvider):
         spec = self._gateway or find_by_model(model)
         if not spec:
             return
+        if not spec.env_key:
+            # OAuth/provider-only specs (for example: openai_codex)
+            return
 
         # Gateway/local overrides existing env; standard provider doesn't
         if self._gateway:
@@ -173,10 +177,7 @@ class LiteLLMProvider(LLMProvider):
                 # Parse arguments from JSON string if needed
                 args = tc.function.arguments
                 if isinstance(args, str):
-                    try:
-                        args = json.loads(args)
-                    except json.JSONDecodeError:
-                        args = {"raw": args}
+                    args = json_repair.loads(args)
 
                 tool_calls.append(ToolCallRequest(
                     id=tc.id,
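A stdlib-only sketch of the failure mode the `json_repair` swap addresses: models sometimes emit almost-JSON tool arguments (the sample payload below is invented) that `json.loads` rejects outright, so the old code fell back to wrapping the raw string.

```python
import json

raw = '{"path": "notes.txt", "recursive": true,}'  # trailing comma: invalid JSON

try:
    args = json.loads(raw)
except json.JSONDecodeError:
    # Old behavior: wrap the raw string so the call site still gets a dict.
    # json_repair.loads(raw) would instead recover the intended dict.
    args = {"raw": raw}

print(args)
```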
312  nanobot/providers/openai_codex_provider.py  (new file)
@@ -0,0 +1,312 @@
|
"""OpenAI Codex Responses Provider."""
|
||||||
|
|
||||||
|
from __future__ import annotations
|
||||||
|
|
||||||
|
import asyncio
|
||||||
|
import hashlib
|
||||||
|
import json
|
||||||
|
from typing import Any, AsyncGenerator
|
||||||
|
|
||||||
|
import httpx
|
||||||
|
from loguru import logger
|
||||||
|
|
||||||
|
from oauth_cli_kit import get_token as get_codex_token
|
||||||
|
from nanobot.providers.base import LLMProvider, LLMResponse, ToolCallRequest
|
||||||
|
|
||||||
|
DEFAULT_CODEX_URL = "https://chatgpt.com/backend-api/codex/responses"
|
||||||
|
DEFAULT_ORIGINATOR = "nanobot"
|
||||||
|
|
||||||
|
|
||||||
|
class OpenAICodexProvider(LLMProvider):
|
||||||
|
"""Use Codex OAuth to call the Responses API."""
|
||||||
|
|
||||||
|
def __init__(self, default_model: str = "openai-codex/gpt-5.1-codex"):
|
||||||
|
super().__init__(api_key=None, api_base=None)
|
||||||
|
self.default_model = default_model
|
||||||
|
|
||||||
|
async def chat(
|
||||||
|
self,
|
||||||
|
messages: list[dict[str, Any]],
|
||||||
|
tools: list[dict[str, Any]] | None = None,
|
||||||
|
model: str | None = None,
|
||||||
|
max_tokens: int = 4096,
|
||||||
|
temperature: float = 0.7,
|
||||||
|
) -> LLMResponse:
|
||||||
|
model = model or self.default_model
|
||||||
|
system_prompt, input_items = _convert_messages(messages)
|
||||||
|
|
||||||
|
token = await asyncio.to_thread(get_codex_token)
|
||||||
|
headers = _build_headers(token.account_id, token.access)
|
||||||
|
|
||||||
|
body: dict[str, Any] = {
|
||||||
|
"model": _strip_model_prefix(model),
|
||||||
|
"store": False,
|
||||||
|
"stream": True,
|
||||||
|
"instructions": system_prompt,
|
||||||
|
"input": input_items,
|
||||||
|
"text": {"verbosity": "medium"},
|
||||||
|
"include": ["reasoning.encrypted_content"],
|
||||||
|
"prompt_cache_key": _prompt_cache_key(messages),
|
||||||
|
"tool_choice": "auto",
|
||||||
|
"parallel_tool_calls": True,
|
||||||
|
}
|
||||||
|
|
||||||
|
if tools:
|
||||||
|
body["tools"] = _convert_tools(tools)
|
||||||
|
|
||||||
|
url = DEFAULT_CODEX_URL
|
||||||
|
|
||||||
|
try:
|
||||||
|
try:
|
||||||
|
content, tool_calls, finish_reason = await _request_codex(url, headers, body, verify=True)
|
||||||
|
except Exception as e:
|
||||||
|
if "CERTIFICATE_VERIFY_FAILED" not in str(e):
|
||||||
|
raise
|
||||||
|
logger.warning("SSL certificate verification failed for Codex API; retrying with verify=False")
|
||||||
|
content, tool_calls, finish_reason = await _request_codex(url, headers, body, verify=False)
|
||||||
|
return LLMResponse(
|
||||||
|
content=content,
|
||||||
|
tool_calls=tool_calls,
|
||||||
|
finish_reason=finish_reason,
|
||||||
|
)
|
||||||
|
except Exception as e:
|
||||||
|
return LLMResponse(
|
||||||
|
content=f"Error calling Codex: {str(e)}",
|
||||||
|
finish_reason="error",
|
||||||
|
)
|
||||||
|
|
||||||
|
def get_default_model(self) -> str:
|
||||||
|
return self.default_model
|
||||||
|
|
||||||
|
|
||||||
|
def _strip_model_prefix(model: str) -> str:
|
||||||
|
if model.startswith("openai-codex/"):
|
||||||
|
return model.split("/", 1)[1]
|
||||||
|
return model
|
||||||
|
|
||||||
|
|
||||||
|
def _build_headers(account_id: str, token: str) -> dict[str, str]:
|
||||||
|
return {
|
||||||
|
"Authorization": f"Bearer {token}",
|
||||||
|
"chatgpt-account-id": account_id,
|
||||||
|
"OpenAI-Beta": "responses=experimental",
|
||||||
|
"originator": DEFAULT_ORIGINATOR,
|
||||||
|
"User-Agent": "nanobot (python)",
|
||||||
|
"accept": "text/event-stream",
|
||||||
|
"content-type": "application/json",
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
async def _request_codex(
|
||||||
|
url: str,
|
||||||
|
headers: dict[str, str],
|
||||||
|
body: dict[str, Any],
|
||||||
|
verify: bool,
|
||||||
|
) -> tuple[str, list[ToolCallRequest], str]:
|
||||||
|
async with httpx.AsyncClient(timeout=60.0, verify=verify) as client:
|
||||||
|
async with client.stream("POST", url, headers=headers, json=body) as response:
|
||||||
|
if response.status_code != 200:
|
||||||
|
text = await response.aread()
|
||||||
|
raise RuntimeError(_friendly_error(response.status_code, text.decode("utf-8", "ignore")))
|
||||||
|
return await _consume_sse(response)
|
||||||
|
|
||||||
|
|
||||||
|
def _convert_tools(tools: list[dict[str, Any]]) -> list[dict[str, Any]]:
|
||||||
|
"""Convert OpenAI function-calling schema to Codex flat format."""
|
||||||
|
converted: list[dict[str, Any]] = []
|
||||||
|
for tool in tools:
|
||||||
|
fn = (tool.get("function") or {}) if tool.get("type") == "function" else tool
|
||||||
|
name = fn.get("name")
|
||||||
|
if not name:
|
||||||
|
continue
|
||||||
|
params = fn.get("parameters") or {}
|
||||||
|
converted.append({
|
||||||
|
"type": "function",
|
||||||
|
"name": name,
|
||||||
|
"description": fn.get("description") or "",
|
||||||
|
"parameters": params if isinstance(params, dict) else {},
|
||||||
|
})
|
||||||
|
return converted
|
||||||
|
|
||||||
|
|
||||||
|
def _convert_messages(messages: list[dict[str, Any]]) -> tuple[str, list[dict[str, Any]]]:
|
||||||
|
system_prompt = ""
|
||||||
|
input_items: list[dict[str, Any]] = []
|
||||||
|
|
||||||
|
for idx, msg in enumerate(messages):
|
||||||
|
role = msg.get("role")
|
||||||
|
content = msg.get("content")
|
||||||
|
|
||||||
|
if role == "system":
|
||||||
|
system_prompt = content if isinstance(content, str) else ""
|
||||||
|
continue
|
||||||
|
|
||||||
|
if role == "user":
|
||||||
|
input_items.append(_convert_user_message(content))
|
||||||
|
continue
|
||||||
|
|
||||||
|
if role == "assistant":
|
||||||
|
# Handle text first.
|
||||||
|
if isinstance(content, str) and content:
|
||||||
|
input_items.append(
|
||||||
|
{
|
||||||
|
"type": "message",
|
||||||
|
"role": "assistant",
|
||||||
|
"content": [{"type": "output_text", "text": content}],
|
||||||
|
"status": "completed",
|
||||||
|
"id": f"msg_{idx}",
|
||||||
|
}
|
||||||
|
)
|
||||||
|
# Then handle tool calls.
|
||||||
|
for tool_call in msg.get("tool_calls", []) or []:
|
||||||
|
fn = tool_call.get("function") or {}
|
||||||
|
call_id, item_id = _split_tool_call_id(tool_call.get("id"))
|
||||||
|
call_id = call_id or f"call_{idx}"
|
||||||
|
item_id = item_id or f"fc_{idx}"
|
||||||
|
input_items.append(
|
||||||
|
{
|
||||||
|
"type": "function_call",
|
||||||
|
"id": item_id,
|
||||||
|
"call_id": call_id,
|
||||||
|
"name": fn.get("name"),
|
||||||
|
"arguments": fn.get("arguments") or "{}",
|
||||||
|
}
|
||||||
|
)
|
||||||
|
continue
|
||||||
|
|
||||||
|
if role == "tool":
|
||||||
|
call_id, _ = _split_tool_call_id(msg.get("tool_call_id"))
|
||||||
|
output_text = content if isinstance(content, str) else json.dumps(content)
|
||||||
|
input_items.append(
|
||||||
|
{
|
||||||
|
"type": "function_call_output",
|
||||||
|
"call_id": call_id,
|
||||||
|
"output": output_text,
|
||||||
|
}
|
||||||
|
)
|
||||||
|
continue
|
||||||
|
|
||||||
|
return system_prompt, input_items
|
||||||
|
|
||||||
|
|
||||||
|
def _convert_user_message(content: Any) -> dict[str, Any]:
|
||||||
|
if isinstance(content, str):
|
||||||
|
return {"role": "user", "content": [{"type": "input_text", "text": content}]}
|
||||||
|
if isinstance(content, list):
|
||||||
|
converted: list[dict[str, Any]] = []
|
||||||
|
for item in content:
|
||||||
|
if not isinstance(item, dict):
|
||||||
|
continue
|
||||||
|
if item.get("type") == "text":
|
||||||
|
converted.append({"type": "input_text", "text": item.get("text", "")})
|
||||||
|
elif item.get("type") == "image_url":
|
||||||
|
url = (item.get("image_url") or {}).get("url")
|
||||||
|
if url:
|
||||||
|
converted.append({"type": "input_image", "image_url": url, "detail": "auto"})
|
||||||
|
if converted:
|
||||||
|
return {"role": "user", "content": converted}
|
||||||
|
return {"role": "user", "content": [{"type": "input_text", "text": ""}]}
|
||||||
|
|
||||||
|
|
||||||
|
def _split_tool_call_id(tool_call_id: Any) -> tuple[str, str | None]:
|
||||||
|
if isinstance(tool_call_id, str) and tool_call_id:
|
||||||
|
if "|" in tool_call_id:
|
||||||
|
call_id, item_id = tool_call_id.split("|", 1)
|
||||||
|
return call_id, item_id or None
|
||||||
|
return tool_call_id, None
|
||||||
|
return "call_0", None
def _prompt_cache_key(messages: list[dict[str, Any]]) -> str:
    raw = json.dumps(messages, ensure_ascii=True, sort_keys=True)
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()


async def _iter_sse(response: httpx.Response) -> AsyncGenerator[dict[str, Any], None]:
    buffer: list[str] = []
    async for line in response.aiter_lines():
        if line == "":
            if buffer:
                data_lines = [l[5:].strip() for l in buffer if l.startswith("data:")]
                buffer = []
                if not data_lines:
                    continue
                data = "\n".join(data_lines).strip()
                if not data or data == "[DONE]":
                    continue
                try:
                    yield json.loads(data)
                except Exception:
                    continue
            continue
        buffer.append(line)
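`_iter_sse` frames Server-Sent Events as blank-line-delimited blocks of `data:` lines. A synchronous sketch of the same framing logic (the function name and sample stream are illustrative, not part of the module above):

```python
import json

def parse_sse(lines):
    # Same framing as _iter_sse above: accumulate until a blank line, then
    # join the "data:" payloads and decode JSON, skipping "[DONE]" sentinels.
    events, buffer = [], []
    for line in list(lines) + [""]:  # trailing "" flushes the final block
        if line == "":
            data_lines = [l[5:].strip() for l in buffer if l.startswith("data:")]
            buffer = []
            data = "\n".join(data_lines).strip()
            if data and data != "[DONE]":
                try:
                    events.append(json.loads(data))
                except ValueError:
                    pass  # ignore malformed payloads, as the async version does
        else:
            buffer.append(line)
    return events

stream = [
    'data: {"type": "response.output_text.delta", "delta": "Hi"}',
    "",
    "data: [DONE]",
    "",
]
assert parse_sse(stream) == [{"type": "response.output_text.delta", "delta": "Hi"}]
```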
async def _consume_sse(response: httpx.Response) -> tuple[str, list[ToolCallRequest], str]:
    content = ""
    tool_calls: list[ToolCallRequest] = []
    tool_call_buffers: dict[str, dict[str, Any]] = {}
    finish_reason = "stop"

    async for event in _iter_sse(response):
        event_type = event.get("type")
        if event_type == "response.output_item.added":
            item = event.get("item") or {}
            if item.get("type") == "function_call":
                call_id = item.get("call_id")
                if not call_id:
                    continue
                tool_call_buffers[call_id] = {
                    "id": item.get("id") or "fc_0",
                    "name": item.get("name"),
                    "arguments": item.get("arguments") or "",
                }
        elif event_type == "response.output_text.delta":
            content += event.get("delta") or ""
        elif event_type == "response.function_call_arguments.delta":
            call_id = event.get("call_id")
            if call_id and call_id in tool_call_buffers:
                tool_call_buffers[call_id]["arguments"] += event.get("delta") or ""
        elif event_type == "response.function_call_arguments.done":
            call_id = event.get("call_id")
            if call_id and call_id in tool_call_buffers:
                tool_call_buffers[call_id]["arguments"] = event.get("arguments") or ""
        elif event_type == "response.output_item.done":
            item = event.get("item") or {}
            if item.get("type") == "function_call":
                call_id = item.get("call_id")
                if not call_id:
                    continue
                buf = tool_call_buffers.get(call_id) or {}
                args_raw = buf.get("arguments") or item.get("arguments") or "{}"
                try:
                    args = json.loads(args_raw)
                except Exception:
                    args = {"raw": args_raw}
                tool_calls.append(
                    ToolCallRequest(
                        id=f"{call_id}|{buf.get('id') or item.get('id') or 'fc_0'}",
                        name=buf.get("name") or item.get("name"),
                        arguments=args,
                    )
                )
        elif event_type == "response.completed":
            status = (event.get("response") or {}).get("status")
            finish_reason = _map_finish_reason(status)
        elif event_type in {"error", "response.failed"}:
            raise RuntimeError("Codex response failed")

    return content, tool_calls, finish_reason


_FINISH_REASON_MAP = {"completed": "stop", "incomplete": "length", "failed": "error", "cancelled": "error"}


def _map_finish_reason(status: str | None) -> str:
    return _FINISH_REASON_MAP.get(status or "completed", "stop")


def _friendly_error(status_code: int, raw: str) -> str:
    if status_code == 429:
        return "ChatGPT usage quota exceeded or rate limit triggered. Please try again later."
    return f"HTTP {status_code}: {raw}"
@@ -51,6 +51,12 @@ class ProviderSpec:
     # per-model param overrides, e.g. (("kimi-k2.5", {"temperature": 1.0}),)
     model_overrides: tuple[tuple[str, dict[str, Any]], ...] = ()
 
+    # OAuth-based providers (e.g., OpenAI Codex) don't use API keys
+    is_oauth: bool = False  # if True, uses OAuth flow instead of API key
+
+    # Direct providers bypass LiteLLM entirely (e.g., CustomProvider)
+    is_direct: bool = False
+
     @property
     def label(self) -> str:
         return self.display_name or self.name.title()
@@ -62,18 +68,14 @@ class ProviderSpec:
 
 PROVIDERS: tuple[ProviderSpec, ...] = (
 
-    # === Custom (user-provided OpenAI-compatible endpoint) =================
-    # No auto-detection — only activates when user explicitly configures "custom".
+    # === Custom (direct OpenAI-compatible endpoint, bypasses LiteLLM) ======
     ProviderSpec(
         name="custom",
         keywords=(),
-        env_key="OPENAI_API_KEY",
+        env_key="",
         display_name="Custom",
-        litellm_prefix="openai",
-        skip_prefixes=("openai/",),
-        is_gateway=True,
-        strip_model_prefix=True,
+        litellm_prefix="",
+        is_direct=True,
     ),
 
     # === Gateways (detected by api_key / api_base, not model name) =========
@@ -117,6 +119,24 @@ PROVIDERS: tuple[ProviderSpec, ...] = (
         model_overrides=(),
     ),
 
+    # SiliconFlow (硅基流动): OpenAI-compatible gateway, model names keep org prefix
+    ProviderSpec(
+        name="siliconflow",
+        keywords=("siliconflow",),
+        env_key="OPENAI_API_KEY",
+        display_name="SiliconFlow",
+        litellm_prefix="openai",
+        skip_prefixes=(),
+        env_extras=(),
+        is_gateway=True,
+        is_local=False,
+        detect_by_key_prefix="",
+        detect_by_base_keyword="siliconflow",
+        default_api_base="https://api.siliconflow.cn/v1",
+        strip_model_prefix=False,
+        model_overrides=(),
+    ),
+
     # === Standard providers (matched by model-name keywords) ===============
 
     # Anthropic: LiteLLM recognizes "claude-*" natively, no prefix needed.
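The `detect_by_base_keyword` field suggests gateways are detected by a substring match on the configured `api_base`. The actual matcher is not part of this diff; a hypothetical sketch under that assumption:

```python
from dataclasses import dataclass

# Hypothetical: the real matcher is not shown in this diff; the field names
# suggest detection keyed on the configured api_base URL.
@dataclass(frozen=True)
class Spec:
    name: str
    detect_by_base_keyword: str = ""

def detect_gateway(api_base, specs):
    for spec in specs:
        if spec.detect_by_base_keyword and spec.detect_by_base_keyword in (api_base or ""):
            return spec.name
    return None

specs = [Spec("siliconflow", "siliconflow"), Spec("openrouter", "openrouter")]
assert detect_gateway("https://api.siliconflow.cn/v1", specs) == "siliconflow"
assert detect_gateway("https://api.example.com/v1", specs) is None
```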
@@ -155,6 +175,44 @@ PROVIDERS: tuple[ProviderSpec, ...] = (
         model_overrides=(),
     ),
 
+    # OpenAI Codex: uses OAuth, not API key.
+    ProviderSpec(
+        name="openai_codex",
+        keywords=("openai-codex", "codex"),
+        env_key="",  # OAuth-based, no API key
+        display_name="OpenAI Codex",
+        litellm_prefix="",  # Not routed through LiteLLM
+        skip_prefixes=(),
+        env_extras=(),
+        is_gateway=False,
+        is_local=False,
+        detect_by_key_prefix="",
+        detect_by_base_keyword="codex",
+        default_api_base="https://chatgpt.com/backend-api",
+        strip_model_prefix=False,
+        model_overrides=(),
+        is_oauth=True,  # OAuth-based authentication
+    ),
+
+    # Github Copilot: uses OAuth, not API key.
+    ProviderSpec(
+        name="github_copilot",
+        keywords=("github_copilot", "copilot"),
+        env_key="",  # OAuth-based, no API key
+        display_name="Github Copilot",
+        litellm_prefix="github_copilot",  # github_copilot/model → github_copilot/model
+        skip_prefixes=("github_copilot/",),
+        env_extras=(),
+        is_gateway=False,
+        is_local=False,
+        detect_by_key_prefix="",
+        detect_by_base_keyword="",
+        default_api_base="",
+        strip_model_prefix=False,
+        model_overrides=(),
+        is_oauth=True,  # OAuth-based authentication
+    ),
+
     # DeepSeek: needs "deepseek/" prefix for LiteLLM routing.
     ProviderSpec(
         name="deepseek",
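The `keywords` tuples suggest standard providers are matched by substrings of the requested model name. The real matching code is not shown in this diff; a hypothetical sketch under that assumption:

```python
# Hypothetical: field names in the diff suggest providers are matched by
# model-name substrings; the actual matcher is not part of this diff.
SPECS = [
    ("openai_codex", ("openai-codex", "codex")),
    ("github_copilot", ("github_copilot", "copilot")),
    ("deepseek", ("deepseek",)),
]

def match_provider(model):
    for name, keywords in SPECS:
        if any(k in model.lower() for k in keywords):
            return name
    return None

assert match_provider("codex-mini") == "openai_codex"
assert match_provider("github_copilot/gpt-4o") == "github_copilot"
assert match_provider("deepseek-chat") == "deepseek"
```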
@@ -21,4 +21,5 @@ The skill format and metadata structure follow OpenClaw's conventions to maintai
 | `weather` | Get weather info using wttr.in and Open-Meteo |
 | `summarize` | Summarize URLs, files, and YouTube videos |
 | `tmux` | Remote-control tmux sessions |
+| `clawhub` | Search and install skills from ClawHub registry |
 | `skill-creator` | Create new skills |
nanobot/skills/clawhub/SKILL.md (new file)
@@ -0,0 +1,53 @@
---
name: clawhub
description: Search and install agent skills from ClawHub, the public skill registry.
homepage: https://clawhub.ai
metadata: {"nanobot":{"emoji":"🦞"}}
---

# ClawHub

Public skill registry for AI agents. Search by natural language (vector search).

## When to use

Use this skill when the user asks any of:
- "find a skill for …"
- "search for skills"
- "install a skill"
- "what skills are available?"
- "update my skills"

## Search

```bash
npx --yes clawhub@latest search "web scraping" --limit 5
```

## Install

```bash
npx --yes clawhub@latest install <slug> --workdir ~/.nanobot/workspace
```

Replace `<slug>` with the skill name from search results. This places the skill into `~/.nanobot/workspace/skills/`, where nanobot loads workspace skills from. Always include `--workdir`.

## Update

```bash
npx --yes clawhub@latest update --all --workdir ~/.nanobot/workspace
```

## List installed

```bash
npx --yes clawhub@latest list --workdir ~/.nanobot/workspace
```

## Notes

- Requires Node.js (`npx` comes with it).
- No API key needed for search and install.
- Login (`npx --yes clawhub@latest login`) is only required for publishing.
- `--workdir ~/.nanobot/workspace` is critical — without it, skills install to the current directory instead of the nanobot workspace.
- After install, remind the user to start a new session to load the skill.
@@ -30,6 +30,11 @@ One-time scheduled task (compute ISO datetime from current time):
 cron(action="add", message="Remind me about the meeting", at="<ISO datetime>")
 ```
 
+Timezone-aware cron:
+```
+cron(action="add", message="Morning standup", cron_expr="0 9 * * 1-5", tz="America/Vancouver")
+```
+
 List/remove:
 ```
 cron(action="list")
@@ -44,4 +49,9 @@ cron(action="remove", job_id="abc123")
 | every hour | every_seconds: 3600 |
 | every day at 8am | cron_expr: "0 8 * * *" |
 | weekdays at 5pm | cron_expr: "0 17 * * 1-5" |
+| 9am Vancouver time daily | cron_expr: "0 9 * * *", tz: "America/Vancouver" |
 | at a specific time | at: ISO datetime string (compute from current time) |
+
+## Timezone
+
+Use `tz` with `cron_expr` to schedule in a specific IANA timezone. Without `tz`, the server's local timezone is used.
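The effect of `tz` can be illustrated with the stdlib `zoneinfo`; this is an illustration only, not nanobot's scheduler code. A 9 AM Vancouver schedule fires at a different UTC hour than a naive server-local one:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# 9:00 wall-clock time in Vancouver, expressed in UTC. Illustrative only;
# the actual cron scheduler implementation is not shown in this diff.
vancouver_9am = datetime(2026, 2, 16, 9, 0, tzinfo=ZoneInfo("America/Vancouver"))
as_utc = vancouver_9am.astimezone(timezone.utc)
assert as_utc.hour == 17  # PST is UTC-8 in February, so 9am local is 17:00 UTC
```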
@@ -23,7 +23,8 @@ dependencies = [
     "pydantic-settings>=2.0.0",
     "websockets>=12.0",
     "websocket-client>=1.6.0",
-    "httpx[socks]>=0.25.0",
+    "httpx>=0.25.0",
+    "oauth-cli-kit>=0.1.1",
     "loguru>=0.7.0",
     "readability-lxml>=0.8.0",
     "rich>=13.0.0",
@@ -35,9 +36,12 @@ dependencies = [
     "python-socketio>=5.11.0",
     "msgpack>=1.0.8",
     "slack-sdk>=3.26.0",
+    "slackify-markdown>=0.2.0",
     "qq-botpy>=1.0.0",
     "python-socks[asyncio]>=2.4.0",
     "prompt-toolkit>=3.0.0",
+    "mcp>=1.0.0",
+    "json-repair>=0.30.0",
 ]
 
 [project.optional-dependencies]