refactor(api): tighten fixed-session API contract

Require a single user message, reject mismatched models, document the OpenAI-compatible API, and exclude api/ from core agent line counts so the interface matches nanobot's minimal fixed-session runtime.
Xubin Ren 2026-03-30 14:43:22 +00:00
parent 55501057ac
commit d9a5080d66
2 changed files with 79 additions and 3 deletions

View File

@@ -115,6 +115,7 @@
- [Configuration](#-configuration)
- [Multiple Instances](#-multiple-instances)
- [CLI Reference](#-cli-reference)
- [OpenAI-Compatible API](#-openai-compatible-api)
- [Docker](#-docker)
- [Linux Service](#-linux-service)
- [Project Structure](#-project-structure)
@@ -1541,6 +1542,7 @@ nanobot gateway --config ~/.nanobot-telegram/config.json --workspace /tmp/nanobo
| `nanobot agent` | Interactive chat mode |
| `nanobot agent --no-markdown` | Show plain-text replies |
| `nanobot agent --logs` | Show runtime logs during chat |
| `nanobot serve` | Start the OpenAI-compatible API |
| `nanobot gateway` | Start the gateway |
| `nanobot status` | Show status |
| `nanobot provider login openai-codex` | OAuth login for providers |
@@ -1569,6 +1571,80 @@ The agent can also manage this file itself — ask it to "add a periodic task" a
</details>
## 🔌 OpenAI-Compatible API
nanobot can expose a minimal OpenAI-compatible endpoint for local integrations:
```bash
pip install "nanobot-ai[api]"
nanobot serve
```
By default, the API binds to `127.0.0.1:8900`.
### Behavior
- Fixed session: all requests share the same nanobot session (`api:default`)
- Single-message input: each request must contain exactly one `user` message
- Fixed model: omit `model`, or pass the same model shown by `/v1/models`
- No streaming: `stream=true` is not supported
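The single-message and no-streaming rules can be checked client-side before a request leaves your code; a minimal sketch (this validator is illustrative, not part of nanobot):

```python
def validate_payload(payload: dict) -> dict:
    """Check a chat-completions payload against the fixed-session rules:
    exactly one message, role "user", and no streaming."""
    messages = payload.get("messages", [])
    if len(messages) != 1:
        raise ValueError(f"expected exactly 1 message, got {len(messages)}")
    if messages[0].get("role") != "user":
        raise ValueError(f"expected role 'user', got {messages[0].get('role')!r}")
    if payload.get("stream"):
        raise ValueError("stream=true is not supported")
    return payload

validate_payload({"messages": [{"role": "user", "content": "hi"}]})
```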
### Endpoints
- `GET /health`
- `GET /v1/models`
- `POST /v1/chat/completions`
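`GET /v1/models` is how a client discovers the fixed model id to pass (or omit) in chat requests. A sketch of extracting it, assuming the standard OpenAI models-list response shape (the sample body below is illustrative, not captured from a live server):

```python
import json

# Assumed response shape following the OpenAI models-list convention.
sample = '{"object": "list", "data": [{"id": "MiniMax-M2.7", "object": "model"}]}'

def fixed_model_id(body: str) -> str:
    """Return the single model id exposed by the endpoint."""
    return json.loads(body)["data"][0]["id"]

print(fixed_model_id(sample))  # the id accepted by /v1/chat/completions
```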
### curl
```bash
curl http://127.0.0.1:8900/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "hi"}
    ]
  }'
```
### Python (`requests`)
```python
import requests

resp = requests.post(
    "http://127.0.0.1:8900/v1/chat/completions",
    json={
        "messages": [
            {"role": "user", "content": "hi"}
        ]
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```
### Python (`openai`)
```python
from openai import OpenAI

client = OpenAI(
    base_url="http://127.0.0.1:8900/v1",
    api_key="dummy",
)
resp = client.chat.completions.create(
    model="MiniMax-M2.7",
    messages=[{"role": "user", "content": "hi"}],
)
print(resp.choices[0].message.content)
```
## 🐳 Docker
> [!TIP]

View File

@@ -1,5 +1,5 @@
#!/bin/bash
# Count core agent lines (excluding channels/, cli/, providers/ adapters)
# Count core agent lines (excluding channels/, cli/, api/, providers/ adapters)
cd "$(dirname "$0")" || exit 1
echo "nanobot core agent line count"
@@ -15,7 +15,7 @@ root=$(cat nanobot/__init__.py nanobot/__main__.py | wc -l)
printf " %-16s %5s lines\n" "(root)" "$root"
echo ""
total=$(find nanobot -name "*.py" ! -path "*/channels/*" ! -path "*/cli/*" ! -path "*/command/*" ! -path "*/providers/*" ! -path "*/skills/*" | xargs cat | wc -l)
total=$(find nanobot -name "*.py" ! -path "*/channels/*" ! -path "*/cli/*" ! -path "*/api/*" ! -path "*/command/*" ! -path "*/providers/*" ! -path "*/skills/*" | xargs cat | wc -l)
echo " Core total: $total lines"
echo ""
echo " (excludes: channels/, cli/, command/, providers/, skills/)"
echo " (excludes: channels/, cli/, api/, command/, providers/, skills/)"
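The script's `find ... ! -path` exclusion filters can be mirrored in Python for a quick cross-check of the count; a sketch (the helper name and demo directory layout are ours, not part of nanobot):

```python
import os
import tempfile

# Mirrors the script's excluded package directories.
EXCLUDED = {"channels", "cli", "api", "command", "providers", "skills"}

def core_line_count(root: str) -> int:
    """Count lines across *.py files, skipping the excluded directories."""
    total = 0
    for dirpath, dirnames, filenames in os.walk(root):
        # Prune excluded dirs in place so os.walk never descends into them.
        dirnames[:] = [d for d in dirnames if d not in EXCLUDED]
        for name in filenames:
            if name.endswith(".py"):
                with open(os.path.join(dirpath, name)) as f:
                    total += sum(1 for _ in f)
    return total

# Tiny demo tree: one core file, one excluded file.
with tempfile.TemporaryDirectory() as tmp:
    os.makedirs(os.path.join(tmp, "api"))
    open(os.path.join(tmp, "agent.py"), "w").write("a = 1\nb = 2\n")
    open(os.path.join(tmp, "api", "server.py"), "w").write("x = 1\n")
    print(core_line_count(tmp))  # prints 2 (api/server.py is excluded)
```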