Merge remote-tracking branch 'origin/main' into pr-2717-review

This commit is contained in:
Xubin Ren 2026-04-04 04:42:52 +00:00
commit 30ea048f19
34 changed files with 1109 additions and 108 deletions

.gitignore vendored

@ -2,6 +2,7 @@
.assets
.docs
.env
.web
*.pyc
dist/
build/


@ -20,13 +20,20 @@
## 📢 News
> [!IMPORTANT]
> **Security note:** Due to a `litellm` supply-chain poisoning incident, **please check your Python environment ASAP** and refer to this [advisory](https://github.com/HKUDS/nanobot/discussions/2445) for details. `litellm` has been fully removed since **v0.1.4.post6**.
- **2026-04-02** 🧱 **Long-running tasks** run more reliably — core runtime hardening.
- **2026-04-01** 🔑 GitHub Copilot auth restored; stricter workspace paths; OpenRouter Claude caching fix.
- **2026-03-31** 🛰️ WeChat multimodal alignment, Discord/Matrix polish, Python SDK facade, MCP and tool fixes.
- **2026-03-30** 🧩 OpenAI-compatible API tightened; composable agent lifecycle hooks.
- **2026-03-29** 💬 WeChat voice, typing, QR/media resilience; fixed-session OpenAI-compatible API.
- **2026-03-28** 📚 Provider docs refresh; skill template wording fix.
- **2026-03-27** 🚀 Released **v0.1.4.post6** — architecture decoupling, litellm removal, end-to-end streaming, WeChat channel, and a security fix. Please see [release notes](https://github.com/HKUDS/nanobot/releases/tag/v0.1.4.post6) for details.
- **2026-03-26** 🏗️ Agent runner extracted and lifecycle hooks unified; stream delta coalescing at boundaries.
- **2026-03-25** 🌏 StepFun provider, configurable timezone, Gemini thought signatures.
- **2026-03-24** 🔧 WeChat compatibility, Feishu CardKit streaming, test suite restructured.
<details>
<summary>Earlier news</summary>
- **2026-03-23** 🔧 Command routing refactored for plugins, WhatsApp/WeChat media, unified channel login CLI.
- **2026-03-22** ⚡ End-to-end streaming, WeChat channel, Anthropic cache optimization, `/status` command.
- **2026-03-21** 🔒 Replace `litellm` with native `openai` + `anthropic` SDKs. Please see [commit](https://github.com/HKUDS/nanobot/commit/3dfdab7).
@ -34,10 +41,6 @@
- **2026-03-19** 💬 Telegram gets more resilient under load; Feishu now renders code blocks properly.
- **2026-03-18** 📷 Telegram can now send media via URL. Cron schedules show human-readable details.
- **2026-03-17** ✨ Feishu formatting glow-up, Slack reacts when done, custom endpoints support extra headers, and image handling is more reliable.
<details>
<summary>Earlier news</summary>
- **2026-03-16** 🚀 Released **v0.1.4.post5** — a refinement-focused release with stronger reliability and channel support, and a more dependable day-to-day experience. Please see [release notes](https://github.com/HKUDS/nanobot/releases/tag/v0.1.4.post5) for details.
- **2026-03-15** 🧩 DingTalk rich media, smarter built-in skills, and cleaner model compatibility.
- **2026-03-14** 💬 Channel plugins, Feishu replies, and steadier MCP, QQ, and media handling.
@ -875,6 +878,7 @@ Config file: `~/.nanobot/config.json`
| `dashscope` | LLM (Qwen) | [dashscope.console.aliyun.com](https://dashscope.console.aliyun.com) |
| `moonshot` | LLM (Moonshot/Kimi) | [platform.moonshot.cn](https://platform.moonshot.cn) |
| `zhipu` | LLM (Zhipu GLM) | [open.bigmodel.cn](https://open.bigmodel.cn) |
| `mimo` | LLM (MiMo) | [platform.xiaomimimo.com](https://platform.xiaomimimo.com) |
| `ollama` | LLM (local, Ollama) | — |
| `mistral` | LLM | [docs.mistral.ai](https://docs.mistral.ai/) |
| `stepfun` | LLM (StepFun/阶跃星辰) | [platform.stepfun.com](https://platform.stepfun.com) |
@ -1192,16 +1196,23 @@ Global settings that apply to all channels. Configure under the `channels` secti
#### Retry Behavior
When a channel `send()` raises, nanobot retries at the channel-manager layer. By default, `channels.sendMaxRetries` is `3`, and that count includes the initial send.
- **Attempt 1**: Send immediately
- **Attempt 2**: Retry after `1s`
- **Attempt 3**: Retry after `2s`
- **Higher retry budgets**: Backoff continues as `1s`, `2s`, `4s`, then stays capped at `4s`
- **Transient failures**: Network hiccups and temporary API limits often recover on the next attempt
- **Permanent failures**: Invalid tokens, revoked access, or banned channels will exhaust the retry budget and fail cleanly
> [!NOTE]
> This design is deliberate: channel implementations should raise on delivery failure, and the channel manager owns the shared retry policy.
>
> Some channels may still apply small API-specific retries internally. For example, Telegram separately retries timeout and flood-control errors before surfacing a final failure to the manager.
>
> If a channel is completely unreachable, nanobot cannot notify the user through that same channel. Watch logs for `Failed to send to {channel} after N attempts` to spot persistent delivery failures.
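The backoff schedule above can be sketched as follows. This is a minimal illustration only, assuming the channel's `send` coroutine raises on failure; `send_with_retry` and its `delays` parameter are hypothetical names, not nanobot's actual API:

```python
import asyncio

# Delays documented above: 1s, 2s, then capped at 4s.
_SEND_RETRY_DELAYS = (1, 2, 4)

async def send_with_retry(send, max_retries: int = 3, delays=_SEND_RETRY_DELAYS):
    """Retry `send()` with capped exponential backoff.

    `max_retries` counts the initial attempt, mirroring `channels.sendMaxRetries`.
    """
    for attempt in range(1, max_retries + 1):
        try:
            return await send()
        except Exception:
            if attempt == max_retries:
                raise  # budget exhausted: surface the failure to the caller
            # Cap the delay at the last entry (4s by default).
            await asyncio.sleep(delays[min(attempt - 1, len(delays) - 1)])
```

A transient failure that clears up within the budget succeeds silently; a permanent failure re-raises after the final attempt, which is what produces the "Failed to send" log line mentioned above.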
### Web Search
@ -1213,17 +1224,30 @@ When a channel send operation raises an error, nanobot retries with exponential
nanobot supports multiple web search providers. Configure in `~/.nanobot/config.json` under `tools.web.search`.
By default, web tools are enabled and web search uses `duckduckgo`, so search works out of the box without an API key.
If you want to disable all built-in web tools entirely, set `tools.web.enable` to `false`. This removes both `web_search` and `web_fetch` from the tool list sent to the LLM.
| Provider | Config fields | Env var fallback | Free |
|----------|--------------|------------------|------|
| `brave` | `apiKey` | `BRAVE_API_KEY` | No |
| `tavily` | `apiKey` | `TAVILY_API_KEY` | No |
| `jina` | `apiKey` | `JINA_API_KEY` | Free tier (10M tokens) |
| `searxng` | `baseUrl` | `SEARXNG_BASE_URL` | Yes (self-hosted) |
| `duckduckgo` (default) | — | — | Yes |
When credentials are missing, nanobot automatically falls back to DuckDuckGo.
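That fallback behaves roughly like this sketch. Provider and env-var names come from the table above, but `resolve_provider` is a hypothetical helper for illustration, not nanobot's code:

```python
import os

# Env-var fallbacks from the provider table above (hypothetical helper, not nanobot's code).
ENV_FALLBACKS = {
    "brave": "BRAVE_API_KEY",
    "tavily": "TAVILY_API_KEY",
    "jina": "JINA_API_KEY",
    "searxng": "SEARXNG_BASE_URL",
}

def resolve_provider(provider: str, api_key: str = "", base_url: str = "") -> str:
    """Return the provider to use, falling back to DuckDuckGo when credentials are missing."""
    if provider == "duckduckgo":
        return "duckduckgo"  # keyless, always available
    credential = api_key or base_url or os.environ.get(ENV_FALLBACKS.get(provider, ""), "")
    return provider if credential else "duckduckgo"
```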
**Disable all built-in web tools:**
```json
{
"tools": {
"web": {
"enable": false
}
}
}
```
**Brave:**
```json
{
"tools": {
@ -1294,7 +1318,14 @@ When credentials are missing, nanobot automatically falls back to DuckDuckGo.
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `enable` | boolean | `true` | Enable or disable all built-in web tools (`web_search` + `web_fetch`) |
| `proxy` | string or null | `null` | Proxy for all web requests, for example `http://127.0.0.1:7890` |
#### `tools.web.search`
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `provider` | string | `"duckduckgo"` | Search backend: `brave`, `tavily`, `jina`, `searxng`, `duckduckgo` |
| `apiKey` | string | `""` | API key for Brave or Tavily |
| `baseUrl` | string | `""` | Base URL for SearXNG |
| `maxResults` | integer | `5` | Results per search (1–10) |
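For reference, the defaults above collapse into the following plain-dict sketch. Field names follow the table; nanobot's real schema is a pydantic model, and the 1–10 bound on `maxResults` is assumed from the table, so treat this only as an illustration:

```python
# Defaults from the tools.web.search table above (sketch, not nanobot's schema).
DEFAULT_WEB_SEARCH = {
    "provider": "duckduckgo",   # brave, tavily, jina, searxng, duckduckgo
    "apiKey": "",               # used by Brave / Tavily / Jina
    "baseUrl": "",              # used by SearXNG
    "maxResults": 5,            # assumed range: 1-10
}

def merged_search_config(overrides: dict) -> dict:
    """Overlay user overrides on the defaults and range-check maxResults."""
    cfg = {**DEFAULT_WEB_SEARCH, **overrides}
    if not 1 <= cfg["maxResults"] <= 10:
        raise ValueError("maxResults must be between 1 and 10")
    return cfg
```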


@ -36,7 +36,7 @@ from nanobot.utils.helpers import image_placeholder_text, truncate_text
from nanobot.utils.runtime import EMPTY_FINAL_RESPONSE_MESSAGE
if TYPE_CHECKING:
from nanobot.config.schema import ChannelsConfig, ExecToolConfig, WebSearchConfig
from nanobot.config.schema import ChannelsConfig, ExecToolConfig, WebToolsConfig
from nanobot.cron.service import CronService
@ -171,8 +171,7 @@ class AgentLoop:
context_block_limit: int | None = None,
max_tool_result_chars: int | None = None,
provider_retry_mode: str = "standard",
web_search_config: WebSearchConfig | None = None,
web_proxy: str | None = None,
web_config: WebToolsConfig | None = None,
exec_config: ExecToolConfig | None = None,
cron_service: CronService | None = None,
restrict_to_workspace: bool = False,
@ -182,7 +181,7 @@ class AgentLoop:
timezone: str | None = None,
hooks: list[AgentHook] | None = None,
):
from nanobot.config.schema import ExecToolConfig, WebSearchConfig
from nanobot.config.schema import ExecToolConfig, WebToolsConfig
defaults = AgentDefaults()
self.bus = bus
@ -205,8 +204,7 @@ class AgentLoop:
else defaults.max_tool_result_chars
)
self.provider_retry_mode = provider_retry_mode
self.web_search_config = web_search_config or WebSearchConfig()
self.web_proxy = web_proxy
self.web_config = web_config or WebToolsConfig()
self.exec_config = exec_config or ExecToolConfig()
self.cron_service = cron_service
self.restrict_to_workspace = restrict_to_workspace
@ -223,9 +221,8 @@ class AgentLoop:
workspace=workspace,
bus=bus,
model=self.model,
web_config=self.web_config,
max_tool_result_chars=self.max_tool_result_chars,
web_search_config=self.web_search_config,
web_proxy=web_proxy,
exec_config=self.exec_config,
restrict_to_workspace=restrict_to_workspace,
)
@ -276,8 +273,9 @@ class AgentLoop:
restrict_to_workspace=self.restrict_to_workspace,
path_append=self.exec_config.path_append,
))
self.tools.register(WebSearchTool(config=self.web_search_config, proxy=self.web_proxy))
self.tools.register(WebFetchTool(proxy=self.web_proxy))
if self.web_config.enable:
self.tools.register(WebSearchTool(config=self.web_config.search, proxy=self.web_config.proxy))
self.tools.register(WebFetchTool(proxy=self.web_config.proxy))
self.tools.register(MessageTool(send_callback=self.bus.publish_outbound))
self.tools.register(SpawnTool(manager=self.subagents))
if self.cron_service:


@ -17,7 +17,7 @@ from nanobot.agent.tools.shell import ExecTool
from nanobot.agent.tools.web import WebFetchTool, WebSearchTool
from nanobot.bus.events import InboundMessage
from nanobot.bus.queue import MessageBus
from nanobot.config.schema import ExecToolConfig
from nanobot.config.schema import ExecToolConfig, WebToolsConfig
from nanobot.providers.base import LLMProvider
@ -46,20 +46,18 @@ class SubagentManager:
bus: MessageBus,
max_tool_result_chars: int,
model: str | None = None,
web_search_config: "WebSearchConfig | None" = None,
web_proxy: str | None = None,
web_config: "WebToolsConfig | None" = None,
exec_config: "ExecToolConfig | None" = None,
restrict_to_workspace: bool = False,
):
from nanobot.config.schema import ExecToolConfig, WebSearchConfig
from nanobot.config.schema import ExecToolConfig
self.provider = provider
self.workspace = workspace
self.bus = bus
self.model = model or provider.get_default_model()
self.web_config = web_config or WebToolsConfig()
self.max_tool_result_chars = max_tool_result_chars
self.web_search_config = web_search_config or WebSearchConfig()
self.web_proxy = web_proxy
self.exec_config = exec_config or ExecToolConfig()
self.restrict_to_workspace = restrict_to_workspace
self.runner = AgentRunner(provider)
@ -124,9 +122,9 @@ class SubagentManager:
restrict_to_workspace=self.restrict_to_workspace,
path_append=self.exec_config.path_append,
))
tools.register(WebSearchTool(config=self.web_search_config, proxy=self.web_proxy))
tools.register(WebFetchTool(proxy=self.web_proxy))
if self.web_config.enable:
tools.register(WebSearchTool(config=self.web_config.search, proxy=self.web_config.proxy))
tools.register(WebFetchTool(proxy=self.web_config.proxy))
system_prompt = self._build_subagent_prompt()
messages: list[dict[str, Any]] = [
{"role": "system", "content": system_prompt},


@ -7,6 +7,7 @@ from typing import Any
from nanobot.agent.tools.base import Tool
from nanobot.utils.helpers import build_image_content_blocks, detect_image_mime
from nanobot.config.paths import get_media_dir
def _resolve_path(
@ -21,7 +22,8 @@ def _resolve_path(
p = workspace / p
resolved = p.resolve()
if allowed_dir:
all_dirs = [allowed_dir] + (extra_allowed_dirs or [])
media_path = get_media_dir().resolve()
all_dirs = [allowed_dir] + [media_path] + (extra_allowed_dirs or [])
if not any(_is_under(resolved, d) for d in all_dirs):
raise PermissionError(f"Path {path} is outside allowed directory {allowed_dir}")
return resolved


@ -84,6 +84,9 @@ class MessageTool(Tool):
media: list[str] | None = None,
**kwargs: Any
) -> str:
from nanobot.utils.helpers import strip_think
content = strip_think(content)
channel = channel or self._default_channel
chat_id = chat_id or self._default_chat_id
# Only inherit default message_id when targeting the same channel+chat.


@ -10,6 +10,7 @@ from typing import Any
from loguru import logger
from nanobot.agent.tools.base import Tool
from nanobot.config.paths import get_media_dir
class ExecTool(Tool):
@ -183,7 +184,14 @@ class ExecTool(Tool):
p = Path(expanded).expanduser().resolve()
except Exception:
continue
if p.is_absolute() and cwd_path not in p.parents and p != cwd_path:
media_path = get_media_dir().resolve()
if (p.is_absolute()
and cwd_path not in p.parents
and p != cwd_path
and media_path not in p.parents
and p != media_path
):
return "Error: Command blocked by safety guard (path outside working dir)"
return None


@ -11,6 +11,7 @@ from nanobot.bus.events import OutboundMessage
from nanobot.bus.queue import MessageBus
from nanobot.channels.base import BaseChannel
from nanobot.config.schema import Config
from nanobot.utils.restart import consume_restart_notice_from_env, format_restart_completed_message
# Retry delays for message sending (exponential backoff: 1s, 2s, 4s)
_SEND_RETRY_DELAYS = (1, 2, 4)
@ -91,9 +92,28 @@ class ChannelManager:
logger.info("Starting {} channel...", name)
tasks.append(asyncio.create_task(self._start_channel(name, channel)))
self._notify_restart_done_if_needed()
# Wait for all to complete (they should run forever)
await asyncio.gather(*tasks, return_exceptions=True)
def _notify_restart_done_if_needed(self) -> None:
"""Send restart completion message when runtime env markers are present."""
notice = consume_restart_notice_from_env()
if not notice:
return
target = self.channels.get(notice.channel)
if not target:
return
asyncio.create_task(self._send_with_retry(
target,
OutboundMessage(
channel=notice.channel,
chat_id=notice.chat_id,
content=format_restart_completed_message(notice.started_at_raw),
),
))
async def stop_all(self) -> None:
"""Stop all channels and the dispatcher."""
logger.info("Stopping all channels...")


@ -134,6 +134,7 @@ class QQConfig(Base):
secret: str = ""
allow_from: list[str] = Field(default_factory=list)
msg_format: Literal["plain", "markdown"] = "plain"
ack_message: str = "⏳ Processing..."
# Optional: directory to save inbound attachments. If empty, use nanobot get_media_dir("qq").
media_dir: str = ""
@ -484,6 +485,17 @@ class QQChannel(BaseChannel):
if not content and not media_paths:
return
if self.config.ack_message:
try:
await self._send_text_only(
chat_id=chat_id,
is_group=is_group,
msg_id=data.id,
content=self.config.ack_message,
)
except Exception:
logger.debug("QQ ack message failed for chat_id={}", chat_id)
await self._handle_message(
sender_id=user_id,
chat_id=chat_id,


@ -275,13 +275,10 @@ class TelegramChannel(BaseChannel):
self._app = builder.build()
self._app.add_error_handler(self._on_error)
# Add command handlers
self._app.add_handler(CommandHandler("start", self._on_start))
self._app.add_handler(CommandHandler("new", self._forward_command))
self._app.add_handler(CommandHandler("stop", self._forward_command))
self._app.add_handler(CommandHandler("restart", self._forward_command))
self._app.add_handler(CommandHandler("status", self._forward_command))
self._app.add_handler(CommandHandler("help", self._on_help))
# Add command handlers (using Regex to support @username suffixes before bot initialization)
self._app.add_handler(MessageHandler(filters.Regex(r"^/start(?:@\w+)?$"), self._on_start))
self._app.add_handler(MessageHandler(filters.Regex(r"^/(new|stop|restart|status)(?:@\w+)?$"), self._forward_command))
self._app.add_handler(MessageHandler(filters.Regex(r"^/help(?:@\w+)?$"), self._on_help))
# Add message handler for text, photos, voice, documents
self._app.add_handler(
@ -313,7 +310,7 @@ class TelegramChannel(BaseChannel):
# Start polling (this runs until stopped)
await self._app.updater.start_polling(
allowed_updates=["message"],
drop_pending_updates=True # Ignore old messages on startup
drop_pending_updates=False # Process pending messages on startup
)
# Keep running until stopped
@ -362,9 +359,14 @@ class TelegramChannel(BaseChannel):
logger.warning("Telegram bot not running")
return
# Only stop typing indicator for final responses
# Only stop typing indicator and remove reaction for final responses
if not msg.metadata.get("_progress", False):
self._stop_typing(msg.chat_id)
if reply_to_message_id := msg.metadata.get("message_id"):
try:
await self._remove_reaction(msg.chat_id, int(reply_to_message_id))
except ValueError:
pass
try:
chat_id = int(msg.chat_id)
@ -435,7 +437,9 @@ class TelegramChannel(BaseChannel):
await self._send_text(chat_id, chunk, reply_params, thread_kwargs)
async def _call_with_retry(self, fn, *args, **kwargs):
"""Call an async Telegram API function with retry on pool/network timeout."""
"""Call an async Telegram API function with retry on pool/network timeout and RetryAfter."""
from telegram.error import RetryAfter
for attempt in range(1, _SEND_MAX_RETRIES + 1):
try:
return await fn(*args, **kwargs)
@ -448,6 +452,15 @@ class TelegramChannel(BaseChannel):
attempt, _SEND_MAX_RETRIES, delay,
)
await asyncio.sleep(delay)
except RetryAfter as e:
if attempt == _SEND_MAX_RETRIES:
raise
delay = float(e.retry_after)
logger.warning(
"Telegram Flood Control (attempt {}/{}), retrying in {:.1f}s",
attempt, _SEND_MAX_RETRIES, delay,
)
await asyncio.sleep(delay)
async def _send_text(
self,
@ -498,6 +511,11 @@ class TelegramChannel(BaseChannel):
if stream_id is not None and buf.stream_id is not None and buf.stream_id != stream_id:
return
self._stop_typing(chat_id)
if reply_to_message_id := meta.get("message_id"):
try:
await self._remove_reaction(chat_id, int(reply_to_message_id))
except ValueError:
pass
try:
html = _markdown_to_telegram_html(buf.text)
await self._call_with_retry(
@ -619,8 +637,7 @@ class TelegramChannel(BaseChannel):
"reply_to_message_id": getattr(reply_to, "message_id", None) if reply_to else None,
}
@staticmethod
def _extract_reply_context(message) -> str | None:
async def _extract_reply_context(self, message) -> str | None:
"""Extract text from the message being replied to, if any."""
reply = getattr(message, "reply_to_message", None)
if not reply:
@ -628,7 +645,21 @@ class TelegramChannel(BaseChannel):
text = getattr(reply, "text", None) or getattr(reply, "caption", None) or ""
if len(text) > TELEGRAM_REPLY_CONTEXT_MAX_LEN:
text = text[:TELEGRAM_REPLY_CONTEXT_MAX_LEN] + "..."
return f"[Reply to: {text}]" if text else None
if not text:
return None
bot_id, _ = await self._ensure_bot_identity()
reply_user = getattr(reply, "from_user", None)
if bot_id and reply_user and getattr(reply_user, "id", None) == bot_id:
return f"[Reply to bot: {text}]"
elif reply_user and getattr(reply_user, "username", None):
return f"[Reply to @{reply_user.username}: {text}]"
elif reply_user and getattr(reply_user, "first_name", None):
return f"[Reply to {reply_user.first_name}: {text}]"
else:
return f"[Reply to: {text}]"
async def _download_message_media(
self, msg, *, add_failure_content: bool = False
@ -765,10 +796,18 @@ class TelegramChannel(BaseChannel):
message = update.message
user = update.effective_user
self._remember_thread_context(message)
# Strip @bot_username suffix if present
content = message.text or ""
if content.startswith("/") and "@" in content:
cmd_part, *rest = content.split(" ", 1)
cmd_part = cmd_part.split("@")[0]
content = f"{cmd_part} {rest[0]}" if rest else cmd_part
await self._handle_message(
sender_id=self._sender_id(user),
chat_id=str(message.chat_id),
content=message.text or "",
content=content,
metadata=self._build_message_metadata(message, user),
session_key=self._derive_topic_session_key(message),
)
@ -812,7 +851,7 @@ class TelegramChannel(BaseChannel):
# Reply context: text and/or media from the replied-to message
reply = getattr(message, "reply_to_message", None)
if reply is not None:
reply_ctx = self._extract_reply_context(message)
reply_ctx = await self._extract_reply_context(message)
reply_media, reply_media_parts = await self._download_message_media(reply)
if reply_media:
media_paths = reply_media + media_paths
@ -903,6 +942,19 @@ class TelegramChannel(BaseChannel):
except Exception as e:
logger.debug("Telegram reaction failed: {}", e)
async def _remove_reaction(self, chat_id: str, message_id: int) -> None:
"""Remove emoji reaction from a message (best-effort, non-blocking)."""
if not self._app:
return
try:
await self._app.bot.set_message_reaction(
chat_id=int(chat_id),
message_id=message_id,
reaction=[],
)
except Exception as e:
logger.debug("Telegram reaction removal failed: {}", e)
async def _typing_loop(self, chat_id: str) -> None:
"""Repeatedly send 'typing' action until cancelled."""
try:


@ -13,7 +13,6 @@ import asyncio
import base64
import hashlib
import json
import mimetypes
import os
import random
import re
@ -158,6 +157,7 @@ class WeixinChannel(BaseChannel):
self._poll_task: asyncio.Task | None = None
self._next_poll_timeout_s: int = DEFAULT_LONG_POLL_TIMEOUT_S
self._session_pause_until: float = 0.0
self._typing_tasks: dict[str, asyncio.Task] = {}
self._typing_tickets: dict[str, dict[str, Any]] = {}
# ------------------------------------------------------------------
@ -193,6 +193,15 @@ class WeixinChannel(BaseChannel):
}
else:
self._context_tokens = {}
typing_tickets = data.get("typing_tickets", {})
if isinstance(typing_tickets, dict):
self._typing_tickets = {
str(user_id): ticket
for user_id, ticket in typing_tickets.items()
if str(user_id).strip() and isinstance(ticket, dict)
}
else:
self._typing_tickets = {}
base_url = data.get("base_url", "")
if base_url:
self.config.base_url = base_url
@ -207,6 +216,7 @@ class WeixinChannel(BaseChannel):
"token": self._token,
"get_updates_buf": self._get_updates_buf,
"context_tokens": self._context_tokens,
"typing_tickets": self._typing_tickets,
"base_url": self.config.base_url,
}
state_file.write_text(json.dumps(data, ensure_ascii=False))
@ -488,6 +498,8 @@ class WeixinChannel(BaseChannel):
self._running = False
if self._poll_task and not self._poll_task.done():
self._poll_task.cancel()
for chat_id in list(self._typing_tasks):
await self._stop_typing(chat_id, clear_remote=False)
if self._client:
await self._client.aclose()
self._client = None
@ -746,6 +758,15 @@ class WeixinChannel(BaseChannel):
if not content:
return
logger.info(
"WeChat inbound: from={} items={} bodyLen={}",
from_user_id,
",".join(str(i.get("type", 0)) for i in item_list),
len(content),
)
await self._start_typing(from_user_id, ctx_token)
await self._handle_message(
sender_id=from_user_id,
chat_id=from_user_id,
@ -927,6 +948,10 @@ class WeixinChannel(BaseChannel):
except RuntimeError:
return
is_progress = bool((msg.metadata or {}).get("_progress", False))
if not is_progress:
await self._stop_typing(msg.chat_id, clear_remote=True)
content = msg.content.strip()
ctx_token = self._context_tokens.get(msg.chat_id, "")
if not ctx_token:
@ -987,12 +1012,68 @@ class WeixinChannel(BaseChannel):
except asyncio.CancelledError:
pass
if typing_ticket:
if typing_ticket and not is_progress:
try:
await self._send_typing(msg.chat_id, typing_ticket, TYPING_STATUS_CANCEL)
except Exception:
pass
async def _start_typing(self, chat_id: str, context_token: str = "") -> None:
"""Start typing indicator immediately when a message is received."""
if not self._client or not self._token or not chat_id:
return
await self._stop_typing(chat_id, clear_remote=False)
try:
ticket = await self._get_typing_ticket(chat_id, context_token)
if not ticket:
return
await self._send_typing(chat_id, ticket, TYPING_STATUS_TYPING)
except Exception as e:
logger.debug("WeChat typing indicator start failed for {}: {}", chat_id, e)
return
stop_event = asyncio.Event()
async def keepalive() -> None:
try:
while not stop_event.is_set():
await asyncio.sleep(TYPING_KEEPALIVE_INTERVAL_S)
if stop_event.is_set():
break
try:
await self._send_typing(chat_id, ticket, TYPING_STATUS_TYPING)
except Exception:
pass
finally:
pass
task = asyncio.create_task(keepalive())
task._typing_stop_event = stop_event # type: ignore[attr-defined]
self._typing_tasks[chat_id] = task
async def _stop_typing(self, chat_id: str, *, clear_remote: bool) -> None:
"""Stop typing indicator for a chat."""
task = self._typing_tasks.pop(chat_id, None)
if task and not task.done():
stop_event = getattr(task, "_typing_stop_event", None)
if stop_event:
stop_event.set()
task.cancel()
try:
await task
except asyncio.CancelledError:
pass
if not clear_remote:
return
entry = self._typing_tickets.get(chat_id)
ticket = str(entry.get("ticket", "") or "") if isinstance(entry, dict) else ""
if not ticket:
return
try:
await self._send_typing(chat_id, ticket, TYPING_STATUS_CANCEL)
except Exception as e:
logger.debug("WeChat typing clear failed for {}: {}", chat_id, e)
async def _send_text(
self,
to_user_id: str,


@ -38,6 +38,11 @@ from nanobot.cli.stream import StreamRenderer, ThinkingSpinner
from nanobot.config.paths import get_workspace_path, is_default_workspace
from nanobot.config.schema import Config
from nanobot.utils.helpers import sync_workspace_templates
from nanobot.utils.restart import (
consume_restart_notice_from_env,
format_restart_completed_message,
should_show_cli_restart_notice,
)
app = typer.Typer(
name="nanobot",
@ -546,8 +551,7 @@ def serve(
context_block_limit=runtime_config.agents.defaults.context_block_limit,
max_tool_result_chars=runtime_config.agents.defaults.max_tool_result_chars,
provider_retry_mode=runtime_config.agents.defaults.provider_retry_mode,
web_search_config=runtime_config.tools.web.search,
web_proxy=runtime_config.tools.web.proxy or None,
web_config=runtime_config.tools.web,
exec_config=runtime_config.tools.exec,
restrict_to_workspace=runtime_config.tools.restrict_to_workspace,
session_manager=session_manager,
@ -633,11 +637,10 @@ def gateway(
model=config.agents.defaults.model,
max_iterations=config.agents.defaults.max_tool_iterations,
context_window_tokens=config.agents.defaults.context_window_tokens,
web_config=config.tools.web,
context_block_limit=config.agents.defaults.context_block_limit,
max_tool_result_chars=config.agents.defaults.max_tool_result_chars,
provider_retry_mode=config.agents.defaults.provider_retry_mode,
web_search_config=config.tools.web.search,
web_proxy=config.tools.web.proxy or None,
exec_config=config.tools.exec,
cron_service=cron,
restrict_to_workspace=config.tools.restrict_to_workspace,
@ -866,11 +869,10 @@ def agent(
model=config.agents.defaults.model,
max_iterations=config.agents.defaults.max_tool_iterations,
context_window_tokens=config.agents.defaults.context_window_tokens,
web_config=config.tools.web,
context_block_limit=config.agents.defaults.context_block_limit,
max_tool_result_chars=config.agents.defaults.max_tool_result_chars,
provider_retry_mode=config.agents.defaults.provider_retry_mode,
web_search_config=config.tools.web.search,
web_proxy=config.tools.web.proxy or None,
exec_config=config.tools.exec,
cron_service=cron,
restrict_to_workspace=config.tools.restrict_to_workspace,
@ -878,6 +880,12 @@ def agent(
channels_config=config.channels,
timezone=config.agents.defaults.timezone,
)
restart_notice = consume_restart_notice_from_env()
if restart_notice and should_show_cli_restart_notice(restart_notice, session_id):
_print_agent_response(
format_restart_completed_message(restart_notice.started_at_raw),
render_markdown=False,
)
# Shared reference for progress callbacks
_thinking: ThinkingSpinner | None = None


@ -10,6 +10,7 @@ from nanobot import __version__
from nanobot.bus.events import OutboundMessage
from nanobot.command.router import CommandContext, CommandRouter
from nanobot.utils.helpers import build_status_content
from nanobot.utils.restart import set_restart_notice_to_env
async def cmd_stop(ctx: CommandContext) -> OutboundMessage:
@ -26,19 +27,26 @@ async def cmd_stop(ctx: CommandContext) -> OutboundMessage:
sub_cancelled = await loop.subagents.cancel_by_session(msg.session_key)
total = cancelled + sub_cancelled
content = f"Stopped {total} task(s)." if total else "No active task to stop."
return OutboundMessage(channel=msg.channel, chat_id=msg.chat_id, content=content)
return OutboundMessage(
channel=msg.channel, chat_id=msg.chat_id, content=content,
metadata=dict(msg.metadata or {})
)
async def cmd_restart(ctx: CommandContext) -> OutboundMessage:
"""Restart the process in-place via os.execv."""
msg = ctx.msg
set_restart_notice_to_env(channel=msg.channel, chat_id=msg.chat_id)
async def _do_restart():
await asyncio.sleep(1)
os.execv(sys.executable, [sys.executable, "-m", "nanobot"] + sys.argv[1:])
asyncio.create_task(_do_restart())
return OutboundMessage(channel=msg.channel, chat_id=msg.chat_id, content="Restarting...")
return OutboundMessage(
channel=msg.channel, chat_id=msg.chat_id, content="Restarting...",
metadata=dict(msg.metadata or {})
)
async def cmd_status(ctx: CommandContext) -> OutboundMessage:
@ -62,7 +70,7 @@ async def cmd_status(ctx: CommandContext) -> OutboundMessage:
session_msg_count=len(session.get_history(max_messages=0)),
context_tokens_estimate=ctx_est,
),
metadata={"render_as": "text"},
metadata={**dict(ctx.msg.metadata or {}), "render_as": "text"},
)
@ -79,6 +87,7 @@ async def cmd_new(ctx: CommandContext) -> OutboundMessage:
return OutboundMessage(
channel=ctx.msg.channel, chat_id=ctx.msg.chat_id,
content="New session started.",
metadata=dict(ctx.msg.metadata or {})
)
@ -185,7 +194,7 @@ async def cmd_help(ctx: CommandContext) -> OutboundMessage:
channel=ctx.msg.channel,
chat_id=ctx.msg.chat_id,
content=build_help_text(),
metadata={"render_as": "text"},
metadata={**dict(ctx.msg.metadata or {}), "render_as": "text"},
)


@ -91,6 +91,7 @@ class ProvidersConfig(Base):
minimax: ProviderConfig = Field(default_factory=ProviderConfig)
mistral: ProviderConfig = Field(default_factory=ProviderConfig)
stepfun: ProviderConfig = Field(default_factory=ProviderConfig) # Step Fun (阶跃星辰)
xiaomi_mimo: ProviderConfig = Field(default_factory=ProviderConfig) # Xiaomi MIMO (小米)
aihubmix: ProviderConfig = Field(default_factory=ProviderConfig) # AiHubMix API gateway
siliconflow: ProviderConfig = Field(default_factory=ProviderConfig) # SiliconFlow (硅基流动)
volcengine: ProviderConfig = Field(default_factory=ProviderConfig) # VolcEngine (火山引擎)
@ -128,7 +129,7 @@ class GatewayConfig(Base):
class WebSearchConfig(Base):
"""Web search tool configuration."""
provider: str = "brave" # brave, tavily, duckduckgo, searxng, jina
provider: str = "duckduckgo" # brave, tavily, duckduckgo, searxng, jina
api_key: str = ""
base_url: str = "" # SearXNG base URL
max_results: int = 5
@ -137,6 +138,7 @@ class WebSearchConfig(Base):
class WebToolsConfig(Base):
"""Web tools configuration."""
enable: bool = True
proxy: str | None = (
None # HTTP/SOCKS5 proxy URL, e.g. "http://127.0.0.1:7890" or "socks5://127.0.0.1:1080"
)


@ -76,8 +76,7 @@ class Nanobot:
context_block_limit=defaults.context_block_limit,
max_tool_result_chars=defaults.max_tool_result_chars,
provider_retry_mode=defaults.provider_retry_mode,
web_search_config=config.tools.web.search,
web_proxy=config.tools.web.proxy or None,
web_config=config.tools.web,
exec_config=config.tools.exec,
restrict_to_workspace=config.tools.restrict_to_workspace,
mcp_servers=config.tools.mcp_servers,


@ -49,6 +49,8 @@ class AnthropicProvider(LLMProvider):
client_kw["base_url"] = api_base
if extra_headers:
client_kw["default_headers"] = extra_headers
# Keep retries centralized in LLMProvider._run_with_retry to avoid retry amplification.
client_kw["max_retries"] = 0
self._client = AsyncAnthropic(**client_kw)
@staticmethod
@ -401,6 +403,15 @@ class AnthropicProvider(LLMProvider):
# Public API
# ------------------------------------------------------------------
@staticmethod
def _handle_error(e: Exception) -> LLMResponse:
msg = f"Error calling LLM: {e}"
response = getattr(e, "response", None)
retry_after = LLMProvider._extract_retry_after_from_headers(getattr(response, "headers", None))
if retry_after is None:
retry_after = LLMProvider._extract_retry_after(msg)
return LLMResponse(content=msg, finish_reason="error", retry_after=retry_after)
async def chat(
self,
messages: list[dict[str, Any]],
@ -419,7 +430,7 @@ class AnthropicProvider(LLMProvider):
response = await self._client.messages.create(**kwargs)
return self._parse_response(response)
except Exception as e:
return LLMResponse(content=f"Error calling LLM: {e}", finish_reason="error")
return self._handle_error(e)
async def chat_stream(
self,
@ -464,7 +475,7 @@ class AnthropicProvider(LLMProvider):
finish_reason="error",
)
except Exception as e:
return LLMResponse(content=f"Error calling LLM: {e}", finish_reason="error")
return self._handle_error(e)
def get_default_model(self) -> str:
return self.default_model


@ -58,6 +58,7 @@ class AzureOpenAIProvider(LLMProvider):
api_key=api_key,
base_url=base_url,
default_headers={"x-session-affinity": uuid.uuid4().hex},
max_retries=0,
)
# ------------------------------------------------------------------
@ -113,9 +114,14 @@ class AzureOpenAIProvider(LLMProvider):
@staticmethod
def _handle_error(e: Exception) -> LLMResponse:
body = getattr(e, "body", None) or getattr(getattr(e, "response", None), "text", None)
msg = f"Error: {str(body).strip()[:500]}" if body else f"Error calling Azure OpenAI: {e}"
return LLMResponse(content=msg, finish_reason="error")
response = getattr(e, "response", None)
body = getattr(e, "body", None) or getattr(response, "text", None)
body_text = str(body).strip() if body is not None else ""
msg = f"Error: {body_text[:500]}" if body_text else f"Error calling Azure OpenAI: {e}"
retry_after = LLMProvider._extract_retry_after_from_headers(getattr(response, "headers", None))
if retry_after is None:
retry_after = LLMProvider._extract_retry_after(msg)
return LLMResponse(content=msg, finish_reason="error", retry_after=retry_after)
# ------------------------------------------------------------------
# Public API
@ -174,4 +180,4 @@ class AzureOpenAIProvider(LLMProvider):
return self._handle_error(e)
def get_default_model(self) -> str:
return self.default_model
return self.default_model


@ -6,6 +6,8 @@ import re
from abc import ABC, abstractmethod
from collections.abc import Awaitable, Callable
from dataclasses import dataclass, field
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime
from typing import Any
from loguru import logger
@ -49,7 +51,8 @@ class LLMResponse:
tool_calls: list[ToolCallRequest] = field(default_factory=list)
finish_reason: str = "stop"
usage: dict[str, int] = field(default_factory=dict)
reasoning_content: str | None = None # Kimi, DeepSeek-R1 etc.
retry_after: float | None = None # Provider supplied retry wait in seconds.
reasoning_content: str | None = None # Kimi, DeepSeek-R1, MiMo etc.
thinking_blocks: list[dict] | None = None # Anthropic extended thinking
@property
@ -334,16 +337,57 @@ class LLMProvider(ABC):
@classmethod
def _extract_retry_after(cls, content: str | None) -> float | None:
text = (content or "").lower()
match = re.search(r"retry after\s+(\d+(?:\.\d+)?)\s*(ms|milliseconds|s|sec|secs|seconds|m|min|minutes)?", text)
if not match:
return None
value = float(match.group(1))
unit = (match.group(2) or "s").lower()
if unit in {"ms", "milliseconds"}:
patterns = (
r"retry after\s+(\d+(?:\.\d+)?)\s*(ms|milliseconds|s|sec|secs|seconds|m|min|minutes)?",
r"try again in\s+(\d+(?:\.\d+)?)\s*(ms|milliseconds|s|sec|secs|seconds|m|min|minutes)",
r"wait\s+(\d+(?:\.\d+)?)\s*(ms|milliseconds|s|sec|secs|seconds|m|min|minutes)\s*before retry",
r"retry[_-]?after[\"'\s:=]+(\d+(?:\.\d+)?)",
)
for idx, pattern in enumerate(patterns):
match = re.search(pattern, text)
if not match:
continue
value = float(match.group(1))
unit = match.group(2) if idx < 3 else "s"
return cls._to_retry_seconds(value, unit)
return None
@classmethod
def _to_retry_seconds(cls, value: float, unit: str | None = None) -> float:
normalized_unit = (unit or "s").lower()
if normalized_unit in {"ms", "milliseconds"}:
return max(0.1, value / 1000.0)
if unit in {"m", "min", "minutes"}:
return value * 60.0
return value
if normalized_unit in {"m", "min", "minutes"}:
return max(0.1, value * 60.0)
return max(0.1, value)
@classmethod
def _extract_retry_after_from_headers(cls, headers: Any) -> float | None:
if not headers:
return None
retry_after: Any = None
if hasattr(headers, "get"):
retry_after = headers.get("retry-after") or headers.get("Retry-After")
if retry_after is None and isinstance(headers, dict):
for key, value in headers.items():
if isinstance(key, str) and key.lower() == "retry-after":
retry_after = value
break
if retry_after is None:
return None
retry_after_text = str(retry_after).strip()
if not retry_after_text:
return None
if re.fullmatch(r"\d+(?:\.\d+)?", retry_after_text):
return cls._to_retry_seconds(float(retry_after_text), "s")
try:
retry_at = parsedate_to_datetime(retry_after_text)
except Exception:
return None
if retry_at.tzinfo is None:
retry_at = retry_at.replace(tzinfo=timezone.utc)
remaining = (retry_at - datetime.now(retry_at.tzinfo)).total_seconds()
return max(0.1, remaining)
async def _sleep_with_heartbeat(
self,
@ -416,7 +460,7 @@ class LLMProvider(ABC):
break
base_delay = delays[min(attempt - 1, len(delays) - 1)]
delay = self._extract_retry_after(response.content) or base_delay
delay = response.retry_after or self._extract_retry_after(response.content) or base_delay
if persistent:
delay = min(delay, self._PERSISTENT_MAX_DELAY)
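The header-based extraction above handles both Retry-After forms allowed by RFC 9110: delta-seconds and an HTTP-date. A standalone sketch of the same strategy (hypothetical helper name; the real logic lives in `LLMProvider._extract_retry_after_from_headers`):

```python
from __future__ import annotations

import re
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime


def retry_after_seconds(value: str) -> float | None:
    """Turn a Retry-After header value into a wait in seconds.

    Accepts delta-seconds ("20") and HTTP-dates
    ("Wed, 21 Oct 2015 07:28:00 GMT"); floors the result at 0.1s.
    """
    text = value.strip()
    if not text:
        return None
    if re.fullmatch(r"\d+(?:\.\d+)?", text):
        return max(0.1, float(text))
    try:
        retry_at = parsedate_to_datetime(text)
    except Exception:  # unparseable date -> no structured hint
        return None
    if retry_at.tzinfo is None:
        retry_at = retry_at.replace(tzinfo=timezone.utc)
    remaining = (retry_at - datetime.now(retry_at.tzinfo)).total_seconds()
    return max(0.1, remaining)


print(retry_after_seconds("20"))                             # 20.0
print(retry_after_seconds("Wed, 21 Oct 2015 07:28:00 GMT"))  # 0.1 (date is in the past)
```

The 0.1s floor matters for HTTP-dates: a date already in the past would otherwise yield a negative sleep.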


@ -79,7 +79,9 @@ class OpenAICodexProvider(LLMProvider):
)
return LLMResponse(content=content, tool_calls=tool_calls, finish_reason=finish_reason)
except Exception as e:
return LLMResponse(content=f"Error calling Codex: {e}", finish_reason="error")
msg = f"Error calling Codex: {e}"
retry_after = getattr(e, "retry_after", None) or self._extract_retry_after(msg)
return LLMResponse(content=msg, finish_reason="error", retry_after=retry_after)
async def chat(
self, messages: list[dict[str, Any]], tools: list[dict[str, Any]] | None = None,
@ -120,6 +122,12 @@ def _build_headers(account_id: str, token: str) -> dict[str, str]:
}
class _CodexHTTPError(RuntimeError):
def __init__(self, message: str, retry_after: float | None = None):
super().__init__(message)
self.retry_after = retry_after
async def _request_codex(
url: str,
headers: dict[str, str],
@ -131,7 +139,11 @@ async def _request_codex(
async with client.stream("POST", url, headers=headers, json=body) as response:
if response.status_code != 200:
text = await response.aread()
raise RuntimeError(_friendly_error(response.status_code, text.decode("utf-8", "ignore")))
retry_after = LLMProvider._extract_retry_after_from_headers(response.headers)
raise _CodexHTTPError(
_friendly_error(response.status_code, text.decode("utf-8", "ignore")),
retry_after=retry_after,
)
return await consume_sse(response, on_content_delta)


@ -135,6 +135,7 @@ class OpenAICompatProvider(LLMProvider):
api_key=api_key or "no-key",
base_url=effective_base,
default_headers=default_headers,
max_retries=0,
)
def _setup_env(self, api_key: str, api_base: str | None) -> None:
@ -385,9 +386,13 @@ class OpenAICompatProvider(LLMProvider):
content = self._extract_text_content(
response_map.get("content") or response_map.get("output_text")
)
reasoning_content = self._extract_text_content(
response_map.get("reasoning_content")
)
if content is not None:
return LLMResponse(
content=content,
reasoning_content=reasoning_content,
finish_reason=str(response_map.get("finish_reason") or "stop"),
usage=self._extract_usage(response_map),
)
@ -482,6 +487,7 @@ class OpenAICompatProvider(LLMProvider):
@classmethod
def _parse_chunks(cls, chunks: list[Any]) -> LLMResponse:
content_parts: list[str] = []
reasoning_parts: list[str] = []
tc_bufs: dict[int, dict[str, Any]] = {}
finish_reason = "stop"
usage: dict[str, int] = {}
@ -535,6 +541,9 @@ class OpenAICompatProvider(LLMProvider):
text = cls._extract_text_content(delta.get("content"))
if text:
content_parts.append(text)
text = cls._extract_text_content(delta.get("reasoning_content"))
if text:
reasoning_parts.append(text)
for idx, tc in enumerate(delta.get("tool_calls") or []):
_accum_tc(tc, idx)
usage = cls._extract_usage(chunk_map) or usage
@ -549,6 +558,10 @@ class OpenAICompatProvider(LLMProvider):
delta = choice.delta
if delta and delta.content:
content_parts.append(delta.content)
if delta:
reasoning = getattr(delta, "reasoning_content", None)
if reasoning:
reasoning_parts.append(reasoning)
for tc in (delta.tool_calls or []) if delta else []:
_accum_tc(tc, getattr(tc, "index", 0))
@ -567,13 +580,19 @@ class OpenAICompatProvider(LLMProvider):
],
finish_reason=finish_reason,
usage=usage,
reasoning_content="".join(reasoning_parts) or None,
)
@staticmethod
def _handle_error(e: Exception) -> LLMResponse:
body = getattr(e, "doc", None) or getattr(getattr(e, "response", None), "text", None)
msg = f"Error: {body.strip()[:500]}" if body and body.strip() else f"Error calling LLM: {e}"
return LLMResponse(content=msg, finish_reason="error")
response = getattr(e, "response", None)
body = getattr(e, "doc", None) or getattr(response, "text", None)
body_text = str(body).strip() if body is not None else ""
msg = f"Error: {body_text[:500]}" if body_text else f"Error calling LLM: {e}"
retry_after = LLMProvider._extract_retry_after_from_headers(getattr(response, "headers", None))
if retry_after is None:
retry_after = LLMProvider._extract_retry_after(msg)
return LLMResponse(content=msg, finish_reason="error", retry_after=retry_after)
# ------------------------------------------------------------------
# Public API
@ -630,6 +649,9 @@ class OpenAICompatProvider(LLMProvider):
break
chunks.append(chunk)
if on_content_delta and chunk.choices:
text = getattr(chunk.choices[0].delta, "reasoning_content", None)
if text:
await on_content_delta(text)
text = getattr(chunk.choices[0].delta, "content", None)
if text:
await on_content_delta(text)
@ -646,4 +668,4 @@ class OpenAICompatProvider(LLMProvider):
return self._handle_error(e)
def get_default_model(self) -> str:
return self.default_model
return self.default_model


@ -297,6 +297,15 @@ PROVIDERS: tuple[ProviderSpec, ...] = (
backend="openai_compat",
default_api_base="https://api.stepfun.com/v1",
),
# Xiaomi MIMO (小米): OpenAI-compatible API
ProviderSpec(
name="xiaomi_mimo",
keywords=("xiaomi_mimo", "mimo"),
env_key="XIAOMIMIMO_API_KEY",
display_name="Xiaomi MIMO",
backend="openai_compat",
default_api_base="https://api.xiaomimimo.com/v1",
),
# === Local deployment (matched by config key, NOT by api_base) =========
# vLLM / any OpenAI-compatible local server
ProviderSpec(

nanobot/utils/restart.py Normal file

@ -0,0 +1,58 @@
"""Helpers for restart notification messages."""
from __future__ import annotations
import os
import time
from dataclasses import dataclass
RESTART_NOTIFY_CHANNEL_ENV = "NANOBOT_RESTART_NOTIFY_CHANNEL"
RESTART_NOTIFY_CHAT_ID_ENV = "NANOBOT_RESTART_NOTIFY_CHAT_ID"
RESTART_STARTED_AT_ENV = "NANOBOT_RESTART_STARTED_AT"
@dataclass(frozen=True)
class RestartNotice:
channel: str
chat_id: str
started_at_raw: str
def format_restart_completed_message(started_at_raw: str) -> str:
"""Build restart completion text and include elapsed time when available."""
elapsed_suffix = ""
if started_at_raw:
try:
elapsed_s = max(0.0, time.time() - float(started_at_raw))
elapsed_suffix = f" in {elapsed_s:.1f}s"
except ValueError:
pass
return f"Restart completed{elapsed_suffix}."
def set_restart_notice_to_env(*, channel: str, chat_id: str) -> None:
"""Write restart notice env values for the next process."""
os.environ[RESTART_NOTIFY_CHANNEL_ENV] = channel
os.environ[RESTART_NOTIFY_CHAT_ID_ENV] = chat_id
os.environ[RESTART_STARTED_AT_ENV] = str(time.time())
def consume_restart_notice_from_env() -> RestartNotice | None:
"""Read and clear restart notice env values once for this process."""
channel = os.environ.pop(RESTART_NOTIFY_CHANNEL_ENV, "").strip()
chat_id = os.environ.pop(RESTART_NOTIFY_CHAT_ID_ENV, "").strip()
started_at_raw = os.environ.pop(RESTART_STARTED_AT_ENV, "").strip()
if not (channel and chat_id):
return None
return RestartNotice(channel=channel, chat_id=chat_id, started_at_raw=started_at_raw)
def should_show_cli_restart_notice(notice: RestartNotice, session_id: str) -> bool:
"""Return True when a restart notice should be shown in this CLI session."""
if notice.channel != "cli":
return False
if ":" in session_id:
_, cli_chat_id = session_id.split(":", 1)
else:
cli_chat_id = session_id
return not notice.chat_id or notice.chat_id == cli_chat_id


@ -13,6 +13,7 @@ from nanobot.bus.queue import MessageBus
from nanobot.channels.base import BaseChannel
from nanobot.channels.manager import ChannelManager
from nanobot.config.schema import ChannelsConfig
from nanobot.utils.restart import RestartNotice
# ---------------------------------------------------------------------------
@ -929,3 +930,30 @@ async def test_start_all_creates_dispatch_task():
# Dispatch task should have been created
assert mgr._dispatch_task is not None
@pytest.mark.asyncio
async def test_notify_restart_done_enqueues_outbound_message():
"""Restart notice should schedule send_with_retry for target channel."""
fake_config = SimpleNamespace(
channels=ChannelsConfig(),
providers=SimpleNamespace(groq=SimpleNamespace(api_key="")),
)
mgr = ChannelManager.__new__(ChannelManager)
mgr.config = fake_config
mgr.bus = MessageBus()
mgr.channels = {"feishu": _StartableChannel(fake_config, mgr.bus)}
mgr._dispatch_task = None
mgr._send_with_retry = AsyncMock()
notice = RestartNotice(channel="feishu", chat_id="oc_123", started_at_raw="100.0")
with patch("nanobot.channels.manager.consume_restart_notice_from_env", return_value=notice):
mgr._notify_restart_done_if_needed()
await asyncio.sleep(0)
mgr._send_with_retry.assert_awaited_once()
sent_channel, sent_msg = mgr._send_with_retry.await_args.args
assert sent_channel is mgr.channels["feishu"]
assert sent_msg.channel == "feishu"
assert sent_msg.chat_id == "oc_123"
assert sent_msg.content.startswith("Restart completed")


@ -0,0 +1,172 @@
"""Tests for QQ channel ack_message feature.
Covers the four verification points from the PR:
1. C2C message: ack appears instantly
2. Group message: ack appears instantly
3. ack_message set to "": no ack sent
4. Custom ack_message text: correct text delivered
Each test also verifies that normal message processing is not blocked.
"""
from types import SimpleNamespace
import pytest
try:
from nanobot.channels import qq
QQ_AVAILABLE = getattr(qq, "QQ_AVAILABLE", False)
except ImportError:
QQ_AVAILABLE = False
if not QQ_AVAILABLE:
pytest.skip("QQ dependencies not installed (qq-botpy)", allow_module_level=True)
from nanobot.bus.queue import MessageBus
from nanobot.channels.qq import QQChannel, QQConfig
class _FakeApi:
def __init__(self) -> None:
self.c2c_calls: list[dict] = []
self.group_calls: list[dict] = []
async def post_c2c_message(self, **kwargs) -> None:
self.c2c_calls.append(kwargs)
async def post_group_message(self, **kwargs) -> None:
self.group_calls.append(kwargs)
class _FakeClient:
def __init__(self) -> None:
self.api = _FakeApi()
@pytest.mark.asyncio
async def test_ack_sent_on_c2c_message() -> None:
"""Ack is sent immediately for C2C messages, then normal processing continues."""
channel = QQChannel(
QQConfig(
app_id="app",
secret="secret",
allow_from=["*"],
ack_message="⏳ Processing...",
),
MessageBus(),
)
channel._client = _FakeClient()
data = SimpleNamespace(
id="msg1",
content="hello",
author=SimpleNamespace(user_openid="user1"),
attachments=[],
)
await channel._on_message(data, is_group=False)
assert len(channel._client.api.c2c_calls) >= 1
ack_call = channel._client.api.c2c_calls[0]
assert ack_call["content"] == "⏳ Processing..."
assert ack_call["openid"] == "user1"
assert ack_call["msg_id"] == "msg1"
assert ack_call["msg_type"] == 0
msg = await channel.bus.consume_inbound()
assert msg.content == "hello"
assert msg.sender_id == "user1"
@pytest.mark.asyncio
async def test_ack_sent_on_group_message() -> None:
"""Ack is sent immediately for group messages, then normal processing continues."""
channel = QQChannel(
QQConfig(
app_id="app",
secret="secret",
allow_from=["*"],
ack_message="⏳ Processing...",
),
MessageBus(),
)
channel._client = _FakeClient()
data = SimpleNamespace(
id="msg2",
content="hello group",
group_openid="group123",
author=SimpleNamespace(member_openid="user1"),
attachments=[],
)
await channel._on_message(data, is_group=True)
assert len(channel._client.api.group_calls) >= 1
ack_call = channel._client.api.group_calls[0]
assert ack_call["content"] == "⏳ Processing..."
assert ack_call["group_openid"] == "group123"
assert ack_call["msg_id"] == "msg2"
assert ack_call["msg_type"] == 0
msg = await channel.bus.consume_inbound()
assert msg.content == "hello group"
assert msg.chat_id == "group123"
@pytest.mark.asyncio
async def test_no_ack_when_ack_message_empty() -> None:
"""Setting ack_message to empty string disables the ack entirely."""
channel = QQChannel(
QQConfig(
app_id="app",
secret="secret",
allow_from=["*"],
ack_message="",
),
MessageBus(),
)
channel._client = _FakeClient()
data = SimpleNamespace(
id="msg3",
content="hello",
author=SimpleNamespace(user_openid="user1"),
attachments=[],
)
await channel._on_message(data, is_group=False)
assert len(channel._client.api.c2c_calls) == 0
assert len(channel._client.api.group_calls) == 0
msg = await channel.bus.consume_inbound()
assert msg.content == "hello"
@pytest.mark.asyncio
async def test_custom_ack_message_text() -> None:
"""Custom Chinese ack_message text is delivered correctly."""
custom = "正在处理中,请稍候..."
channel = QQChannel(
QQConfig(
app_id="app",
secret="secret",
allow_from=["*"],
ack_message=custom,
),
MessageBus(),
)
channel._client = _FakeClient()
data = SimpleNamespace(
id="msg4",
content="test input",
author=SimpleNamespace(user_openid="user1"),
attachments=[],
)
await channel._on_message(data, is_group=False)
assert len(channel._client.api.c2c_calls) >= 1
ack_call = channel._client.api.c2c_calls[0]
assert ack_call["content"] == custom
msg = await channel.bus.consume_inbound()
assert msg.content == "test input"
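The behaviour these tests lock in is an ordering rule: send the ack (only when `ack_message` is non-empty) before enqueuing the message, so slow downstream processing never delays the acknowledgement. A minimal sketch with stand-in API and queue objects (names are illustrative, not the channel's real internals):

```python
import asyncio


class FakeApi:
    """Stand-in for the QQ bot API client; records ack calls."""

    def __init__(self) -> None:
        self.c2c_calls: list[dict] = []

    async def post_c2c_message(self, **kwargs) -> None:
        self.c2c_calls.append(kwargs)


async def on_c2c_message(api: FakeApi, queue: asyncio.Queue,
                         data: dict, ack_message: str) -> None:
    if ack_message:  # empty string disables the ack entirely
        await api.post_c2c_message(
            openid=data["openid"], msg_id=data["id"],
            content=ack_message, msg_type=0,
        )
    await queue.put(data["content"])  # normal processing continues either way


async def main() -> tuple[int, int]:
    api, queue = FakeApi(), asyncio.Queue()
    await on_c2c_message(api, queue, {"id": "m1", "openid": "u1", "content": "hi"}, "⏳ Processing...")
    await on_c2c_message(api, queue, {"id": "m2", "openid": "u1", "content": "again"}, "")
    return len(api.c2c_calls), queue.qsize()


acks, queued = asyncio.run(main())
print(acks, queued)  # 1 2 -- one ack sent, both messages still queued
```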


@ -647,43 +647,56 @@ async def test_group_policy_open_accepts_plain_group_message() -> None:
assert channel._app.bot.get_me_calls == 0
def test_extract_reply_context_no_reply() -> None:
@pytest.mark.asyncio
async def test_extract_reply_context_no_reply() -> None:
"""When there is no reply_to_message, _extract_reply_context returns None."""
channel = TelegramChannel(TelegramConfig(enabled=True, token="123:abc"), MessageBus())
message = SimpleNamespace(reply_to_message=None)
assert TelegramChannel._extract_reply_context(message) is None
assert await channel._extract_reply_context(message) is None
def test_extract_reply_context_with_text() -> None:
@pytest.mark.asyncio
async def test_extract_reply_context_with_text() -> None:
"""When reply has text, return prefixed string."""
reply = SimpleNamespace(text="Hello world", caption=None)
channel = TelegramChannel(TelegramConfig(enabled=True, token="123:abc"), MessageBus())
channel._app = _FakeApp(lambda: None)
reply = SimpleNamespace(text="Hello world", caption=None, from_user=SimpleNamespace(id=2, username="testuser", first_name="Test"))
message = SimpleNamespace(reply_to_message=reply)
assert TelegramChannel._extract_reply_context(message) == "[Reply to: Hello world]"
assert await channel._extract_reply_context(message) == "[Reply to @testuser: Hello world]"
def test_extract_reply_context_with_caption_only() -> None:
@pytest.mark.asyncio
async def test_extract_reply_context_with_caption_only() -> None:
"""When reply has only caption (no text), caption is used."""
reply = SimpleNamespace(text=None, caption="Photo caption")
channel = TelegramChannel(TelegramConfig(enabled=True, token="123:abc"), MessageBus())
channel._app = _FakeApp(lambda: None)
reply = SimpleNamespace(text=None, caption="Photo caption", from_user=SimpleNamespace(id=2, username=None, first_name="Test"))
message = SimpleNamespace(reply_to_message=reply)
assert TelegramChannel._extract_reply_context(message) == "[Reply to: Photo caption]"
assert await channel._extract_reply_context(message) == "[Reply to Test: Photo caption]"
def test_extract_reply_context_truncation() -> None:
@pytest.mark.asyncio
async def test_extract_reply_context_truncation() -> None:
"""Reply text is truncated at TELEGRAM_REPLY_CONTEXT_MAX_LEN."""
channel = TelegramChannel(TelegramConfig(enabled=True, token="123:abc"), MessageBus())
channel._app = _FakeApp(lambda: None)
long_text = "x" * (TELEGRAM_REPLY_CONTEXT_MAX_LEN + 100)
reply = SimpleNamespace(text=long_text, caption=None)
reply = SimpleNamespace(text=long_text, caption=None, from_user=SimpleNamespace(id=2, username=None, first_name=None))
message = SimpleNamespace(reply_to_message=reply)
result = TelegramChannel._extract_reply_context(message)
result = await channel._extract_reply_context(message)
assert result is not None
assert result.startswith("[Reply to: ")
assert result.endswith("...]")
assert len(result) == len("[Reply to: ]") + TELEGRAM_REPLY_CONTEXT_MAX_LEN + len("...")
def test_extract_reply_context_no_text_returns_none() -> None:
@pytest.mark.asyncio
async def test_extract_reply_context_no_text_returns_none() -> None:
"""When reply has no text/caption, _extract_reply_context returns None (media handled separately)."""
channel = TelegramChannel(TelegramConfig(enabled=True, token="123:abc"), MessageBus())
reply = SimpleNamespace(text=None, caption=None)
message = SimpleNamespace(reply_to_message=reply)
assert TelegramChannel._extract_reply_context(message) is None
assert await channel._extract_reply_context(message) is None
@pytest.mark.asyncio


@ -572,6 +572,85 @@ async def test_process_message_skips_bot_messages() -> None:
assert bus.inbound_size == 0
@pytest.mark.asyncio
async def test_process_message_starts_typing_on_inbound() -> None:
"""Typing indicator fires immediately when user message arrives."""
channel, _bus = _make_channel()
channel._running = True
channel._client = object()
channel._token = "token"
channel._start_typing = AsyncMock()
await channel._process_message(
{
"message_type": 1,
"message_id": "m-typing",
"from_user_id": "wx-user",
"context_token": "ctx-typing",
"item_list": [
{"type": ITEM_TEXT, "text_item": {"text": "hello"}},
],
}
)
channel._start_typing.assert_awaited_once_with("wx-user", "ctx-typing")
@pytest.mark.asyncio
async def test_send_final_message_clears_typing_indicator() -> None:
"""Non-progress send should cancel typing status."""
channel, _bus = _make_channel()
channel._client = object()
channel._token = "token"
channel._context_tokens["wx-user"] = "ctx-2"
channel._typing_tickets["wx-user"] = {"ticket": "ticket-2", "next_fetch_at": 9999999999}
channel._send_text = AsyncMock()
channel._api_post = AsyncMock(return_value={"ret": 0})
await channel.send(
type("Msg", (), {"chat_id": "wx-user", "content": "pong", "media": [], "metadata": {}})()
)
channel._send_text.assert_awaited_once_with("wx-user", "pong", "ctx-2")
typing_cancel_calls = [
c for c in channel._api_post.await_args_list
if c.args[0] == "ilink/bot/sendtyping" and c.args[1]["status"] == 2
]
assert len(typing_cancel_calls) >= 1
@pytest.mark.asyncio
async def test_send_progress_message_keeps_typing_indicator() -> None:
"""Progress messages must not cancel typing status."""
channel, _bus = _make_channel()
channel._client = object()
channel._token = "token"
channel._context_tokens["wx-user"] = "ctx-2"
channel._typing_tickets["wx-user"] = {"ticket": "ticket-2", "next_fetch_at": 9999999999}
channel._send_text = AsyncMock()
channel._api_post = AsyncMock(return_value={"ret": 0})
await channel.send(
type(
"Msg",
(),
{
"chat_id": "wx-user",
"content": "thinking",
"media": [],
"metadata": {"_progress": True},
},
)()
)
channel._send_text.assert_awaited_once_with("wx-user", "thinking", "ctx-2")
typing_cancel_calls = [
c for c in channel._api_post.await_args_list
if c.args and c.args[0] == "ilink/bot/sendtyping" and c.args[1].get("status") == 2
]
assert len(typing_cancel_calls) == 0
class _DummyHttpResponse:
def __init__(self, *, headers: dict[str, str] | None = None, status_code: int = 200) -> None:
self.headers = headers or {}


@ -3,6 +3,7 @@
from __future__ import annotations
import asyncio
import os
import time
from unittest.mock import AsyncMock, MagicMock, patch
@ -36,14 +37,23 @@ class TestRestartCommand:
async def test_restart_sends_message_and_calls_execv(self):
from nanobot.command.builtin import cmd_restart
from nanobot.command.router import CommandContext
from nanobot.utils.restart import (
RESTART_NOTIFY_CHANNEL_ENV,
RESTART_NOTIFY_CHAT_ID_ENV,
RESTART_STARTED_AT_ENV,
)
loop, bus = _make_loop()
msg = InboundMessage(channel="cli", sender_id="user", chat_id="direct", content="/restart")
ctx = CommandContext(msg=msg, session=None, key=msg.session_key, raw="/restart", loop=loop)
with patch("nanobot.command.builtin.os.execv") as mock_execv:
with patch.dict(os.environ, {}, clear=False), \
patch("nanobot.command.builtin.os.execv") as mock_execv:
out = await cmd_restart(ctx)
assert "Restarting" in out.content
assert os.environ.get(RESTART_NOTIFY_CHANNEL_ENV) == "cli"
assert os.environ.get(RESTART_NOTIFY_CHAT_ID_ENV) == "direct"
assert os.environ.get(RESTART_STARTED_AT_ENV)
await asyncio.sleep(1.5)
mock_execv.assert_called_once()


@ -240,6 +240,39 @@ async def test_chat_with_retry_uses_retry_after_and_emits_wait_progress(monkeypa
assert progress and "7s" in progress[0]
def test_extract_retry_after_supports_common_provider_formats() -> None:
assert LLMProvider._extract_retry_after('{"error":{"retry_after":20}}') == 20.0
assert LLMProvider._extract_retry_after("Rate limit reached, please try again in 20s") == 20.0
assert LLMProvider._extract_retry_after("retry-after: 20") == 20.0
def test_extract_retry_after_from_headers_supports_numeric_and_http_date() -> None:
assert LLMProvider._extract_retry_after_from_headers({"Retry-After": "20"}) == 20.0
assert LLMProvider._extract_retry_after_from_headers({"retry-after": "20"}) == 20.0
assert LLMProvider._extract_retry_after_from_headers(
{"Retry-After": "Wed, 21 Oct 2015 07:28:00 GMT"},
) == 0.1
@pytest.mark.asyncio
async def test_chat_with_retry_prefers_structured_retry_after_when_present(monkeypatch) -> None:
provider = ScriptedProvider([
LLMResponse(content="429 rate limit", finish_reason="error", retry_after=9.0),
LLMResponse(content="ok"),
])
delays: list[float] = []
async def _fake_sleep(delay: float) -> None:
delays.append(delay)
monkeypatch.setattr("nanobot.providers.base.asyncio.sleep", _fake_sleep)
response = await provider.chat_with_retry(messages=[{"role": "user", "content": "hello"}])
assert response.content == "ok"
assert delays == [9.0]
@pytest.mark.asyncio
async def test_persistent_retry_aborts_after_ten_identical_transient_errors(monkeypatch) -> None:
provider = ScriptedProvider([
@ -263,4 +296,3 @@ async def test_persistent_retry_aborts_after_ten_identical_transient_errors(monk
assert provider.calls == 10
assert delays == [1, 2, 4, 4, 4, 4, 4, 4, 4]
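The text-based fallback exercised above can be sketched as a unit-aware pattern scan. This is a simplified stand-in for `LLMProvider._extract_retry_after` (pattern list and helper names are illustrative):

```python
from __future__ import annotations

import re

_UNITS = r"(ms|milliseconds|s|sec|secs|seconds|m|min|minutes)"
_PATTERNS = (
    rf"retry after\s+(\d+(?:\.\d+)?)\s*{_UNITS}?",      # unit optional
    rf"try again in\s+(\d+(?:\.\d+)?)\s*{_UNITS}",      # unit required
    rf"retry[_-]?after[\"'\s:=]+(\d+(?:\.\d+)?)",       # JSON/header style, seconds
)


def to_seconds(value: float, unit: str | None) -> float:
    unit = (unit or "s").lower()
    if unit in {"ms", "milliseconds"}:
        return max(0.1, value / 1000.0)
    if unit in {"m", "min", "minutes"}:
        return max(0.1, value * 60.0)
    return max(0.1, value)


def extract_retry_after(content: str | None) -> float | None:
    text = (content or "").lower()
    for pattern in _PATTERNS:
        match = re.search(pattern, text)
        if not match:
            continue
        groups = match.groups()
        value = float(groups[0])
        # The JSON/header pattern has no unit group; assume seconds.
        unit = groups[1] if len(groups) > 1 else "s"
        return to_seconds(value, unit)
    return None


print(extract_retry_after("please try again in 500ms"))     # 0.5
print(extract_retry_after('{"error":{"retry_after":20}}'))  # 20.0
```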


@ -0,0 +1,42 @@
from types import SimpleNamespace
from nanobot.providers.anthropic_provider import AnthropicProvider
from nanobot.providers.azure_openai_provider import AzureOpenAIProvider
from nanobot.providers.openai_compat_provider import OpenAICompatProvider
def test_openai_compat_error_captures_retry_after_from_headers() -> None:
err = Exception("boom")
err.doc = None
err.response = SimpleNamespace(
text='{"error":{"message":"Rate limit exceeded"}}',
headers={"Retry-After": "20"},
)
response = OpenAICompatProvider._handle_error(err)
assert response.retry_after == 20.0
def test_azure_openai_error_captures_retry_after_from_headers() -> None:
err = Exception("boom")
err.body = {"message": "Rate limit exceeded"}
err.response = SimpleNamespace(
text='{"error":{"message":"Rate limit exceeded"}}',
headers={"Retry-After": "20"},
)
response = AzureOpenAIProvider._handle_error(err)
assert response.retry_after == 20.0
def test_anthropic_error_captures_retry_after_from_headers() -> None:
err = Exception("boom")
err.response = SimpleNamespace(
headers={"Retry-After": "20"},
)
response = AnthropicProvider._handle_error(err)
assert response.retry_after == 20.0


@ -0,0 +1,33 @@
from unittest.mock import patch
from nanobot.providers.anthropic_provider import AnthropicProvider
from nanobot.providers.azure_openai_provider import AzureOpenAIProvider
from nanobot.providers.openai_compat_provider import OpenAICompatProvider
def test_openai_compat_disables_sdk_retries_by_default() -> None:
with patch("nanobot.providers.openai_compat_provider.AsyncOpenAI") as mock_client:
OpenAICompatProvider(api_key="sk-test", default_model="gpt-4o")
kwargs = mock_client.call_args.kwargs
assert kwargs["max_retries"] == 0
def test_anthropic_disables_sdk_retries_by_default() -> None:
with patch("anthropic.AsyncAnthropic") as mock_client:
AnthropicProvider(api_key="sk-test", default_model="claude-sonnet-4-5")
kwargs = mock_client.call_args.kwargs
assert kwargs["max_retries"] == 0
def test_azure_openai_disables_sdk_retries_by_default() -> None:
with patch("nanobot.providers.azure_openai_provider.AsyncOpenAI") as mock_client:
AzureOpenAIProvider(
api_key="sk-test",
api_base="https://example.openai.azure.com",
default_model="gpt-4.1",
)
kwargs = mock_client.call_args.kwargs
assert kwargs["max_retries"] == 0


@ -0,0 +1,128 @@
"""Tests for reasoning_content extraction in OpenAICompatProvider.
Covers non-streaming (_parse) and streaming (_parse_chunks) paths for
providers that return a reasoning_content field (e.g. MiMo, DeepSeek-R1).
"""
from types import SimpleNamespace
from unittest.mock import patch
from nanobot.providers.openai_compat_provider import OpenAICompatProvider
# ── _parse: non-streaming ─────────────────────────────────────────────────
def test_parse_dict_extracts_reasoning_content() -> None:
"""reasoning_content at message level is surfaced in LLMResponse."""
with patch("nanobot.providers.openai_compat_provider.AsyncOpenAI"):
provider = OpenAICompatProvider()
response = {
"choices": [{
"message": {
"content": "42",
"reasoning_content": "Let me think step by step…",
},
"finish_reason": "stop",
}],
"usage": {"prompt_tokens": 5, "completion_tokens": 10, "total_tokens": 15},
}
result = provider._parse(response)
assert result.content == "42"
assert result.reasoning_content == "Let me think step by step…"
def test_parse_dict_reasoning_content_none_when_absent() -> None:
"""reasoning_content is None when the response doesn't include it."""
with patch("nanobot.providers.openai_compat_provider.AsyncOpenAI"):
provider = OpenAICompatProvider()
response = {
"choices": [{
"message": {"content": "hello"},
"finish_reason": "stop",
}],
}
result = provider._parse(response)
assert result.reasoning_content is None
# ── _parse_chunks: streaming dict branch ─────────────────────────────────
def test_parse_chunks_dict_accumulates_reasoning_content() -> None:
"""reasoning_content deltas in dict chunks are joined into one string."""
chunks = [
{
"choices": [{
"finish_reason": None,
"delta": {"content": None, "reasoning_content": "Step 1. "},
}],
},
{
"choices": [{
"finish_reason": None,
"delta": {"content": None, "reasoning_content": "Step 2."},
}],
},
{
"choices": [{
"finish_reason": "stop",
"delta": {"content": "answer"},
}],
},
]
result = OpenAICompatProvider._parse_chunks(chunks)
assert result.content == "answer"
    assert result.reasoning_content == "Step 1. Step 2."


def test_parse_chunks_dict_reasoning_content_none_when_absent() -> None:
"""reasoning_content is None when no chunk contains it."""
chunks = [
{"choices": [{"finish_reason": "stop", "delta": {"content": "hi"}}]},
]
result = OpenAICompatProvider._parse_chunks(chunks)
assert result.content == "hi"
    assert result.reasoning_content is None


# ── _parse_chunks: streaming SDK-object branch ────────────────────────────
def _make_reasoning_chunk(reasoning: str | None, content: str | None, finish: str | None):
delta = SimpleNamespace(content=content, reasoning_content=reasoning, tool_calls=None)
choice = SimpleNamespace(finish_reason=finish, delta=delta)
    return SimpleNamespace(choices=[choice], usage=None)


def test_parse_chunks_sdk_accumulates_reasoning_content() -> None:
"""reasoning_content on SDK delta objects is joined across chunks."""
chunks = [
_make_reasoning_chunk("Think… ", None, None),
_make_reasoning_chunk("Done.", None, None),
_make_reasoning_chunk(None, "result", "stop"),
]
result = OpenAICompatProvider._parse_chunks(chunks)
assert result.content == "result"
    assert result.reasoning_content == "Think… Done."


def test_parse_chunks_sdk_reasoning_content_none_when_absent() -> None:
"""reasoning_content is None when SDK deltas carry no reasoning_content."""
chunks = [_make_reasoning_chunk(None, "hello", "stop")]
result = OpenAICompatProvider._parse_chunks(chunks)
assert result.reasoning_content is None
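The accumulation behaviour these tests pin down can be sketched as a small chunk-folding loop. This is a hypothetical standalone helper (`fold_reasoning`), not the actual `_parse_chunks` implementation — it only shows how dict and SDK-object chunks can share one accumulation path:

```python
def fold_reasoning(chunks):
    """Join content and reasoning_content deltas across stream chunks.

    Handles both dict chunks and SDK-style objects; returns
    (content, reasoning_content), with reasoning_content None when
    no chunk carried a reasoning delta.
    """
    content_parts: list[str] = []
    reasoning_parts: list[str] = []
    for chunk in chunks:
        choice = chunk["choices"][0] if isinstance(chunk, dict) else chunk.choices[0]
        delta = choice["delta"] if isinstance(choice, dict) else choice.delta
        text = delta.get("content") if isinstance(delta, dict) else delta.content
        reasoning = (
            delta.get("reasoning_content")
            if isinstance(delta, dict)
            else getattr(delta, "reasoning_content", None)
        )
        if text:
            content_parts.append(text)
        if reasoning:
            reasoning_parts.append(reasoning)
    # Empty reasoning accumulator collapses to None, matching the tests above.
    return "".join(content_parts), "".join(reasoning_parts) or None
```

The `or None` collapse is what makes the "absent" cases return `None` rather than an empty string.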


@ -321,6 +321,22 @@ class TestWorkspaceRestriction:
assert "Test Skill" in result
assert "Error" not in result
@pytest.mark.asyncio
async def test_read_allowed_in_media_dir(self, tmp_path, monkeypatch):
workspace = tmp_path / "ws"
workspace.mkdir()
media_dir = tmp_path / "media"
media_dir.mkdir()
media_file = media_dir / "photo.txt"
media_file.write_text("shared media", encoding="utf-8")
monkeypatch.setattr("nanobot.agent.tools.filesystem.get_media_dir", lambda: media_dir)
tool = ReadFileTool(workspace=workspace, allowed_dir=workspace)
result = await tool.execute(path=str(media_file))
assert "shared media" in result
assert "Error" not in result
@pytest.mark.asyncio
async def test_extra_dirs_does_not_widen_write(self, tmp_path):
from nanobot.agent.tools.filesystem import WriteFileTool


@ -142,6 +142,19 @@ def test_exec_guard_blocks_quoted_home_path_outside_workspace(tmp_path) -> None:
assert error == "Error: Command blocked by safety guard (path outside working dir)"
def test_exec_guard_allows_media_path_outside_workspace(tmp_path, monkeypatch) -> None:
media_dir = tmp_path / "media"
media_dir.mkdir()
media_file = media_dir / "photo.jpg"
media_file.write_text("ok", encoding="utf-8")
monkeypatch.setattr("nanobot.agent.tools.shell.get_media_dir", lambda: media_dir)
tool = ExecTool(restrict_to_workspace=True)
error = tool._guard_command(f'cat "{media_file}"', str(tmp_path / "workspace"))
    assert error is None


def test_exec_guard_blocks_windows_drive_root_outside_workspace(monkeypatch) -> None:
import nanobot.agent.tools.shell as shell_mod


@ -0,0 +1,49 @@
"""Tests for restart notice helpers."""
from __future__ import annotations
import os
from nanobot.utils.restart import (
RestartNotice,
consume_restart_notice_from_env,
format_restart_completed_message,
set_restart_notice_to_env,
should_show_cli_restart_notice,
)
def test_set_and_consume_restart_notice_env_roundtrip(monkeypatch):
monkeypatch.delenv("NANOBOT_RESTART_NOTIFY_CHANNEL", raising=False)
monkeypatch.delenv("NANOBOT_RESTART_NOTIFY_CHAT_ID", raising=False)
monkeypatch.delenv("NANOBOT_RESTART_STARTED_AT", raising=False)
set_restart_notice_to_env(channel="feishu", chat_id="oc_123")
notice = consume_restart_notice_from_env()
assert notice is not None
assert notice.channel == "feishu"
assert notice.chat_id == "oc_123"
assert notice.started_at_raw
# Consumed values should be cleared from env.
assert consume_restart_notice_from_env() is None
assert "NANOBOT_RESTART_NOTIFY_CHANNEL" not in os.environ
assert "NANOBOT_RESTART_NOTIFY_CHAT_ID" not in os.environ
assert "NANOBOT_RESTART_STARTED_AT" not in os.environ
def test_format_restart_completed_message_with_elapsed(monkeypatch):
monkeypatch.setattr("nanobot.utils.restart.time.time", lambda: 102.0)
    assert format_restart_completed_message("100.0") == "Restart completed in 2.0s."


def test_should_show_cli_restart_notice():
notice = RestartNotice(channel="cli", chat_id="direct", started_at_raw="100")
assert should_show_cli_restart_notice(notice, "cli:direct") is True
assert should_show_cli_restart_notice(notice, "cli:other") is False
assert should_show_cli_restart_notice(notice, "direct") is True
non_cli = RestartNotice(channel="feishu", chat_id="oc_1", started_at_raw="100")
assert should_show_cli_restart_notice(non_cli, "cli:direct") is False