Add ack_message config field to QQConfig (default: "Processing..."). When non-empty, an instant text reply is sent before agent processing begins, filling the silence gap for users. Uses the existing _send_text_only method; failures are logged but never block normal message handling.
Made-with: Cursor
Fixes #2591
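A minimal sketch of the ack path described above. The handle_inbound/process_with_agent names and the config shape beyond ack_message are assumptions; only _send_text_only and the default value come from the change:

```python
import asyncio
import logging
from dataclasses import dataclass

logger = logging.getLogger(__name__)

@dataclass
class QQConfig:
    # Only ack_message and its default are from the change description;
    # the rest of the real config is omitted here.
    ack_message: str = "Processing..."

async def handle_inbound(channel, config: QQConfig, msg) -> None:
    """Hypothetical inbound handler: ack instantly, then run the agent."""
    if config.ack_message:
        try:
            # Instant reply fills the silence gap before the agent responds.
            await channel._send_text_only(msg.chat_id, config.ack_message)
        except Exception:
            # Failure is logged but never blocks normal message handling.
            logger.exception("ack send failed")
    await channel.process_with_agent(msg)
```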
The "nanobot is thinking..." spinner was printing ANSI escape codes
literally in some terminals, causing garbled output like:
?[2K?[32m⠧?[0m ?[2mnanobot is thinking...?[0m
Root causes:
1. Console created without force_terminal=True, so Rich couldn't
reliably detect terminal capabilities
2. Spinner continued running during user input prompt, conflicting
with prompt_toolkit
Changes:
- Set force_terminal=True in _make_console() for proper ANSI handling
- Add stop_for_input() method to StreamRenderer
- Call stop_for_input() before reading user input in interactive mode
- Add tests for the new functionality
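The two fixes can be sketched roughly like this; StreamRenderer internals beyond stop_for_input() and _make_console() are assumptions:

```python
from typing import Optional

from rich.console import Console
from rich.status import Status

def _make_console() -> Console:
    # force_terminal=True tells Rich to emit real ANSI sequences instead of
    # guessing capabilities, which avoided the literal ?[2K?[32m output.
    return Console(force_terminal=True)

class StreamRenderer:
    """Sketch of the renderer; only stop_for_input() is from the change."""
    def __init__(self) -> None:
        self.console = _make_console()
        self._status: Optional[Status] = None

    def start_thinking(self) -> None:
        if self._status is None:
            self._status = self.console.status("nanobot is thinking...")
            self._status.start()

    def stop_for_input(self) -> None:
        # Stop the spinner before prompt_toolkit reads input, so the two
        # never fight over cursor control.
        if self._status is not None:
            self._status.stop()
            self._status = None
```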
Replace single-stage MemoryConsolidator with a two-stage architecture:
- Consolidator: lightweight token-budget triggered summarization,
appends to HISTORY.md with cursor-based tracking
- Dream: cron-scheduled two-phase processor that analyzes HISTORY.md
and updates SOUL.md, USER.md, MEMORY.md via AgentRunner with
edit_file tools for surgical, fault-tolerant updates
New files: MemoryStore (pure file I/O), Dream class, DreamConfig,
/dream and /dream-log commands. 89 tests covering all components.
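The Consolidator's trigger can be sketched as follows; the HISTORY.md append and cursor tracking match the description, while the token estimate and the summarize hook are assumptions:

```python
import os

class Consolidator:
    """Minimal sketch of the token-budget trigger; the real class also
    coordinates with MemoryStore and the agent-backed summarizer."""
    def __init__(self, store_dir: str, token_budget: int = 4000):
        self.history_path = os.path.join(store_dir, "HISTORY.md")
        self.token_budget = token_budget
        self.cursor = 0  # index of the first not-yet-consolidated message

    def _estimate_tokens(self, messages) -> int:
        # Crude ~4 chars/token heuristic stands in for a real tokenizer.
        return sum(len(m) for m in messages) // 4

    def maybe_consolidate(self, messages, summarize) -> bool:
        """Summarize pending messages once the token budget is exceeded."""
        pending = messages[self.cursor:]
        if self._estimate_tokens(pending) < self.token_budget:
            return False
        summary = summarize(pending)
        with open(self.history_path, "a", encoding="utf-8") as f:
            f.write(summary + "\n")
        self.cursor = len(messages)  # advance past the consolidated span
        return True
```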
Introduce a CompositeHook that fans out lifecycle callbacks to an
ordered list of AgentHook instances with per-hook error isolation.
Extract the nested _LoopHook and _SubagentHook to module scope as
public LoopHook / SubagentHook so downstream users can subclass or
compose them. Add `hooks` parameter to AgentLoop.__init__ for
registering custom hooks at construction time.
Closes #2603
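A sketch of the fan-out with per-hook error isolation; the callback names here are illustrative, not nanobot's actual AgentHook surface:

```python
import logging

logger = logging.getLogger(__name__)

class AgentHook:
    """Base lifecycle hook with no-op callbacks (names illustrative)."""
    def on_step_start(self, step): ...
    def on_step_end(self, step): ...

class CompositeHook(AgentHook):
    """Fans each callback out to an ordered list of hooks. A failure in
    one hook is logged and isolated, so later hooks still run."""
    def __init__(self, hooks):
        self.hooks = list(hooks)

    def _fan_out(self, method_name, *args):
        for hook in self.hooks:
            try:
                getattr(hook, method_name)(*args)
            except Exception:
                logger.exception("hook %r failed in %s", hook, method_name)

    def on_step_start(self, step):
        self._fan_out("on_step_start", step)

    def on_step_end(self, step):
        self._fan_out("on_step_end", step)
```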
Make the fixed-session API surface explicit, document its usage, exclude api/ from core agent line counts, and remove implicit aiohttp pytest fixture dependencies from API tests.
Require a single user message, reject mismatched models, document the OpenAI-compatible API, and exclude api/ from core agent line counts so the interface matches nanobot's minimal fixed-session runtime.
Reject mismatched models and require a single user message so the OpenAI-compatible endpoint reflects the fixed-session nanobot runtime without extra compatibility noise.
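The two checks might look like this; field names follow the OpenAI chat-completions schema, while served_model and the error-string return style are assumptions:

```python
from typing import Optional

def validate_chat_request(body: dict, served_model: str) -> Optional[str]:
    """Return an error message for a rejected request, or None if valid."""
    # Reject mismatched models: the fixed session serves exactly one model.
    if body.get("model") != served_model:
        return f"model mismatch: this endpoint serves {served_model!r}"
    # Require a single user message: no multi-turn replay into the session.
    user_msgs = [m for m in body.get("messages", [])
                 if m.get("role") == "user"]
    if len(user_msgs) != 1:
        return "exactly one user message is required"
    return None
```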
* feat(channel): add iMessage integration
Add iMessage as a built-in channel via Photon's iMessage platform.
Local mode (macOS) reads ~/Library/Messages/chat.db via sqlite3 with
attachment support, sends via AppleScript, and handles voice
transcription. Remote mode connects to a Photon endpoint over pure
HTTP with proxy support, implementing send, attachments, tapback
reactions, typing indicators, mark-as-read, polls, groups, contacts,
and health checks.
Paragraph splitting sends each \n\n-separated block as a separate
iMessage bubble. Startup seeds existing message IDs to avoid
replaying old messages on restart.
Tested end-to-end on both modes.
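The paragraph splitting reduces to something like the following; the stripping and empty-block filtering are assumptions:

```python
def split_bubbles(text: str) -> list:
    """Split outgoing text on blank lines into per-bubble chunks."""
    return [block.strip() for block in text.split("\n\n") if block.strip()]
```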
* fix(imessage): address review feedback
- Add retry with exponential backoff for SQLite lock contention
- Escape newlines/tabs in AppleScript to prevent parse failures
- Lower _MAX_MESSAGE_LEN to 6000 (paragraph splitting handles UX)
- Add privacy note in README for Photon remote mode
- Merge poll interval constants; make local mode configurable too
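The lock-contention retry might look like this sketch; the retry count, delays, and the locked-error check are illustrative, not the PR's exact values:

```python
import sqlite3
import time

def query_with_retry(db_path, sql, params=(), retries=5, base_delay=0.05):
    """Read from chat.db, retrying with exponential backoff while
    Messages.app holds the write lock."""
    for attempt in range(retries):
        conn = sqlite3.connect(db_path, timeout=1.0)
        try:
            return conn.execute(sql, params).fetchall()
        except sqlite3.OperationalError as exc:
            if "locked" not in str(exc) or attempt == retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # 0.05s, 0.1s, 0.2s, ...
        finally:
            conn.close()
```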
* feat(discord): channel-side read receipt and subagent indicator
- Add 👀 reaction on message receipt, removed after bot reply
- Add 🔧 reaction on first progress message, removed on final reply
- Both managed purely in discord.py channel layer, no subagent.py changes
- Config: read_receipt_emoji, subagent_emoji with sensible defaults
Addresses maintainer feedback on HKUDS/nanobot#2330
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* fix(discord): add both reactions on inbound, not on progress
_progress flag is for streaming chunks, not subagent lifecycle.
Add 👀 + 🔧 immediately on message receipt, clear both on final reply.
* fix: remove stale _subagent_active reference in _clear_reactions
* fix(discord): clean up reactions on message handling failure
Previously, if _handle_message raised an exception, pending reactions
(read receipt + subagent indicator) would remain on the user's message
indefinitely, since send(), which handles normal cleanup, would never
be called.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* refactor(discord): replace subagent_emoji with delayed working indicator
- Rename subagent_emoji → working_emoji (honest naming: not tied to
subagent lifecycle)
- Add working_emoji_delay (default 2s), a cosmetic delay so 🔧 appears
after 👀, cancelled if the bot replies before the delay fires
- Clean up: cancel pending task + remove both reactions on reply/error
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
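The delayed indicator can be sketched with a cancellable task. The reaction callables and class shape are assumptions; the emoji and the 2 s default mirror the config described above:

```python
import asyncio

class WorkingIndicator:
    """Sketch of the delayed 🔧 indicator for the Discord channel."""
    def __init__(self, add_reaction, remove_reaction, delay=2.0, emoji="🔧"):
        self.add_reaction = add_reaction
        self.remove_reaction = remove_reaction
        self.delay = delay
        self.emoji = emoji
        self._task = None
        self._added = False

    def start(self):
        # Schedule the emoji to appear after the cosmetic delay.
        self._task = asyncio.get_running_loop().create_task(self._fire())

    async def _fire(self):
        await asyncio.sleep(self.delay)
        await self.add_reaction(self.emoji)
        self._added = True

    async def clear(self):
        # Cancel a pending delay; remove the emoji if it already fired.
        if self._task is not None:
            self._task.cancel()
            self._task = None
        if self._added:
            await self.remove_reaction(self.emoji)
            self._added = False
```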
---------
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
The on_stream and on_stream_end closures in _dispatch hardcoded
their metadata dicts, dropping channel-specific fields like
message_thread_id. Copy msg.metadata first, then add internal
streaming flags, matching the pattern already used by _bus_progress.
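The fix boils down to the copy-then-extend pattern; the flag name here is illustrative:

```python
def build_stream_metadata(msg_metadata: dict) -> dict:
    """Copy the message metadata first, then add internal streaming flags,
    so channel-specific fields like message_thread_id survive dispatch."""
    metadata = dict(msg_metadata)  # shallow copy; never mutate the original
    metadata["_stream"] = True     # illustrative internal streaming flag
    return metadata
```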
Read serve host, port, and timeout from config by default, keep CLI flags higher priority, and bind the API to localhost by default for safer local usage.
Expose OpenAI-compatible chat completions and models endpoints through a single persistent API session, keeping the integration simple without adding multi-session isolation yet.
* feat(feishu): add streaming support via CardKit PATCH API
Implement send_delta() for Feishu channel using interactive card
progressive editing:
- First delta creates a card with markdown content and typing cursor
- Subsequent deltas throttled at 0.5s to respect 5 QPS PATCH limit
- stream_end finalizes with full formatted card (tables, rich markdown)
Also refactors _send_message_sync to return message_id (str | None)
and adds _patch_card_sync for card updates.
Includes 17 new unit tests covering streaming lifecycle, config,
card building, and edge cases.
Made-with: Cursor
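The 0.5 s throttle can be sketched as follows; the buffer-and-merge behavior is an assumption, while the interval and the 5 QPS motivation come from the change:

```python
import time

class DeltaThrottle:
    """Throttle card PATCH updates to at most one per interval
    (0.5 s stays safely under Feishu's 5 QPS PATCH limit)."""
    def __init__(self, interval: float = 0.5):
        self.interval = interval
        self._last = 0.0
        self._pending = ""

    def offer(self, delta: str):
        """Return accumulated text to flush now, or None if throttled."""
        self._pending += delta
        now = time.monotonic()
        if now - self._last >= self.interval:
            self._last = now
            out, self._pending = self._pending, ""
            return out
        return None

    def finalize(self) -> str:
        """stream_end: flush whatever is still buffered."""
        out, self._pending = self._pending, ""
        return out
```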
* feat(feishu): close CardKit streaming_mode on stream end
Call cardkit card.settings after the final content update so the chat
preview no longer shows the default [生成中...] ("Generating...")
summary (per the Feishu streaming docs).
Made-with: Cursor
* style: polish Feishu streaming (PEP8 spacing, drop unused test imports)
Made-with: Cursor
* docs(feishu): document cardkit:card:write for streaming
- README: permissions, upgrade note for existing apps, streaming toggle
- CHANNEL_PLUGIN_GUIDE: Feishu CardKit scope and when to disable streaming
Made-with: Cursor
* docs: address PR 2382 review (test path, plugin guide, README, English docstrings)
- Move Feishu streaming tests to tests/channels/
- Remove Feishu CardKit scope from CHANNEL_PLUGIN_GUIDE (plugin-dev doc only)
- README Feishu permissions: consistent English
- feishu.py: replace Chinese in streaming docstrings/comments
Made-with: Cursor
When the LLM generates faster than the channel can process, the
asyncio.Queue accumulates multiple _stream_delta messages. Each delta
triggers a separate API call (~700 ms each), causing a visible delay
after the LLM finishes.
Solution: In _dispatch_outbound, drain all queued deltas for the same
(channel, chat_id) before sending, combining them into a single API
call. Non-matching messages are preserved in a pending buffer for
subsequent processing.
This reduces N API calls to 1 when the queue has N accumulated deltas.
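A sketch of the drain-and-merge step, assuming (channel, chat_id, text) tuples as the queued message shape:

```python
import asyncio

async def drain_deltas(queue: asyncio.Queue, first):
    """Merge all already-queued deltas for the same (channel, chat_id)
    into one payload; messages for other destinations go to a pending
    buffer for subsequent processing."""
    channel, chat_id, text = first
    merged = [text]
    pending = []
    while True:
        try:
            msg = queue.get_nowait()  # drain without blocking
        except asyncio.QueueEmpty:
            break
        if msg[0] == channel and msg[1] == chat_id:
            merged.append(msg[2])
        else:
            pending.append(msg)  # preserve non-matching messages in order
    return (channel, chat_id, "".join(merged)), pending
```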
Make channel delivery failures raise consistently so retry policy lives in ChannelManager rather than being split across individual channels. Tighten Telegram stream finalization, clarify sendMaxRetries semantics, and align the docs with the behavior the system actually guarantees.
Read the default timezone from the agent context when wiring the cron tool so startup no longer depends on an out-of-scope local variable. Add a regression test to ensure AgentLoop passes the configured timezone through to cron.
Made-with: Cursor