* feat(dream): enhance memory cleanup with staleness detection
- Phase 1: add [FILE-REMOVE] directive and staleness patterns (14-day
threshold, completed tasks, superseded info, resolved tracking)
- Phase 2: add explicit cleanup rules, file paths section, and deletion
guidance to prevent LLM path confusion
- Inject current date and file sizes into Phase 1 context for age-aware
analysis
- Add _dream_debug() helper for observability (dream-debug.log in workspace)
- Log Phase 1 analysis output and Phase 2 tool events for debugging
Tested with glm-5-turbo: across two rounds MEMORY.md was reduced from 149 to
108-129 lines, and the analysis correctly identified and removed weather data,
detailed incident info, completed research, and stale discussions.
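Roughly, the injected age-aware header could look like this (a minimal sketch; `build_phase1_context` and the workspace layout are assumptions, not the shipped code):

```python
# Sketch only: prepend today's date and per-file sizes to the Phase 1 context
# so the model can judge staleness by age rather than guessing.
from datetime import date
from pathlib import Path

def build_phase1_context(workspace: Path) -> str:
    lines = [f"Current date: {date.today().isoformat()}"]
    for path in sorted(workspace.glob("*.md")):
        text = path.read_text(encoding="utf-8")
        lines.append(f"{path.name}: {len(text.splitlines())} lines, {path.stat().st_size} bytes")
    return "\n".join(lines)
```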
* refactor(dream): replace _dream_debug file logger with loguru
Remove the custom _dream_debug() helper that wrote to dream-debug.log
and use the existing loguru logger instead. Phase 1 analysis is logged
at debug level, tool events at info level — consistent with the rest
of the codebase and no extra log file to manage.
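The replacement calls amount to something like this (sketch; variable names are illustrative):

```python
from loguru import logger

# Phase 1 analysis at debug level, tool events at info level.
logger.debug("Dream Phase 1 analysis:\n{}", analysis_text)
logger.info("Dream Phase 2 tool call: {} {}", tool_name, tool_args)
```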
* fix(dream): make stale scan independent of conversation history
Reframe Phase 1 from a single comparison task to two independent
tasks: history diff AND proactive stale scan. The LLM was skipping
stale content that wasn't referenced in conversation history (e.g.
old triage snapshots). Now explicitly requires scanning memory files
for staleness patterns on every run.
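Illustrative shape of the reframed Phase 1 instruction (not the shipped prompt wording):

```python
# Sketch of the two-task framing; the actual prompt text differs.
PHASE1_TASKS = """\
You have TWO independent tasks on every run:
1. History diff: compare the conversation history against the memory files and
   propose [FILE] additions for anything new.
2. Stale scan: independently scan every memory file for staleness patterns
   (entries older than 14 days, completed tasks, superseded info, resolved
   tracking items) and propose [FILE-REMOVE] for each match, even if the
   content never appears in the conversation history.
"""
```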
* fix(dream): correct old_text param name and truncate debug log
- Phase 2 prompt: old_string -> old_text to match EditFileTool interface
- Phase 1 debug log: truncate analysis to 500 chars to avoid oversized lines
* refactor(dream): streamline prompts by separating concerns
Phase 1 owns all staleness judgment logic; Phase 2 is pure execution
guidance. Remove duplicated cleanup rules from Phase 2 since Phase 1
already determines what to add/remove. Fix remaining old_string -> old_text.
Total prompt size reduced ~45% (870 -> 480 tokens).
* fix(dream): add FILE-REMOVE execution guidance to Phase 2 prompt
Phase 2 was only processing [FILE] additions and ignoring [FILE-REMOVE]
deletions after the cleanup rules were removed. Add explicit mapping:
[FILE] → add content, [FILE-REMOVE] → delete content.
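The added mapping reads roughly like this (illustrative wording; only edit_file and old_text come from the surrounding commits):

```python
# Sketch of the Phase 2 execution guidance, not the shipped prompt text.
PHASE2_DIRECTIVE_GUIDE = """\
For every directive in the Phase 1 analysis:
- [FILE] <path>: ADD the quoted content to that file.
- [FILE-REMOVE] <path>: DELETE the quoted content from that file, e.g. by
  calling edit_file with old_text set to the exact text being removed.
"""
```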
PyJWT and cryptography are optional msteams deps; they should not be
bundled into the generic dev install. Tests now skip the entire file
when the deps are missing, following the dingtalk pattern.
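The module-level guard follows the usual pytest pattern (the test module itself is not shown):

```python
import pytest

# Skip the entire test module when the optional msteams extras are not installed.
jwt = pytest.importorskip("jwt")                   # PyJWT's import name is "jwt"
cryptography = pytest.importorskip("cryptography")
```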
Dream Phase 2 uses fail_on_tool_error=True, which terminates the entire
run on the first tool error (e.g. old_text not found in edit_file).
Normal agent runs default to False so the LLM can self-correct and retry.
Dream should behave the same way.
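The change boils down to dropping the override (sketch; the surrounding run call is an assumption):

```python
# Before (sketch): the first failed tool call aborted the whole dream run.
# result = await agent.run(phase2_prompt, fail_on_tool_error=True)

# After: inherit the normal default (False) so the LLM can see the tool error
# and retry, e.g. when old_text does not match the file contents exactly.
result = await agent.run(phase2_prompt)
```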
The streaming path in OpenAICompatProvider.chat_stream() was passing
reasoning_content deltas through on_content_delta(), causing the model's
internal reasoning to be displayed to the user alongside the actual
response content.
reasoning_content is already collected separately in _parse_chunks()
and stored in LLMResponse.reasoning_content for session history.
It should never be forwarded to the user-facing stream.
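The fix is to forward only real content deltas (sketch; the chunk shape follows the OpenAI-compatible streaming format, the surrounding provider code is assumed):

```python
def forward_deltas(stream, on_content_delta):
    reasoning_parts: list[str] = []
    for chunk in stream:
        delta = chunk.choices[0].delta
        reasoning = getattr(delta, "reasoning_content", None)
        if reasoning:
            reasoning_parts.append(reasoning)   # kept only for LLMResponse.reasoning_content
        if delta.content:
            on_content_delta(delta.content)     # only user-visible response text reaches the stream
    return "".join(reasoning_parts)
```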
Feishu's GetMessageResource API only accepts 'image' or 'file' as the
type parameter. Video messages have msg_type='media', which was passed
through unchanged, causing error 234001 (Invalid request param). Now
both 'audio' and 'media' are converted to 'file' for download.
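A minimal sketch of the normalization, assuming it happens right before the download call:

```python
def resource_type_for(msg_type: str) -> str:
    """Map a Feishu msg_type onto a value GetMessageResource accepts ('image' or 'file')."""
    return "file" if msg_type in ("audio", "media") else msg_type
```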
- _add_reaction now returns reaction_id on success
- Add _remove_reaction_sync and _remove_reaction methods
- Remove reaction when stream ends to clear processing indicator
- Store reaction_id in metadata for later removal
Fixes #2591
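Roughly, the lifecycle described above (sketch; the streaming call and the emoji are illustrative, only _add_reaction/_remove_reaction come from this change):

```python
async def _reply_with_indicator(self, message_id: str, metadata: dict) -> None:
    reaction_id = await self._add_reaction(message_id, "eyes")   # processing indicator
    if reaction_id:
        metadata["reaction_id"] = reaction_id                    # stored for later removal
    try:
        await self._stream_reply(message_id)                     # hypothetical streaming send
    finally:
        stored = metadata.pop("reaction_id", None)
        if stored:
            await self._remove_reaction(message_id, stored)      # clear the indicator at stream end
```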
The "nanobot is thinking..." spinner was printing ANSI escape codes
literally in some terminals, causing garbled output like:
?[2K?[32m⠧?[0m ?[2mnanobot is thinking...?[0m
Root causes:
1. Console created without force_terminal=True, so Rich couldn't
reliably detect terminal capabilities
2. Spinner continued running during user input prompt, conflicting
with prompt_toolkit
Changes:
- Set force_terminal=True in _make_console() for proper ANSI handling
- Add stop_for_input() method to StreamRenderer
- Call stop_for_input() before reading user input in interactive mode
- Add tests for the new functionality
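A sketch of the two changes above, assuming StreamRenderer keeps the rich Status object on self._status:

```python
from rich.console import Console

def _make_console() -> Console:
    # force_terminal=True makes Rich emit and manage ANSI codes itself instead
    # of guessing terminal capabilities, which is what produced the garbled output.
    return Console(force_terminal=True)

class StreamRenderer:
    def __init__(self, console: Console) -> None:
        self.console = console
        self._status = None   # rich Status while the spinner is visible

    def stop_for_input(self) -> None:
        """Stop the 'thinking' spinner before prompt_toolkit takes over stdin."""
        if self._status is not None:
            self._status.stop()
            self._status = None
```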
Enable GPT-5 models (gpt-5, gpt-5.4, gpt-5.4-mini, etc.) to work
correctly with the OpenAI-compatible provider by:
- Setting `supports_max_completion_tokens=True` on the OpenAI provider
spec so `max_completion_tokens` is sent instead of the deprecated
`max_tokens` parameter that GPT-5 rejects.
- Adding `_supports_temperature()` to conditionally omit the
`temperature` parameter for reasoning models (o1/o3/o4) and when
`reasoning_effort` is active, matching the existing Azure provider
behaviour.
Both changes are backward-compatible: older GPT-4 models continue to
work as before since `max_completion_tokens` is accepted by all recent
OpenAI models and temperature is only omitted when reasoning is active.
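A sketch of the parameter selection under those rules (method names follow the text above, the rest is illustrative):

```python
def _supports_temperature(self, model: str, reasoning_effort: str | None) -> bool:
    # Reasoning models (o1/o3/o4) and requests with reasoning_effort reject temperature.
    if reasoning_effort:
        return False
    return not model.startswith(("o1", "o3", "o4"))

def _build_params(self, model: str, max_tokens: int, temperature: float,
                  reasoning_effort: str | None) -> dict:
    params: dict = {"model": model}
    # supports_max_completion_tokens=True on the provider spec: send the newer
    # parameter instead of the deprecated max_tokens that GPT-5 rejects.
    params["max_completion_tokens"] = max_tokens
    if self._supports_temperature(model, reasoning_effort):
        params["temperature"] = temperature
    return params
```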
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The test test_openai_compat_strips_message_level_reasoning_fields was
added in fbedf7a and incorrectly asserted that reasoning_content and
extra_content should be stripped from messages. This contradicts the
intent of b5302b6 which explicitly added these fields to _ALLOWED_MSG_KEYS
to preserve them through sanitization.
Rename the test and fix assertions to match the original design intent:
reasoning_content and extra_content at message level should be preserved,
and extra_content inside tool_calls should also be preserved.
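Sketch of the corrected expectations (sanitize_messages is a stand-in for whatever entry point the real test exercises):

```python
def test_openai_compat_preserves_message_level_reasoning_fields():
    msg = {
        "role": "assistant",
        "content": "hi",
        "reasoning_content": "internal chain of thought",
        "extra_content": {"provider_specific": True},
        "tool_calls": [{"id": "1", "type": "function", "extra_content": {"keep": "me"}}],
    }
    out = sanitize_messages([msg])[0]
    assert out["reasoning_content"] == "internal chain of thought"   # preserved, not stripped
    assert out["extra_content"] == {"provider_specific": True}
    assert out["tool_calls"][0]["extra_content"] == {"keep": "me"}   # preserved inside tool_calls too
```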
Signed-off-by: Lingao Meng <menglingao@xiaomi.com>
reasoning_content and extra_content were accidentally dropped from
_ALLOWED_MSG_KEYS.
Also fix session/manager.py to include reasoning_content when building
LLM messages from session history, so the field is not lost across
turns.
Without this fix, providers such as Kimi that emit reasoning_content in
assistant messages will have it stripped on the next request, breaking
multi-turn thinking mode.
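Sketch of both pieces (every key except the two being restored, and the shape of the manager helper, are assumptions):

```python
# Allowed keys in sanitized messages; only the last two are what this fix restores.
_ALLOWED_MSG_KEYS = {
    "role", "content", "name", "tool_calls", "tool_call_id",
    "reasoning_content", "extra_content",
}

def _history_entry_to_llm_message(entry: dict) -> dict:
    """session/manager.py sketch: carry reasoning_content across turns."""
    msg = {"role": entry["role"], "content": entry["content"]}
    if entry.get("reasoning_content"):
        msg["reasoning_content"] = entry["reasoning_content"]   # not lost on the next request
    return msg
```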
Fixes: https://github.com/HKUDS/nanobot/issues/2777
Signed-off-by: Lingao Meng <menglingao@xiaomi.com>
Use dream_log and dream_restore in Telegram's bot command menu so command registration succeeds: Telegram bot commands may only contain lowercase letters, digits, and underscores, so the hyphenated dream-log and dream-restore names are rejected. The hyphenated forms are still accepted in chat, the internal command routing stays unchanged, and coverage is added for the alias normalization path.
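A sketch of the alias normalization path (function and table names are illustrative):

```python
COMMAND_ALIASES = {"dream_log": "dream-log", "dream_restore": "dream-restore"}

def normalize_command(raw: str) -> str:
    """Map the underscore forms registered with Telegram back to the internal names."""
    name = raw.lstrip("/").split("@", 1)[0]          # '/dream_log@MyBot' -> 'dream_log'
    return COMMAND_ALIASES.get(name, name)
```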
- Added Jinja2 template support for various agent responses, including identity, skills, and memory consolidation.
- Introduced new templates for evaluating notifications, handling subagent announcements, and managing platform policies.
- Updated the agent context and memory modules to utilize the new templating system for improved readability and maintainability.
- Added a new dependency on Jinja2 in pyproject.toml.
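A sketch of how a template-backed prompt could be rendered (template names and the loader path are assumptions; only the Jinja2 dependency is from this change):

```python
from jinja2 import Environment, PackageLoader

env = Environment(
    loader=PackageLoader("nanobot", "templates"),  # hypothetical package/template dir
    autoescape=False,        # prompts are plain text, not HTML
    trim_blocks=True,
    lstrip_blocks=True,
)

identity_prompt = env.get_template("identity.j2").render(
    agent_name="nanobot",
    skills=["memory consolidation", "notifications"],
)
```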