Add `self_modify: bool = True` config option. When disabled, the self
tool operates in read-only mode — inspect, list_tools, and list_snapshots
still work, but modify, call, manage_tool, snapshot, restore, and reset
are blocked. Provides a safer middle ground between full self_evolution
and disabling the tool entirely.
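The gate described above can be sketched as follows. This is a minimal illustration, not the real API: `SelfToolConfig` and `check_operation` are assumed names, while the operation lists come from the commit text.

```python
from dataclasses import dataclass

# Hypothetical sketch: operation names are from the commit text; the
# gating helper and config class are illustrative, not the real API.
READ_ONLY_OPS = {"inspect", "list_tools", "list_snapshots"}
MUTATING_OPS = {"modify", "call", "manage_tool", "snapshot", "restore", "reset"}

@dataclass
class SelfToolConfig:
    self_modify: bool = True  # when False, only read-only operations run

def check_operation(config: SelfToolConfig, op: str) -> None:
    """Raise if a mutating operation is attempted in read-only mode."""
    if op in MUTATING_OPS and not config.self_modify:
        raise PermissionError(f"self tool is read-only: {op!r} is blocked")
```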
Track subagent progress via SubagentStatus dataclass that captures
phase, iteration, tool events, usage, and errors. Wire through
checkpoint_callback for phase transitions and _SubagentHook.after_iteration
for per-iteration updates. Enhance SelfTool inspect to display rich
subagent status via dot-path navigation.
Keep late follow-up injections observable when they are drained during max-iteration shutdown, so that loop-level response suppression still makes the right decision.

Made-with: Cursor
- Migrate "after tools" inline drain to use _try_drain_injections,
completing the refactoring (all 6 drain sites now use the helper).
- Move checkpoint emission into _try_drain_injections via optional
iteration parameter, eliminating the leaky split between helper
and caller for the final-response path.
- Extract _make_injection_callback() test helper to replace 7
identical inject_cb function bodies.
- Add test_injection_cycle_cap_on_error_path to verify the cycle
cap is enforced on error exit paths.
When the agent runner exits due to LLM error, tool error, empty response,
or max_iterations, it breaks out of the iteration loop without draining
the pending injection queue. This causes leftover messages to be
re-published as independent inbound messages, resulting in duplicate or
confusing replies to the user.
Extract the injection drain logic into a `_try_drain_injections` helper
and call it before each break in the error/edge-case paths. If injections
are found, continue the loop instead of breaking. For max_iterations
(where the loop is exhausted), drain injections to prevent re-publish
without continuing.
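The drain-before-break pattern can be sketched like this; the queue, message shapes, and loop body are simplified stand-ins for the real runner internals:

```python
from collections import deque

def _try_drain_injections(pending: deque, messages: list) -> bool:
    """Move queued injections into the LLM context; True if any were found."""
    drained = False
    while pending:
        messages.append({"role": "user", "content": pending.popleft()})
        drained = True
    return drained

def run_loop(pending: deque, max_iterations: int = 3) -> list:
    messages: list = []
    for _ in range(max_iterations):
        error = True  # pretend this iteration hit a tool/LLM error
        if error:
            if _try_drain_injections(pending, messages):
                continue  # injections found: keep looping instead of breaking
            break
    else:
        # max_iterations exhausted: drain so nothing is re-published later
        _try_drain_injections(pending, messages)
    return messages
```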
Prevent proactive compaction from archiving sessions that have an
in-flight agent task, avoiding mid-turn context truncation when a
task runs longer than the idle TTL.
Track text-only user messages that were flushed before the turn loop completes, then materialize an interrupted-assistant placeholder on the next request. This keeps session history legal and ensures later turns do not skip their own assistant reply.

Made-with: Cursor
Use session.add_message for the pre-turn user-message flush and add focused regression tests for crash-time persistence and duplicate-free successful saves.
Made-with: Cursor
Point Dream skill creation at a readable builtin skill-creator template, keep skill writes rooted at the workspace, and document the new skill discovery behavior in README.
Made-with: Cursor
* feat(agent): add mid-turn message injection for responsive follow-ups
Allow user messages sent during an active agent turn to be injected
into the running LLM context instead of being queued behind a
per-session lock. Inspired by Claude Code's mid-turn queue drain
mechanism (query.ts:1547-1643).
Key design decisions:
- Messages are injected as natural user messages between iterations,
no tool cancellation or special system prompt needed
- Two drain checkpoints: after tool execution and after final LLM
response ("last-mile" to prevent dropping late arrivals)
- Bounded by MAX_INJECTION_CYCLES (5) to prevent consuming the
iteration budget on rapid follow-ups
- had_injections flag bypasses _sent_in_turn suppression so follow-up
responses are always delivered
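The bounded drain described above might look roughly like this; the queue type and context shape are assumptions, while MAX_INJECTION_CYCLES and the had_injections flag mirror the commit:

```python
import queue

MAX_INJECTION_CYCLES = 5  # bound from the commit text

def drain_with_cap(pending: "queue.Queue[str]", context: list) -> bool:
    """Inject pending user messages between iterations, up to the cycle cap.

    Returns had_injections, which the caller uses to bypass the
    _sent_in_turn suppression so follow-up responses are delivered.
    """
    had_injections = False
    cycles = 0
    while not pending.empty() and cycles < MAX_INJECTION_CYCLES:
        context.append({"role": "user", "content": pending.get_nowait()})
        had_injections = True
        cycles += 1
    return had_injections
```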
Closes #1609
* fix(agent): harden mid-turn injection with streaming fix, bounded queue, and message safety
- Fix streaming protocol violation: Checkpoint 2 now checks for injections
BEFORE calling on_stream_end, passing resuming=True when injections found
so streaming channels (Feishu) don't prematurely finalize the card
- Bound pending queue to maxsize=20 with QueueFull handling
- Add warning log when injection batch exceeds _MAX_INJECTIONS_PER_TURN
- Re-publish leftover queue messages to bus in _dispatch finally block to
prevent silent message loss on early exit (max_iterations, tool_error, cancel)
- Fix PEP 8 blank line before dataclass and logger.info indentation
- Add 12 new tests covering drain, checkpoints, cycle cap, queue routing,
cleanup, and leftover re-publish
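The bounded-queue handling can be sketched as below; the helper names are assumptions, the maxsize=20 bound and QueueFull fallback mirror the commit:

```python
import asyncio

_MAX_PENDING = 20  # queue bound from the commit text

def make_pending_queue() -> asyncio.Queue:
    return asyncio.Queue(maxsize=_MAX_PENDING)

def try_enqueue(q: asyncio.Queue, msg: str) -> bool:
    """Enqueue without blocking; False signals the caller to re-publish
    the message to the bus instead of dropping it silently."""
    try:
        q.put_nowait(msg)
        return True
    except asyncio.QueueFull:
        return False
```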
Prefer the more user-friendly idleCompactAfterMinutes name for auto compact while keeping sessionTtlMinutes as a backward-compatible alias. Update tests and README to document the retained recent-context behavior and the new preferred key.
Keep a legal recent suffix in idle auto-compacted sessions so resumed chats preserve their freshest live context while older messages are summarized. Recover persisted summaries even when retained messages remain, and document the new behavior.
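The alias resolution from the renaming above can be sketched as a simple precedence check; the default value here is illustrative, not the project's actual default:

```python
def resolve_idle_compact_minutes(raw: dict, default: int = 60) -> int:
    """Prefer idleCompactAfterMinutes; fall back to the legacy
    sessionTtlMinutes alias, then to a default (illustrative value)."""
    if "idleCompactAfterMinutes" in raw:
        return raw["idleCompactAfterMinutes"]
    if "sessionTtlMinutes" in raw:  # backward-compatible alias
        return raw["sessionTtlMinutes"]
    return default
```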
Make Consolidator.archive() return the summary string directly instead
of writing to history.jsonl then reading back via get_last_history_entry().
This eliminates a race condition where concurrent _archive calls for
different sessions could read each other's summaries from the shared
history file (cross-user context leak in multi-user deployments).
Also removes Consolidator.get_last_history_entry() — no longer needed.
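The shape of the fix can be contrasted like this; summarization and the shared history store are placeholders for the real Consolidator and history.jsonl:

```python
shared_history: list[str] = []  # stands in for the shared history.jsonl

def archive_racy(messages: list[str]) -> str:
    summary = "summary of " + messages[0]  # placeholder summarization
    shared_history.append(summary)
    # Racy: another session's archive may append between this write
    # and the read-back, leaking its summary across sessions.
    return shared_history[-1]

def archive_fixed(messages: list[str]) -> str:
    summary = "summary of " + messages[0]  # placeholder summarization
    shared_history.append(summary)  # still persisted for history browsing
    return summary  # returned directly: no cross-session read-back
```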
When a user is idle for longer than a configured TTL, nanobot **proactively** compresses the session context into a summary. This reduces token cost and first-token latency when the user returns — instead of re-processing a long stale context with an expired KV cache, the model receives a compact summary and fresh input.
Keep tool-call assistant messages valid across provider sanitization and avoid trailing user-only history after model errors. This prevents follow-up requests from sending broken tool chains back to the gateway.
- Adjusted message handling in AgentRunner to ensure that historical messages remain unchanged during context governance.
- Introduced tests to verify that backfill operations do not alter the saved message boundary, maintaining the integrity of the conversation history.
- Merged latest main (no conflicts)
- Added test_llm_error_not_appended_to_session_messages: verifies error
content stays out of session messages
- Added test_streamed_flag_not_set_on_llm_error: verifies _streamed is
not set when LLM returns an error, so ChannelManager delivers it
Made-with: Cursor
When the LLM returns an error (e.g. 429 quota exceeded, stream timeout),
streaming channels silently drop the error message because `_streamed=True`
is set in metadata even though no content was actually streamed.
This change:
- Skips setting `_streamed` when stop_reason is "error", so error messages
go through the normal channel.send() path and reach the user
- Stops appending error content to session history, preventing error
messages from polluting subsequent conversation context
- Exposes stop_reason from _run_agent_loop to enable the above check
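The check described above reduces to something like the following sketch; the metadata dict and session list are simplified stand-ins for the real runner state:

```python
def finalize_turn(metadata: dict, stop_reason: str, content: str,
                  session_messages: list) -> None:
    """Skip the streamed flag and history append when the LLM errored."""
    if stop_reason != "error":
        metadata["_streamed"] = True      # content was actually streamed
        session_messages.append(content)  # errors stay out of history
    # On error: _streamed is left unset, so ChannelManager falls back to
    # the normal channel.send() path and the error reaches the user.
```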
Introduce a disabled_skills option in the config schema that allows
users to specify a list of skill names to be excluded. The setting is
threaded from config through Nanobot -> AgentLoop -> ContextBuilder ->
SkillsLoader. Disabled skills are filtered out from list_skills,
get_always_skills, and build_skills_summary. Four new test cases cover
the filtering behavior.
Resolved conflict in azure_openai_provider.py by keeping main's
Responses API implementation (role alternation not needed for the
Responses API input format).
Made-with: Cursor
Preserve path folding for quoted exec command paths with spaces so hint previews do not fall back to mid-path truncation. Add regression coverage for quoted Unix and Windows path cases.
Made-with: Cursor
1. exec tool hints previously used val[:40] blind character truncation,
cutting paths mid-segment. Now detects file paths via regex and
abbreviates them with abbreviate_path. Supports Windows, Unix
absolute, and ~/ home paths.
2. Deduplication now compares fully formatted hint strings instead of
   tool names alone. Fixes `ls /Desktop` and `ls /Downloads` being
   incorrectly merged as "ls /Desktop × 2".
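Illustrative versions of the two fixes follow; the regex, the keep count, and the hint format are assumptions, not the real tool_hints implementation:

```python
import re
from collections import Counter

_PATH_RE = re.compile(r"^(~|/|[A-Za-z]:\\)")  # ~/, Unix absolute, Windows

def abbreviate_path(path: str, keep: int = 2) -> str:
    """Fold the middle of a long path instead of cutting mid-segment."""
    sep = "\\" if "\\" in path else "/"
    parts = path.split(sep)
    if len(parts) <= keep + 1:
        return path
    return sep.join([parts[0], "…"] + parts[-keep:])

def format_hints(hints: list[tuple[str, str]]) -> list[str]:
    formatted = []
    for tool, val in hints:
        if _PATH_RE.match(val):
            val = abbreviate_path(val)
        formatted.append(f"{tool} {val}")
    counts = Counter(formatted)  # dedup on the full hint, not tool name
    return [f"{h} × {n}" if n > 1 else h for h, n in counts.items()]
```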
Co-authored-by: xzq.xu <zhiqiang.xu@nodeskai.com>
- Move _tool_hint implementation from loop.py to nanobot/utils/tool_hints.py
- Keep thin delegation in AgentLoop._tool_hint for backward compat
- Update test imports to test format_tool_hints directly
Made-with: Cursor
- Introduced a helper method `_for_each_hook_safe` to reduce code duplication in hook method implementations.
- Updated error logging to include the method name for better traceability.
- Improved the `SkillsLoader` class by adding a new method `_skill_entries_from_dir` to simplify skill listing logic.
- Enhanced skill loading and filtering logic, ensuring workspace skills take precedence over built-in ones.
- Added comprehensive tests for `SkillsLoader` to validate functionality and edge cases.
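The hook deduplication helper might look roughly like this; the class and method names beyond `_for_each_hook_safe` are illustrative:

```python
import logging

logger = logging.getLogger(__name__)

class HookRunner:
    def __init__(self, hooks: list):
        self._hooks = hooks

    def _for_each_hook_safe(self, method_name: str, *args) -> None:
        """Shared safe-iteration loop; logs the method name on failure."""
        for hook in self._hooks:
            method = getattr(hook, method_name, None)
            if method is None:
                continue
            try:
                method(*args)
            except Exception:
                logger.exception("hook %r failed in %s", hook, method_name)

    def after_iteration(self, iteration: int) -> None:
        self._for_each_hook_safe("after_iteration", iteration)
```

One failing hook no longer prevents the remaining hooks from running, and the log line carries the method name for traceability.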
Dream Phase 2 uses fail_on_tool_error=True, which terminates the entire
run on the first tool error (e.g. old_text not found in edit_file).
Normal agent runs default to False so the LLM can self-correct and retry.
Dream should behave the same way.
- Add GitStore class wrapping dulwich for memory file versioning
- Auto-commit memory changes during Dream consolidation
- Add /dream-log and /dream-restore commands for history browsing
- Pass tracked_files as constructor param, generate .gitignore dynamically