Replace fixed sleep-based waits with condition polling in cron tests and mock the restart delay in CLI restart tests to reduce suite runtime without changing behavior.
When the Responses API fails repeatedly (3 consecutive compatibility
errors), skip it and fall back directly to Chat Completions. Unlike a
permanent disable, the circuit re-probes after 5 minutes so recovery
is automatic when the API comes back. Success resets the counter.
Keyed per (model, reasoning_effort) so a failure with one model does
not affect others.
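The shape of that circuit, as a minimal sketch (class name, method names, and state layout are illustrative, not the actual implementation; thresholds mirror the text above):

```python
import time

FAILURE_THRESHOLD = 3   # consecutive compatibility errors before skipping
REPROBE_AFTER = 5 * 60  # seconds before re-probing the Responses API

class ResponsesCircuit:
    def __init__(self):
        # (model, reasoning_effort) -> (consecutive_failures, last_failure_time)
        self._state = {}

    def is_open(self, model, effort, now=None):
        # "Open" means: skip Responses and go straight to Chat Completions.
        now = time.monotonic() if now is None else now
        failures, opened_at = self._state.get((model, effort), (0, 0.0))
        if failures < FAILURE_THRESHOLD:
            return False
        # Still open only while the cooldown lasts; afterwards allow a probe.
        return (now - opened_at) < REPROBE_AFTER

    def record_failure(self, model, effort, now=None):
        now = time.monotonic() if now is None else now
        failures, _ = self._state.get((model, effort), (0, 0.0))
        self._state[(model, effort)] = (failures + 1, now)

    def record_success(self, model, effort):
        self._state.pop((model, effort), None)  # success resets the counter
```

Because the key includes both model and reasoning_effort, tripping the breaker for one combination leaves all others on the fast path.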
- Track last_summary in maybe_consolidate_by_tokens() to persist the summary
- Change return to break in the consolidation loop to allow summary persistence
- Save summary to session.metadata['_last_summary'] for consistency with AutoCompact._archive()
- Ensures compressed content remains visible to the model via prepare_session() injection
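In sketch form, the break-vs-return change looks like this (the real function's signature differs; the dict-based session here is a stand-in so the control flow is visible):

```python
def maybe_consolidate_by_tokens(session: dict, summarize, budget: int) -> None:
    # Stand-in sketch: `session` is a plain dict here, not the real object.
    last_summary = None
    while session["tokens"] > budget and session["messages"]:
        chunk = session["messages"].pop(0)
        last_summary = summarize(chunk, last_summary)
        session["tokens"] -= chunk["tokens"]
        if not session["messages"]:
            break  # was `return`: the persistence step below never ran
    # Persist like AutoCompact._archive() does, so prepare_session()
    # can re-inject the compressed content on the next turn.
    if last_summary is not None:
        session["metadata"]["_last_summary"] = last_summary
```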
Fixes #3274
The old test `test_make_console_uses_force_terminal` asserted a
hardcoded `force_terminal is True`, which contradicts the fix: we now defer
to sys.stdout.isatty() so piped / non-TTY output gets plain text
instead of ANSI escape codes.
Split into two tests covering both branches:
- test_make_console_force_terminal_when_stdout_is_tty: TTY path
(force_terminal=True, rich output)
- test_make_console_force_terminal_false_when_stdout_is_not_tty:
non-TTY path (force_terminal=False, plain text) — regression
guard for the bug reported in #3265
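In spirit, the split looks like this (`make_console_kwargs` is a simplified stand-in for the real factory, which builds a rich Console from these kwargs):

```python
import sys
from unittest.mock import patch

def make_console_kwargs() -> dict:
    # Stand-in for the real factory: force_terminal now follows the
    # stream's own TTY detection instead of a hardcoded True.
    return {"force_terminal": sys.stdout.isatty()}

def test_make_console_force_terminal_when_stdout_is_tty():
    with patch.object(sys.stdout, "isatty", return_value=True):
        assert make_console_kwargs()["force_terminal"] is True

def test_make_console_force_terminal_false_when_stdout_is_not_tty():
    with patch.object(sys.stdout, "isatty", return_value=False):
        assert make_console_kwargs()["force_terminal"] is False
```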
Co-authored with Claude Opus 4.7
The global guard changed baseline agent and subagent behavior without
proving a real no-progress loop. Keep this PR focused on the cron
contract hardening and validation fixes.
Made-with: Cursor
Treat `.git` files the same as `.git` directories so GitStore refuses to initialize inside git worktrees, and add a focused regression test for that checkout shape.
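One way to sketch the check (the helper name is hypothetical; the real GitStore logic may walk parent directories differently):

```python
from pathlib import Path

def inside_git_checkout(path: Path) -> bool:
    # A `.git` directory marks a normal repository; a `.git` *file*
    # (containing "gitdir: <path>") marks a linked worktree. Both mean
    # "already inside a git checkout", so GitStore should refuse here.
    for candidate in (path, *path.parents):
        if (candidate / ".git").exists():  # True for files and directories alike
            return True
    return False
```

The key point is `.exists()` rather than `.is_dir()`: a worktree's `.git` is a regular file, and an `is_dir()` check would sail right past it.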
Made-with: Cursor
GitStore.init() now checks if the workspace is already inside a git
repository before calling porcelain.init(). If so, it refuses to create
a nested repo. Additionally, existing .gitignore files are preserved
by appending only missing Dream-specific entries rather than overwriting.
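The append-only-missing behavior can be sketched like this (the entry list and helper name are illustrative, not the real Dream entries):

```python
from pathlib import Path

DREAM_ENTRIES = [".dream_cache/", "*.tmp"]  # illustrative, not the real list

def merge_gitignore(workspace: Path, entries=DREAM_ENTRIES) -> None:
    # Preserve a user's existing .gitignore: append only the entries
    # that are not already present instead of overwriting the file.
    gitignore = workspace / ".gitignore"
    text = gitignore.read_text() if gitignore.exists() else ""
    existing = set(text.splitlines())
    missing = [e for e in entries if e not in existing]
    if missing:
        prefix = "" if (not text or text.endswith("\n")) else "\n"
        gitignore.write_text(text + prefix + "\n".join(missing) + "\n")
```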
Closes #2980
Add structured issue templates for bug reports and feature requests,
with dropdown menus for channel, LLM provider, Python version, and OS.
Redirect questions to Discussions. Add PR template with checklist.
Ref: https://github.com/HKUDS/nanobot/discussions/3284
Literal["standard", "persistent"] fields are now rendered as select
dropdowns instead of free-text input. This makes provider_retry_mode
and any future Literal fields self-documenting in the wizard.
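The rendering decision reduces to typing introspection, roughly like this (`render_field` is a hypothetical stand-in for the wizard's field renderer):

```python
from typing import Literal, get_args, get_origin

def render_field(name: str, annotation) -> str:
    # Literal[...] annotations become a numbered select menu; anything
    # else falls back to the old free-text prompt.
    if get_origin(annotation) is Literal:
        options = "\n".join(
            f"  {i}) {choice}" for i, choice in enumerate(get_args(annotation), 1)
        )
        return f"{name} (select one):\n{options}"
    return f"{name}: "
```

For example, `render_field("provider_retry_mode", Literal["standard", "persistent"])` lists both allowed values, so the wizard never has to document them separately.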
- Add [H] Channel Common menu to configure send_progress, send_tool_hints,
send_max_retries, and transcription_provider
- Add [I] API Server menu to configure host, port, timeout
- Add real-time Pydantic field constraint validation (ge/gt/le/lt/min_length/max_length)
with constraint hints shown in field display (e.g. "Send Max Retries (0-10)")
- Add _pause() to View Configuration Summary to prevent immediate screen clear
- Fix _format_value dict branch to handle BaseModel instances without crashing
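The constraint-hint rendering might look roughly like this (the ge/gt/le/lt/min_length/max_length values are passed as a plain dict here; the real code reads them off Pydantic field metadata):

```python
def constraint_hint(meta: dict) -> str:
    # Render numeric/length bounds as a short display hint,
    # e.g. {"ge": 0, "le": 10} -> "(0-10)" for "Send Max Retries (0-10)".
    if "ge" in meta and "le" in meta:
        return f"({meta['ge']}-{meta['le']})"
    labels = {"gt": ">", "ge": ">=", "lt": "<", "le": "<=",
              "min_length": "len >=", "max_length": "len <="}
    parts = [f"{op} {meta[key]}" for key, op in labels.items() if key in meta]
    return f"({', '.join(parts)})" if parts else ""
```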
Move all behavioral instructions out of identity.md into SOUL.md so that
each file has a single clear purpose:
- identity.md: capability facts only (runtime, workspace, format hints,
tool guidance, untrusted content warning)
- SOUL.md: behavioral rules (name, personality, execution rules)
The "Act, don't narrate" rule is refined into layered behavior: act
immediately on single-step tasks, plan first for multi-step tasks. This
eliminates the contradiction where identity said "never end with a plan"
but user SOUL.md said "always plan first".
Add two focused regression tests for the retry-wait leak this PR fixes:
- tests/agent/test_runner.py::test_runner_binds_on_retry_wait_to_retry_callback_not_progress
locks in that `AgentRunSpec.retry_wait_callback` (not `progress_callback`) is
what `_build_request_kwargs` forwards to the provider as `on_retry_wait`.
- tests/channels/test_channel_manager_delta_coalescing.py::TestRetryWaitFiltering
runs `_dispatch_outbound` end-to-end and asserts that `_retry_wait: True`
messages never reach channel send.
Both tests fail on origin/main and pass with this PR's fix applied.
Made-with: Cursor
- Add inline rationale for persisting before ContextBuilder and for
passing current_message="" on subagent follow-ups (avoids
double-projection after merge).
- Skip persistence for empty subagent content (no-op messages should
not pollute history).
- Add regression test covering the empty-content guard.
Made-with: Cursor
MyTool blocks direct access to sensitive nested paths, but its formatter
still printed scalar fields for small config objects. That let
`my(action="check", key="web_config.search")` expose `api_key` in plain
text even though the docs promise sensitive sub-fields are protected.
This keeps the change narrow: sensitive nested config fields are omitted
from MyTool's formatted output, and regression coverage locks the
behavior in.
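The formatter-boundary filter amounts to something like this (the sensitive-name set and function name are illustrative; the real formatter handles more than nested dicts):

```python
SENSITIVE_FIELD_NAMES = {"api_key", "token", "secret", "password"}  # illustrative

def format_config(obj: dict, path: str = "") -> list:
    # Omit sensitive field names at the formatter boundary, mirroring
    # the protection the path resolver already applies, so e.g.
    # web_config.search.api_key never reaches formatted output.
    lines = []
    for key, value in obj.items():
        full = f"{path}.{key}" if path else key
        if key in SENSITIVE_FIELD_NAMES:
            continue  # previously: scalar fields of small objects leaked here
        if isinstance(value, dict):
            lines.extend(format_config(value, full))
        else:
            lines.append(f"{full}: {value}")
    return lines
```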
Constraint: Must preserve existing read-only inspection behavior for non-sensitive fields
Constraint: Keep scope limited to MyTool rather than introducing broader redaction plumbing
Rejected: Rework global context/tool redaction around MyTool | broader than needed for the leak path
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: If more nested config rendering is added later, filter sensitive field names at the formatter boundary as well as the path resolver
Tested: PYTHONPATH=$PWD pytest -q tests/agent/tools/test_self_tool.py /Users/jh0927/Workspace/nanobot-validation-artifacts-2026-04-18/test_my_tool_secret_leak_regression.py
Not-tested: Full repository test suite
Related: #3259
The streaming API currently logs backend exceptions but still emits the
same `finish_reason: "stop"` + `[DONE]` terminator used for successful
responses. That makes a failed streamed request look successful to
OpenAI-compatible clients.
This keeps the fix narrow: track whether the stream backend failed and
suppress the success terminator in that case. A regression test locks in
the expected behavior.
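A compressed sketch of the fix (function and event shapes are illustrative of OpenAI-style SSE framing, not the exact server code):

```python
import json

async def stream_chat(backend_stream, send) -> None:
    # Forward backend deltas as SSE; if the backend raises mid-stream,
    # suppress the success terminator so clients don't see a failed
    # request end with finish_reason "stop" + [DONE].
    failed = False
    try:
        async for delta in backend_stream:
            await send(f"data: {json.dumps(delta)}\n\n")
    except Exception:
        failed = True  # previously: logged, then fell through to the terminator
    if not failed:
        stop = {"choices": [{"delta": {}, "finish_reason": "stop"}]}
        await send(f"data: {json.dumps(stop)}\n\n")
        await send("data: [DONE]\n\n")
```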
Constraint: Keep the non-streaming response path untouched
Constraint: Follow up on the known limitation called out during PR #3222 review without redesigning the SSE protocol
Rejected: Introduce a custom SSE error event shape in the same patch | expands API surface and review scope
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: If explicit streamed error events are added later, keep them distinct from the success stop+[DONE] terminator to preserve client retry semantics
Tested: PYTHONPATH=$PWD pytest -q tests/test_api_stream.py /Users/jh0927/Workspace/nanobot-validation-artifacts-2026-04-18/test_api_stream_error_regression.py
Not-tested: Full repository test suite
Related: #3260
Related: #3222
The previous patch promoted `message` into top-level `required`, which solved
the `add` loop but broke `list` and `remove`: `ToolRegistry.prepare_call`
enforces `required` via `validate_params`, so `cron(action="list")` and
`cron(action="remove", job_id=...)` — both documented in `SKILL.md` — started
failing schema validation with the same "missing required message" shape that
#3113 describes for `add`.
Instead:
- Keep `required=["action"]` so `list`/`remove` stay callable.
- Prefix `message`'s description with `REQUIRED when action='add'.` and
`job_id`'s with `REQUIRED when action='remove'.` so LLMs see the real
per-action contract up front.
- Keep the improved runtime error message from the previous commit for the
case an LLM still omits `message` on `add`.
Also add `tests/cron/test_cron_tool_schema_contract.py` to lock in:
- `list` and `remove` pass schema validation with no `message`
- `add` with `message` passes
- `add` without `message` surfaces the actionable runtime error
- field descriptions carry the REQUIRED hints
- top-level `required` stays `["action"]`
Existing `tests/cron/test_cron_tool_list.py` cases bypass schema validation by
calling `_list_jobs()` / `_remove_job()` directly, which is why CI didn't catch
the regression; the new test goes through `ToolRegistry.prepare_call`.
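The resulting parameter schema looks roughly like this (descriptions abbreviated; the actual registry format may differ):

```python
CRON_PARAMS_SCHEMA = {
    "type": "object",
    "properties": {
        "action": {"type": "string", "enum": ["add", "list", "remove"]},
        "message": {
            "type": "string",
            "description": "REQUIRED when action='add'. Text the job sends when it fires.",
        },
        "job_id": {
            "type": "string",
            "description": "REQUIRED when action='remove'. ID of the job to delete.",
        },
    },
    # Only `action` is unconditionally required, so `list` and `remove`
    # still pass validate_params without a `message`.
    "required": ["action"],
}
```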
Previously the JSON schema only required "action" but the runtime
rejected empty messages, causing LLM retry loops. Making "message"
required in the schema prevents the mismatch, and the improved error
message guides the LLM to retry with the correct parameters.
Fixes #3113
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The earlier commits picked up a large amount of Black-style reformatting
(multi-line frozenset / keyword-arg wrapping / docstring blanks / removed
parens) on top of the actual guard fix. @chengyongru flagged it; the
first pass reverted some but not all.
This restores nanobot/providers/base.py, runner.py, heartbeat/service.py,
and utils/evaluator.py to origin/main and reapplies only the guard logic:
- base.py: add should_execute_tools property
- runner.py / heartbeat/service.py / utils/evaluator.py: route through it
+ log a warning when has_tool_calls but finish_reason is anomalous
Net diff vs main is now +87/-4 (was +211/-102) — roughly 30 lines of real
logic, which is what the PR is actually about.
Behavior unchanged from previous HEAD; full suite still 2014 passed.
Made-with: Cursor
Two small follow-ups to the guard:
1. Fix the should_execute_tools docstring so it matches the actual code.
The previous version said "Only execute when finish_reason explicitly
signals tool intent" but the code also accepts finish_reason == "stop".
Explain why (some compliant providers emit "stop" with legitimate tool
calls — openai_compat_provider.py already mirrors this at lines ~633 /
~678 where ("tool_calls", "stop") are both treated as the terminal
tool-call state). Without this, a strict "tool_calls"-only guard would
regress 15 existing runner tests that construct LLMResponse with
tool_calls but no explicit finish_reason (default = "stop").
2. Add tests/providers/test_llm_response.py. This locks the three cases:
- no tool calls -> never executes
- tool calls + "tool_calls"/stop -> executes
- tool calls + refusal / content_filter / error / length / ... -> blocked
These are exactly the boundary cases the #3220 fix is about; without a
test here a future refactor could silently revert the guard.
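The guarded property amounts to something like this (LLMResponse is reduced here to the two fields the guard reads):

```python
class LLMResponse:
    # Minimal stand-in with just the fields the guard inspects.
    def __init__(self, tool_calls=None, finish_reason="stop"):
        self.tool_calls = tool_calls or []
        self.finish_reason = finish_reason

    @property
    def should_execute_tools(self) -> bool:
        # Execute only when tool calls are present AND the finish reason
        # is a terminal tool-call state. "stop" is accepted alongside
        # "tool_calls" because some compliant providers emit "stop"
        # with legitimate tool calls.
        return bool(self.tool_calls) and self.finish_reason in ("tool_calls", "stop")
```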
Body + tests only, no behavior change beyond the existing PR's intent.
Made-with: Cursor
When chat_with_retry returns an error response (finish_reason='error')
instead of raising an exception, archive() previously treated the error
message as a valid summary and wrote it to history.jsonl, while the
original session data was already cleared by /new — causing irreversible
data loss.
Fix: check finish_reason after the LLM call and raise RuntimeError on
error responses, which naturally falls through to the existing raw_archive
fallback. This preserves the original messages in history.jsonl instead
of losing them.
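The control flow of the fix, in sketch form (names simplified; the real archive() sits on the compaction path):

```python
def archive(messages, chat_with_retry, raw_archive):
    # chat_with_retry can *return* an error response instead of raising.
    # Treat finish_reason == "error" as a failure too, so we fall back to
    # raw_archive and keep the original messages in history.jsonl rather
    # than recording the error text as the "summary".
    try:
        response = chat_with_retry(messages)
        if getattr(response, "finish_reason", None) == "error":
            raise RuntimeError(f"summarization failed: {response.content!r}")
        return {"summary": response.content}
    except Exception:
        return raw_archive(messages)
```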
Fixes #3244
The PyPI package `nanobot` is a different project ("Minimalist robot
navigation framework"), not this one. This project publishes as
`nanobot-ai` (see pyproject.toml). Following the guide as-written would
pull down the wrong package — flagged by vansatchen in #3188.
Same toml block as the build-backend fix, one-word change.
Made-with: Cursor
The setuptools.backends._legacy:_Backend build backend used previously has
been removed in newer setuptools releases (including those shipped for
Python 3.14), causing a 'Cannot import setuptools.backends.legacy' error.
Using hatchling (the same backend as the main project) ensures compatibility across Python versions.
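The replacement build-system table is the standard hatchling configuration (shown as a minimal sketch; the project's pyproject.toml may carry additional settings):

```toml
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
```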
Closes #3188