Keep the /dev workspace guard exception scoped to the known benign device paths already handled by ExecTool, and add coverage that non-benign /dev targets still get blocked. Also add a streaming regression for tool_error responses so fatal tool failures are delivered by channels instead of being marked as already streamed.
Co-authored-by: Cursor <cursoragent@cursor.com>
Three independent fixes for issues exposed by PR #3493:
1. shell.py: allow /dev/* paths in workspace guard
Commands like `rm file.txt 2>/dev/null` were blocked because
_extract_absolute_paths captured /dev/null as a path outside
the workspace. Allow /dev the same way media_path is already allowed.
2. shell.py: remove | from home_paths regex prefix
Loki query operator `|~` was misinterpreted as pipe + home
directory, causing false workspace violation errors.
3. loop.py: change _streamed from blacklist to whitelist
stop_reason "tool_error" was not in the exclusion set
{"ask_user", "error"}, so _streamed=True was set on fatal
errors. The channel manager then skipped channel.send() because
it assumed the content was already streamed — but it never
was. Whitelist to only {"stop", "end_turn", "max_tokens"}.
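Fix 3 can be sketched as follows. The function names and the exact call site are illustrative, not the actual loop.py identifiers; only the two stop-reason sets come from the change itself.

```python
# Before: a blacklist -- any stop_reason not explicitly excluded was
# treated as already streamed, so the newer "tool_error" slipped through
# and got _streamed=True even though nothing was streamed.
EXCLUDED = {"ask_user", "error"}

def mark_streamed_blacklist(stop_reason: str) -> bool:
    return stop_reason not in EXCLUDED

# After: a whitelist -- only stop reasons whose content genuinely went
# through the streaming path set _streamed=True; everything else falls
# back to the normal channel.send() delivery.
STREAMED_STOP_REASONS = {"stop", "end_turn", "max_tokens"}

def mark_streamed_whitelist(stop_reason: str) -> bool:
    return stop_reason in STREAMED_STOP_REASONS
```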
Also fixes a pre-existing Windows bug in _spawn where
create_subprocess_exec + list2cmdline breaks commands with
paths containing spaces (e.g. D:\Program Files\python.exe).
Closes: #3599, #3605
Keep the workspace-boundary changes easier to review by trimming long explanatory comments down to short local notes. Also make the #3599 POSIX command regression skip on Windows and normalize workspace violation signatures to POSIX separators so the throttle tests are platform-stable.
Tests:
- uv run pytest tests/tools/test_exec_security.py tests/utils/test_workspace_violation_throttle.py -q
- uv run pytest -q
Co-authored-by: Cursor <cursoragent@cursor.com>
Replaces PR #3493's blanket fatal abort with a "tell the model + throttle
the bypass loop" policy. Workspace-bound rejections are now ordinary
recoverable tool errors enriched with a structured "this is a hard policy
boundary" instruction; SSRF stays the only marker that aborts the turn.
Why the fatal-abort approach broke
----------------------------------
PR #3493 promoted every shell `_guard_command` and filesystem path-resolution
rejection to a turn-fatal RuntimeError. Two of those messages (`path
outside working dir` and `path traversal detected`) are heuristic substring
scans on the raw command, so legitimate commands like `rm <ws>/x.txt
2>/dev/null` or `find . -type f` killed the user's turn (#3599). On
channels with outbound dedupe (Telegram) the user just saw silence (#3605),
and the noise polluted the LLM's context until it started hallucinating
guard rejections on plain relative paths (#3597).
Why we still need *some* throttle
---------------------------------
The original #3493 pain point was real: the LLM, refused once, would
swap tools and try again -- read_file -> exec cat -> exec cp -> bash -c
-> ln -sf -> python -c open(...). Just removing the fatal escape lets
that loop run wild until max_iterations.
What this commit does
---------------------
- `nanobot/utils/runtime.py`: add `workspace_violation_signature` and
`repeated_workspace_violation_error`. The signature normalizes
filesystem `path` arguments and the first absolute path inside an
exec command, so swapping tools against the same outside target hits
the same throttle bucket. Two soft attempts are allowed; the third
attempt's tool result is replaced with a hard "stop trying to bypass"
message that quotes the target path and tells the model to ask the
user for help.
- `nanobot/agent/runner.py`: split classification into `_is_ssrf_violation`
(still fatal) and `_is_workspace_violation` (now soft). All three
failure branches in `_run_tool` (prep_error / exception / Error
result) route through a shared `_classify_violation` that bumps the
per-turn workspace_violation_counts dict and either keeps the tool's
own message or substitutes the throttle escalation. `_execute_tools`
now threads that dict alongside the existing external_lookup_counts.
- `nanobot/agent/tools/shell.py`: append a structured boundary note to
every workspace-bound guard rejection (`working_dir could not be
resolved`, `working_dir is outside`, `path outside working dir`,
`path traversal detected`). SSRF errors stay short and direct so the
model doesn't try to "phrase around" them. Existing `2>/dev/null`
allow-list and benign device passthrough from the previous commit
remain.
- `nanobot/agent/tools/filesystem.py`: append the same boundary note to
the `outside allowed directory` PermissionError so read_file / write_file
/ list_dir errors give the LLM the same explicit hint.
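The signature-plus-throttle policy above can be sketched roughly as below. The real helpers live in `nanobot/utils/runtime.py`; the normalization details and both function bodies here are assumptions that only reproduce the behavior the commit describes (shared bucket across tools, two soft attempts, escalation on the third).

```python
import posixpath
import re

# First absolute path in an exec command, if any (simplified heuristic).
_ABS_PATH = re.compile(r"(?:^|\s)(/\S+)")

def workspace_violation_signature(tool: str, args: dict) -> str:
    """Collapse different tools aimed at the same outside path into one bucket."""
    target = args.get("path")
    if target is None:
        m = _ABS_PATH.search(args.get("command", ""))
        target = m.group(1) if m else args.get("command", "")
    return posixpath.normpath(str(target))

def throttle_message(counts: dict, signature: str, original: str) -> str:
    """Allow two soft attempts; replace the result from the third on."""
    counts[signature] = counts.get(signature, 0) + 1
    if counts[signature] >= 3:
        return (f"Stop trying to bypass the workspace boundary for "
                f"{signature!r}. Ask the user for help instead.")
    return original
```

Because the signature normalizes both filesystem `path` arguments and the first absolute path in a command, `read_file /etc/passwd` and `exec cat /etc/passwd` land in the same bucket.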
Tests
-----
- `tests/utils/test_workspace_violation_throttle.py` (new): signature
collapses across read_file/exec/python -c against the same path,
different paths get independent budgets, escalation only fires after
the third attempt.
- `tests/agent/test_runner.py`:
- `test_runner_does_not_abort_on_workspace_violation_anymore` -- v2
contract: filesystem PermissionError is now soft, runner moves to
the next iteration and finalizes cleanly.
- `test_is_ssrf_violation_remains_fatal` + the existing
`test_runner_aborts_on_ssrf_violation` -- SSRF still aborts on the
first attempt.
- `test_runner_lets_llm_recover_from_shell_guard_path_outside` -- end
to end recovery from `path outside working dir`.
- `test_runner_throttles_repeated_workspace_bypass_attempts` -- four
bypass attempts against the same outside target produce at least
one `workspace_violation_escalated` event and the run completes
naturally without aborting the turn.
- The two `_execute_tools` direct-call tests now pass the new
workspace_violation_counts dict.
- `tests/tools/test_tool_validation.py`: relax three `==` assertions
to `startswith` + "hard policy boundary" substring check to match
the new structured error messages.
- `tests/tools/test_exec_security.py` keeps the prior `2>/dev/null`
regression and the `> /etc/issue` negative case from the previous
commit on this branch -- they still pass under the new policy.
Coverage status: full pytest 2648 passed / 2 skipped (was 2638 / 2
on origin/main). Ruff is clean for every file touched in this commit.
Co-authored-by: Cursor <cursoragent@cursor.com>
PR #3493 promoted every shell `_guard_command` rejection to a turn-fatal
RuntimeError. The two heuristic outputs in that list -- `path outside
working dir` and `path traversal detected` -- routinely false-positive on
benign constructs (e.g. `2>/dev/null`, quoted `..` arguments to sed/find,
absolute paths inside inline scripts), so legitimate workspace commands
silently kill the user's turn (#3599) and the agent never gets a chance
to retry with a different approach (#3605).
Two changes, both narrowly scoped:
- `ExecTool._guard_command` now skips a small allow-list of kernel device
files (`/dev/null`, the standard streams, `/dev/random`, `/dev/fd/N`,
...) before the workspace path check, matched against the pre-resolve
string so symlinks like `/dev/stderr -> /proc/self/fd/2` still hit the
allow-list. Real outside writes such as `> /etc/issue` remain blocked.
- `AgentRunner._WORKSPACE_BLOCK_MARKERS` keeps only the four hard
path-resolution errors from filesystem.py / shell.py and the SSRF
marker. The two heuristic substrings move out of the fatal list, so
the LLM sees them as ordinary tool errors and can self-correct in the
next iteration. SSRF stays fatal because retrying an internal URL
with a different phrasing would defeat the safety boundary.
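The device allow-list check can be illustrated as below. The exact set and matching rules in `ExecTool._guard_command` may differ; this is an assumption-level sketch of the shape described above.

```python
import re

# Kernel device files that are benign redirect/read targets.
_BENIGN_DEVICES = {"/dev/null", "/dev/zero", "/dev/stdin", "/dev/stdout",
                   "/dev/stderr", "/dev/random", "/dev/urandom", "/dev/tty"}
_DEV_FD = re.compile(r"^/dev/fd/\d+$")

def is_benign_device(path: str) -> bool:
    # Match against the raw (pre-resolve) string so a symlink like
    # /dev/stderr -> /proc/self/fd/2 still hits the allow-list.
    return path in _BENIGN_DEVICES or bool(_DEV_FD.match(path))
```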
Tests:
- `tests/tools/test_exec_security.py`: parametrized regression for the
exact #3599 command sample plus other stdio redirects and device
reads; explicit negative case asserts `> /etc/issue` is still blocked.
- `tests/agent/test_runner.py`: `_is_workspace_violation` no longer
fatals on the two heuristic markers, plus an end-to-end case proving
the runner hands the guard error back to the LLM and finalizes the
next turn cleanly.
* fix: allow_patterns take priority over deny_patterns in ExecTool
Previously deny_patterns were checked first with no bypass, meaning
allow_patterns could never exempt commands from the built-in deny list.
This made it impossible to whitelist destructive commands for specific
directories (e.g. build/cleanup tasks).
Changes:
- shell.py: check allow_patterns first; if matched, skip deny check
- shell.py: deny_patterns now appends to built-in list (not replaces)
- schema.py: add allow_patterns/deny_patterns to ExecToolConfig
- loop.py/subagent.py: pass allow_patterns/deny_patterns to ExecTool
- Add test_exec_allow_patterns.py covering priority semantics
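The priority semantics can be sketched as below. The built-in deny list and the regex matching are simplified assumptions; only the ordering (allow first, deny appended to built-ins) mirrors the change.

```python
import re

BUILTIN_DENY = [r"\brm\s+-rf\s+/", r"\bmkfs\b"]  # illustrative built-ins

def guard(command: str, allow_patterns: list[str], deny_patterns: list[str]) -> bool:
    """Return True if the command may run."""
    # allow_patterns checked first: an explicit allow exempts the
    # command from the deny check entirely.
    if any(re.search(p, command) for p in allow_patterns):
        return True
    # User deny_patterns append to the built-in list, not replace it.
    for pattern in BUILTIN_DENY + deny_patterns:
        if re.search(pattern, command):
            return False
    return True
```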
* fix: separate deny pattern errors from workspace violation detection
The deny pattern error message "Command blocked by safety guard" was
included in _WORKSPACE_BLOCK_MARKERS, causing deny_pattern blocks to be
misclassified as fatal workspace violations. This meant LLMs had no
chance to retry with a different command — the turn was aborted
immediately.
Changes:
- shell.py: deny/allowlist error messages now use distinct phrasing
("blocked by deny pattern filter" / "blocked by allowlist filter")
- runner.py: remove "blocked by safety guard" from
_WORKSPACE_BLOCK_MARKERS so deny_pattern errors are treated as normal
tool errors (LLM can retry) instead of fatal violations
- workspace path errors still use "blocked by safety guard" and remain
fatal as intended
* fix: update test assertions to match new deny pattern error message
* fix: indentation error in test file
* fix: restore SSRF fatal classification and tidy exec pattern plumbing
Address review feedback on the deny/allow_patterns rework:
- runner.py: re-add "internal/private url detected" to
_WORKSPACE_BLOCK_MARKERS. The earlier marker removal also stripped
fatal classification from SSRF / internal-URL rejections (whose
message still says "blocked by safety guard"), turning a hard
security boundary into something the LLM could retry.
- loop.py / subagent.py: drop `or None` between ExecToolConfig and
ExecTool. The schema default is an empty list and ExecTool already
normalizes None back to [], so the indirection was a no-op.
- shell.py: extract `explicitly_allowed` flag in _guard_command so
allow_patterns are scanned once instead of twice and the control
flow no longer relies on a no-op `pass + else` branch.
- tests/agent/test_runner.py: add a regression test asserting that
the SSRF block message is treated as fatal, while deny/allowlist
filter messages are deliberately non-fatal.
* fix: remove unused exec allow-pattern test import
Keep the new ExecTool allow-pattern coverage clean under ruff.
Co-authored-by: Cursor <cursoragent@cursor.com>
---------
Co-authored-by: Xubin Ren <xubinrencs@gmail.com>
Co-authored-by: Cursor <cursoragent@cursor.com>
LLM-generated tool calls may wrap URLs in markdown backticks or quotes
(e.g. `https://example.com`), causing urlparse to produce empty scheme
and netloc, which leads to all fetch attempts failing silently.
Add URL cleaning at the top of WebFetchTool.execute to strip whitespace,
backticks, double quotes, and single quotes, plus an early rejection guard
for non-http(s) URLs after cleaning.
Adds Olostep (https://www.olostep.com) as an optional web_search backend
using the official olostep Python SDK (client.answers.create()).
Changes:
- pyproject.toml: adds olostep>=0.1.0 optional dependency
- schema.py: adds olostep to provider comment in WebSearchConfig
- web.py: adds _search_olostep() with lazy import and provider branching
- docs/configuration.md: documents Olostep setup under web search config
- tests: unit tests for the new provider
Backward compatible: existing users see no behavior change unless they
opt into provider: "olostep". No hard dependency is added at runtime.
Co-authored-by: umerkay <umerkk164@gmail.com>
When deployed with Docker and workspace mounted as a volume, sending
media files failed because relative paths (e.g. output/image.png) were
not resolved against the workspace directory. The process CWD differs
from the workspace in containerized environments, causing os.path.isfile
checks to fail in channel handlers. Normalize relative media paths at
the MessageTool entry point using get_workspace_path().
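The normalization boils down to the sketch below. `get_workspace_path` is assumed to return the configured workspace root and is stubbed here for illustration.

```python
import os

def get_workspace_path() -> str:  # stand-in for the real helper
    return "/workspace"

def resolve_media_path(path: str) -> str:
    # Relative paths resolve against the workspace, not the process
    # CWD, which differs inside a Docker container with the workspace
    # mounted as a volume.
    if os.path.isabs(path):
        return path
    return os.path.join(get_workspace_path(), path)
```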
MCP resource/prompt/tool names containing spaces or special characters
(e.g. "PostgreSQL System Information") were forwarded verbatim to model
provider APIs, causing validation errors from both Anthropic and OpenAI
which require names matching ^[a-zA-Z0-9_-]{1,128}$.
Add _sanitize_name() that replaces invalid characters with underscores
and collapses consecutive underscores. Applied in MCPToolWrapper,
MCPResourceWrapper, MCPPromptWrapper constructors and the enabled_tools
filtering logic.
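A sketch of that sanitizer, assuming the replace-then-collapse order described above; the real `_sanitize_name` may handle edge cases differently. The target pattern `^[a-zA-Z0-9_-]{1,128}$` comes from the provider requirements.

```python
import re

def sanitize_name(name: str) -> str:
    cleaned = re.sub(r"[^a-zA-Z0-9_-]", "_", name)  # replace invalid chars
    cleaned = re.sub(r"_+", "_", cleaned)            # collapse underscore runs
    return cleaned[:128]                             # enforce length cap
```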
Closes #3468
Capture Slack thread metadata for cron and message-tool deliveries so replies stay in the originating thread, and hydrate first thread mentions with recent Slack context.
Made-with: Cursor
Only mark message-tool deliveries for channel-session recording while cron jobs are running, avoiding duplicate session writes during normal user turns.
Made-with: Cursor
Extend the existing on_progress callback to carry structured tool-event
payloads alongside the plain-text hint, so channels can render rich
tool execution state (start/finish/error, arguments, results, file
attachments) rather than only the pre-formatted hint string.
Changes
-------
- AgentLoop._tool_event_start_payload() — builds a version-1 start
payload from a ToolCallRequest
- AgentLoop._tool_event_result_extras() — extracts files/embeds from a
tool result dict
- AgentLoop._tool_event_finish_payloads() — maps tool_calls +
tool_results + tool_events from AgentHookContext into finish payloads
- _LoopHook.before_execute_tools() — passes tool_events=[...] to
on_progress together with the existing tool_hint flag
- _LoopHook.after_iteration() — emits a second on_progress call with
the finish payloads once tool results are available
- _bus_progress() — forwards tool_events as _tool_events in OutboundMessage
metadata so channel implementations can read them
- on_progress type widened to Callable[..., Awaitable[None]] on all
public entry points; _cli_progress updated to accept and ignore
tool_events
The contract is additive: callers that only accept (content, *, tool_hint)
continue to work unchanged. Callers that also accept tool_events receive
the structured data.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Two kill-switch tests for the new inline-keyboards path. Neither is
flashy — they just make sure the next unrelated refactor can't quietly
regress two narrow contracts the PR relies on.
1. TelegramChannel._build_keyboard returns None whenever
TelegramConfig.inline_keyboards is False, even if buttons are
supplied. The flag defaults off; if someone ever flips that default
the change should fail this test before it reaches prod bots.
2. MessageTool rejects malformed `buttons` payloads (non-list, mixed
list/str row, non-str label, None label) up front instead of
letting them slip into the channel layer where Telegram would
silently 400 the send. Parametrized over four shapes the guard
needs to reject.
No production code touched.
Made-with: Cursor
Wire up the existing office document extractors in document.py to
ReadFileTool by adding an extension guard and _read_office_doc() method
that follows the established PDF pattern. Handles missing libraries,
corrupt files, empty documents, and 128K truncation consistently.
Follow-ups from review of #3194:
- ci.yml: drop unconditional --ignore=tests/channels/test_matrix_channel.py.
That test file already calls pytest.importorskip("nio") at module top, so
it self-skips on Windows (where nio isn't installed) without also hiding
62 tests from Linux CI.
- filesystem.py: hoist `import os` to the module top and drop the duplicate
inline import in ReadFileTool.execute. Document the CRLF->LF normalization
as intentional (primarily a Windows UX fix so downstream StrReplace/Grep
match consistently regardless of where the file was written).
- test_read_enhancements.py: lock down two new behaviors
* TestFileStateHashFallback: check_read warns when content changes but
mtime is unchanged (coarse-mtime filesystems on Windows).
* TestReadFileLineEndingNormalization: ReadFileTool strips CRLF and
preserves LF-only files untouched.
- test_tool_validation.py: restore list2cmdline/shlex.quote in
test_exec_head_tail_truncation. The temp_path-based form was correct,
but dropping the quoting broke on any Windows path containing spaces
(e.g. C:\Users\John Doe\...). CI runners happen not to have spaces so
this slipped through.
Tests: 1993 passed locally.
Made-with: Cursor
Add a built-in tool that lets the agent inspect and modify its own
runtime state (model, iterations, context window, etc.).
Key features:
- inspect: view current config, usage stats, and subagent status
- modify: adjust parameters at runtime (protected by type/range validation)
- Subagent observability: inspect running subagent tasks (phase,
iteration, tool events, errors) — subagents are no longer a black box
- Watchdog corrects out-of-bounds values on each iteration
- Enabled by default in read-only mode (self_modify: false)
- All changes are in-memory only; restart restores defaults
- Comprehensive test suite (90 tests)
Includes a self-awareness skill (always-on) with progressive disclosure:
SKILL.md for core rules, references/examples.md for detailed scenarios.
get_definitions() sorts tools on every LLM iteration for prompt cache
stability. Cache the sorted result and invalidate on register/unregister
so the sort only runs when the tool set actually changes.
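A minimal sketch of that cache; the registry internals are assumptions, and only the cache-plus-invalidate-on-mutation idea matches the commit.

```python
class ToolRegistry:
    def __init__(self):
        self._tools: dict[str, dict] = {}
        self._sorted_cache: list[dict] | None = None

    def register(self, name: str, definition: dict) -> None:
        self._tools[name] = definition
        self._sorted_cache = None  # invalidate on mutation

    def unregister(self, name: str) -> None:
        self._tools.pop(name, None)
        self._sorted_cache = None  # invalidate on mutation

    def get_definitions(self) -> list[dict]:
        # Sorted once per mutation instead of once per LLM iteration,
        # keeping prompt-cache-stable ordering cheap.
        if self._sorted_cache is None:
            self._sorted_cache = [self._tools[k] for k in sorted(self._tools)]
        return self._sorted_cache
```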
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add focused registry coverage so the new read_file/read_write parameter guard stays actionable without changing generic validation behavior for other tools.
Made-with: Cursor
Keep the new exec guard focused on writes to history.jsonl and .dream_cursor while still allowing read-only copy operations out of those files.
Made-with: Cursor
* feat(agent): add mid-turn message injection for responsive follow-ups
Allow user messages sent during an active agent turn to be injected
into the running LLM context instead of being queued behind a
per-session lock. Inspired by Claude Code's mid-turn queue drain
mechanism (query.ts:1547-1643).
Key design decisions:
- Messages are injected as natural user messages between iterations,
no tool cancellation or special system prompt needed
- Two drain checkpoints: after tool execution and after final LLM
response ("last-mile" to prevent dropping late arrivals)
- Bounded by MAX_INJECTION_CYCLES (5) to prevent consuming the
iteration budget on rapid follow-ups
- had_injections flag bypasses _sent_in_turn suppression so follow-up
responses are always delivered
Closes #1609
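The bounded drain step can be sketched as below. The queue bound and cycle cap come from the commit text; the function shape and message format are illustrative.

```python
import queue

MAX_INJECTION_CYCLES = 5  # cap from the design notes above

def drain_pending(pending: "queue.Queue[str]", messages: list[dict],
                  cycles_used: int) -> tuple[bool, int]:
    """Inject queued user messages between iterations, bounded by the cap."""
    if cycles_used >= MAX_INJECTION_CYCLES:
        return False, cycles_used  # stop consuming the iteration budget
    injected = False
    while True:
        try:
            text = pending.get_nowait()
        except queue.Empty:
            break
        # Injected as a natural user message -- no tool cancellation
        # or special system prompt.
        messages.append({"role": "user", "content": text})
        injected = True
    return injected, cycles_used + (1 if injected else 0)
```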
* fix(agent): harden mid-turn injection with streaming fix, bounded queue, and message safety
- Fix streaming protocol violation: Checkpoint 2 now checks for injections
BEFORE calling on_stream_end, passing resuming=True when injections found
so streaming channels (Feishu) don't prematurely finalize the card
- Bound pending queue to maxsize=20 with QueueFull handling
- Add warning log when injection batch exceeds _MAX_INJECTIONS_PER_TURN
- Re-publish leftover queue messages to bus in _dispatch finally block to
prevent silent message loss on early exit (max_iterations, tool_error, cancel)
- Fix PEP 8 blank line before dataclass and logger.info indentation
- Add 12 new tests covering drain, checkpoints, cycle cap, queue routing,
cleanup, and leftover re-publish
Each MCP server now connects in its own asyncio.Task to isolate anyio
cancel scopes and prevent 'exit cancel scope in different task' errors
when multiple servers (especially mixed transport types) are configured.
Changes:
- connect_mcp_servers() returns dict[str, AsyncExitStack] instead of None
- Each server runs in separate task via asyncio.gather()
- AgentLoop uses _mcp_stacks dict to track per-server stacks
- Tests updated to handle new API
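The task-isolation shape looks roughly like this. The real `connect_mcp_servers` enters MCP SDK client contexts inside each stack; the stub below only shows the per-task structure that keeps each anyio cancel scope entered and exited in the same task.

```python
import asyncio
from contextlib import AsyncExitStack

async def _connect_one(name: str) -> AsyncExitStack:
    stack = AsyncExitStack()
    # Real code: await stack.enter_async_context(mcp_client(name))
    # so the server's cancel scope lives entirely inside this task.
    return stack

async def connect_mcp_servers(names: list[str]) -> dict[str, AsyncExitStack]:
    # One task per server, gathered concurrently.
    tasks = [asyncio.create_task(_connect_one(n)) for n in names]
    stacks = await asyncio.gather(*tasks)
    return dict(zip(names, stacks))
```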
Add allowed_env_keys config field to selectively forward host environment variables (e.g. GOPATH, JAVA_HOME) into the sandboxed subprocess environment, while keeping the default allow-list unchanged.
When the LLM returns an error (e.g. 429 quota exceeded, stream timeout),
streaming channels silently drop the error message because `_streamed=True`
is set in metadata even though no content was actually streamed.
This change:
- Skips setting `_streamed` when stop_reason is "error", so error messages
go through the normal channel.send() path and reach the user
- Stops appending error content to session history, preventing error
messages from polluting subsequent conversation context
- Exposes stop_reason from _run_agent_loop to enable the above check
nanobot's Windows exec environment was not forwarding ProgramFiles and related variables, so `docker desktop start` could not discover the Desktop CLI plugin and reported an unknown command. Forward the missing variables and add a regression test that covers the Windows env shape.
ExecTool hardcoded bash, breaking exec on Windows. Now uses cmd.exe
via COMSPEC on Windows with a curated minimal env (PATH, SYSTEMROOT,
etc.) that excludes secrets. bwrap sandbox gracefully skips on Windows.
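A sketch of that shell selection and curated env, assuming an allow-list beyond the variables the commits name; the real ExecTool list is larger and platform detection differs.

```python
_WINDOWS_ENV_KEYS = {"PATH", "SYSTEMROOT", "COMSPEC", "TEMP", "TMP",
                     "PROGRAMFILES"}  # curated minimal set; secrets excluded

def shell_invocation(command: str, platform: str,
                     env: dict) -> tuple[list[str], dict]:
    if platform == "win32":
        # cmd.exe via COMSPEC instead of the hardcoded bash.
        comspec = env.get("COMSPEC", r"C:\Windows\System32\cmd.exe")
        argv = [comspec, "/c", command]
        sub_env = {k: v for k, v in env.items()
                   if k.upper() in _WINDOWS_ENV_KEYS}
    else:
        argv = ["bash", "-c", command]
        sub_env = dict(env)
    return argv, sub_env
```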
- Propagate `description` from MCP prompt arguments into the JSON
Schema so LLMs can better understand prompt parameters.
- Align generic-exception error message with tool/resource wrappers
(drop redundant `{exc}` detail).
- Extend test fixture to mock `mcp.shared.exceptions.McpError`.
- Add tests for argument description forwarding and McpError handling.
Made-with: Cursor
Add MCPResourceWrapper and MCPPromptWrapper classes that expose MCP
server resources and prompts as nanobot tools. Resources are read-only
tools that fetch content by URI, and prompts are read-only tools that
return filled prompt templates with optional arguments.
- MCPResourceWrapper: reads resource content (text and binary) via URI
- MCPPromptWrapper: gets prompt templates with typed arguments
- Both handle timeouts, cancellation, and MCP SDK 1.x error types
- Resources and prompts are registered during server connection
- Gracefully handles servers that don't support resources/prompts