Non-priority slash commands (e.g. /new, /help, /dream-log) arriving
while a session has an active LLM turn were silently queued into the
pending injection buffer and later injected as raw user messages into
the LLM conversation. This caused the model to respond to "/new" as
plain text instead of executing the command.
Root cause: the run() loop checked only for priority commands (/stop,
/restart, /status) before routing messages to the pending queue; all
other command tiers (exact, prefix) bypassed command dispatch entirely.
Changes:
- Add CommandRouter.is_dispatchable_command() to match exact/prefix
tiers, mirroring the existing is_priority() pattern.
- In run(), intercept dispatchable commands before pending queue
insertion and dispatch them directly via _dispatch_command_inline()
(see the sketch below).
- Extract _cancel_active_tasks() from cmd_stop for reuse; cmd_new now
cancels active tasks before clearing the session to prevent shared
mutable state corruption from concurrent asyncio coroutines.
- Update /new semantics: stops active task first, then clears session.
- Update documentation in help text, docs, and Discord command list.
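A minimal sketch of the new routing order, assuming hypothetical message
and session shapes; only is_dispatchable_command() and
_dispatch_command_inline() are names from this change:

```python
# Sketch only: msg.text, session.pending, and _handle_priority are assumptions.
if self.command_router.is_priority(msg.text):
    await self._handle_priority(session, msg)          # pre-existing path
elif self.command_router.is_dispatchable_command(msg.text):
    # New: exact/prefix-tier commands (/new, /help, ...) are dispatched
    # directly instead of being queued as raw user text.
    await self._dispatch_command_inline(session, msg)
else:
    session.pending.append(msg)                        # plain messages still queue
```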
Problem:
Modern LLMs (GPT-5.4, Claude, Gemini) produce markdown-heavy responses with
numbered lists, headers, and nested formatting. The Telegram channel's
_markdown_to_telegram_html() converter has gaps that leave these poorly
formatted:
1. Numbered lists (1. 2. 3.) have zero handling — sent as raw text
2. Headers (# Title) are stripped to plain text, losing visual hierarchy
3. Mid-stream edits send raw markdown (users see **bold** and ### headers
while the response generates, before the final HTML conversion)
Root Cause:
_markdown_to_telegram_html() handles bullets (- *) but skips numbered lists
entirely. Headers are stripped of # but not given any emphasis. The streaming
path in send_delta() sends buf.text as-is during mid-stream edits (plain
text, no parse_mode) — only the final _stream_end edit converts to HTML.
Fix:
1. Headers now render as <b>bold</b> in the final HTML (using placeholder
markers that survive HTML escaping, restored after all other processing)
2. Numbered lists are normalized (extra whitespace after the dot is cleaned)
3. New _strip_md_block() function strips markdown syntax for readable
plain-text preview during streaming mid-edits (sketched below)
The final _stream_end HTML conversion is unchanged — it still produces
full HTML with parse_mode=HTML. Only the intermediate edits are improved.
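A minimal sketch of the plain-text stripper, assuming these specific
regexes; the real _strip_md_block may be composed differently:

```python
import re

def _strip_md_block(text: str) -> str:
    """Reduce markdown to readable plain text for mid-stream edits (sketch)."""
    text = re.sub(r"^#{1,6}\s*", "", text, flags=re.M)          # headers
    text = re.sub(r"\*\*(.+?)\*\*|__(.+?)__", r"\1\2", text)    # bold
    text = re.sub(r"(?<!\*)\*([^*\n]+)\*(?!\*)", r"\1", text)   # italics
    text = re.sub(r"\[([^\]]+)\]\([^)]*\)", r"\1", text)        # links -> label
    text = re.sub(r"^(\s*)[-*]\s+", r"\1• ", text, flags=re.M)  # bullets
    return text
```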
Tests:
Added 10 new tests covering:
- Headers converting to bold HTML
- Numbered list preservation and whitespace normalization
- Headers with HTML special characters
- Mixed formatting (headers + bullets + numbers + bold)
- _strip_md_block for inline formatting, headers, bullets, numbers, links
- Streaming mid-edit markdown stripping (initial send + edit)
The ZhiPu API returns code 1302 with the Chinese message "速率限制"
("rate limit") instead of the standard HTTP 429 + "rate limit" pair,
causing the retry engine to treat it as non-transient and fail
immediately.
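A minimal sketch of the classifier extension; the helper name and the
code/msg field names are assumptions:

```python
def _is_rate_limited(status: int, body: dict) -> bool:
    if status == 429:
        return True
    # ZhiPu signals rate limiting as code 1302 with "速率限制" ("rate limit").
    return body.get("code") == 1302 or "速率限制" in str(body.get("msg", ""))
```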
Move the non-int cursor guard out of the two consumer sites and into a
shared ``_iter_valid_entries`` iterator so the invariant lives in one
place. Closes three gaps left by the original fix:
* ``bool`` is now rejected — ``isinstance(True, int)`` is ``True`` in
Python, so the previous guard silently treated ``{"cursor": true}`` as
cursor ``1``.
* Recovery now returns ``max(valid cursors) + 1``. Under adversarial
corruption, "the first int found scanning in reverse" is not the same
value, and only ``max`` keeps the recovered cursor strictly greater
than every legitimate cursor still on disk.
* Non-int cursors are logged exactly once per ``MemoryStore``. Silently
dropping corrupted entries hides the root cause (an external writer
to ``memory/history.jsonl``); rate-limiting keeps the log clean when
the same poisoned file is read every turn.
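A method-level sketch of the shared iterator, assuming a
``_warned_bad_cursor`` flag set in ``__init__`` and a module logger
named ``log``:

```python
def _iter_valid_entries(self, entries):
    """Yield entries whose cursor is a genuine int (sketch)."""
    for entry in entries:
        cursor = entry.get("cursor")
        # bool subclasses int -- isinstance(True, int) is True -- so reject it.
        if isinstance(cursor, int) and not isinstance(cursor, bool):
            yield entry
        elif not self._warned_bad_cursor:
            self._warned_bad_cursor = True
            log.warning("ignoring non-int cursor in history.jsonl: %r", cursor)
```

Recovery then reduces to
``max((e["cursor"] for e in self._iter_valid_entries(entries)), default=-1) + 1``,
which stays strictly above every valid cursor.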
All 7 tests from the original fix pass unchanged; 3 new tests pin the
invariants above.
Made-with: Cursor
_next_cursor now checks isinstance(cursor, int) before arithmetic,
falling back to a reverse scan of all entries when the last entry's
cursor is corrupted. read_unprocessed_history skips entries with
non-int cursors instead of crashing on comparison.
Root cause: external callers (cron jobs, plugins) occasionally wrote
string cursors to history.jsonl, which blocked all subsequent
append_history calls with TypeError/ValueError.
Includes 7 regression tests covering string, float, null, and list
cursor types.
The retry branch is only reachable via `except Exception`, and
`CancelledError` inherits from `BaseException`, so today it naturally
bypasses the retry path and /stop still works. Add one focused
regression test so any future refactor that widens the retry catch to
`BaseException`, re-orders the handlers, or adds `CancelledError` to
`_TRANSIENT_EXC_NAMES` fails CI instead of silently swallowing /stop.
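A sketch of what such a test can look like, assuming pytest-asyncio and
the retry helper sketched under the next entry:

```python
import asyncio
import pytest

@pytest.mark.asyncio
async def test_cancelled_error_bypasses_retry():
    calls = []

    async def tool_call():
        calls.append(1)
        raise asyncio.CancelledError()

    # CancelledError inherits from BaseException, so the helper's
    # `except Exception` must let it through untouched.
    with pytest.raises(asyncio.CancelledError):
        await _call_with_retry(tool_call)
    assert calls == [1]  # exactly one attempt: /stop unwinds immediately
```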
Made-with: Cursor
When an MCP server restarts or a network connection drops between
tool calls, the existing session throws ClosedResourceError,
BrokenPipeError, ConnectionResetError, etc. Currently these are
caught as generic exceptions and returned as permanent failures
to the LLM, which then tells the user 'my tools are broken.'
This change adds a single automatic retry with a 1-second backoff
for transient connection-class errors in MCPToolWrapper,
MCPResourceWrapper, and MCPPromptWrapper. Non-transient errors
(ValueError, RuntimeError, McpError, etc.) are not retried.
The retry is conservative:
- Only 1 retry (not configurable, to keep the change minimal)
- Only for a specific set of connection-class exceptions
- Matched by exception class name to avoid importing anyio/etc.
- 1s sleep between attempts to allow the server to recover
- Clear logging distinguishes retried vs permanent failures
In production this eliminates most 'MCP tool call failed:
ClosedResourceError' noise when MCP bridge processes restart
(e.g. after config changes or OOM kills).
Tests: 22 new tests covering retry, exhaustion, non-transient
bypass, timeout bypass, and all three wrapper types.
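A minimal sketch of the retry shape shared by the three wrappers; the
constant name appears in the previous entry, everything else is an
assumption:

```python
import asyncio
import logging

log = logging.getLogger(__name__)

# Matched by class *name* so anyio and friends stay unimported.
_TRANSIENT_EXC_NAMES = {
    "ClosedResourceError", "BrokenPipeError", "ConnectionResetError",
}

async def _call_with_retry(call, *args, **kwargs):
    """One retry with a 1s backoff for connection-class errors (sketch)."""
    try:
        return await call(*args, **kwargs)
    except Exception as exc:
        if type(exc).__name__ not in _TRANSIENT_EXC_NAMES:
            raise  # permanent: ValueError, RuntimeError, McpError, ...
        log.warning("transient MCP error %s, retrying once", type(exc).__name__)
        await asyncio.sleep(1)  # give the restarted server a moment
        return await call(*args, **kwargs)  # a second failure propagates
```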
Extend `_merge_consecutive` so the invariants from
`LLMProvider._enforce_role_alternation` all hold for Anthropic:
1. collapse consecutive same-role turns (unchanged)
2. no trailing assistant — Anthropic rejects prefill (unchanged)
3. no leading assistant — Anthropic requires the first turn be user
4. non-empty messages array — recover the last stripped assistant as a
user turn when every turn got stripped, so callers don't hit a
secondary "messages array empty" 400
Anthropic-specific wrinkle: `tool_use` blocks live inside `content` (not
a separate `tool_calls` field) and are illegal inside user turns, so
both recovery paths skip any message carrying them rather than silently
producing a malformed request.
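A minimal sketch of invariants 2-4 over plain message dicts; names are
assumptions apart from the roles and tool_use blocks described above:

```python
def _enforce_anthropic_shape(messages):
    def has_tool_use(msg):
        content = msg.get("content") or []
        return any(isinstance(b, dict) and b.get("type") == "tool_use"
                   for b in content)

    last_stripped = None
    # 2) no trailing assistant: Anthropic rejects prefill with a 400
    while messages and messages[-1]["role"] == "assistant":
        last_stripped = messages.pop()
    # 3) no leading assistant: the first turn must be user
    while messages and messages[0]["role"] == "assistant":
        last_stripped = messages.pop(0)
    # 4) never send an empty array; reroute the last stripped assistant as
    #    a user turn, unless it carries tool_use blocks (illegal in user turns)
    if not messages and last_stripped and not has_tool_use(last_stripped):
        messages.append({"role": "user", "content": last_stripped["content"]})
    return messages
```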
Adds 4 unit tests covering the new branches, including the tool_use
opt-outs, and updates the existing `test_single_assistant_stripped` to
reflect the new rerouting contract.
Made-with: Cursor
Anthropic does not support assistant-message prefill and returns a 400
error when the conversation ends with an assistant turn. This commonly
happens when heartbeat/system messages accumulate trailing assistant
replies in the session history.
The _merge_consecutive method already handles same-role merging but did
not strip trailing assistant messages. The base provider's
_enforce_role_alternation (used by OpenAI-compat) does strip them, but
AnthropicProvider uses its own _merge_consecutive instead.
Add a trailing-assistant stripping loop to _merge_consecutive, matching
the behavior already present in _enforce_role_alternation.
Includes 7 new tests covering merge + strip behavior.
Add a regression test that actually runs the CancelledError branch of
AgentLoop._dispatch end-to-end and asserts the in-flight checkpoint is
materialized into session.messages before the cancellation unwinds.
The three existing tests call _restore_runtime_checkpoint directly, so
they pass even if the cancel-time restore is ever removed from
_dispatch. This new test is the one that actually locks the fix in
place.
Made-with: Cursor
When a user sends /stop to interrupt an active agent turn, the task is
cancelled via CancelledError. Previously, the cancellation handler just
logged and re-raised, discarding any tool results and assistant messages
accumulated during the interrupted turn.
The runtime checkpoint mechanism already persists partial turn state
(assistant messages, completed tool results, pending tool calls) into
session metadata via _emit_checkpoint. However, this checkpoint was only
materialized into session history on the NEXT incoming message via
_restore_runtime_checkpoint — not at cancellation time.
Now the CancelledError handler in _dispatch calls
_restore_runtime_checkpoint immediately, so the partial context is
preserved in session history. This means the next message the user sends
will see all the work that was done before /stop, rather than starting
from scratch.
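A minimal sketch of the changed handler; _run_turn is hypothetical, the
rest follows the names above:

```python
import asyncio

async def _dispatch(self, session, msg):
    try:
        await self._run_turn(session, msg)
    except asyncio.CancelledError:
        # New: materialize the in-flight checkpoint into session history
        # *now*, so work done before /stop survives the unwind.
        self._restore_runtime_checkpoint(session)
        raise
```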
Fixes #2966
Includes 3 tests verifying checkpoint restoration on cancellation.
`append_history` previously used `strip_think(entry) or entry.rstrip()`
as a safety net, so if the entire entry was a template-token leak (e.g.
`<think>reasoning</think>` or `<channel|>` alone), the raw leaked text
was still persisted to history — later re-introducing the very content
`strip_think` was meant to scrub, via consolidation / replay.
Persist the cleaned content directly. When cleanup empties a non-empty
entry, log at debug and store an empty-content record (cursor continuity
preserved). Adds 3 regression tests in test_memory_store.py covering:
- Well-formed thinking blocks are stripped before persistence.
- Pure-leak entries persist as empty, not as raw text.
- Malformed prefix leaks (`<channel|>`) also persist as empty.
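A minimal sketch of the changed persistence path; the _write_record and
_next_cursor shapes are assumptions:

```python
def append_history(self, entry: str) -> None:
    cleaned = strip_think(entry)
    if entry and not cleaned:
        # Whole entry was a template-token leak: keep cursor continuity,
        # but never fall back to persisting the raw leaked text.
        log.debug("append_history: entry emptied by strip_think")
    self._write_record(cursor=self._next_cursor(), content=cleaned)
```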
Some models / Ollama renderers occasionally emit tokenizer-level template
leaks that the existing regexes miss:
1. Malformed opening tags with no closing `>`, running straight into
user-facing content — e.g. `<think广场照明灯目前…` (observed with
Gemma 4 via Ollama). The earlier `<think>[\s\S]*?</think>` and
`^\s*<think>[\s\S]*$` patterns both require `>`, so these leak into
rendered messages.
2. Harmony-style channel markers like `<channel|>` / `<|channel|>` at
the start of a response.
3. Orphan `</think>` / `</thought>` closing tags left behind when only
the opener was consumed upstream.
Handles each case conservatively:
- Malformed `<think` / `<thought` only match when the next char is NOT
a tag-name continuation (`[A-Za-z0-9_\-:>/]`). Explicit ASCII class
instead of `\w` because Python's Unicode `\w` matches CJK and would
defeat the primary fix.
- Orphan closing tags and channel markers are stripped **only at the
start or end of the text**. `strip_think` is also applied before
persisting history (memory.py), so mid-text stripping would silently
rewrite transcripts where the tokens themselves are discussed.
Preserves: `<thinker>`, `<think-foo>`, `<think_foo>`, `<think1>`,
`<think:foo>`, `<thought/>`, literal `` `</think>` `` / `` `<channel|>` ``
inside prose or code blocks.
Adds 16 new regression tests covering both the leak cases and the
preserved-prose cases.
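A sketch of the two pattern families described above; the actual
patterns in strip_think may be composed differently:

```python
import re

# Malformed opener: "<think"/"<thought" NOT followed by a tag-name
# continuation char, so "<thinker>", "<think-foo>", "<think1>" survive.
# Explicit ASCII class: Unicode \w would match CJK and defeat the fix.
_MALFORMED_OPEN = re.compile(r"<(?:think|thought)(?![A-Za-z0-9_\-:>/])[\s\S]*$")

# Orphan closers and channel markers, stripped only at the text edges.
_EDGE_NOISE = re.compile(
    r"^\s*(?:</(?:think|thought)>|<\|?channel\|>)\s*"
    r"|\s*(?:</(?:think|thought)>|<\|?channel\|>)\s*$")
```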
- Fix critical plain-text fallback that was sending raw HTML tags to
users: keep raw markdown available for the fallback path
- Extract TELEGRAM_HTML_MAX_LEN (4096) constant to replace hardcoded
magic number and document the difference from TELEGRAM_MAX_MESSAGE_LEN
- Add fallback to _send_text for extra HTML chunks when HTML parse fails
- Add missing @pytest.mark.asyncio decorator on
test_send_delta_stream_end_html_expansion_does_not_overflow
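A minimal sketch of the fallback shape, assuming python-telegram-bot's
BadRequest; raw_md and _send_text follow the first bullet, the rest is
hypothetical:

```python
from telegram.error import BadRequest

try:
    await self.bot.send_message(chat_id=chat_id, text=html, parse_mode="HTML")
except BadRequest:
    # HTML failed to parse: fall back to the original markdown text,
    # never to the HTML string itself (users would see raw tags).
    await self._send_text(chat_id, raw_md)
```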
Cherry-picked from #3311 (stutiredboy). Streaming edits called
edit_message_text(text=buf.text) without chunking, so once accumulated
deltas crossed Telegram's 4096-char limit an ongoing stream would fail
with BadRequest.
Extracts _flush_stream_overflow helper that edits the first chunk in
place, sends any middle chunks, and re-anchors the buffer to a new
message for the tail so subsequent deltas keep streaming.
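A minimal sketch of the helper, assuming python-telegram-bot call shapes
and a hypothetical split_chunks:

```python
async def _flush_stream_overflow(self, buf) -> None:
    # Only called once buf.text no longer fits in a single message.
    chunks = split_chunks(buf.text, 4096)
    await self.bot.edit_message_text(
        text=chunks[0], chat_id=buf.chat_id, message_id=buf.message_id)
    for middle in chunks[1:-1]:
        await self.bot.send_message(chat_id=buf.chat_id, text=middle)
    tail = await self.bot.send_message(chat_id=buf.chat_id, text=chunks[-1])
    buf.message_id = tail.message_id  # later deltas edit the new anchor
    buf.text = chunks[-1]
```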
Co-Authored-By: stutiredboy <stutiredboy@users.noreply.github.com>
Cherry-picked from #3316 (himax12). When streaming completes in send_delta(),
the code was splitting raw markdown text by 4000, then converting to HTML.
The markdown-to-HTML conversion adds 10-33% characters, which could push
the result over Telegram's 4096 character limit.
The fix converts markdown to HTML first, then splits by 4096 (actual Telegram
limit), ensuring the edited message always fits.
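Conceptually (split_chunks and _send_html are hypothetical, and a real
splitter must also avoid cutting inside a tag):

```python
# Convert first -- HTML conversion can grow the text 10-33% -- then
# split against Telegram's real limit.
html = _markdown_to_telegram_html(buf.text)
for chunk in split_chunks(html, TELEGRAM_HTML_MAX_LEN):
    await self._send_html(chunk)
```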
Fixes #3315
Replaces inline dedup logic with the existing helper to match the
style of _is_self_address and other reject branches, and to keep the
_processed_uids eviction logic in one place.
The previous fix hardcoded session_key_override as channel:chat_id,
which broke unified session mode, where pending queues use
"unified:default". Propagate the effective key from _set_tool_context
through SpawnTool into the origin dict so _announce_result routes to
the correct pending queue in both normal and unified session modes.
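A minimal sketch of the propagation; the dict shape and the spawn call
are assumptions:

```python
origin = {
    "channel": msg.channel,
    "chat_id": msg.chat_id,
    # The key the parent session's pending queue is registered under:
    # "unified:default" in unified mode, "channel:chat_id" otherwise.
    "session_key": self._effective_session_key(msg),
}
spawn_tool.run(task, origin=origin)  # _announce_result reads origin["session_key"]
```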
Since mid-turn message injection (PR #2985) was introduced, pending
queue routing has used the effective session key to match incoming
messages against active sessions. Subagent results, however, use
channel="system", which produces a session key of "system:feishu:ou_..."
instead of the main agent's "feishu:ou_...", causing the result to
bypass the pending queue and be dispatched as a competing independent
task.
Fix: set session_key_override to the original channel:chat_id so
_effective_session_key returns the correct key and the subagent result
gets routed into the main agent's pending queue.
Cron jobs now pass on_progress=_silent to process_direct, matching
the heartbeat pattern. Previously, tool hints and streaming deltas
were published to the user channel via bus during execution, but the
final response could be rejected by evaluate_response — leaving users
with confusing partial output and no conclusion.
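A minimal sketch of the call site, assuming an async progress callback:

```python
async def _silent(*_args, **_kwargs):
    # Swallow tool hints and streaming deltas; only the final, evaluated
    # response ever reaches the user channel.
    pass

result = await agent.process_direct(job.message, on_progress=_silent)
```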
Closes #3319
- Replace one-time DOM read with MutationObserver on <html> class
- Remove hardcoded #0a0a0a background, let oneDark/oneLight own it
- Add light-mode header/copy-button colors (bg-zinc-100 for light)
- Bump font size from 13px to 14px, line-height from 1.55 to 1.6
- Add subtle border to distinguish code block edges
- Add explicit CJK fonts (PingFang SC, Noto Sans SC, Microsoft YaHei) and
programmer fonts (JetBrains Mono, Fira Code, Cascadia Code) to Tailwind config
- Bump prose base size from prose-sm (14px) to prose-lg (18px) for sharper CJK rendering
- Unify user/assistant message font size at 18px with CJK-aware line-height (1.8)
- Replace pure black/white foreground with Apple-style warm grays (#1d1d1f / #f5f5f7)
- Override Tailwind Typography colors to use design tokens for consistency
- Add negative letter-spacing on headings for tighter, more polished look
SessionManager.save() previously used bare open("w"), which could
truncate the JSONL file if the process crashed mid-write. Now it
writes to a .tmp file and atomically replaces via os.replace(),
matching the pattern already used in qq.py.
_load() now attempts _repair() before returning None, recovering
valid lines from partially-written files. 12 new tests cover atomic
save correctness, temp-file cleanup on failure, and repair of
truncated/corrupt JSONL.
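A minimal sketch of the pattern, with hypothetical method and argument
names:

```python
import os

def save(self, path: str, payload: str) -> None:
    tmp = path + ".tmp"
    try:
        with open(tmp, "w", encoding="utf-8") as f:
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())   # ensure bytes hit disk before the swap
        os.replace(tmp, path)      # atomic on POSIX and Windows
    except BaseException:
        if os.path.exists(tmp):
            os.remove(tmp)         # never leave a stale temp file behind
        raise
```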
cowork-with:opencode(glm-5.1)
The old test `test_on_message_ignores_bot_messages` asserted the
previous (incorrect) contract that ALL bot-authored messages are
dropped. With #3217 only self-loops are dropped, so this test was
replaced with three more precise tests:
- test_on_message_ignores_self_messages: verifies self-loop guard
(author_id == _bot_user_id is dropped)
- test_on_message_accepts_messages_from_other_bots: new test for
the fix itself — other bots' messages flow through
- test_on_message_stops_typing_on_handle_exception: preserves the
typing cleanup assertion from the original test
Net result: one new behavior is tested, and every behavior covered by
the original test remains covered.
Co-authored with Claude Opus 4.7
Previously the Discord channel dropped every message from any bot
account via `if message.author.bot`, which prevented legitimate
multi-agent setups (one bot asking another for help, bot-to-bot
@mentions, etc.) from working.
Narrow the guard to only drop messages from this bot's own account
by comparing against self._bot_user_id (already populated in on_ready).
Self-loop protection is preserved — each bot instance still ignores
its own outbound messages.
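A minimal sketch of the narrowed guard; the downstream handler call is
hypothetical:

```python
async def on_message(self, message):
    # Old guard dropped every bot: `if message.author.bot: return`.
    # New guard drops only our own outbound messages (self-loop protection).
    if message.author.id == self._bot_user_id:
        return
    await self._handle(message)
```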
Co-authored with Claude Opus 4.7
PR #3125 added a top-level `oneOf` branch to `_CRON_PARAMETERS` to
advertise per-action required fields. OpenAI Codex/Responses rejects
`oneOf`/`anyOf`/`allOf`/`enum`/`not` at the root of function
parameters, so any agent that registers the cron tool now fails to
start with:
HTTP 400: Invalid schema for function 'cron': schema must have
type 'object' and not have 'oneOf'/'anyOf'/'allOf'/'enum'/'not'
at the top level.
Remove the top-level `oneOf`. The original intent of #3125 (stop LLMs
from looping on the #3113 contract mismatch) is preserved by:
- `validate_params` — runtime-enforces `message` for `action='add'`
and `job_id` for `action='remove'`
- field descriptions — each schema field already flags
"REQUIRED when action='...'" so the LLM sees the contract
The regression test is updated to lock the invariant in the other
direction: the top-level schema must not contain
`oneOf`/`anyOf`/`allOf`/`not`, and the REQUIRED hints must stay on
`message` and `job_id`.
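A minimal sketch of the runtime enforcement; the exact error type
raised is an assumption:

```python
def validate_params(params: dict) -> None:
    action = params.get("action")
    if action == "add" and not params.get("message"):
        raise ValueError("cron: 'message' is REQUIRED when action='add'")
    if action == "remove" and not params.get("job_id"):
        raise ValueError("cron: 'job_id' is REQUIRED when action='remove'")
```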
Verified:
- tests/cron/ 70 passed
- tests/agent/test_loop_cron_timezone.py + tests/providers/ 232 passed
Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com>
Replace fixed sleep-based waits with condition polling in cron tests and mock the restart delay in CLI restart tests to reduce suite runtime without changing behavior.
When the Responses API fails repeatedly (3 consecutive compatibility
errors), skip it and fall back directly to Chat Completions. Unlike a
permanent disable, the circuit re-probes after 5 minutes so recovery
is automatic when the API comes back. Success resets the counter.
Keyed per (model, reasoning_effort) so a failure with one model does
not affect others.
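A minimal sketch of the circuit, with hypothetical class and field
names:

```python
import time

class ResponsesCircuit:
    THRESHOLD = 3      # consecutive compatibility errors before skipping
    COOLDOWN = 300.0   # seconds before re-probing the Responses API

    def __init__(self):
        self._state = {}  # (model, reasoning_effort) -> (failures, last_ts)

    def should_skip(self, key) -> bool:
        failures, ts = self._state.get(key, (0, 0.0))
        # Tripped *and* inside the cooldown window; after 5 minutes one
        # probe request is allowed through, so recovery is automatic.
        return failures >= self.THRESHOLD and time.monotonic() - ts < self.COOLDOWN

    def record_failure(self, key) -> None:
        failures, _ = self._state.get(key, (0, 0.0))
        self._state[key] = (failures + 1, time.monotonic())

    def record_success(self, key) -> None:
        self._state.pop(key, None)  # success resets the counter
```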
- Track last_summary in maybe_consolidate_by_tokens() to persist the summary
- Change return to break in the consolidation loop to allow summary persistence
- Save summary to session.metadata['_last_summary'] for consistency with AutoCompact._archive()
- Ensures compressed content remains visible to the model via prepare_session() injection
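A minimal sketch of the loop change; the predicate and compactor names
are hypothetical:

```python
last_summary = None
while over_token_budget(session):
    summary = consolidate_once(session)
    if summary is None:
        break  # was `return`, which skipped the persistence below
    last_summary = summary
if last_summary is not None:
    # Same key AutoCompact._archive() uses, so prepare_session() re-injects it.
    session.metadata["_last_summary"] = last_summary
```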
Fixes #3274