- Pass the resolved self.context_window_tokens to Consolidator instead of
the raw parameter, which could be None and cause consolidation failures
- Calculate percentage against input budget (ctx - max_completion - 1024)
instead of raw context window, consistent with Consolidator/snip formulas
- Pass actual max_completion_tokens from provider to build_status_content
- Cap percentage display at 999 to prevent runaway values
- Add tests for budget-based percentage and cap behavior
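The budget-based percentage above can be sketched as a small helper. This is a hypothetical standalone version of the formula described in the bullets (the real code lives in the status/consolidation path); the names `context_percentage` and `reserve` are assumptions, while the `ctx - max_completion - 1024` budget and the 999 cap come from the commit.

```python
def context_percentage(
    used_tokens: int,
    context_window_tokens: int,
    max_completion_tokens: int,
    reserve: int = 1024,
) -> int:
    """Percent of the *input budget* consumed, capped at 999.

    Budget = context window - max completion tokens - 1024 reserve,
    matching the Consolidator/snip formulas, not the raw window.
    """
    budget = max(context_window_tokens - max_completion_tokens - reserve, 1)
    pct = round(used_tokens * 100 / budget)
    return min(pct, 999)  # cap runaway values in the display
```

Clamping the budget to at least 1 keeps the division safe even when a misconfigured model leaves no input budget at all.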
Default Microsoft Teams inbound auth validation to enabled, update the README to match, and prevent denied senders from persisting conversation refs before allowlist checks pass.
Made-with: Cursor
- Check both jwt and cryptography in MSTEAMS_AVAILABLE guard so
partial installs fail early with a clear message instead of at runtime
- Add aclose() to test FakeHttpClient so stop() won't crash
- Move MSTEAMS.md into README.md following the same details/summary
pattern used by every other channel
- Note in README that validateInboundAuth defaults to false
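The combined availability guard can be sketched like this. It is a minimal sketch, assuming the flag and helper names (`MSTEAMS_AVAILABLE`, `require_msteams_deps`) match the pattern the commit describes; the key point is that both PyJWT and cryptography must be importable before the channel claims to be available.

```python
import importlib.util

# True only when *every* optional Teams dependency is importable, so a
# partial install (e.g. PyJWT without cryptography) is caught up front.
MSTEAMS_AVAILABLE = all(
    importlib.util.find_spec(mod) is not None
    for mod in ("jwt", "cryptography")  # "jwt" is provided by PyJWT
)

def require_msteams_deps() -> None:
    """Fail early with one clear message instead of at runtime."""
    if not MSTEAMS_AVAILABLE:
        raise RuntimeError(
            "Microsoft Teams support needs both PyJWT and cryptography; "
            "install the optional extra, e.g. pip install 'nanobot[msteams]'"
        )
```

Using `find_spec` avoids actually importing the heavy crypto stack just to compute the flag.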
Warn when validate_inbound_auth is disabled (default) so operators are
aware the webhook accepts unverified requests. Restore pymupdf to the
dev optional-dependencies group — its removal in the original PR was
unrelated to the Teams channel feature.
PyJWT and cryptography are optional msteams deps; they should not be
bundled into the generic dev install. Tests now skip the entire file
when the deps are missing, following the dingtalk pattern.
- Use .get('cursor') instead of direct dict access to prevent KeyError
- Skip entries without cursor and log a warning
- Fix _next_cursor fallback to safely check for cursor existence
Fixes #3190
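The defensive cursor handling can be sketched as below. The helper names (`collect_cursors`, `next_cursor`) are hypothetical; the substance is the commit's fix: use `.get('cursor')`, warn and skip on missing keys, and make the fallback safe.

```python
import logging

logger = logging.getLogger(__name__)

def collect_cursors(entries):
    """Gather pagination cursors without raising KeyError on bad entries."""
    cursors = []
    for entry in entries:
        cursor = entry.get("cursor")  # was entry["cursor"]: KeyError-prone
        if cursor is None:
            logger.warning("entry missing 'cursor', skipping: %r", entry)
            continue
        cursors.append(cursor)
    return cursors

def next_cursor(entries):
    """Safe _next_cursor-style fallback: last usable cursor, or None."""
    cursors = collect_cursors(entries)
    return cursors[-1] if cursors else None
```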
channel_id is already assigned from self._channel_key(message.channel)
earlier in the same function. The second identical assignment on line 453
is dead code left over from a copy-paste.
Document why MiniMax thinking mode uses a separate Anthropic-compatible provider and list the matching base URLs. Add a small registry test so the new provider stays wired to the expected backend and API key.
Made-with: Cursor
- Convert skills summary from verbose XML (4-5 lines/skill) to compact
markdown list (1 line/skill) with inline path for read_file lookup
- Exclude always-loaded skills (e.g. memory) from the skills index to
avoid duplicating content already in the Active Skills section
- Skip injecting the Memory section when MEMORY.md still matches the
bundled template (i.e. Dream hasn't populated it yet)
- Inject `thinking={"type": "enabled|disabled"}` via extra_body for
Kimi thinking-capable models (kimi-k2.5, k2.6-code-preview).
- Add _is_kimi_thinking_model helper to handle both bare slugs and
OpenRouter-style prefixed names (e.g. moonshotai/kimi-k2.5).
- reasoning_effort="minimal" maps to disabled; any other value enables it.
- Add tests for enabled/disabled states and OpenRouter prefix handling.
Lock the new interaction-channel retry termination hints so both exhausted standard retries and persistent identical-error stops keep emitting the final progress message.
Made-with: Cursor
Lock the /status task counter to the actual stop scope by asserting it sums unfinished dispatch tasks with running subagents for the current session.
Made-with: Cursor
Lock the strict-provider sanitization path so assistant tool calls without function.arguments are normalized to {} instead of being forwarded as missing values.
Made-with: Cursor
Ensure assistant tool-call function.arguments is always emitted as valid JSON text so strict OpenAI-compatible backends (including Alibaba code models) do not reject requests. Add regressions for dict and malformed-string argument payloads in message sanitization.
Made-with: Cursor
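The sanitization rule above can be sketched as one normalizer. The function name is hypothetical, but the behavior follows the two commits: missing arguments become `{}`, dict payloads are serialized, and malformed strings fall back to `{}` so strict backends always receive valid JSON text.

```python
import json

def normalize_tool_call_arguments(args):
    """Always return function.arguments as a valid JSON string."""
    if args is None:
        return "{}"  # missing arguments: normalize instead of forwarding
    if isinstance(args, dict):
        return json.dumps(args)  # dict payload: serialize
    if isinstance(args, str):
        try:
            json.loads(args)  # validate; keep well-formed strings as-is
            return args
        except json.JSONDecodeError:
            return "{}"  # malformed string payload
    return "{}"
```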
Keep dict-backed channel configs compatible with both allow_from and allowFrom without losing empty-list semantics, and add focused regression coverage for the allow-list boundary.
Made-with: Cursor
getattr() on a dict never finds custom keys — it only searches
object attributes, not dict keys. When channel config is loaded as
a Pydantic extra field (which is a plain dict), getattr(config,
'allow_from', []) always returns the default [], causing all access
to be denied regardless of the allowFrom configuration.
Fix both is_allowed() and _validate_allow_from() to use isinstance
checks, falling back to dict.get() for dict configs while preserving
getattr() for object-style configs.
Bug 1: _drain_pending did not call extract_documents on follow-up
messages arriving mid-turn. Documents attached to queued messages were
silently dropped because _build_user_content only handles images.
Fix: call extract_documents before _build_user_content in _drain_pending.
Bug 2: extract_documents read the entire file into memory (up to 50 MB)
just to check 16 bytes of magic header for MIME detection.
Fix: read only the first 16 bytes via open()+read(16) instead of
Path.read_bytes().
Added regression tests for both bugs.
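The Bug 2 fix can be sketched as a header sniffer. The magic-byte table here is an illustrative subset (only PDF and the ZIP container used by DOCX/XLSX/PPTX), and `sniff_mime` is a hypothetical name; the point is reading 16 bytes with `open()+read(16)` instead of loading the whole file via `Path.read_bytes()`.

```python
from pathlib import Path

# Illustrative subset of magic-byte prefixes for MIME detection.
_MAGIC = {
    b"%PDF-": "application/pdf",
    b"PK\x03\x04": "application/zip",  # DOCX/XLSX/PPTX are ZIP containers
}

def sniff_mime(path):
    """Detect MIME type from the first 16 bytes only."""
    with open(path, "rb") as fh:
        head = fh.read(16)  # was Path.read_bytes(): up to 50 MB in memory
    for magic, mime in _MAGIC.items():
        if head.startswith(magic):
            return mime
    return None
```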
Made-with: Cursor
Move extract_documents() to nanobot.utils.document as a reusable helper
and call it once in AgentLoop._process_message, the single entry point
for all message processing (API + all channels).
This replaces the previous API-only _extract_documents() in server.py,
ensuring Telegram, Feishu, Slack, WeChat, and all other channels also
benefit from automatic document text extraction.
Adds a configurable max_file_size guard (default 50 MB) to skip
oversized files gracefully, preventing unbounded memory/CPU usage
from channel-downloaded attachments.
- server.py: removed _extract_documents and related imports
- document.py: added extract_documents() with size limit
- loop.py: calls extract_documents() at the top of _process_message
- Tests updated: 70 related tests pass
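The size guard described above can be sketched as a pre-check. `should_extract` is a hypothetical helper (the real check lives inside `extract_documents()`); the default 50 MB cap and the graceful-skip behavior come from the commit.

```python
from pathlib import Path

MAX_FILE_SIZE = 50 * 1024 * 1024  # default cap: 50 MB

def should_extract(path, max_file_size=MAX_FILE_SIZE):
    """Skip oversized or unreadable attachments gracefully, avoiding
    unbounded memory/CPU use on channel-downloaded files."""
    try:
        return Path(path).stat().st_size <= max_file_size
    except OSError:
        return False  # missing/unreadable file: skip, don't crash
```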
Made-with: Cursor
ContextBuilder._build_user_content now only handles images (its original
responsibility). Document text extraction (PDF, DOCX, XLSX, PPTX) is
performed by the new _extract_documents() helper in server.py, called
before process_direct(). This keeps the core context builder free of
format-specific dependencies and makes the API boundary the single place
where uploaded files are pre-processed.
Tests updated to reflect the new responsibility boundary.
Made-with: Cursor
Keep the API file upload branch current with main, enforce the documented JSON base64 per-file limit, and avoid leaking document extraction error strings into user prompts.
Made-with: Cursor
When Slack resolves a named target to another conversation, do not reuse the origin thread timestamp on the destination send, and keep reaction cleanup anchored to the source conversation.
Made-with: Cursor