Implement the real GitHub device flow and Copilot token exchange for the GitHub Copilot provider.
Also route github-copilot models through a dedicated backend and strip the provider prefix before API requests.
Add focused regression coverage for provider wiring and model normalization.
Generated with GitHub Copilot, GPT-5.4.
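A minimal sketch of both flows, assuming the `requests` library; the client ID, OAuth scope, the Copilot exchange endpoint, and the `github-copilot/` model prefix are assumptions, not confirmed by this log:

```python
import time
import requests

CLIENT_ID = "Iv1.xxxxxxxxxxxx"  # placeholder: the real OAuth app client ID is not in this log

def device_flow_login() -> str:
    # Step 1: ask GitHub for a device/user code pair.
    r = requests.post(
        "https://github.com/login/device/code",
        headers={"Accept": "application/json"},
        data={"client_id": CLIENT_ID, "scope": "read:user"},
    )
    r.raise_for_status()
    d = r.json()
    print(f"Open {d['verification_uri']} and enter code {d['user_code']}")
    # Step 2: poll until the user authorizes or the code expires.
    while True:
        time.sleep(d.get("interval", 5))
        t = requests.post(
            "https://github.com/login/oauth/access_token",
            headers={"Accept": "application/json"},
            data={
                "client_id": CLIENT_ID,
                "device_code": d["device_code"],
                "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
            },
        ).json()
        if "access_token" in t:
            return t["access_token"]
        if t.get("error") not in ("authorization_pending", "slow_down"):
            raise RuntimeError(t.get("error", "device flow failed"))

def exchange_copilot_token(github_token: str) -> str:
    # Exchange the GitHub token for a short-lived Copilot API token.
    # This endpoint is undocumented; the path is an assumption based on
    # what open-source Copilot clients are known to call.
    r = requests.get(
        "https://api.github.com/copilot_internal/v2/token",
        headers={"Authorization": f"token {github_token}"},
    )
    r.raise_for_status()
    return r.json()["token"]

def normalize_model(model: str) -> str:
    # "github-copilot/gpt-4o" -> "gpt-4o"; the exact prefix is an assumption.
    prefix = "github-copilot/"
    return model[len(prefix):] if model.startswith(prefix) else model
```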
Read serve host, port, and timeout from config by default, let CLI flags take priority, and bind the API to localhost by default for safer local usage.
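A sketch of the intended precedence, with hypothetical flag/config field names and a hypothetical default port:

```python
def resolve(cli_value, config_value, default):
    # CLI flag wins, then the config file, then the built-in default.
    if cli_value is not None:
        return cli_value
    if config_value is not None:
        return config_value
    return default

# args and cfg.serve are hypothetical names
host = resolve(args.host, cfg.serve.host, "127.0.0.1")  # localhost by default
port = resolve(args.port, cfg.serve.port, 8080)
timeout = resolve(args.timeout, cfg.serve.timeout, 60.0)
```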
Expose OpenAI-compatible chat completions and models endpoints through a single persistent API session, keeping the integration simple without adding multi-session isolation yet.
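A sketch of the endpoint surface, assuming FastAPI; `AgentSession` and its methods are hypothetical stand-ins for the real persistent session:

```python
from fastapi import FastAPI

class AgentSession:  # hypothetical stand-in for the real persistent session
    def model_ids(self) -> list[str]:
        return ["nanobot-default"]

    async def chat(self, model: str, messages: list[dict]) -> str:
        return "ok"

app = FastAPI()
session = AgentSession()  # one session shared by all requests, no isolation

@app.get("/v1/models")
async def list_models():
    # Standard OpenAI-compatible model listing shape.
    return {"object": "list",
            "data": [{"id": m, "object": "model"} for m in session.model_ids()]}

@app.post("/v1/chat/completions")
async def chat_completions(body: dict):
    reply = await session.chat(body["model"], body["messages"])
    return {"object": "chat.completion",
            "choices": [{"index": 0,
                         "message": {"role": "assistant", "content": reply},
                         "finish_reason": "stop"}]}
```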
Add agent-level timezone configuration with a UTC default, propagate it into runtime context and heartbeat prompts, and document valid IANA timezone usage in the README.
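A minimal validation sketch using the stdlib zoneinfo module; the helper name is illustrative:

```python
from zoneinfo import ZoneInfo, ZoneInfoNotFoundError

def resolve_timezone(name: str | None) -> ZoneInfo:
    # Default to UTC; accept only valid IANA names such as "America/New_York".
    try:
        return ZoneInfo(name or "UTC")
    except ZoneInfoNotFoundError as e:
        raise ValueError(f"invalid IANA timezone: {name!r}") from e
```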
Keep cron state workspace-scoped while only migrating legacy jobs into the default workspace. This preserves seamless upgrades for existing installs without polluting intentionally new workspaces.
Move channel-specific login logic from the CLI into each channel class via a
new `login(force=False)` method on BaseChannel. The `channels login <name>`
command now dynamically loads the channel and calls its login() method.
- WeixinChannel.login(): calls the existing _qr_login(); force=True clears the saved token
- WhatsAppChannel.login(): sets up bridge and spawns npm process for QR login
- CLI no longer contains duplicate login logic per channel
- Update CHANNEL_PLUGIN_GUIDE to document the login() hook
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
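A sketch of the hook's shape; `load_channel` and the default behavior are assumptions:

```python
class BaseChannel:
    """Base class for chat channels; subclasses override login()."""

    def login(self, force: bool = False) -> None:
        # Subclasses with an interactive login flow override this;
        # the default refuses rather than silently doing nothing.
        raise NotImplementedError(f"{type(self).__name__} has no login flow")

def channels_login(name: str, force: bool = False) -> None:
    # CLI side: dynamic dispatch instead of per-channel branches.
    channel = load_channel(name)  # hypothetical registry lookup
    channel.login(force=force)
```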
Add a new WeChat (微信) channel that connects to personal WeChat using
the ilinkai.weixin.qq.com HTTP long-poll API. Protocol reverse-engineered
from @tencent-weixin/openclaw-weixin v1.0.2.
Features:
- QR code login flow (`nanobot weixin login`)
- HTTP long-poll message receiving (getupdates)
- Text message sending with proper WeixinMessage format
- Media download with AES-128-ECB decryption (image/voice/file/video; sketched below)
- Voice-to-text from WeChat's built-in transcription, with Groq Whisper fallback
- Quoted message (ref_msg) support
- Session expiry detection and auto-pause
- Server-suggested poll timeout adaptation
- Context token caching for replies
- Auto-discovery via channel registry
No WebSocket, no Node.js bridge, no local WeChat client needed — pure
HTTP with a bot token obtained via QR code scan.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
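A minimal sketch of the media decryption step, assuming the `cryptography` package; the key layout and PKCS#7-style padding are assumptions:

```python
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def decrypt_media(ciphertext: bytes, key: bytes) -> bytes:
    # AES-128-ECB: a 16-byte key, no IV.
    assert len(key) == 16
    dec = Cipher(algorithms.AES(key), modes.ECB()).decryptor()
    padded = dec.update(ciphertext) + dec.finalize()
    # Strip PKCS#7-style padding if the last byte looks like a pad length.
    pad = padded[-1]
    return padded[:-pad] if 0 < pad <= 16 else padded
```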
Move ThinkingSpinner and StreamRenderer into a dedicated module to keep
commands.py focused on orchestration. Uses Rich Live with manual refresh
(auto_refresh=False) and ellipsis overflow for stable streaming output.
Made-with: Cursor
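A minimal sketch of the streaming pattern described above, using Rich's public API; everything beyond auto_refresh=False and ellipsis overflow is an assumption:

```python
from rich.console import Console
from rich.live import Live
from rich.text import Text

console = Console()

def stream_render(chunks):
    # Manual refresh avoids flicker from the auto-refresh timer; ellipsis
    # overflow keeps long partial lines from wrapping mid-stream.
    buf = Text(overflow="ellipsis", no_wrap=True)
    with Live(buf, console=console, auto_refresh=False) as live:
        for chunk in chunks:
            buf.append(chunk)
            live.refresh()
```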
Preserve the provider and agent-loop streaming primitives plus the CLI experiment scaffolding so this work can be resumed later without blocking urgent bug fixes on main.
Made-with: Cursor
Merge process_direct() and process_direct_outbound() into a single
interface returning OutboundMessage | None. This eliminates the
dual-path detection logic in CLI single-message mode that relied on
inspect.iscoroutinefunction to distinguish between the two APIs.
Extract status rendering into a pure function build_status_content()
in utils/helpers.py, decoupling it from AgentLoop internals.
Made-with: Cursor
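A sketch of the merged interface's shape; the dataclass fields are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class OutboundMessage:
    content: str
    metadata: dict = field(default_factory=dict)  # e.g. render hints for the CLI

class AgentLoop:
    async def process_direct(self, text: str) -> OutboundMessage | None:
        # Single entry point: None means nothing to deliver, so callers
        # no longer probe for two differently shaped methods.
        ...
```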
Only use process_direct_outbound when the agent loop actually exposes it as an async method, and otherwise fall back to the legacy process_direct path. This keeps the new CLI render-metadata flow without breaking existing test doubles or older direct-call implementations.
Made-with: Cursor
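A sketch of the guard this commit describes (the merge commit above later removes this dual-path probing); names are assumptions:

```python
import inspect

outbound_fn = getattr(agent_loop, "process_direct_outbound", None)
if outbound_fn is not None and inspect.iscoroutinefunction(outbound_fn):
    result = await outbound_fn(message)               # new metadata-aware path
else:
    result = await agent_loop.process_direct(message)  # legacy path / test doubles
```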
Keep status output responsive while estimating current context from session history, dropping low-value queue/subagent counters, and marking command-style replies for plain-text rendering in the CLI. Also route direct CLI calls through outbound metadata so help/status formatting stays explicit instead of relying on content heuristics.
Made-with: Cursor
- Mask sensitive fields (api_key/token/secret/password) in all display
surfaces, showing only the last 4 characters (see the sketch after this
list)
- Replace all emoji with pure ASCII labels for consistent cross-platform
terminal rendering
- Extract _print_summary_panel helper, eliminating 5x duplicate table
construction in _show_summary
- Replace 3 one-line wrapper functions with declarative _SETTINGS_SECTIONS
dispatch tables and _MENU_DISPATCH in run_onboard
- Extract _handle_model_field / _handle_context_window_field into a
_FIELD_HANDLERS registry, shrinking _configure_pydantic_model
- Return FieldTypeInfo NamedTuple from _get_field_type_info for clarity
- Replace global mutable _PROVIDER_INFO / _CHANNEL_INFO with @lru_cache
- Use vars() instead of dir() in _get_channel_info for reliable config
class discovery
- Defer litellm import in model_info.py so non-wizard CLI paths stay fast
- Clarify README Quick Start wording (Add -> Configure)
Cherry-pick from d6acf1a with manual merge resolution.
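A sketch of the masking rule from the first bullet; the helper name is illustrative:

```python
SENSITIVE = ("api_key", "token", "secret", "password")

def mask_secret(value: str, keep: int = 4) -> str:
    # Show only the last `keep` characters; short values are fully masked.
    if len(value) <= keep:
        return "*" * len(value)
    return "*" * (len(value) - keep) + value[-keep:]

# mask_secret("sk-abcdef123456") -> "***********3456"
```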
Keep onboarding edits in draft state until users choose Done or Save and
Exit, so backing out or discarding the wizard no longer persists partial
changes.
Co-Authored-By: Jason Zhao <144443939+JasonZhaoWW@users.noreply.github.com>
--workspace and --config now work as initial defaults in interactive mode:
- The wizard starts with these values pre-filled
- Users can view and modify them in the wizard
- Final saved config reflects user's choices
This makes the CLI args more useful for interactive sessions while
still allowing full customization through the wizard.
When upgrading, if jobs.json exists at the old global path and not yet
at the workspace path, move it automatically. Prevents silent loss of
existing cron jobs.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
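A sketch of the migration check, with hypothetical path arguments:

```python
import shutil
from pathlib import Path

def migrate_legacy_cron_jobs(old_cron_dir: Path, workspace_path: Path) -> None:
    # One-time upgrade: move the global jobs.json into the default
    # workspace, but never overwrite a workspace-local file.
    old = old_cron_dir / "jobs.json"
    new = workspace_path / "cron" / "jobs.json"
    if old.exists() and not new.exists():
        new.parent.mkdir(parents=True, exist_ok=True)
        shutil.move(str(old), str(new))
```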
Replace `get_cron_dir()` with `config.workspace_path / "cron"` so each
workspace keeps its own `jobs.json`. This lets users run multiple
nanobot instances with independent cron schedules without cross-talk.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace manual _active_spinner + _pause_spinner/_resume_spinner with
_ThinkingSpinner class that owns the spinner lifecycle via __enter__/
__exit__ and provides a pause() context manager for temporarily
stopping the spinner during progress output.
Benefits:
- Restores Pythonic context manager pattern matching original code
- Eliminates duplicated start/stop boilerplate between single-message
and interactive modes
- pause() context manager guarantees resume even if print raises
- _active flag prevents post-teardown resume from async callbacks
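A minimal sketch of the lifecycle described above, using Rich's console.status(); details beyond the commit text are assumptions:

```python
from contextlib import contextmanager
from rich.console import Console

class _ThinkingSpinner:
    """Owns the spinner lifecycle; use as a context manager."""

    def __init__(self, console: Console, text: str = "nanobot is thinking..."):
        self._console = console
        self._status = console.status(text)
        self._active = False

    def __enter__(self):
        self._status.start()
        self._active = True
        return self

    def __exit__(self, *exc):
        self._active = False
        self._status.stop()

    @contextmanager
    def pause(self):
        # Stop the spinner around progress output; the finally clause
        # guarantees resume even if printing raises.
        if self._active:
            self._status.stop()
        try:
            yield
        finally:
            if self._active:  # ignore resume requests after teardown
                self._status.start()
```

Usage: `with spinner.pause(): console.print(progress_line)`.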
The Rich console.status() spinner ('nanobot is thinking...') was not
cleared when tool call progress lines were printed during processing,
causing overlapping/garbled terminal output.
Replace the context-manager approach with an explicit start/stop lifecycle:
- _pause_spinner() stops the spinner before any progress line is printed
- _resume_spinner() restarts it after printing
- Applied to both single-message mode (_cli_progress) and interactive
mode (_consume_outbound)
Closes #1956
- Add nanobot/utils/evaluator.py: a lightweight LLM tool call that decides notify vs. silent after a background task runs (sketched below)
- Remove magic token injection from heartbeat and cron prompts
- Clean session history (no more <SILENT_OK> pollution)
- Add tests for evaluator and updated heartbeat three-phase flow
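A hedged sketch of what such an evaluator tool call could look like, assuming the OpenAI Python client; the tool schema, prompt, and names are all assumptions:

```python
import json
from openai import AsyncOpenAI

DECIDE_TOOL = {
    "type": "function",
    "function": {
        "name": "report_decision",
        "description": "Decide whether a finished background task warrants notifying the user.",
        "parameters": {
            "type": "object",
            "properties": {
                "notify": {"type": "boolean"},
                "reason": {"type": "string"},
            },
            "required": ["notify"],
        },
    },
}

async def should_notify(client: AsyncOpenAI, model: str, task_output: str) -> bool:
    # Force a single structured tool call instead of free-form text,
    # so no magic tokens ever enter the session history.
    resp = await client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": f"A background task finished with this output:\n{task_output}\n"
                       "Call report_decision to say whether the user should be notified.",
        }],
        tools=[DECIDE_TOOL],
        tool_choice={"type": "function", "function": {"name": "report_decision"}},
    )
    call = resp.choices[0].message.tool_calls[0]
    return bool(json.loads(call.function.arguments).get("notify", False))
```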
Appends a strict instruction to background task prompts (cron and heartbeat)
directing the agent to return a `<SILENT_OK>` token if there is nothing
material to report. Adds conditional logic to intercept this token and
suppress the outbound message to the user, preventing notification spam
from autonomous background checks.
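A sketch of the interception described here (the evaluator commit above later replaces this token approach):

```python
SILENT_OK = "<SILENT_OK>"

def filter_background_reply(text: str) -> str | None:
    # Suppress delivery when the agent signals "nothing material to report".
    if text.strip() == SILENT_OK:
        return None
    return text
```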
Replace platform-specific shell=True logic with shutil.which('npm') to
resolve the full path to the npm executable. This is cleaner because:
- No shell=True needed (safer, no shell injection risk)
- No platform-specific branching (sys.platform checks removed)
- Works identically on Windows, macOS, and Linux
- shutil.which() resolves npm.cmd on Windows automatically
The npm path check that already existed in _get_bridge_dir() is now
reused as the resolved path for subprocess calls. The same pattern is
applied to channels_login().
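A sketch of the resolved-path pattern; the helper name and call sites are illustrative:

```python
import shutil
import subprocess
from pathlib import Path

def npm_run(args: list[str], cwd: Path) -> None:
    # shutil.which resolves npm.cmd on Windows and npm elsewhere,
    # so no shell=True and no sys.platform branching is needed.
    npm = shutil.which("npm")
    if npm is None:
        raise RuntimeError("npm not found on PATH; install Node.js first")
    subprocess.run([npm, *args], cwd=cwd, check=True)

# e.g. npm_run(["install"], bridge_dir); npm_run(["run", "build"], bridge_dir)
```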
On Windows, npm is installed as npm.cmd (a batch script), not a direct
executable. When subprocess.run() is called with a list like
['npm', 'install'] without shell=True, Python's CreateProcess cannot
locate npm.cmd, resulting in:
FileNotFoundError: [WinError 2] The system cannot find the file specified
This fix adds a sys.platform == 'win32' check before each npm subprocess
call. On Windows, it uses shell=True with a string command so the shell
can resolve npm.cmd. On other platforms, the original list-based call is
preserved unchanged.
Affected locations:
- _get_bridge_dir(): npm install, npm run build
- channels_login(): npm start
No behavioral change on Linux/macOS.