Releases: openclaw/openclaw

openclaw 2026.4.23

24 Apr 15:19
v2026.4.23
a979721

2026.4.23

Changes

  • Providers/OpenAI: add image generation and reference-image editing through Codex OAuth, so openai/gpt-image-2 works without an OPENAI_API_KEY. Fixes #70703.
  • Providers/OpenRouter: add image generation and reference-image editing through image_generate, so OpenRouter image models work with OPENROUTER_API_KEY. Fixes #55066 via #67668. Thanks @notamicrodose.
  • Image generation: let agents request provider-supported quality and output format hints, and pass OpenAI-specific background, moderation, compression, and user hints through the image_generate tool. (#70503) Thanks @ottodeng.
  • Agents/subagents: add optional forked context for native sessions_spawn runs so agents can let a child inherit the requester transcript when needed, while keeping clean isolated sessions as the default; includes prompt guidance, context-engine hook metadata, docs, and QA coverage.
  • Agents/tools: add optional per-call timeoutMs support for image, video, music, and TTS generation tools so agents can extend provider request timeouts only when a specific generation needs it.
  • Memory/local embeddings: add configurable memorySearch.local.contextSize with a 4096 default so local embedding contexts can be tuned for constrained hosts without patching the memory host. (#70544) Thanks @aalekh-sarvam.
  • Dependencies/Pi: update bundled Pi packages to 0.70.0, use Pi's upstream gpt-5.5 catalog metadata for OpenAI and OpenAI Codex, and keep only local gpt-5.5-pro forward-compat handling.
  • Codex harness: add structured debug logging for embedded harness selection decisions so /status stays simple while gateway logs explain auto-selection and Pi fallback reasons. (#70760) Thanks @100yenadmin.
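
The new image_generate hints and per-call timeout in the list above can be pictured as a single tool call. This is a sketch only: the parameter names quality, background, moderation, and timeoutMs come from the release notes, but the payload envelope, model id, and every value shown are illustrative assumptions, not the documented schema.

```jsonc
// Hypothetical image_generate call -- the envelope and values are
// illustrative; only the hint names are taken from the release notes.
{
  "tool": "image_generate",
  "arguments": {
    "model": "openai/gpt-image-2",
    "prompt": "a watercolor lighthouse at dusk",
    "quality": "high",            // provider-supported quality hint
    "background": "transparent",  // OpenAI-specific hint
    "moderation": "low",          // OpenAI-specific hint
    "timeoutMs": 300000           // extends the provider timeout for this call only
  }
}
```

Per the notes, quality and output-format hints are forwarded only where the provider supports them, and timeoutMs widens the request timeout for that one generation rather than process-wide.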

Fixes

  • Codex harness: route native request_user_input prompts back to the originating chat, preserve queued follow-up answers, and honor newer app-server command approval amendment decisions.
  • Codex harness/context-engine: redact context-engine assembly failures before logging, so fallback warnings do not serialize raw error objects. (#70809) Thanks @jalehman.
  • WhatsApp/onboarding: keep first-run setup entry loading off the Baileys runtime dependency path, so packaged QuickStart installs can show WhatsApp setup before runtime deps are staged. Fixes #70932.
  • Block streaming: suppress final assembled text after partial block-delivery aborts when the already-sent text chunks exactly cover the final reply, preventing duplicate replies without dropping unrelated short messages. Fixes #70921.
  • Codex harness/Windows: resolve npm-installed codex.cmd shims through PATHEXT before starting the native app-server, so codex/* models work without a manual .exe shim. Fixes #70913.
  • Slack/groups: classify MPIM group DMs as group chat context and suppress verbose tool/plan progress on Slack non-DM surfaces, so internal "Working…" traces no longer leak into rooms. Fixes #70912.
  • Agents/replay: stop OpenAI/Codex transcript replay from synthesizing missing tool results while still preserving synthetic repair on Anthropic, Gemini, and Bedrock transport-owned sessions. (#61556) Thanks @VictorJeon and @vincentkoc.
  • Telegram/media replies: parse remote markdown image syntax into outbound media payloads on the final reply path, so Telegram group chats stop falling back to plain-text image URLs when the model or a tool emits ![...](...) instead of a MEDIA: token. (#66191) Thanks @apezam and @vincentkoc.
  • Agents/WebChat: surface non-retryable provider failures such as billing, auth, and rate-limit errors from the embedded runner instead of logging surface_error and leaving webchat with no rendered error. Fixes #70124. (#70848) Thanks @truffle-dev.
  • WhatsApp: unify outbound media normalization across direct sends and auto-replies. Thanks @mcaxtr.
  • Memory/CLI: declare the built-in local embedding provider in the memory-core manifest, so standalone openclaw memory status, index, and search can resolve local embeddings just like the gateway runtime. Fixes #70836. (#70873) Thanks @mattznojassist.
  • Gateway/WebChat: preserve image attachments for text-only primary models by offloading them as media refs instead of dropping them, so configured image tools can still inspect the original file. Fixes #68513, #44276, #51656, #70212.
  • Plugins/Google Meet: hang up delegated Twilio calls on leave, clean up Chrome realtime audio bridges when launch fails, and use a flat provider-safe tool schema.
  • Media understanding: honor explicit image-model configuration before native-vision skips, including agents.defaults.imageModel, tools.media.image.models, and provider image defaults such as MiniMax VL when the active chat model is text-only. Fixes #47614, #63722, #69171.
  • Codex/media understanding: support codex/* image models through bounded Codex app-server image turns, while keeping openai-codex/* on the OpenAI Codex OAuth route and validating app-server responses against generated protocol contracts. Fixes #70201.
  • Providers/OpenAI Codex: synthesize the openai-codex/gpt-5.5 OAuth model row when Codex catalog discovery omits it, so cron and subagent runs do not fail with an "Unknown model" error while the account is authenticated.
  • Models/Codex: preserve Codex provider metadata when adding models from chat or CLI commands, so manually added Codex models keep the right auth and routing behavior. (#70820) Thanks @Takhoffman.
  • Providers/OpenAI: route openai/gpt-image-2 through configured Codex OAuth directly when an openai-codex profile is active, instead of probing OPENAI_API_KEY first.
  • Providers/OpenAI: harden image generation auth routing and Codex OAuth response parsing so fallback only applies to public OpenAI API routes and bounded SSE results. Thanks @Takhoffman.
  • OpenAI/image generation: send reference-image edits as guarded multipart uploads instead of JSON data URLs, restoring complex multi-reference gpt-image-2 edits. Fixes #70642. Thanks @dashhuang.
  • Providers/OpenRouter: send image-understanding prompts as user text before image parts, restoring non-empty vision responses for OpenRouter multimodal models. Fixes #70410.
  • Providers/Google: honor the private-network SSRF opt-in for Gemini image generation requests, so trusted proxy setups that resolve Google API hosts to private addresses can use image_generate. Fixes #67216.
  • Agents/transport: stop embedded runs from lowering the process-wide undici stream timeouts, so slow Gemini image generation and other long-running provider requests no longer inherit the short per-run-attempt headers timeout. Fixes #70423. Thanks @giangthb.
  • Providers/OpenAI: honor the private-network SSRF opt-in for OpenAI-compatible image generation endpoints, so trusted LocalAI/LAN image_generate routes work without disabling SSRF checks globally. Fixes #62879. Thanks @seitzbg.
  • Providers/OpenAI: stop advertising the removed gpt-5.3-codex-spark Codex model through fallback catalogs, and suppress stale rows with a GPT-5.5 recovery hint.
  • Control UI/chat: persist assistant-generated images as authenticated managed media and accept paired-device tokens for assistant media fetches, so webchat history reloads keep showing generated images. (#70719, #70741) Thanks @Patrick-Erichsen.
  • Control UI/chat: queue Stop-button aborts across Gateway reconnects so a disconnected active run is canceled on reconnect instead of only clearing local UI state. (#70673) Thanks @chinar-amrutkar.
  • Memory/QMD: recreate stale managed QMD collections when startup repair finds the collection name already exists, so root memory narrows back to MEMORY.md instead of staying on broad workspace markdown indexing.
  • Agents/OpenAI: surface selected-model capacity failures from Pi, Codex, and auto-reply harness paths with a model-switch hint instead of the generic empty-response error. Thanks @vincentkoc.
  • Plugins/QR: replace legacy qrcode-terminal QR rendering with bounded qrcode-tui helpers for plugin login/setup flows. (#65969) Thanks @vincentkoc.
  • Voice-call/realtime: wait for OpenAI session configuration before greeting or forwarding buffered audio, and reject non-allowlisted Twilio callers before stream setup. (#43501) Thanks @forrestblount.
  • ACPX/Codex: stop materializing auth.json bridge files for Codex ACP, Codex app-server, and Codex CLI runs; Codex-owned runtimes now use their normal CODEX_HOME/~/.codex auth path directly.
  • Auto-reply/system events: route async exec-event completion replies through the persisted session delivery context, so long-running command results return to the originating channel instead of being dropped when live origin metadata is missing. (#70258) Thanks @wzfukui.
  • Gateway/sessions: extend the webchat session-mutation guard to sessions.compact and sessions.compaction.restore, so WEBCHAT_UI clients are rejected from compaction-side session mutations consistently with the existing patch/delete guards. (#70716) Thanks @drobison00.
  • QA channel/security: reject non-HTTP(S) inbound attachment URLs before media fetch, and log rejected schemes so suspicious or misconfigured payloads are visible during debugging. (#70708) Thanks @vincentkoc.
  • Plugins/install: link the host OpenClaw package into external plugins that declare openclaw as a peer dependency, so peer-only plugin SDK imports resolve after install without bundling a duplicate host package. (#70462) Thanks @anishesg.
  • Plugins/Windows: refresh the packaged plugin SDK alias in place during bundled runtime dependency repair, so gateway and CLI plugin startup no longer race on ENOTEMPTY/EPERM after same-guest npm updates.
  • Teams/security: require shared Bot Framework audience tokens to name the configured Teams app via verified appid or azp, blocking cross-bot token replay on the global audience. (#70724) Thanks @vincentkoc.
  • Plugins/startup: resolve bundled plugin Jiti loads relative to the target plugin module instead of the central loader, so Bun global installs no lon...

openclaw 2026.4.23-beta.6

24 Apr 14:34
v2026.4.23-beta.6
8558729

Pre-release


openclaw 2026.4.23-beta.5

24 Apr 09:50
v2026.4.23-beta.5
aee7c4e

Pre-release


openclaw 2026.4.23-beta.4

24 Apr 08:56
v2026.4.23-beta.4
5b1bd58

Pre-release

  • Memory/CLI: declare the built-in local embedding provider in the memory-core manifest, so standalone openclaw memory status, index, and search can resolve local embeddings just like the gateway runtime. Fixes #70836. (#70873) Thanks @mattznojassist.
  • Gateway/WebChat: preserve image attachments for text-only primary models by offloading them as media refs instead of dropping them, so configured image tools can still inspect the original file. Fixes #68513, #44276, #51656, #70212.
  • Plugins/Google Meet: hang up delegated Twilio calls on leave, clean up Chrome realtime audio bridges when launch fails, and use a flat provider-safe tool schema.
  • Media understanding: honor explicit image-model configuration before native-vision skips, including agents.defaults.imageModel, tools.media.image.models, and provider image defaults such as MiniMax VL when the active chat model is text-only. Fixes #47614, #63722, #69171.
  • Codex/media understanding: support codex/* image models through bounded Codex app-server image turns, while keeping openai-codex/* on the OpenAI Codex OAuth route and validating app-server responses against generated protocol contracts. Fixes #70201.
  • Providers/OpenAI Codex: synthesize the openai-codex/gpt-5.5 OAuth model row when Codex catalog discovery omits it, so cron and subagent runs do not fail with Unknown model while the account is authenticated.
  • Models/Codex: preserve Codex provider metadata when adding models from chat or CLI commands, so manually added Codex models keep the right auth and routing behavior. (#70820) Thanks @Takhoffman.
  • Providers/OpenAI: route openai/gpt-image-2 through configured Codex OAuth directly when an openai-codex profile is active, instead of probing OPENAI_API_KEY first.
  • Providers/OpenAI: harden image generation auth routing and Codex OAuth response parsing so fallback only applies to public OpenAI API routes and bounded SSE results. Thanks @Takhoffman.
  • OpenAI/image generation: send reference-image edits as guarded multipart uploads instead of JSON data URLs, restoring complex multi-reference gpt-image-2 edits. Fixes #70642. Thanks @dashhuang.
  • Providers/OpenRouter: send image-understanding prompts as user text before image parts, restoring non-empty vision responses for OpenRouter multimodal models. Fixes #70410.
  • Providers/Google: honor the private-network SSRF opt-in for Gemini image generation requests, so trusted proxy setups that resolve Google API hosts to private addresses can use image_generate. Fixes #67216.
  • Agents/transport: stop embedded runs from lowering the process-wide undici stream timeouts, so slow Gemini image generation and other long-running provider requests no longer inherit short run-attempt headers timeouts. Fixes #70423. Thanks @giangthb.
  • Providers/OpenAI: honor the private-network SSRF opt-in for OpenAI-compatible image generation endpoints, so trusted LocalAI/LAN image_generate routes work without disabling SSRF checks globally. Fixes #62879. Thanks @seitzbg.
  • Providers/OpenAI: stop advertising the removed gpt-5.3-codex-spark Codex model through fallback catalogs, and suppress stale rows with a GPT-5.5 recovery hint.
  • Control UI/chat: persist assistant-generated images as authenticated managed media and accept paired-device tokens for assistant media fetches, so webchat history reloads keep showing generated images. (#70719, #70741) Thanks @Patrick-Erichsen.
  • Control UI/chat: queue Stop-button aborts across Gateway reconnects so a disconnected active run is canceled on reconnect instead of only clearing local UI state. (#70673) Thanks @chinar-amrutkar.
  • Memory/QMD: recreate stale managed QMD collections when startup repair finds the collection name already exists, so root memory narrows back to MEMORY.md instead of staying on broad workspace markdown indexing.
  • Agents/OpenAI: surface selected-model capacity failures from PI, Codex, and auto-reply harness paths with a model-switch hint instead of the generic empty-response error. Thanks @vincentkoc.
  • Plugins/QR: replace legacy qrcode-terminal QR rendering with bounded qrcode-tui helpers for plugin login/setup flows. (#65969) Thanks @vincentkoc.
  • Voice-call/realtime: wait for OpenAI session configuration before greeting or forwarding buffered audio, and reject non-allowlisted Twilio callers before stream setup. (#43501) Thanks @forrestblount.
  • ACPX/Codex: stop materializing auth.json bridge files for Codex ACP, Codex app-server, and Codex CLI runs; Codex-owned runtimes now use their normal CODEX_HOME/~/.codex auth path directly.
  • Auto-reply/system events: route async exec-event completion replies through the persisted session delivery context, so long-running command results return to the originating channel instead of being dropped when live origin metadata is missing. (#70258) Thanks @wzfukui.
  • Gateway/sessions: extend the webchat session-mutation guard to sessions.compact and sessions.compaction.restore, so WEBCHAT_UI clients are rejected from compaction-side session mutations consistently with the existing patch/delete guards. (#70716) Thanks @drobison00.
  • QA channel/security: reject non-HTTP(S) inbound attachment URLs before media fetch, and log rejected schemes so suspicious or misconfigured payloads are visible during debugging. (#70708) Thanks @vincentkoc.
  • Plugins/install: link the host OpenClaw package into external plugins that declare openclaw as a peer dependency, so peer-only plugin SDK imports resolve after install without bundling a duplicate host package. (#70462) Thanks @anishesg.
  • Plugins/Windows: refresh the packaged plugin SDK alias in place during bundled runtime dependency repair, so gateway and CLI plugin startup no longer race on ENOTEMPTY/EPERM after same-guest npm updates.
  • Teams/security: require shared Bot Framework audience tokens to name the configured Teams app via verified appid or azp, blocking cross-bot token replay on the global audience. (#70724) Thanks @vincentkoc.
  • Plugins/startup: resolve bundled plugin Jiti loads relative to the target plugin module instead of the central loader, so Bun global installs no longer hang whil...

openclaw 2026.4.22

23 Apr 13:56
v2026.4.22
00bd2cf


2026.4.22

Changes

  • Providers/xAI: add image generation, text-to-speech, and speech-to-text support, including grok-imagine-image / grok-imagine-image-pro, reference-image edits, six live xAI voices, MP3/WAV/PCM/G.711 TTS formats, grok-stt audio transcription, and xAI realtime transcription for Voice Call streaming. (#68694) Thanks @KateWilkins.
  • Providers/STT: add Voice Call streaming transcription for Deepgram, ElevenLabs, and Mistral, alongside the existing OpenAI and xAI realtime STT paths; ElevenLabs also gains Scribe v2 batch audio transcription for inbound media.
  • TUI: add local embedded mode for running terminal chats without a Gateway while keeping plugin approval gates enforced. (#66767) Thanks @fuller-stack-dev.
  • Onboarding: auto-install missing provider and channel plugins during setup so first-run configuration can complete without manual plugin recovery.
  • OpenAI/Responses: use OpenAI's native web_search tool automatically for direct OpenAI Responses models when web search is enabled and no managed search provider is pinned; explicit providers such as Brave keep the managed web_search tool.
  • Models/commands: add /models add <provider> <modelId> so you can register a model from chat and use it without restarting the gateway; keep /models as a simple provider browser while adding clearer add guidance and copy-friendly command examples. (#70211) Thanks @Takhoffman.
  • WhatsApp: add configurable native reply quoting with replyToMode for WhatsApp conversations. Thanks @mcaxtr.
  • WhatsApp/groups+direct: forward per-group and per-direct systemPrompt config into inbound context GroupSystemPrompt so configured per-chat behavioral instructions are injected on every turn. Supports "*" wildcard fallback and account-scoped overrides under channels.whatsapp.accounts.<id>.{groups,direct}; account maps fully replace root maps (no deep merge), matching the existing requireMention pattern. Closes #7011. (#59553) Thanks @Bluetegu.
  • Agents/sessions: add mailbox-style sessions_list filters for label, agent, and search plus visibility-scoped derived title and last-message previews. (#69839) Thanks @dangoZhang.
  • Control UI/settings+chat: add a browser-local personal identity for the operator (name plus local-safe avatar), route user identity rendering through the shared chat/avatar path used by assistant and agent surfaces, and tighten Quick Settings, agent fallback chips, and narrow-screen chat layouts so personalization no longer wastes space or clips controls. (#70362) Thanks @BunsDev.
  • Gateway/diagnostics: enable payload-free stability recording by default and add a support-ready diagnostics export with sanitized logs, status, health, config, and stability snapshots for bug reports. (#70324) Thanks @gumadeiras.
  • Providers/Tencent: add the bundled Tencent Cloud provider plugin with TokenHub onboarding, docs, hy3-preview model catalog entries, and tiered Hy3 pricing metadata. (#68460) Thanks @JuniperSling.
  • Providers/Amazon Bedrock Mantle: add Claude Opus 4.7 through Mantle's Anthropic Messages route with provider-owned bearer-auth streaming, so the model is actually callable without treating AWS bearer tokens like Anthropic API keys. Thanks @wirjo.
  • Providers/GPT-5: move the GPT-5 prompt overlay into the shared provider runtime so compatible GPT-5 models receive the same behavior and heartbeat guidance through OpenAI, OpenRouter, OpenCode, Codex, and other GPT providers; add agents.defaults.promptOverlays.gpt5.personality as the global friendly-style toggle while keeping the OpenAI plugin setting as a fallback.
  • Providers/OpenAI Codex: remove the Codex CLI auth import path from onboarding and provider discovery so OpenClaw no longer copies ~/.codex OAuth material into agent auth stores; use browser login or device pairing instead. (#70390) Thanks @pashpashpash.
  • CLI/Claude: default claude-cli runs to warm stdio sessions, including custom configs that omit transport fields, and resume from the stored Claude session after Gateway restarts or idle exits. (#69679) Thanks @obviyus.
  • Pi/models: update the bundled pi packages to 0.68.1 and let the OpenCode Go catalog come from pi instead of plugin-maintained model aliases, adding the refreshed opencode-go/kimi-k2.6, Qwen, GLM, MiMo, and MiniMax entries.
  • Tokenjuice: add bundled native OpenClaw support for tokenjuice as an opt-in plugin that compacts noisy exec and bash tool results in Pi embedded runs. (#69946) Thanks @vincentkoc.
  • ACPX: add an explicit openClawToolsMcpBridge option that injects a core OpenClaw MCP server for selected built-in tools, starting with cron.
  • CLI/doctor plugins: lazy-load doctor plugin paths and prefer installed plugin dist/* runtime entries over source-adjacent JavaScript fallbacks, reducing the measured doctor --non-interactive runtime by about 74% while keeping cold doctor startup on built plugin artifacts. (#69840) Thanks @gumadeiras.
  • CLI/debugging: add an opt-in temporary debug timing helper for local CLI performance investigations, with readable stderr output, JSONL capture, and docs for removing probes before landing fixes. (#70469) Thanks @shakkernerd.
  • Docs/i18n: add Thai translation support for the docs site.
  • Providers/OpenAI-compatible: mark known local backends such as vLLM, SGLang, llama.cpp, LM Studio, LocalAI, Jan, TabbyAPI, and text-generation-webui as streaming-usage compatible, so their token accounting no longer degrades to unknown/stale totals. (#68711) Thanks @gaineyllc.
  • Providers/OpenAI-compatible: recover streamed token usage from llama.cpp-style timings.prompt_n / timings.predicted_n metadata and sanitize usage counts before accumulation, fixing unknown or stale totals when compatible servers do not emit an OpenAI-shaped usage object. (#41056) Thanks @xaeon2026.
  • Plugins/startup: prefer native Jiti loading for built bundled plugin dist modules on supported runtimes, cutting measured bundled plugin load time by 82-90% while keeping source TypeScript on the transform path. (#69925) Thanks @aauren.
  • Plugin SDK/STT: share realtime transcription WebSocket transport and multipart batch transcription form helpers across bundled STT providers, reducing provider plugin boilerplate while preserving proxy capture, reconnects, audio queueing, close flushing, upload filename normalization, and ready handshakes.
  • Plugin SDK/Pi embedded runs: add a bundled-plugin embedded extension factory seam so native plugins can extend Pi embedded runs with async runtime hooks such as tool_result handling instead of falling back to the older synchronous persistence path. (#69946) Thanks @vincentkoc.
  • Codex harness/hooks: route native Codex app-server turns through before_prompt_build and emit before_compaction / after_compaction for native compaction items so prompt and compaction hooks stop drifting from Pi. Thanks @vincentkoc.
  • Codex harness/plugins: add a bundled-plugin Codex app-server extension seam for async tool_result middleware, fire after_tool_call for Codex tool runs, and route mirrored Codex transcript writes through before_message_write so tool integrations stop diverging from Pi. Thanks @vincentkoc.
  • Codex harness/hooks: fire llm_input, llm_output, and agent_end for native Codex app-server turns so lifecycle hooks stop drifting from Pi. Thanks @vincentkoc.
  • QA/Telegram: record per-scenario reply RTT in the live Telegram QA report and summary, starting with the canary response. (#70550) Thanks @obviyus.
  • Status: add an explicit Runner: field to /status so sessions now report whether they are running on embedded Pi, a CLI-backed provider, or an ACP harness agent/backend such as codex (acp/acpx) or gemini (acp/acpx). (#70595)
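The WhatsApp per-group and per-direct systemPrompt entry above describes a layered map structure. A minimal sketch follows; the release note confirms the channels.whatsapp.accounts.&lt;id&gt;.{groups,direct} paths, the "*" wildcard fallback, and that account maps fully replace root maps (no deep merge). The account id "work", the chat-id keys, and the inner { "systemPrompt": ... } object shape are assumptions.

```jsonc
{
  "channels": {
    "whatsapp": {
      // Root-level map; "*" is the wildcard fallback for all groups.
      "groups": {
        "*": { "systemPrompt": "Keep replies short and on-topic in groups." }
      },
      "accounts": {
        // Account-scoped maps fully replace the root maps (no deep merge),
        // matching the existing requireMention pattern.
        "work": {
          "direct": {
            "*": { "systemPrompt": "Answer formally in direct chats." }
          }
        }
      }
    }
  }
}
```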

Fixes

  • Thinking defaults/status: raise the implicit default thinking level for reasoning-capable models from legacy off/low fallback behavior to a safe provider-supported medium equivalent when no explicit config default is set, preserve configured-model reasoning metadata when runtime catalog loading is empty, and make /status report the same resolved default as runtime.
  • Gateway/model pricing: fetch OpenRouter and LiteLLM pricing asynchronously at startup and extend catalog fetch timeouts to 30 seconds, reducing noisy timeout warnings during slow upstream responses.
  • Agents/sessions: keep daily reset and idle-maintenance bookkeeping from bumping session activity or pruning freshly active routes, so active conversations no longer look newer or disappear for maintenance-only updates.
  • Plugins/install: add newly installed plugin ids to an existing plugins.allow list before enabling them, so allowlisted configs load installed plugins after restart.
  • Status: show Fast in /status when fast mode is enabled, including config/default-derived fast mode, and omit it when disabled.
  • OpenAI/image generation: detect Azure OpenAI-style image endpoints, use Azure api-key auth plus deployment-scoped image URLs, honor AZURE_OPENAI_API_VERSION, and document the Azure setup path so image generation and edits work against Azure-hosted OpenAI resources. (#70570) Thanks @zhanggpcsu.
  • Telegram/forum topics: cache recovered forum metadata with bounded expiry so supergroup updates no longer need repeated getChat lookups before topic routing.
  • Onboarding/WeCom: show the official WeCom channel plugin with its native Enterprise WeChat display name and blurb in the external channel catalog.
  • Models/auth: merge provider-owned default-model additions from openclaw models auth login instead of replacing agents.defaults.models, so re-authenticating an OAuth provider such as OpenAI Codex no longer wipes other providers' aliases and per-model params. Migrations that must rename keys (Anthropic -> Claude CLI) opt in with replaceDefaultModels. Fixes #69414. (#70435) Thanks @neeravmakwana.
  • Media understanding/audio: prefer configured or key-backed STT providers before auto-detected loc...

openclaw 2026.4.21

22 Apr 04:18
v2026.4.21
f788c88


2026.4.21

Changes

  • OpenAI/images: default the bundled image-generation provider and live media smoke tests to gpt-image-2, and advertise the newer 2K/4K OpenAI size hints in image-generation docs and tool metadata.

Fixes

  • Plugins/doctor: repair bundled plugin runtime dependencies from doctor paths so packaged installs can recover missing channel/provider dependencies without broad core dependency installs.
  • Image generation: log failed provider/model candidates at warn level before automatic provider fallback, so OpenAI image failures are visible in the gateway log even when a later provider succeeds.
  • Auth/commands: require owner identity (an owner-candidate match or internal operator.admin) for owner-enforced commands instead of treating wildcard channel allowFrom or empty owner-candidate lists as sufficient, so non-owner senders can no longer reach owner-only commands through a permissive fallback when enforceOwnerForCommands=true and commands.ownerAllowFrom is unset. (#69774) Thanks @drobison00.
  • Slack: preserve thread aliases in runtime outbound sends so generic runtime sends stay in the intended Slack thread when the caller supplies threadTs. (#62947) Thanks @bek91.
  • Browser: reject invalid ax<N> accessibility refs in act paths immediately instead of waiting for the browser action timeout. (#69924) Thanks @Patrick-Erichsen.
  • npm/install: mirror the node-domexception alias into root package.json overrides, so npm installs stop surfacing the deprecated google-auth-library -> gaxios -> node-fetch -> fetch-blob -> node-domexception chain pulled through Pi/Google runtime deps. Thanks @vincentkoc.
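The owner-enforcement fix above is driven by two settings the note names: enforceOwnerForCommands=true and commands.ownerAllowFrom. A hedged sketch of the relevant fragment; whether enforceOwnerForCommands nests under commands is not stated in the note and is assumed here, and the sender id is purely hypothetical.

```jsonc
{
  "commands": {
    // With enforcement on, owner-only commands now require a real owner
    // identity (owner-candidate match or internal operator.admin);
    // a wildcard channel allowFrom is no longer treated as sufficient.
    "enforceOwnerForCommands": true,
    // Explicit owner candidates; the id format is a hypothetical example.
    "ownerAllowFrom": ["telegram:123456789"]
  }
}
```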

openclaw 2026.4.20

21 Apr 19:19
v2026.4.20
115f05d


2026.4.20

Changes

  • Onboard/wizard: restyle the setup security disclaimer with a single yellow warning banner, section headings and bulleted checklists, and un-dim the note body so key guidance is easy to scan; add a loading spinner during the initial model catalog load so the wizard no longer goes blank while it runs; add an "API key" placeholder to provider API key prompts. (#69553) Thanks @Patrick-Erichsen.
  • Agents/prompts: strengthen the default system prompt and OpenAI GPT-5 overlay with clearer completion bias, live-state checks, weak-result recovery, and verification-before-final guidance.
  • Models/costs: support tiered model pricing from cached catalogs and configured models, and include bundled Moonshot Kimi K2.6/K2.5 cost estimates for token-usage reports. (#67605) Thanks @sliverp.
  • Sessions/maintenance: enforce the built-in entry cap and age prune by default, and prune oversized stores at load time so accumulated cron/executor session backlogs cannot OOM the gateway before the write path runs. (#69404) Thanks @bobrenze-bot.
  • Plugins/tests: reuse plugin loader alias and Jiti config resolution across repeated same-context loads, reducing import-heavy test overhead. (#69316) Thanks @amknight.
  • Cron: split runtime execution state into jobs-state.json so jobs.json stays stable for git-tracked job definitions. (#63105) Thanks @Feelw00.
  • Agents/compaction: send opt-in start and completion notices during context compaction. (#67830) Thanks @feniix.
  • Moonshot/Kimi: default bundled Moonshot setup, web search, and media-understanding surfaces to kimi-k2.6 while keeping kimi-k2.5 available for compatibility. (#69477) Thanks @scoootscooob.
  • Moonshot/Kimi: allow thinking.keep = "all" on moonshot/kimi-k2.6, and strip it for other Moonshot models or requests where pinned tool_choice disables thinking. (#68816) Thanks @aniaan.
  • BlueBubbles/groups: forward per-group systemPrompt config into inbound context GroupSystemPrompt so configured group-specific behavioral instructions (for example threaded-reply and tapback conventions) are injected on every turn. Supports "*" wildcard fallback matching the existing requireMention pattern. Closes #60665. (#69198) Thanks @omarshahine.
  • Plugins/tasks: add a detached runtime registration contract so plugin executors can own detached task lifecycle and cancellation without reaching into core task internals. (#68915) Thanks @mbelinky.
  • Terminal/logging: optimize sanitizeForLog() by replacing the iterative control-character stripping loop with a single regex pass while preserving the existing ANSI-first sanitization behavior. (#67205) Thanks @bulutmuf.
  • QA/CI: make openclaw qa suite and openclaw qa telegram fail by default when scenarios fail, add --allow-failures for artifact-only runs, and tighten live-lane defaults for CI automation. (#69122) Thanks @joshavant.
  • Mattermost: stream thinking, tool activity, and partial reply text into a single draft preview post that finalizes in place when safe. (#47838) Thanks @ninjaa.
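The Moonshot thinking.keep entry above confirms the key path thinking.keep and the value "all" for moonshot/kimi-k2.6. Where that setting lives in config is not stated; this sketch places it under per-model params (agents.defaults.models is mentioned elsewhere in these notes as holding per-model params), and the "params" key name is an assumption.

```jsonc
{
  "agents": {
    "defaults": {
      "models": {
        "moonshot/kimi-k2.6": {
          // Hypothetical placement: keep all thinking content for this
          // model; other Moonshot models strip this setting, as do
          // requests where pinned tool_choice disables thinking.
          "params": {
            "thinking": { "keep": "all" }
          }
        }
      }
    }
  }
}
```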

Fixes

  • Exec/YOLO: stop rejecting gateway-host exec in security=full plus ask=off mode via the Python/Node script preflight hardening path, so promptless YOLO exec once again runs direct interpreter stdin and heredoc forms such as node <<'NODE' ... NODE.
  • OpenAI Codex: normalize legacy openai-completions transport overrides on default OpenAI/Codex and GitHub Copilot-compatible hosts back to the native Codex Responses transport while leaving custom proxies untouched. (#45304, #42194) Thanks @dyss1992 and @DeadlySilent.
  • Anthropic/plugins: scope Anthropic api: "anthropic-messages" defaulting to Anthropic-owned providers, so openai-codex and other providers without an explicit api no longer get rewritten to the wrong transport. Fixes #64534.
  • QQBot/security: add an SSRF guard to direct-upload URL paths in uploadC2CMedia and uploadGroupMedia (AI-assisted). (#69595) Thanks @pgondhi987.
  • Gateway: enforce the allowRequestSessionKey gate on template-rendered mapping sessionKeys. (#69381) Thanks @pgondhi987.
  • Browser/Chrome MCP: surface DevToolsActivePort attach failures as browser-connectivity errors instead of a generic "waiting for tabs" timeout, and point signed-out fallbacks toward the managed openclaw profile.
  • Webchat/images: treat inline image attachments as media for empty-turn gating while still ignoring metadata-only blank turns. (#69474) Thanks @Jaswir.
  • Discord/think: only show adaptive in /think autocomplete for provider/model pairs that actually support provider-managed adaptive thinking, so GPT/OpenAI models no longer advertise an Anthropic-only option.
  • Thinking: only expose max for models that explicitly support provider max reasoning, and remap stored max settings to the largest supported thinking mode when users switch to another model.
  • Gateway/usage: bound the cost usage cache with FIFO eviction so date/range lookups cannot grow unbounded. (#68842) Thanks @Feelw00.
  • OpenAI/Responses: resolve /think levels against each GPT model's supported reasoning efforts so /think off no longer becomes high reasoning or sends unsupported reasoning.effort: "none" payloads.
  • Lobster/TaskFlow: allow managed approval resumes to use approvalId without a resume token, and persist that id in approval wait state. (#69559) Thanks @kirkluokun.
  • Plugins/startup: install bundled runtime dependencies into each plugin's own runtime directory, reuse source-checkout repair caches after rebuilds, and log only packages that were actually installed so repeated Gateway starts stay quiet once deps are present.
  • Plugins/startup: ignore pnpm's npm_execpath when repairing bundled plugin runtime dependencies and skip workspace-only package specs so npm-only install flags or local workspace links do not break packaged plugin startup.
  • MCP: block interpreter-startup env keys such as NODE_OPTIONS for stdio servers while preserving ordinary credential and proxy env vars. (#69540) Thanks @drobison00.
  • Agents/shell: ignore non-interactive placeholder shells like /usr/bin/false and /sbin/nologin, falling back to sh so service-user exec runs no longer exit immediately. (#69308) Thanks @sk7n4k3d.
  • Setup/TUI: relaunch the setup hatch TUI in a fresh process while preserving the configured gateway target and auth source, so onboarding recovers terminal state cleanly without exposing gateway secrets on command-line args. (#69524) Thanks @shakkernerd.
  • Codex: avoid re-exposing the image-generation tool on native vision turns with inbound images, and keep bare image-model overrides on the configured image provider. (#65061) Thanks @zhulijin1991.
  • Sessions/reset: clear auto-sourced model, provider, and auth-profile overrides on /new and /reset while preserving explicit user selections, so channel sessions stop staying pinned to runtime fallback choices. (#69419) Thanks @sk7n4k3d.
  • Sessions/costs: snapshot estimatedCostUsd like token counters so repeated persist paths no longer compound the same run cost by up to dozens of times. (#69403) Thanks @MrMiaigi.
  • OpenAI Codex: route ChatGPT/Codex OAuth Responses requests through the /backend-api/codex endpoint so openai-codex/gpt-5.4 no longer hits the removed /backend-api/responses alias. (#69336) Thanks @mzogithub.
  • OpenAI/Responses: omit disabled reasoning payloads when /think off is active, so GPT reasoning models no longer receive unsupported reasoning.effort: "none" requests. (#61982) Thanks @a-tokyo.
  • Gateway/pairing: treat loopback shared-secret node-host, TUI, and gateway clients as local for pairing decisions, so trusted local tools no longer reconnect as remote clients and fail with pairing required. (#69431) Thanks @SARAMALI15792.
  • Active Memory: degrade gracefully when memory recall fails during prompt building, logging a warning and letting the reply continue without memory context instead of failing the whole turn. (#69485) Thanks @Magicray1217.
  • Ollama: add provider-policy defaults for baseUrl and models so implicit local discovery can run before config validation rejects a minimal Ollama provider config. (#69370) Thanks @PratikRai0101.
  • Agents/model selection: clear transient auto-failover session overrides before each turn so recovered primary models are retried immediately without emitting user-override reset warnings. (#69365) Thanks @hitesh-github99.
  • Auto-reply: apply silent NO_REPLY policy per conversation type, so direct chats get a helpful rewritten reply while groups and internal deliveries can remain quiet. (#68644) Thanks @Takhoffman.
  • Telegram/status reactions: honor messages.removeAckAfterReply when lifecycle status reactions are enabled, clearing or restoring the reaction after success/error using the configured hold timings. (#68067) Thanks @poiskgit.
  • Web search/plugins: resolve plugin-scoped SecretRef API keys for bundled Exa, Firecrawl, Gemini, Kimi, Perplexity, Tavily, and Grok web-search providers when they are selected through the shared web-search config. (#68424) Thanks @afurm.
  • Telegram/polling: raise the default polling watchdog threshold from 90s to 120s and add configurable channels.telegram.pollingStallThresholdMs (also per-account) so long-running Telegram work gets more room before polling is treated as stalled. (#57737) Thanks @Vitalcheffe.
  • Telegram/polling: bound the persisted-offset confirmation getUpdates probe with a client-side timeout so a zombie socket cannot hang polling recovery before the runner watchdog starts. (#50368) Thanks @boticlaw.
  • Agents/Pi runner: retry silent stopReason=error turns with no output when no side effects ran, so non-frontier providers that briefly return empty error turns get another chance instead of ending the session early. (#68310) Thanks @Chased1k.
  • Plugins/memory: preserve the active memory capability when read-only snapshot plugin loads run, so status and provider discovery paths no longer wipe memory public artif...
Read more

openclaw 2026.4.20-beta.2

21 Apr 17:44
v2026.4.20-beta.2
4e25479


Pre-release

2026.4.20

Changes

  • Onboard/wizard: restyle the setup security disclaimer with a single yellow warning banner, section headings and bulleted checklists, and un-dim the note body so key guidance is easy to scan; add a loading spinner during the initial model catalog load so the wizard no longer goes blank while it runs; add an "API key" placeholder to provider API key prompts. (#69553) Thanks @Patrick-Erichsen.
  • Agents/prompts: strengthen the default system prompt and OpenAI GPT-5 overlay with clearer completion bias, live-state checks, weak-result recovery, and verification-before-final guidance.
  • Models/costs: support tiered model pricing from cached catalogs and configured models, and include bundled Moonshot Kimi K2.6/K2.5 cost estimates for token-usage reports. (#67605) Thanks @sliverp.
  • Sessions/maintenance: enforce the built-in entry cap and age prune by default, and prune oversized stores at load time so accumulated cron/executor session backlogs cannot OOM the gateway before the write path runs. (#69404) Thanks @bobrenze-bot.
  • Plugins/tests: reuse plugin loader alias and Jiti config resolution across repeated same-context loads, reducing import-heavy test overhead. (#69316) Thanks @amknight.
  • Cron: split runtime execution state into jobs-state.json so jobs.json stays stable for git-tracked job definitions. (#63105) Thanks @Feelw00.
  • Agents/compaction: send opt-in start and completion notices during context compaction. (#67830) Thanks @feniix.
  • Moonshot/Kimi: default bundled Moonshot setup, web search, and media-understanding surfaces to kimi-k2.6 while keeping kimi-k2.5 available for compatibility. (#69477) Thanks @scoootscooob.
  • Moonshot/Kimi: allow thinking.keep = "all" on moonshot/kimi-k2.6, and strip it for other Moonshot models or requests where pinned tool_choice disables thinking. (#68816) Thanks @aniaan.
  • BlueBubbles/groups: forward per-group systemPrompt config into inbound context GroupSystemPrompt so configured group-specific behavioral instructions (for example threaded-reply and tapback conventions) are injected on every turn. Supports "*" wildcard fallback matching the existing requireMention pattern. Closes #60665. (#69198) Thanks @omarshahine.
  • Plugins/tasks: add a detached runtime registration contract so plugin executors can own detached task lifecycle and cancellation without reaching into core task internals. (#68915) Thanks @mbelinky.
  • Terminal/logging: optimize sanitizeForLog() by replacing the iterative control-character stripping loop with a single regex pass while preserving the existing ANSI-first sanitization behavior. (#67205) Thanks @bulutmuf.
  • QA/CI: make openclaw qa suite and openclaw qa telegram fail by default when scenarios fail, add --allow-failures for artifact-only runs, and tighten live-lane defaults for CI automation. (#69122) Thanks @joshavant.
  • Mattermost: stream thinking, tool activity, and partial reply text into a single draft preview post that finalizes in place when safe. (#47838) Thanks @ninjaa.
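
The sanitizeForLog() change above swaps a character-by-character stripping loop for a single regex pass while keeping ANSI removal first. A minimal sketch of that shape, with illustrative regexes rather than openclaw's actual patterns:

```typescript
// Sketch only: the function name matches the changelog entry, but the exact
// character classes and ANSI pattern are assumptions, not openclaw's code.

// Strip ANSI CSI escape sequences first (e.g. "\x1b[31m").
const ANSI_RE = /\x1b\[[0-9;]*[A-Za-z]/g;

// Then drop remaining C0/C1 control characters in one pass,
// preserving \n (0x0a) and \t (0x09) for readable log lines.
const CONTROL_RE = /[\x00-\x08\x0b\x0c\x0e-\x1f\x7f-\x9f]/g;

function sanitizeForLog(input: string): string {
  return input.replace(ANSI_RE, "").replace(CONTROL_RE, "");
}
```

Running the ANSI pass first removes escape sequences whole, so the control-character pass never turns a half-stripped sequence into stray printable bytes.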

Fixes

  • Exec/YOLO: stop the Python/Node script preflight hardening path from rejecting gateway-host exec when security=full and ask=off, so promptless YOLO exec once again accepts direct interpreter stdin and heredoc forms such as node <<'NODE' ... NODE.
  • OpenAI Codex: normalize legacy openai-completions transport overrides on default OpenAI/Codex and GitHub Copilot-compatible hosts back to the native Codex Responses transport while leaving custom proxies untouched. (#45304, #42194) Thanks @dyss1992 and @DeadlySilent.
  • Anthropic/plugins: scope the default api: "anthropic-messages" assignment to Anthropic-owned providers, so openai-codex and other providers without an explicit api no longer get rewritten to the wrong transport. Fixes #64534.
  • QQBot: add an SSRF guard to direct-upload URL paths in uploadC2CMedia and uploadGroupMedia. [AI-assisted] (#69595) Thanks @pgondhi987.
  • Gateway: enforce the allowRequestSessionKey gate on template-rendered mapping sessionKeys. (#69381) Thanks @pgondhi987.
  • Browser/Chrome MCP: surface DevToolsActivePort attach failures as browser-connectivity errors instead of a generic "waiting for tabs" timeout, and point signed-out fallbacks toward the managed openclaw profile.
  • Webchat/images: treat inline image attachments as media for empty-turn gating while still ignoring metadata-only blank turns. (#69474) Thanks @Jaswir.
  • Discord/think: only show adaptive in /think autocomplete for provider/model pairs that actually support provider-managed adaptive thinking, so GPT/OpenAI models no longer advertise an Anthropic-only option.
  • Thinking: only expose max for models that explicitly support provider max reasoning, and remap stored max settings to the largest supported thinking mode when users switch to another model.
  • Gateway/usage: bound the cost usage cache with FIFO eviction so date/range lookups cannot grow unbounded. (#68842) Thanks @Feelw00.
  • OpenAI/Responses: resolve /think levels against each GPT model's supported reasoning efforts so /think off no longer becomes high reasoning or sends unsupported reasoning.effort: "none" payloads.
  • Lobster/TaskFlow: allow managed approval resumes to use approvalId without a resume token, and persist that id in approval wait state. (#69559) Thanks @kirkluokun.
  • Plugins/startup: install bundled runtime dependencies into each plugin's own runtime directory, reuse source-checkout repair caches after rebuilds, and log only packages that were actually installed so repeated Gateway starts stay quiet once deps are present.
  • Plugins/startup: ignore pnpm's npm_execpath when repairing bundled plugin runtime dependencies and skip workspace-only package specs so npm-only install flags or local workspace links do not break packaged plugin startup.
  • MCP: block interpreter-startup env keys such as NODE_OPTIONS for stdio servers while preserving ordinary credential and proxy env vars. (#69540) Thanks @drobison00.
  • Agents/shell: ignore non-interactive placeholder shells like /usr/bin/false and /sbin/nologin, falling back to sh so service-user exec runs no longer exit immediately. (#69308) Thanks @sk7n4k3d.
  • Setup/TUI: relaunch the setup hatch TUI in a fresh process while preserving the configured gateway target and auth source, so onboarding recovers terminal state cleanly without exposing gateway secrets on command-line args. (#69524) Thanks @shakkernerd.
  • Codex: avoid re-exposing the image-generation tool on native vision turns with inbound images, and keep bare image-model overrides on the configured image provider. (#65061) Thanks @zhulijin1991.
  • Sessions/reset: clear auto-sourced model, provider, and auth-profile overrides on /new and /reset while preserving explicit user selections, so channel sessions stop staying pinned to runtime fallback choices. (#69419) Thanks @sk7n4k3d.
  • Sessions/costs: snapshot estimatedCostUsd like token counters so repeated persist paths no longer compound the same run cost by up to dozens of times. (#69403) Thanks @MrMiaigi.
  • OpenAI Codex: route ChatGPT/Codex OAuth Responses requests through the /backend-api/codex endpoint so openai-codex/gpt-5.4 no longer hits the removed /backend-api/responses alias. (#69336) Thanks @mzogithub.
  • OpenAI/Responses: omit disabled reasoning payloads when /think off is active, so GPT reasoning models no longer receive unsupported reasoning.effort: "none" requests. (#61982) Thanks @a-tokyo.
  • Gateway/pairing: treat loopback shared-secret node-host, TUI, and gateway clients as local for pairing decisions, so trusted local tools no longer reconnect as remote clients and fail with "pairing required". (#69431) Thanks @SARAMALI15792.
  • Active Memory: degrade gracefully when memory recall fails during prompt building, logging a warning and letting the reply continue without memory context instead of failing the whole turn. (#69485) Thanks @Magicray1217.
  • Ollama: add provider-policy defaults for baseUrl and models so implicit local discovery can run before config validation rejects a minimal Ollama provider config. (#69370) Thanks @PratikRai0101.
  • Agents/model selection: clear transient auto-failover session overrides before each turn so recovered primary models are retried immediately without emitting user-override reset warnings. (#69365) Thanks @hitesh-github99.
  • Auto-reply: apply silent NO_REPLY policy per conversation type, so direct chats get a helpful rewritten reply while groups and internal deliveries can remain quiet. (#68644) Thanks @Takhoffman.
  • Telegram/status reactions: honor messages.removeAckAfterReply when lifecycle status reactions are enabled, clearing or restoring the reaction after success/error using the configured hold timings. (#68067) Thanks @poiskgit.
  • Web search/plugins: resolve plugin-scoped SecretRef API keys for bundled Exa, Firecrawl, Gemini, Kimi, Perplexity, Tavily, and Grok web-search providers when they are selected through the shared web-search config. (#68424) Thanks @afurm.
  • Telegram/polling: raise the default polling watchdog threshold from 90s to 120s and add configurable channels.telegram.pollingStallThresholdMs (also per-account) so long-running Telegram work gets more room before polling is treated as stalled. (#57737) Thanks @Vitalcheffe.
  • Telegram/polling: bound the persisted-offset confirmation getUpdates probe with a client-side timeout so a zombie socket cannot hang polling recovery before the runner watchdog starts. (#50368) Thanks @boticlaw.
  • Agents/Pi runner: retry silent stopReason=error turns with no output when no side effects ran, so non-frontier providers that briefly return empty error turns get another chance instead of ending the session early. (#68310) Thanks @Chased1k.
  • Plugins/memory: preserve the active memory capability when read-only snapshot plugin loads run, so status and provider discovery paths no longer wipe memory public artif...
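
The bounded cost-usage cache fix above describes FIFO eviction; in JavaScript that pattern falls out of Map's insertion-order iteration, where the first key from keys() is the oldest entry. A hypothetical sketch (FifoCache is an assumed name, not openclaw's class):

```typescript
// FIFO-bounded cache sketch: when inserting a new key into a full cache,
// evict the oldest entry (the first key in Map iteration order).
class FifoCache<K, V> {
  private map = new Map<K, V>();

  constructor(private maxEntries: number) {}

  get(key: K): V | undefined {
    return this.map.get(key);
  }

  set(key: K, value: V): void {
    // Updating an existing key does not evict; only net-new keys can
    // push the cache past its cap.
    if (!this.map.has(key) && this.map.size >= this.maxEntries) {
      const oldest = this.map.keys().next().value as K;
      this.map.delete(oldest);
    }
    this.map.set(key, value);
  }

  get size(): number {
    return this.map.size;
  }
}
```

FIFO is a deliberately simple policy here: date/range usage lookups tend to arrive in rough chronological order, so evicting the oldest insertion is usually the least useful entry anyway.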

openclaw 2026.4.20-beta.1

21 Apr 13:34
v2026.4.20-beta.1
ddd05f4


Pre-release

Changes

  • Onboard/wizard: restyle the setup security disclaimer with a single yellow warning banner, section headings and bulleted checklists, and un-dim the note body so key guidance is easy to scan; add a loading spinner during the initial model catalog load so the wizard no longer goes blank while it runs; add an "API key" placeholder to provider API key prompts. (#69553) Thanks @Patrick-Erichsen.
  • Agents/prompts: strengthen the default system prompt and OpenAI GPT-5 overlay with clearer completion bias, live-state checks, weak-result recovery, and verification-before-final guidance.
  • Models/costs: support tiered model pricing from cached catalogs and configured models, and include bundled Moonshot Kimi K2.6/K2.5 cost estimates for token-usage reports. (#67605) Thanks @sliverp.
  • Sessions/maintenance: enforce the built-in entry cap and age prune by default, and prune oversized stores at load time so accumulated cron/executor session backlogs cannot OOM the gateway before the write path runs. (#69404) Thanks @bobrenze-bot.
  • Plugins/tests: reuse plugin loader alias and Jiti config resolution across repeated same-context loads, reducing import-heavy test overhead. (#69316) Thanks @amknight.
  • Cron: split runtime execution state into jobs-state.json so jobs.json stays stable for git-tracked job definitions. (#63105) Thanks @Feelw00.
  • Agents/compaction: send opt-in start and completion notices during context compaction. (#67830) Thanks @feniix.
  • Moonshot/Kimi: default bundled Moonshot setup, web search, and media-understanding surfaces to kimi-k2.6 while keeping kimi-k2.5 available for compatibility. (#69477) Thanks @scoootscooob.
  • Moonshot/Kimi: allow thinking.keep = "all" on moonshot/kimi-k2.6, and strip it for other Moonshot models or requests where pinned tool_choice disables thinking. (#68816) Thanks @aniaan.
  • BlueBubbles/groups: forward per-group systemPrompt config into inbound context GroupSystemPrompt so configured group-specific behavioral instructions (for example threaded-reply and tapback conventions) are injected on every turn. Supports "*" wildcard fallback matching the existing requireMention pattern. Closes #60665. (#69198) Thanks @omarshahine.
  • Plugins/tasks: add a detached runtime registration contract so plugin executors can own detached task lifecycle and cancellation without reaching into core task internals. (#68915) Thanks @mbelinky.
  • Terminal/logging: optimize sanitizeForLog() by replacing the iterative control-character stripping loop with a single regex pass while preserving the existing ANSI-first sanitization behavior. (#67205) Thanks @bulutmuf.
  • QA/CI: make openclaw qa suite and openclaw qa telegram fail by default when scenarios fail, add --allow-failures for artifact-only runs, and tighten live-lane defaults for CI automation. (#69122) Thanks @joshavant.
  • Mattermost: stream thinking, tool activity, and partial reply text into a single draft preview post that finalizes in place when safe. (#47838) Thanks @ninjaa.
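
The BlueBubbles entry above describes an exact-match per-group systemPrompt lookup with a "*" wildcard fallback, mirroring the existing requireMention pattern. A sketch under an assumed config shape (GroupConfig and resolveGroupSystemPrompt are illustrative names, not the plugin's actual schema):

```typescript
// Assumed shape: one config record per group id, plus an optional "*"
// wildcard entry used when no exact group id matches.
type GroupConfig = { systemPrompt?: string };

function resolveGroupSystemPrompt(
  groups: Record<string, GroupConfig>,
  groupId: string,
): string | undefined {
  // Exact group id wins; otherwise fall back to the wildcard entry.
  return groups[groupId]?.systemPrompt ?? groups["*"]?.systemPrompt;
}
```

The resolved prompt would then be injected into the inbound context as GroupSystemPrompt on every turn, per the entry above.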

Fixes

  • Exec/YOLO: stop the Python/Node script preflight hardening path from rejecting gateway-host exec when security=full and ask=off, so promptless YOLO exec once again accepts direct interpreter stdin and heredoc forms such as node <<'NODE' ... NODE.
  • OpenAI Codex: normalize legacy openai-completions transport overrides on default OpenAI/Codex and GitHub Copilot-compatible hosts back to the native Codex Responses transport while leaving custom proxies untouched. (#45304, #42194) Thanks @dyss1992 and @DeadlySilent.
  • Anthropic/plugins: scope the default api: "anthropic-messages" assignment to Anthropic-owned providers, so openai-codex and other providers without an explicit api no longer get rewritten to the wrong transport. Fixes #64534.
  • QQBot: add an SSRF guard to direct-upload URL paths in uploadC2CMedia and uploadGroupMedia. [AI-assisted] (#69595) Thanks @pgondhi987.
  • Gateway: enforce the allowRequestSessionKey gate on template-rendered mapping sessionKeys. (#69381) Thanks @pgondhi987.
  • Browser/Chrome MCP: surface DevToolsActivePort attach failures as browser-connectivity errors instead of a generic "waiting for tabs" timeout, and point signed-out fallbacks toward the managed openclaw profile.
  • Webchat/images: treat inline image attachments as media for empty-turn gating while still ignoring metadata-only blank turns. (#69474) Thanks @Jaswir.
  • Discord/think: only show adaptive in /think autocomplete for provider/model pairs that actually support provider-managed adaptive thinking, so GPT/OpenAI models no longer advertise an Anthropic-only option.
  • Thinking: only expose max for models that explicitly support provider max reasoning, and remap stored max settings to the largest supported thinking mode when users switch to another model.
  • Gateway/usage: bound the cost usage cache with FIFO eviction so date/range lookups cannot grow unbounded. (#68842) Thanks @Feelw00.
  • OpenAI/Responses: resolve /think levels against each GPT model's supported reasoning efforts so /think off no longer becomes high reasoning or sends unsupported reasoning.effort: "none" payloads.
  • Lobster/TaskFlow: allow managed approval resumes to use approvalId without a resume token, and persist that id in approval wait state. (#69559) Thanks @kirkluokun.
  • Plugins/startup: install bundled runtime dependencies into each plugin's own runtime directory, reuse source-checkout repair caches after rebuilds, and log only packages that were actually installed so repeated Gateway starts stay quiet once deps are present.
  • Plugins/startup: ignore pnpm's npm_execpath when repairing bundled plugin runtime dependencies and skip workspace-only package specs so npm-only install flags or local workspace links do not break packaged plugin startup.
  • MCP: block interpreter-startup env keys such as NODE_OPTIONS for stdio servers while preserving ordinary credential and proxy env vars. (#69540) Thanks @drobison00.
  • Agents/shell: ignore non-interactive placeholder shells like /usr/bin/false and /sbin/nologin, falling back to sh so service-user exec runs no longer exit immediately. (#69308) Thanks @sk7n4k3d.
  • Setup/TUI: relaunch the setup hatch TUI in a fresh process while preserving the configured gateway target and auth source, so onboarding recovers terminal state cleanly without exposing gateway secrets on command-line args. (#69524) Thanks @shakkernerd.
  • Codex: avoid re-exposing the image-generation tool on native vision turns with inbound images, and keep bare image-model overrides on the configured image provider. (#65061) Thanks @zhulijin1991.
  • Sessions/reset: clear auto-sourced model, provider, and auth-profile overrides on /new and /reset while preserving explicit user selections, so channel sessions stop staying pinned to runtime fallback choices. (#69419) Thanks @sk7n4k3d.
  • Sessions/costs: snapshot estimatedCostUsd like token counters so repeated persist paths no longer compound the same run cost by up to dozens of times. (#69403) Thanks @MrMiaigi.
  • OpenAI Codex: route ChatGPT/Codex OAuth Responses requests through the /backend-api/codex endpoint so openai-codex/gpt-5.4 no longer hits the removed /backend-api/responses alias. (#69336) Thanks @mzogithub.
  • OpenAI/Responses: omit disabled reasoning payloads when /think off is active, so GPT reasoning models no longer receive unsupported reasoning.effort: "none" requests. (#61982) Thanks @a-tokyo.
  • Gateway/pairing: treat loopback shared-secret node-host, TUI, and gateway clients as local for pairing decisions, so trusted local tools no longer reconnect as remote clients and fail with "pairing required". (#69431) Thanks @SARAMALI15792.
  • Active Memory: degrade gracefully when memory recall fails during prompt building, logging a warning and letting the reply continue without memory context instead of failing the whole turn. (#69485) Thanks @Magicray1217.
  • Ollama: add provider-policy defaults for baseUrl and models so implicit local discovery can run before config validation rejects a minimal Ollama provider config. (#69370) Thanks @PratikRai0101.
  • Agents/model selection: clear transient auto-failover session overrides before each turn so recovered primary models are retried immediately without emitting user-override reset warnings. (#69365) Thanks @hitesh-github99.
  • Auto-reply: apply silent NO_REPLY policy per conversation type, so direct chats get a helpful rewritten reply while groups and internal deliveries can remain quiet. (#68644) Thanks @Takhoffman.
  • Telegram/status reactions: honor messages.removeAckAfterReply when lifecycle status reactions are enabled, clearing or restoring the reaction after success/error using the configured hold timings. (#68067) Thanks @poiskgit.
  • Web search/plugins: resolve plugin-scoped SecretRef API keys for bundled Exa, Firecrawl, Gemini, Kimi, Perplexity, Tavily, and Grok web-search providers when they are selected through the shared web-search config. (#68424) Thanks @afurm.
  • Telegram/polling: raise the default polling watchdog threshold from 90s to 120s and add configurable channels.telegram.pollingStallThresholdMs (also per-account) so long-running Telegram work gets more room before polling is treated as stalled. (#57737) Thanks @Vitalcheffe.
  • Telegram/polling: bound the persisted-offset confirmation getUpdates probe with a client-side timeout so a zombie socket cannot hang polling recovery before the runner watchdog starts. (#50368) Thanks @boticlaw.
  • Agents/Pi runner: retry silent stopReason=error turns with no output when no side effects ran, so non-frontier providers that briefly return empty error turns get another chance instead of ending the session early. (#68310) Thanks @Chased1k.
  • Plugins/memory: preserve the active memory capability when read-only snapshot plugin loads run, so status and provider discovery paths no longer wipe memory public artifacts. (#69219...
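
The MCP fix above blocks interpreter-startup env keys for stdio servers while letting ordinary credential and proxy variables through. A sketch of that filtering, with an illustrative blocklist (only NODE_OPTIONS is named in the entry above; the other keys and the function name are assumptions):

```typescript
// Variables like NODE_OPTIONS can inject code into any Node process the
// stdio MCP server spawns, so they are dropped before launch. The extra
// keys below are hypothetical examples of the same risk class.
const BLOCKED_ENV_KEYS = new Set([
  "NODE_OPTIONS",
  "NODE_REPL_EXTERNAL_MODULE", // assumed addition
  "PYTHONSTARTUP", // assumed addition
]);

function filterStdioEnv(env: Record<string, string>): Record<string, string> {
  const out: Record<string, string> = {};
  for (const [key, value] of Object.entries(env)) {
    if (!BLOCKED_ENV_KEYS.has(key)) out[key] = value;
  }
  return out;
}
```

An allowlist-of-dangerous-keys (rather than a blocklist of safe ones) keeps normal operation intact: API keys, HTTP_PROXY, and similar variables still reach the server unchanged.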

openclaw 2026.4.19-beta.2

19 Apr 05:55
v2026.4.19-beta.2
dc3df91


Pre-release

2026.4.19-beta.2

Fixes

  • Agents/openai-completions: always send stream_options.include_usage on streaming requests, so local and custom OpenAI-compatible backends report real context usage instead of showing 0%. (#68746) Thanks @kagura-agent.
  • Agents/nested lanes: scope nested agent work per target session so a long-running nested run on one session no longer head-of-line blocks unrelated sessions across the gateway. (#67785) Thanks @stainlu.
  • Agents/status: preserve carried-forward session token totals for providers that omit usage metadata, so /status and openclaw sessions keep showing the last known context usage instead of dropping back to unknown/0%. (#67695) Thanks @stainlu.
  • Install/update: keep legacy update verification compatible with the QA Lab runtime shim, so updating older global installs to beta no longer fails after npm installs the package successfully.
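
The include_usage fix above refers to the OpenAI-compatible streaming field stream_options.include_usage, which makes the final stream chunk carry a usage object so context-percentage displays have real numbers. A sketch of the request body such a turn would send (buildStreamingRequest, the model name, and the messages are placeholders):

```typescript
// Build an OpenAI-compatible streaming chat request with usage reporting
// enabled. Without stream_options.include_usage, many local and custom
// backends omit token usage from the stream entirely.
function buildStreamingRequest(
  model: string,
  messages: Array<{ role: string; content: string }>,
) {
  return {
    model,
    messages,
    stream: true,
    stream_options: { include_usage: true },
  };
}
```

With this flag set, a compliant backend emits one extra chunk before the stream ends whose usage field holds prompt and completion token counts, which is what /status reads instead of showing 0%.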