Fundamentals

Context engines

A context engine controls how OpenClaw builds model context for each run: which messages to include, how to summarize older history, and how to manage context across subagent boundaries.

OpenClaw ships with a built-in legacy engine and uses it by default - most users never need to change this. Install and select a plugin engine only when you want different assembly, compaction, or cross-session recall behavior.

Quick start

  • Check which engine is active

    openclaw doctor
    # or inspect config directly:
    cat ~/.openclaw/openclaw.json | jq '.plugins.slots.contextEngine'
    
  • Install a plugin engine

    Context engine plugins are installed like any other OpenClaw plugin.

    From npm

    openclaw plugins install @martian-engineering/lossless-claw
    

    From a local path

    openclaw plugins install -l ./my-context-engine
    
  • Enable and select the engine

    // openclaw.json
    {
      plugins: {
        slots: {
          contextEngine: "lossless-claw", // must match the plugin's registered engine id
        },
        entries: {
          "lossless-claw": {
            enabled: true,
            // Plugin-specific config goes here (see the plugin's docs)
          },
        },
      },
    }
    

    Restart the gateway after installing and configuring.

  • Switch back to legacy (optional)

    Set contextEngine to "legacy" (or remove the key entirely - "legacy" is the default).

  • How it works

    Every time OpenClaw runs a model prompt, the context engine participates at four lifecycle points:

    1. Ingest

    Called when a new message is added to the session. The engine can store or index the message in its own data store.

    2. Assemble

    Called before each model run. The engine returns an ordered set of messages (and an optional systemPromptAddition) that fit within the token budget.

    3. Compact

    Called when the context window is full, or when the user runs /compact. The engine summarizes older history to free space.

    4. After turn

    Called after a run completes. The engine can persist state, trigger background compaction, or update indexes.
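    As a rough sketch, these four lifecycle points map onto an engine object like the one below. The parameter shapes and the 4-characters-per-token heuristic are simplified assumptions for illustration, not OpenClaw's actual internals; see the interface reference later on this page for the real members.

```typescript
// Minimal in-memory engine sketch (assumed, simplified shapes).
type Message = { role: string; content: string };

class SketchEngine {
  private store = new Map<string, Message[]>();

  // 1. Ingest: index each new message as it arrives.
  async ingest(p: { sessionId: string; message: Message }) {
    const list = this.store.get(p.sessionId) ?? [];
    list.push(p.message);
    this.store.set(p.sessionId, list);
    return { ingested: true };
  }

  // 2. Assemble: return the most recent messages that fit the budget,
  // estimating ~4 characters per token (a common rough heuristic).
  async assemble(p: { sessionId: string; tokenBudget: number }) {
    const all = this.store.get(p.sessionId) ?? [];
    const picked: Message[] = [];
    let tokens = 0;
    for (let i = all.length - 1; i >= 0; i--) {
      const cost = Math.ceil(all[i].content.length / 4);
      if (tokens + cost > p.tokenBudget) break;
      picked.unshift(all[i]);
      tokens += cost;
    }
    return { messages: picked, estimatedTokens: tokens };
  }

  // 3. Compact: collapse everything but the last message into a stub summary.
  async compact(p: { sessionId: string }) {
    const all = this.store.get(p.sessionId) ?? [];
    if (all.length > 1) {
      const summary: Message = {
        role: "system",
        content: `[summary of ${all.length - 1} messages]`,
      };
      this.store.set(p.sessionId, [summary, all[all.length - 1]]);
    }
    return { ok: true, compacted: all.length > 1 };
  }

  // 4. After turn: persist state, kick off background work, etc. (no-op here).
  async afterTurn(_p: { sessionId: string }) {}
}
```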

    For the bundled non-ACP Codex harness, OpenClaw applies the same lifecycle by projecting assembled context into Codex developer instructions and the current turn prompt. Codex still owns its native thread history and native compactor.

    Subagent lifecycle (optional)

    OpenClaw calls two optional subagent lifecycle hooks:

    prepareSubagentSpawn (method)

    Prepare shared context state before a child run starts. The hook receives parent/child session keys, contextMode (isolated or fork), available transcript ids/files, and optional TTL. If it returns a rollback handle, OpenClaw calls it when spawn fails after preparation succeeds.

    onSubagentEnded (method)

    Clean up when a subagent session completes or is swept.
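    The rollback contract can be sketched as follows. spawnWithRollback, spawnChild, and the handle shape are illustrative assumptions, not the real runtime API; the point is only that a handle returned from preparation is invoked when the spawn itself fails.

```typescript
// Illustrative rollback contract for prepareSubagentSpawn (assumed shapes).
type RollbackHandle = () => Promise<void> | void;

async function spawnWithRollback(
  prepare: () => Promise<RollbackHandle | undefined>,
  spawnChild: () => Promise<void>,
): Promise<boolean> {
  const rollback = await prepare(); // engine prepares shared context state
  try {
    await spawnChild();
    return true;
  } catch {
    // Spawn failed after preparation succeeded: undo the prepared state.
    await rollback?.();
    return false;
  }
}
```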

    System prompt addition

    The assemble method can return a systemPromptAddition string. OpenClaw prepends this to the system prompt for the run. This lets engines inject dynamic recall guidance, retrieval instructions, or context-aware hints without requiring static workspace files.

    The legacy engine

    The built-in legacy engine preserves OpenClaw's original behavior:

    • Ingest: no-op (the session manager handles message persistence directly).
    • Assemble: pass-through (the existing sanitize → validate → limit pipeline in the runtime handles context assembly).
    • Compact: delegates to the built-in summarization compaction, which creates a single summary of older messages and keeps recent messages intact.
    • After turn: no-op.

    The legacy engine does not register tools or provide a systemPromptAddition.

    When no plugins.slots.contextEngine is set (or it's set to "legacy"), this engine is used automatically.
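    The legacy compaction shape (one summary of older messages, recent messages kept intact) can be sketched as a pure function. summarize here is a stand-in for the real model-backed summarizer, and keepRecent is an illustrative parameter, not a documented config key.

```typescript
type Message = { role: string; content: string };

// Legacy-style compaction sketch: replace everything except the last
// `keepRecent` messages with a single summary message.
function compactLegacyStyle(
  messages: Message[],
  keepRecent: number,
  summarize: (older: Message[]) => string,
): Message[] {
  if (messages.length <= keepRecent) return messages;
  const older = messages.slice(0, messages.length - keepRecent);
  const recent = messages.slice(messages.length - keepRecent);
  return [{ role: "system", content: summarize(older) }, ...recent];
}
```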

    Plugin engines

    A plugin can register a context engine using the plugin API:

    
    export default function register(api) {
      api.registerContextEngine("my-engine", (ctx) => ({
        info: {
          id: "my-engine",
          name: "My Context Engine",
          ownsCompaction: true,
        },
    
        async ingest({ sessionId, message, isHeartbeat }) {
          // Store the message in your data store
          return { ingested: true };
        },
    
        async assemble({ sessionId, messages, tokenBudget, availableTools, citationsMode }) {
          // Return messages that fit the budget
          return {
            messages: buildContext(messages, tokenBudget),
            estimatedTokens: countTokens(messages),
            systemPromptAddition: buildMemorySystemPromptAddition({
              availableTools: availableTools ?? new Set(),
              citationsMode,
            }),
          };
        },
    
        async compact({ sessionId, force }) {
          // Summarize older context
          return { ok: true, compacted: true };
        },
      }));
    }
    

    The factory ctx includes optional config, agentDir, and workspaceDir values so plugins can initialize per-agent or per-workspace state before the first lifecycle hook runs.

    Then enable it in config:

    {
      plugins: {
        slots: {
          contextEngine: "my-engine",
        },
        entries: {
          "my-engine": {
            enabled: true,
          },
        },
      },
    }
    

    The ContextEngine interface

    Required members:

    Member Kind Purpose
    info Property Engine id, name, version, and whether it owns compaction
    ingest(params) Method Store a single message
    assemble(params) Method Build context for a model run (returns AssembleResult)
    compact(params) Method Summarize/reduce context

    assemble returns an AssembleResult with:

    messages (Message[], required)

    The ordered messages to send to the model.

    estimatedTokens (number, required)

    The engine's estimate of total tokens in the assembled context. OpenClaw uses this for compaction threshold decisions and diagnostic reporting.

    systemPromptAddition (string)

    Prepended to the system prompt.

    promptAuthority ("assembled" | "preassembly_may_overflow")

    Controls which token estimate the runner uses for preemptive overflow prechecks. Defaults to "assembled", which means only the assembled prompt's estimate is checked - appropriate for engines that return a windowed, self-contained context. Set to "preassembly_may_overflow" only when your assembled view can hide overflow risk in the underlying transcript; the runner then takes the maximum of the assembled estimate and the pre-assembly (unwindowed) session-history estimate when deciding whether to preemptively compact. Either way, the messages you return are still what the model sees - promptAuthority only affects the precheck.
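    The precheck described above reduces to a small decision. The function below is an illustrative model of that rule, not OpenClaw's actual code:

```typescript
type PromptAuthority = "assembled" | "preassembly_may_overflow";

// Which token estimate does the preemptive overflow precheck use?
// - "assembled" (default): trust the engine's windowed estimate alone.
// - "preassembly_may_overflow": guard against hidden overflow by taking
//   the max of the assembled and pre-assembly (unwindowed) estimates.
function precheckEstimate(
  assembledTokens: number,
  preassemblyTokens: number,
  authority: PromptAuthority = "assembled",
): number {
  return authority === "preassembly_may_overflow"
    ? Math.max(assembledTokens, preassemblyTokens)
    : assembledTokens;
}
```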

    compact returns a CompactResult. When compaction rotates the active transcript, result.sessionId and result.sessionFile identify the successor session that the next retry or turn must use.
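    A caller can handle transcript rotation by preferring the successor identifiers when present. The CompactResult shape below is a simplified assumption for illustration:

```typescript
type CompactResult = {
  ok: boolean;
  compacted: boolean;
  sessionId?: string;   // present only when compaction rotated the transcript
  sessionFile?: string;
};

// The next retry/turn must target the successor session if one was created.
function nextSession(current: string, result: CompactResult): string {
  return result.sessionId ?? current;
}
```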

    Optional members:

    Member Kind Purpose
    bootstrap(params) Method Initialize engine state for a session. Called once when the engine first sees a session (e.g., import history).
    ingestBatch(params) Method Ingest a completed turn as a batch. Called after a run completes, with all messages from that turn at once.
    afterTurn(params) Method Post-run lifecycle work (persist state, trigger background compaction).
    prepareSubagentSpawn(params) Method Set up shared state for a child session before it starts.
    onSubagentEnded(params) Method Clean up after a subagent ends.
    dispose() Method Release resources. Called during gateway shutdown or plugin reload - not per-session.

    ownsCompaction

    ownsCompaction controls whether Pi's built-in in-attempt auto-compaction stays enabled for the run:

    ownsCompaction: true

    The engine owns compaction behavior. OpenClaw disables Pi's built-in auto-compaction for that run, and the engine's compact() implementation is responsible for /compact, overflow recovery compaction, and any proactive compaction it wants to do in afterTurn(). OpenClaw may still run the pre-prompt overflow safeguard; when it predicts the full transcript will overflow, the recovery path calls the active engine's compact() before submitting another prompt.

    ownsCompaction: false or unset

    Pi's built-in auto-compaction may still run during prompt execution, but the active engine's compact() method is still called for /compact and overflow recovery.

    That means there are two valid plugin patterns:

    Owning mode

    Implement your own compaction algorithm and set ownsCompaction: true.

    Delegating mode

    Set ownsCompaction: false and have compact() call delegateCompactionToRuntime(...) from openclaw/plugin-sdk/core to use OpenClaw's built-in compaction behavior.

    A no-op compact() is unsafe for an active non-owning engine because it disables the normal /compact and overflow-recovery compaction path for that engine slot.
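    The responsibility split can be summarized as a small predicate. This models the rules described above (it is not actual OpenClaw internals):

```typescript
type Trigger = "auto_in_attempt" | "slash_compact" | "overflow_recovery";

// Who compacts for a given trigger, per the ownsCompaction rules:
// - Pi's built-in auto-compaction only runs in-attempt for non-owning engines.
// - The active engine's compact() always handles /compact and overflow recovery.
function compactionHandler(ownsCompaction: boolean, trigger: Trigger): "engine" | "runtime" {
  if (trigger === "auto_in_attempt") return ownsCompaction ? "engine" : "runtime";
  return "engine";
}
```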

    Configuration reference

    {
      plugins: {
        slots: {
          // Select the active context engine. Default: "legacy".
          // Set to a plugin id to use a plugin engine.
          contextEngine: "legacy",
        },
      },
    }
    

    Relationship to compaction and memory

    Compaction

    Compaction is one of the context engine's responsibilities. The legacy engine delegates to OpenClaw's built-in summarization. Plugin engines can implement any compaction strategy (DAG summarization, vector retrieval, and so on).

    Memory plugins

    Memory plugins (plugins.slots.memory) are separate from context engines. A memory plugin provides search/retrieval; the context engine controls what the model sees. They can work together - a context engine may use memory plugin data during assembly. Plugin engines that want the Active Memory prompt path should prefer buildMemorySystemPromptAddition(...) from openclaw/plugin-sdk/core, which converts the Active Memory prompt section into a systemPromptAddition ready to prepend. Engines that need lower-level control can still pull the raw lines via buildActiveMemoryPromptSection(...) from openclaw/plugin-sdk/memory-host-core.

    Session pruning

    Old tool results are still pruned in memory regardless of which context engine is active.

    Tips

    • Use openclaw doctor to verify that your engine loaded correctly.
    • If you switch engines, existing sessions keep their current history; the new engine takes over future runs.
    • Engine errors are logged and surfaced in diagnostics. If a plugin engine fails to register, or the selected engine id cannot be resolved, OpenClaw does not fall back automatically; runs fail until you fix the plugin or switch plugins.slots.contextEngine back to "legacy".
    • During development, use openclaw plugins install -l ./my-engine to link a local plugin directory without copying it.

    Related