# Z.AI
Z.AI is the API platform for GLM models. It provides REST APIs for GLM and uses API keys for authentication. Create your API key in the Z.AI console. OpenClaw uses the `zai` provider with a Z.AI API key.

- Provider: `zai`
- Auth: `ZAI_API_KEY`
- API: Z.AI Chat Completions (Bearer auth)
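The same key also works against the Chat Completions surface directly. Below is a minimal Python sketch of a Bearer-authenticated request; the base URL is an assumption for illustration (verify the endpoint for your plan and region), and the request is only constructed, not sent:

```python
import json
import urllib.request

API_KEY = "sk-..."  # your Z.AI API key
# Assumed general-API base URL for illustration; confirm in Z.AI's docs.
BASE_URL = "https://api.z.ai/api/paas/v4"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) a Bearer-authenticated Chat Completions request."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("glm-5.1", "Hello")
print(req.full_url)
```

Sending the request is then a single `urllib.request.urlopen(req)` call; OpenClaw handles all of this for you once the key is configured.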
## Getting started
### Auto-detect endpoint

Best for: most users. OpenClaw detects the matching Z.AI endpoint from the key and applies the correct base URL automatically.

1. Run onboarding:

   ```bash
   openclaw onboard --auth-choice zai-api-key
   ```

2. Set a default model:

   ```json5
   {
     env: { ZAI_API_KEY: "sk-..." },
     agents: { defaults: { model: { primary: "zai/glm-5.1" } } },
   }
   ```

3. Verify the model is listed:

   ```bash
   openclaw models list --all --provider zai
   ```
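One common way such auto-detection can work is to probe each candidate endpoint with the key and keep the first that authenticates. The sketch below illustrates that idea only; the candidate URLs and probe are hypothetical, not OpenClaw's actual implementation:

```python
from typing import Callable, Optional

# Illustrative candidates only; real endpoints depend on your plan/region.
CANDIDATE_BASE_URLS = [
    "https://api.z.ai/api/coding/paas/v4",  # Coding Plan surface (assumed)
    "https://api.z.ai/api/paas/v4",         # General API surface (assumed)
]

def detect_endpoint(
    api_key: str,
    probe: Callable[[str, str], bool],
) -> Optional[str]:
    """Return the first candidate base URL the key authenticates against.

    `probe(base_url, api_key)` should return True when a cheap authenticated
    request (e.g. listing models) succeeds against that base URL.
    """
    for base_url in CANDIDATE_BASE_URLS:
        if probe(base_url, api_key):
            return base_url
    return None

# Example with a fake probe that accepts only the general API surface:
fake_probe = lambda url, key: url.endswith("/api/paas/v4") and key.startswith("sk-")
print(detect_endpoint("sk-demo", fake_probe))
```

If no candidate accepts the key, onboarding would fall back to asking you to pick an endpoint explicitly, which is what the next section covers.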
### Explicit regional endpoint

Best for: users who want to force a specific Coding Plan or general API surface.

1. Pick the right onboarding choice:

   ```bash
   # Coding Plan Global (recommended for Coding Plan users)
   openclaw onboard --auth-choice zai-coding-global

   # Coding Plan CN (China region)
   openclaw onboard --auth-choice zai-coding-cn

   # General API
   openclaw onboard --auth-choice zai-global

   # General API CN (China region)
   openclaw onboard --auth-choice zai-cn
   ```

2. Set a default model:

   ```json5
   {
     env: { ZAI_API_KEY: "sk-..." },
     agents: { defaults: { model: { primary: "zai/glm-5.1" } } },
   }
   ```

3. Verify the model is listed:

   ```bash
   openclaw models list --all --provider zai
   ```
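Conceptually, each explicit `--auth-choice` pins one API surface, i.e. a fixed mapping from choice to base URL. The URLs in this sketch are assumptions for illustration only; check Z.AI's documentation for the authoritative values:

```python
# Assumed base URLs for illustration only; confirm against Z.AI's docs.
AUTH_CHOICE_BASE_URLS = {
    "zai-coding-global": "https://api.z.ai/api/coding/paas/v4",
    "zai-coding-cn": "https://open.bigmodel.cn/api/coding/paas/v4",
    "zai-global": "https://api.z.ai/api/paas/v4",
    "zai-cn": "https://open.bigmodel.cn/api/paas/v4",
}

def base_url_for(auth_choice: str) -> str:
    """Resolve an onboarding choice to its pinned base URL."""
    try:
        return AUTH_CHOICE_BASE_URLS[auth_choice]
    except KeyError:
        raise ValueError(f"unknown auth choice: {auth_choice}") from None

print(base_url_for("zai-coding-global"))
```

Unlike auto-detection, nothing is probed: the chosen surface is used as-is even if the key would also work elsewhere.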
## Built-in catalog

OpenClaw ships the bundled `zai` provider catalog in the plugin manifest, so read-only listing can show known GLM rows without loading provider runtime:

```bash
openclaw models list --all --provider zai
```

The manifest-backed catalog currently includes:
| Model ref | Notes |
|---|---|
| `zai/glm-5.1` | Default model |
| `zai/glm-5` | |
| `zai/glm-5-turbo` | |
| `zai/glm-5v-turbo` | |
| `zai/glm-4.7` | |
| `zai/glm-4.7-flash` | |
| `zai/glm-4.7-flashx` | |
| `zai/glm-4.6` | |
| `zai/glm-4.6v` | |
| `zai/glm-4.5` | |
| `zai/glm-4.5-air` | |
| `zai/glm-4.5-flash` | |
| `zai/glm-4.5v` | |
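Because the catalog ships inside the plugin manifest, listing it can be a pure file read. The sketch below assumes a hypothetical manifest shape (OpenClaw's real manifest schema may differ) just to show why no provider runtime is needed:

```python
import json

# Hypothetical manifest excerpt; OpenClaw's real schema may differ.
MANIFEST_JSON = """
{
  "provider": "zai",
  "models": [
    {"id": "glm-5.1", "default": true},
    {"id": "glm-5"},
    {"id": "glm-4.7"}
  ]
}
"""

def list_model_refs(manifest_text: str) -> list[str]:
    """List provider-prefixed model refs without loading provider runtime."""
    manifest = json.loads(manifest_text)
    provider = manifest["provider"]
    return [f"{provider}/{m['id']}" for m in manifest["models"]]

print(list_model_refs(MANIFEST_JSON))
# ['zai/glm-5.1', 'zai/glm-5', 'zai/glm-4.7']
```

Listing is therefore cheap and offline; only an actual completion request needs the provider runtime and a live key.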
## Advanced configuration

### Forward-resolving unknown GLM-5 models

Unknown `glm-5*` ids still forward-resolve on the bundled provider path: when an id matches the current GLM-5 family shape, OpenClaw synthesizes provider-owned metadata for it from the `glm-4.7` template.
### Tool-call streaming

`tool_stream` is enabled by default for Z.AI tool-call streaming. To disable it:

```json5
{
  agents: {
    defaults: {
      models: {
        "zai/<model>": {
          params: { tool_stream: false },
        },
      },
    },
  },
}
```
### Thinking and preserved thinking

Z.AI thinking follows OpenClaw's `/think` controls. With thinking off, OpenClaw sends `thinking: { type: "disabled" }` to avoid responses that spend the output budget on `reasoning_content` before visible text.

Preserved thinking is opt-in because Z.AI requires the full historical `reasoning_content` to be replayed, which increases prompt tokens. Enable it per model:

```json5
{
  agents: {
    defaults: {
      models: {
        "zai/glm-5.1": {
          params: { preserveThinking: true },
        },
      },
    },
  },
}
```

When enabled and thinking is on, OpenClaw sends `thinking: { type: "enabled", clear_thinking: false }` and replays prior `reasoning_content` for the same OpenAI-compatible transcript.

Advanced users can still override the exact provider payload with `params.extra_body.thinking`.
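Putting the pieces together, a request with preserved thinking enabled carries the `thinking` field plus prior-turn `reasoning_content` replayed on assistant messages. This Python sketch mirrors the fields named above; it is illustrative, not OpenClaw's code, and the exact wire format is provider-defined:

```python
def build_payload(history: list[dict], thinking_on: bool, preserve: bool) -> dict:
    """Sketch of the request body described above (illustrative only)."""
    messages = []
    for msg in history:
        entry = {"role": msg["role"], "content": msg["content"]}
        # Replay prior reasoning only when preserved thinking is enabled.
        if preserve and msg["role"] == "assistant" and "reasoning_content" in msg:
            entry["reasoning_content"] = msg["reasoning_content"]
        messages.append(entry)
    thinking = (
        {"type": "enabled", "clear_thinking": False}
        if thinking_on
        else {"type": "disabled"}
    )
    return {"model": "glm-5.1", "messages": messages, "thinking": thinking}

history = [
    {"role": "user", "content": "hi"},
    {"role": "assistant", "content": "hello", "reasoning_content": "greeting"},
    {"role": "user", "content": "continue"},
]
payload = build_payload(history, thinking_on=True, preserve=True)
print(payload["thinking"])  # {'type': 'enabled', 'clear_thinking': False}
```

Note how the token cost of preservation shows up directly: every prior `reasoning_content` string rides along in the prompt on each turn.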