# SGLang
SGLang serves open-weight models via an OpenAI-compatible HTTP API. OpenClaw connects to SGLang using the openai-completions provider family with auto-discovery of available models.
| Property | Value |
|---|---|
| Provider id | sglang |
| Plugin | bundled, enabledByDefault: true |
| Auth env var | SGLANG_API_KEY (any non-empty value if server has no auth) |
| Onboarding flag | --auth-choice sglang |
| API | OpenAI-compatible (openai-completions) |
| Default base URL | http://127.0.0.1:30000/v1 |
| Default model placeholder | sglang/Qwen/Qwen3-8B |
| Streaming usage | Yes (supportsStreamingUsage: true) |
| Pricing | Free (modelPricing.external: false) |
OpenClaw also auto-discovers available models from SGLang when you opt in with SGLANG_API_KEY and you do not define an explicit models.providers.sglang entry — see Model discovery (implicit provider) below.
## Getting started

### Start SGLang
Launch SGLang with an OpenAI-compatible server. Your base URL should expose /v1 endpoints (for example /v1/models, /v1/chat/completions). SGLang commonly runs on:

```
http://127.0.0.1:30000/v1
```
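As a sketch, a typical local launch looks like the following (the model path is illustrative; substitute the model you actually want to serve):

```shell
# Serve an open-weight model with SGLang's OpenAI-compatible server
# on the default port OpenClaw expects (30000).
python -m sglang.launch_server --model-path Qwen/Qwen3-8B --port 30000
```

Downloading and loading the model requires suitable hardware; consult the SGLang documentation for deployment options.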
### Set an API key
Any value works if no auth is configured on your server:
```bash
export SGLANG_API_KEY="sglang-local"
```
### Run onboarding or set a model directly

```bash
openclaw onboard
```
Or configure the model manually:
```json5
{
  agents: {
    defaults: {
      model: { primary: "sglang/your-model-id" },
    },
  },
}
```
## Model discovery (implicit provider)
When SGLANG_API_KEY is set (or an auth profile exists) and you do not define models.providers.sglang, OpenClaw queries:

```
GET http://127.0.0.1:30000/v1/models
```

and converts the returned IDs into model entries.
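The conversion can be sketched as follows. This is a hypothetical mirror of OpenClaw's discovery step, not its actual code; the response below is a sample in the OpenAI-compatible list format, and your server will return its own loaded model IDs:

```python
import json

# Sample /v1/models response (OpenAI-compatible list format).
sample = json.loads("""
{"object": "list",
 "data": [{"id": "Qwen/Qwen3-8B", "object": "model", "owned_by": "sglang"}]}
""")

def to_model_refs(models_response, provider="sglang"):
    """Prefix each discovered model ID with the provider id,
    yielding refs usable as agents.defaults.model.primary."""
    return [f"{provider}/{m['id']}" for m in models_response["data"]]

print(to_model_refs(sample))  # ['sglang/Qwen/Qwen3-8B']
```

This is why the default model placeholder in the table above reads sglang/Qwen/Qwen3-8B: the provider id is joined to whatever ID the server reports.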
## Explicit configuration (manual models)
Use explicit config when:
- SGLang runs on a different host/port.
- You want to pin contextWindow/maxTokens values.
- Your server requires a real API key (or you want to control headers).
```json5
{
  models: {
    providers: {
      sglang: {
        baseUrl: "http://127.0.0.1:30000/v1",
        apiKey: "${SGLANG_API_KEY}",
        api: "openai-completions",
        models: [
          {
            id: "your-model-id",
            name: "Local SGLang Model",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 128000,
            maxTokens: 8192,
          },
        ],
      },
    },
  },
}
```
## Advanced configuration

### Proxy-style behavior
SGLang is treated as a proxy-style OpenAI-compatible /v1 backend, not a
native OpenAI endpoint.
| Behavior | SGLang |
|---|---|
| OpenAI-only request shaping | Not applied |
| service_tier, Responses store, prompt-cache hints | Not sent |
| Reasoning-compat payload shaping | Not applied |
| Hidden attribution headers (originator, version, User-Agent) | Not injected on custom SGLang base URLs |
## Troubleshooting

### Server not reachable
Verify the server is running and responding:
```bash
curl http://127.0.0.1:30000/v1/models
```
### Auth errors
If requests fail with auth errors, set a real SGLANG_API_KEY that matches
your server configuration, or configure the provider explicitly under
models.providers.sglang.
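To confirm the key itself is accepted, you can replay the request OpenClaw sends, with the bearer header attached (the header shape is the standard OpenAI-compatible one; the key value is whatever your server was configured with):

```shell
# A 200 response with a model list means the key is accepted;
# a 401/403 means the server expects a different key.
curl -H "Authorization: Bearer $SGLANG_API_KEY" \
  http://127.0.0.1:30000/v1/models
```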