OpenClaw Unknown Model Error: Why Your Config Works But Agent Hangs
The Mystery: Your openclaw.json5 file clearly has a model defined. openclaw models status shows it as "configured,missing". You send a message to your agent. It reads the message (you can see it was received), but then nothing happens: no error, no response, no timeout. Your agent just hangs silently, indefinitely. This guide explains every reason this happens and how to fix each one.
Of all the OpenClaw errors, the "Unknown Model" silent hang is uniquely frustrating because there's nothing obvious to debug. A 400 error gives you a code. A gateway crash gives you a log. But this issue gives you nothing — a message that goes in, and silence that comes back out. You stare at the screen waiting for a response that never arrives.
This bug was documented in a widely-shared Dev.to post titled "OpenClaw's 'Unknown Model' Error — How One Missing JSON Entry Broke My AI Assistant for 4 Hours." The post described how a single missing field in the config file caused the model to be listed as configured,missing — meaning OpenClaw found the model name in the config but couldn't resolve it to an actual working API endpoint at runtime. The result was silent failure at every attempt.
What "configured,missing" Actually Means
OpenClaw's model resolution has two separate stages:
Stage 1: Config Parsing (configured)
OpenClaw reads your openclaw.json5 file and parses the model name. If the name is found in the file, the model is marked as "configured." This stage always succeeds as long as the JSON is valid.
Stage 2: Runtime Resolution (missing)
OpenClaw tries to find the model in its internal registry — resolving the model name to a specific API endpoint, API format, and authentication method. If this fails, the model becomes "missing" at runtime, even though it exists in config.
The configured,missing state means Stage 1 succeeded but Stage 2 failed. The config file is fine. The problem is somewhere between the config and the actual API call.
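The two stages can be sketched in Python. This is a hypothetical illustration of the logic, not OpenClaw source code; REGISTRY and the function name are assumptions standing in for OpenClaw's internal model registry:

```python
# Hypothetical sketch of OpenClaw's two-stage model resolution.
# REGISTRY stands in for the internal registry that maps a model
# name to an endpoint and API format; the real internals differ.

REGISTRY = {
    "claude-opus-4-20260301": {
        "endpoint": "https://api.anthropic.com/v1",
        "api": "anthropic-messages",
    },
}

def resolve(config: dict) -> str:
    # Stage 1: config parsing — succeeds whenever the JSON is valid
    model = config.get("model")
    if model is None:
        return "unconfigured"
    # Stage 2: runtime resolution — map the name to a real endpoint
    if model not in REGISTRY:
        return "configured,missing"   # name parsed, but nothing to call
    return "configured,available"

print(resolve({"model": "claude-opus-4-20260301"}))  # configured,available
print(resolve({"model": "claude-3-opus-20240229"}))  # configured,missing
```

The key takeaway: Stage 2 can fail for reasons that have nothing to do with the config file, which is why the sections below look beyond openclaw.json5.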
The 7 Most Common Causes
Cause 1: Outdated Model Name (Most Common)
Anthropic, OpenAI, and Google regularly update their model naming conventions. A model name that was valid three months ago may no longer resolve. For example, claude-opus-4 may have been renamed to claude-opus-4-20260301. OpenClaw's internal registry uses the current official model names — if your config has an old alias, resolution fails silently.
// Old (may fail):
"model": "claude-3-opus-20240229"
// Current (check Anthropic's API docs for latest):
"model": "claude-opus-4-20260301"
Cause 2: Missing api Field for Custom Providers
For providers that don't have native OpenClaw support (like local Ollama models or third-party providers), you must specify the api field to tell OpenClaw which API format to use. Without it, OpenClaw can't map the provider to an API handler. This is the bug from the famous Dev.to post — one missing line caused a 4-hour outage.
// Wrong — will show configured,missing for local models:
"ollama": {
"baseUrl": "http://127.0.0.1:11434/v1",
"apiKey": "ollama-local"
}
// Correct — must include the api field:
"ollama": {
"baseUrl": "http://127.0.0.1:11434/v1",
"apiKey": "ollama-local",
"api": "openai-responses" // ← This line is required!
}
Cause 3: Environment Variable Not Set
The config file has "apiKey": "$ANTHROPIC_API_KEY" but the environment variable isn't actually set in the process context where OpenClaw runs. This is common when using LaunchAgent (macOS), systemd (Linux), or running OpenClaw in a Docker container — the environment variables set in your shell aren't automatically inherited.
# Diagnose: Check what OpenClaw sees
openclaw doctor --show-env
# Fix for systemd: Add to service file
Environment="ANTHROPIC_API_KEY=sk-ant-..."
# Fix for LaunchAgent: Add to plist
<key>EnvironmentVariables</key>
<dict>
<key>ANTHROPIC_API_KEY</key>
<string>sk-ant-...</string>
</dict>
Cause 4: OpenClaw Version Mismatch
The model name was valid in an older version of OpenClaw but the model registry was updated and the old alias was removed. After an OpenClaw update, run openclaw models list to see the current list of recognized model names and update your config accordingly.
Cause 5: Network Cannot Reach API Endpoint
OpenClaw validates model availability by attempting a lightweight ping to the API at startup. If your network can't reach the API (blocked by firewall, regional restriction, or corporate proxy), the model shows as missing — even though the config is technically correct. The fix is to ensure OpenClaw can reach the API endpoint from your network environment.
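You can reproduce that reachability check yourself. The sketch below is a generic TCP probe, not OpenClaw's actual startup ping, but it fails under the same network conditions:

```python
import socket

def reachable(host: str, port: int = 443, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout.

    If this returns False for an API endpoint, OpenClaw's own startup
    ping will fail the same way and the model will show as missing.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers timeouts, refusals, and DNS failures
        return False

# Example: run reachable("api.anthropic.com") from the machine
# (or container) that actually hosts the OpenClaw gateway.
```

Run it from the same environment OpenClaw runs in; a probe that succeeds from your laptop proves nothing about a gateway inside a container or behind a corporate proxy.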
Cause 6: API Key Lacks Access to This Model
Some models require specific API tier access. For example, Claude Opus requires an Anthropic API key on a paid tier. If your key is on the free/trial tier, the model may appear in OpenClaw's registry but fail at runtime because your key isn't authorized for it. Check your API usage tier at the provider's dashboard.
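When you do get a status code out of the logs, standard HTTP semantics tell you which of these causes you are looking at. The helper below is a hypothetical diagnostic aid based on those general semantics, not on documented OpenClaw behavior:

```python
# Hypothetical helper: map an API status code to the likely cause of a
# configured,missing state, using standard HTTP semantics.

def diagnose(status_code: int) -> str:
    if status_code == 401:
        return "API key is invalid or missing"
    if status_code == 403:
        return "API key is valid but not authorized for this model (tier issue)"
    if status_code == 404:
        return "Model name does not exist at this endpoint (outdated alias?)"
    if 200 <= status_code < 300:
        return "Model resolves; the problem is elsewhere"
    return f"Unexpected status {status_code}; check gateway.log"

print(diagnose(403))  # API key is valid but not authorized for this model (tier issue)
```

A 403 in particular is the signature of this cause: authentication worked, but the key's tier doesn't include the model.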
Cause 7: Stale Gateway Cache
OpenClaw caches model availability at startup. If you recently added a new model or API key, the gateway may not have picked up the change yet. A gateway restart forces a fresh resolution of all models.
Complete Diagnosis Workflow
Run These in Order
# Step 1: Get full model status
openclaw models status
# Look for: configured,missing vs configured,available
# Step 2: Run doctor with verbose output
openclaw doctor --verbose
# This checks: config validity, env vars, network, API key auth
# Step 3: Check which models are actually available in registry
openclaw models list
# Compare this to what you have in your config
# Step 4: Try authenticating manually
openclaw models auth verify --provider anthropic
# Should say "Authentication successful" or show error
# Step 5: Test network connectivity to API
curl -s --max-time 5 https://api.anthropic.com/health
# Any HTTP response means the endpoint is reachable;
# a timeout means your network is blocking it
# Step 6: Check environment variables in OpenClaw context
openclaw config show --env
# Shows what env vars OpenClaw actually sees at runtime
# Step 7: Enable verbose mode and try again
/verbose on
# Then send a test message — verbose output will show
# exactly where the model resolution is failing
The Definitive Fix: Correct Config Template
Here is a complete, working configuration template for 2026 that avoids all of the above causes:
{
"models": {
"providers": {
"anthropic": {
// ✅ Set the key directly (or via a secrets manager) when running as a
// service: LaunchAgent, systemd, and Docker do not inherit shell env vars
"apiKey": "sk-ant-api03-YOUR_KEY_HERE"
},
"openai": {
"apiKey": "sk-YOUR_OPENAI_KEY_HERE"
}
},
"defaults": {
"provider": "anthropic",
// ✅ Always verify the model name at docs.anthropic.com/models
"model": "claude-opus-4-20260301"
}
}
}
// For Ollama local model (MUST include api field):
{
"models": {
"providers": {
"ollama": {
"baseUrl": "http://127.0.0.1:11434/v1",
"apiKey": "ollama",
"api": "openai-responses" // ← REQUIRED for local models
}
},
"defaults": {
"provider": "ollama",
"model": "llama3.3:70b"
}
}
}
Keeping Model Names Up to Date
Model names change frequently. Here's a quick reference for current (March 2026) model identifiers for each provider:
| Provider | Current Model ID | Notes |
|---|---|---|
| Anthropic | claude-opus-4-20260301 | Best quality; verify at docs.anthropic.com |
| OpenAI | gpt-5 | Latest flagship; gpt-4o also still works |
| Google | gemini-2-pro-latest | Use -latest to auto-track the newest version |
| Ollama (local) | llama3.3:70b | Must have api: "openai-responses" in config |
Pro Tip: Set a Monthly Reminder
Add a recurring calendar reminder to check the model names in your OpenClaw config against the official provider documentation. Model aliases become stale faster than you'd expect — Anthropic in particular regularly releases new versions and deprecates old aliases. Running openclaw models list monthly takes 30 seconds and prevents silent hangs.
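Part of that monthly check can be scripted. The sketch below pulls every "model" value out of openclaw.json5 so you can diff the result against openclaw models list. It strips // line comments and trailing commas before parsing, which is a heuristic that covers typical configs, not a full JSON5 parser (the lookbehind keeps URLs like http:// intact):

```python
import json
import re

def model_names(json5_text: str) -> list[str]:
    """Extract every "model" value from a simple openclaw.json5 file.

    Heuristic JSON5 handling: strips // line comments (but not the //
    inside URLs like http://) and trailing commas so json.loads accepts
    the text. Covers common configs, not the full JSON5 grammar.
    """
    no_comments = re.sub(r"(?<!:)//[^\n]*", "", json5_text)
    no_trailing = re.sub(r",(\s*[}\]])", r"\1", no_comments)
    config = json.loads(no_trailing)

    found = []
    def walk(node):
        if isinstance(node, dict):
            for key, value in node.items():
                if key == "model" and isinstance(value, str):
                    found.append(value)
                else:
                    walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)
    walk(config)
    return found

sample = '''{
  "models": {
    "defaults": { "provider": "anthropic", "model": "claude-opus-4-20260301" }, // default
  }
}'''
print(model_names(sample))  # ['claude-opus-4-20260301']
```

Feed the output to your diff of choice; any name that appears in the config but not in the registry listing is a stale alias waiting to cause a silent hang.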
Advanced Diagnostics: Reading the openclaw.log
When openclaw doctor doesn't give you enough detail, the raw log file is your most powerful diagnostic tool. OpenClaw writes detailed startup and runtime logs that show exactly what happens during model resolution — including the specific step where it fails.
Reading the Log File
# Find the log file location
openclaw gateway log --path
# Typical locations:
# macOS: ~/Library/Logs/openclaw/gateway.log
# Linux: ~/.local/share/openclaw/logs/gateway.log
# Windows: %APPDATA%\openclaw\logs\gateway.log
# Tail the log with filtering for model resolution
tail -f ~/.local/share/openclaw/logs/gateway.log | grep -i "model\|resolv\|error\|missing"
# Look for these key log entries that explain WHY resolution fails:
# [model] Resolving: anthropic/claude-opus-4-20260301
# [model] Provider lookup: anthropic ← success
# [model] API endpoint: https://api.anthropic.com/v1 ← success
# [model] Auth check: ← THIS is where "missing" often occurs
# [model] ERROR: Model configured but could not be resolved at runtime
# Reason: API returned 401 (Invalid API Key)
# Model status: configured,missing
Special Case: Docker and Container Deployments
Users running OpenClaw in Docker containers frequently encounter "configured,missing" due to environment variable scoping. Docker containers don't automatically inherit the host's environment variables. If you've set ANTHROPIC_API_KEY in your host shell but run OpenClaw inside a container, the container won't see that variable — and the model resolution will fail silently.
Docker Environment Variable Fix
# Option 1: Pass env vars at run time
docker run -e ANTHROPIC_API_KEY="sk-ant-..." openclaw
# Option 2: Use a .env file
docker run --env-file ~/.openclaw/.env openclaw
# Option 3: Add to docker-compose.yml
environment:
- ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
# Verify the container sees the variable:
docker exec -it [container-id] env | grep ANTHROPIC
Network Access Is a Hidden Prerequisite
One of the less obvious causes of the configured,missing error is network-level inaccessibility. In some corporate networks, home setups with strict firewalls, or regions with AI service restrictions, OpenClaw simply cannot reach the Anthropic or OpenAI API endpoints. From OpenClaw's perspective, this is indistinguishable from the model not existing — the resolution ping times out with no useful error message.
This is where VPN07 becomes essential infrastructure for OpenClaw users. With servers in 70+ countries and 1000Mbps bandwidth, VPN07 ensures that your OpenClaw gateway can reach all major AI model APIs from any location. Whether you're running OpenClaw on a home server in a restricted region, on a corporate laptop with firewall restrictions, or on a cloud VPS in a geography with AI service blocks, VPN07 provides the clean, direct path to the API endpoints that OpenClaw needs to resolve models correctly.
It's worth noting that network-level restriction affects not just model resolution — it can cause OpenClaw to silently fail at every stage of its operation. Skill installations that pull from remote repositories, heartbeat webhooks that can't reach external services, and cron jobs that make HTTP requests all break silently when the network is restricted. A stable VPN connection solves all of these issues at once, rather than requiring individual workarounds for each affected feature.
For users in enterprise environments with strict network policies, it may be worth configuring OpenClaw to use a proxy server. This allows the gateway to communicate with AI APIs through a company-approved proxy, while maintaining corporate network compliance. The configuration is straightforward and described in the OpenClaw documentation under "Enterprise Deployment."
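If you go the proxy route, note that most HTTP tooling (including Python's urllib, which many gateways build on) honors the standard HTTP_PROXY/HTTPS_PROXY environment variables. Whether OpenClaw itself reads these is something to confirm in its documentation; the snippet below demonstrates the generic convention, not OpenClaw-specific behavior:

```python
import os
import urllib.request

# Standard proxy environment variables honored by most HTTP clients.
# Whether OpenClaw reads these is an assumption — check its docs.
os.environ["HTTPS_PROXY"] = "http://proxy.corp.example:3128"

# urllib picks the proxy up from the environment automatically
print(urllib.request.getproxies().get("https"))  # http://proxy.corp.example:3128
```

Set the variable in the same service context as OpenClaw (systemd unit, LaunchAgent plist, or container environment), for the same inheritance reasons covered under Cause 3.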
Verify Network Access to AI APIs
# Test direct API connectivity (run without VPN first)
curl -s --max-time 5 https://api.anthropic.com/health
curl -s --max-time 5 https://api.openai.com/v1/models
curl -s --max-time 5 https://generativelanguage.googleapis.com/
# If any of these time out, your network is blocking the API
# Connect VPN07, then re-run the tests
# All should return valid responses when VPN is active
VPN07 — Direct Access to All AI Model APIs
Resolve the "Unknown Model" error caused by network restrictions
When your network blocks access to Anthropic, OpenAI, or Google APIs, OpenClaw silently hangs — looking exactly like an Unknown Model error. VPN07's 1000Mbps network across 70+ countries gives your OpenClaw gateway a clean, unrestricted path to every major AI API. Trusted since 2015, with a 30-day money-back guarantee.