About This Guide: Configuration is where most OpenClaw users get stuck after installation. This Q&A covers every config command and option — from the openclaw config command and manual .env editing to model switching, API key management, and provider-specific settings for Anthropic, OpenAI, Google Gemini, DeepSeek, and local models.
OpenClaw Configuration Architecture
~/.openclaw/
├── .env # PRIMARY: API keys, provider, model
├── config.json # ADVANCED: Agent behavior, memory, limits
├── persona.md # Agent name, personality, system prompt
├── memory.json # Long-term memory store
├── skills/ # Installed skill configurations
│ ├── telegram.json # Telegram bot settings
│ └── gmail.json # Gmail skill settings
└── logs/ # Runtime logs
Key insight: The .env file controls which AI brain your agent uses. The config.json controls how it behaves. The persona.md controls who it is.
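To make that split concrete, here is a minimal sketch of what each file holds (values are illustrative, not shipped defaults):
# ~/.openclaw/.env: which brain the agent uses
AI_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-api03-xxxxx
MODEL=claude-sonnet-4-5
# ~/.openclaw/config.json: how the agent behaves (one example key)
{ "rateLimit": { "requestsPerMinute": 20 } }
# ~/.openclaw/persona.md: who the agent is
You are Jarvis, a proactive and concise personal assistant.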
Part 1: The openclaw config Command (Q1–Q8)
Q1 What does the "openclaw config" command do?
openclaw config # Open config in default editor
openclaw config --show # Print current config to terminal
openclaw config --reset # Reset to default configuration
openclaw config --validate # Check config for errors
openclaw config set KEY VALUE # Set a specific value
openclaw config get KEY # Get a specific value
The openclaw config command is your interface to view, validate, and modify your agent's settings without manually editing files.
Q2 How do I view all current OpenClaw configuration values?
# Show all config values (API keys redacted for security)
openclaw config --show
# Example output:
AI_PROVIDER=anthropic
MODEL=claude-opus-4-5
ANTHROPIC_API_KEY=sk-ant-***[REDACTED]***
AGENT_NAME=Jarvis
MAX_TOKENS=8192
MEMORY_ENABLED=true
LOG_LEVEL=info
Q3 How do I set a config value using the CLI command?
# Change AI model via CLI
openclaw config set MODEL claude-opus-4-5
# Change max tokens
openclaw config set MAX_TOKENS 16384
# Enable debug logging
openclaw config set LOG_LEVEL debug
# Restart required after changes
openclaw restart
Q4 How do I validate my config to check for errors before restarting?
openclaw config --validate
# Good output:
✅ AI_PROVIDER: valid (anthropic)
✅ API_KEY: valid (authenticated)
✅ TELEGRAM_BOT_TOKEN: valid (connected)
✅ All configuration checks passed
# Error output:
❌ ANTHROPIC_API_KEY: authentication failed
❌ MODEL: claude-opus-99 not found
Always run --validate after making config changes before restarting your production agent.
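A minimal safe-change workflow chains these commands so the restart only happens if validation passes (this assumes --validate exits with a non-zero status on failure, as most CLI validators do):
openclaw config set MODEL claude-sonnet-4-5 \
  && openclaw config --validate \
  && openclaw restart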
Q5 How do I reset OpenClaw to default configuration?
# Reset config.json to defaults (keeps .env intact)
openclaw config --reset
# Full reset (removes config AND .env — starts fresh)
openclaw config --reset --hard
# Then re-run onboarding
openclaw onboard
Warning: --reset --hard deletes your API keys and platform connections. You will need to re-enter everything via openclaw onboard.
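If you do need a hard reset, taking a quick manual snapshot first (the same copy approach as the backup in Q7) keeps your keys and skill settings recoverable:
# Snapshot, then wipe and re-onboard
cp -r ~/.openclaw/ ~/openclaw-pre-reset-$(date +%Y%m%d)/
openclaw config --reset --hard
openclaw onboard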
Q6 How do I open the config file in my system's default editor?
# Opens config.json in $EDITOR (or vim by default)
openclaw config
# Specify editor explicitly
EDITOR=nano openclaw config
EDITOR=code openclaw config # VS Code
# Open .env directly
nano ~/.openclaw/.env
code ~/.openclaw/.env
Q7 How do I export my config for backup or migration?
# Export config (API keys redacted)
openclaw config --export > openclaw-config-backup.json
# Full backup including skills (manual)
cp -r ~/.openclaw/ ~/openclaw-backup-$(date +%Y%m%d)/
# Import config on new machine
openclaw config --import openclaw-config-backup.json
Q8 What config changes require a restart vs hot-reload?
✅ Hot-reload (no restart needed)
• persona.md changes
• Log level changes
• Memory settings
• Heartbeat schedules
🔄 Requires restart
• API key changes
• AI provider changes
• Model changes
• New skill installations
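As a concrete illustration of the split above (assuming the hot-reload behavior described in Q8 and Q19):
# Hot-reload: append a persona rule, applied on the agent's next reply
echo "Prefer bullet points for summaries." >> ~/.openclaw/persona.md
# Restart required: switching models
openclaw config set MODEL claude-haiku-3-5
openclaw restart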
Part 2: AI Provider & Model Configuration (Q9–Q20)
Q9 What are all supported AI providers and their .env config keys?
Anthropic (Claude) — Most popular
AI_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-api03-xxxxx
MODEL=claude-opus-4-5
OpenAI (GPT)
AI_PROVIDER=openai
OPENAI_API_KEY=sk-proj-xxxxx
MODEL=gpt-4o
Google Gemini
AI_PROVIDER=google
GOOGLE_API_KEY=AIzaSy-xxxxx
MODEL=gemini-2.5-pro
DeepSeek
AI_PROVIDER=deepseek
DEEPSEEK_API_KEY=sk-xxxxx
MODEL=deepseek-v3
Local (Ollama)
AI_PROVIDER=ollama
OLLAMA_BASE_URL=http://localhost:11434
MODEL=llama3
OpenRouter (100+ models via 1 API)
AI_PROVIDER=openrouter
OPENROUTER_API_KEY=sk-or-xxxxx
MODEL=anthropic/claude-opus-4-5
MiniMax (M2.1 — great for multilingual tasks)
AI_PROVIDER=minimax
MINIMAX_API_KEY=xxxxx
MODEL=minimax-m2.1
Q10 How do I switch from Claude to GPT-4o mid-deployment?
# Method 1: CLI command (easiest)
openclaw config set AI_PROVIDER openai
openclaw config set OPENAI_API_KEY sk-proj-yourkey
openclaw config set MODEL gpt-4o
openclaw restart
# Method 2: Edit .env directly
nano ~/.openclaw/.env
# Change: AI_PROVIDER=openai
# Add: OPENAI_API_KEY=sk-proj-xxxxx
# Change: MODEL=gpt-4o
openclaw restart
Note: Agent memory and skills are provider-agnostic, so switching providers or models won't erase your agent's memory or installed skills.
Q11 How do I configure multiple AI providers for fallback?
OpenClaw supports provider fallback chains — if the primary provider is unavailable, it automatically tries the next:
# In config.json
{
  "providers": {
    "primary": "anthropic",
    "fallback": ["openai", "google"],
    "fallback_on": ["timeout", "rate_limit", "server_error"]
  }
}
# All provider API keys must be set in .env
ANTHROPIC_API_KEY=sk-ant-xxx
OPENAI_API_KEY=sk-proj-xxx
GOOGLE_API_KEY=AIzaSy-xxx
Q12 What are all the supported Claude models and their IDs?
Claude Opus 4.5 (claude-opus-4-5): most capable, best for complex reasoning
Claude Sonnet 4.5 (claude-sonnet-4-5): best speed/capability balance, recommended default
Claude Haiku 3.5 (claude-haiku-3-5): fastest and cheapest, good for simple tasks
Q13 How do I configure token limits and context window settings?
# In .env file
MAX_TOKENS=8192 # Max tokens per response
CONTEXT_WINDOW=200000 # Context window to use
TEMPERATURE=0.7 # Response creativity (0.0-1.0)
TOP_P=0.9 # Nucleus sampling
# Or via CLI
openclaw config set MAX_TOKENS 16384
openclaw config set TEMPERATURE 0.5
Cost tip: A higher MAX_TOKENS cap allows longer, and therefore more expensive, responses. Start with 4096 for basic tasks and 16384 for code review and long documents.
Q14 How do I configure OpenClaw to use a local Ollama model?
# Step 1: Install and start Ollama
curl -fsSL https://ollama.ai/install.sh | sh
ollama pull llama3 # Download the model
ollama serve # Start Ollama server
# Step 2: Configure OpenClaw
openclaw config set AI_PROVIDER ollama
openclaw config set OLLAMA_BASE_URL http://localhost:11434
openclaw config set MODEL llama3
openclaw restart
# Test connection
openclaw config --validate
Privacy benefit: In local Ollama mode, no prompt or response data leaves your machine; all model calls stay on localhost. The trade-off is hardware: plan on 16GB+ RAM for 7B-class models.
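Before pointing OpenClaw at Ollama, you can confirm the local server responds by calling Ollama's own API directly (a standard Ollama endpoint, independent of OpenClaw):
# Should return a JSON response generated by llama3
curl http://localhost:11434/api/generate \
  -d '{"model":"llama3","prompt":"Say hello","stream":false}'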
Q15 How do I use Vercel AI Gateway to access hundreds of models with one key?
AI_PROVIDER=vercel-ai-gateway
VERCEL_AI_KEY=your-vercel-key
VERCEL_AI_GATEWAY_URL=https://ai-gateway.vercel.com
# Then set any model from any provider:
MODEL=anthropic/claude-opus-4-5
# or
MODEL=openai/gpt-4o
# or
MODEL=google/gemini-2.5-pro
Q16 "Authentication failed" when configuring API key — how to debug?
Step 1: Verify key format
openclaw config get ANTHROPIC_API_KEY
# Must start with: sk-ant-api03-
Step 2: Test key directly with curl
curl https://api.anthropic.com/v1/messages \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{"model":"claude-haiku-3-5","max_tokens":10,"messages":[{"role":"user","content":"hi"}]}'
Step 3: If curl also fails — network issue
API endpoints are sometimes blocked by ISPs or corporate networks, so if curl fails too, the problem is your network rather than OpenClaw. Connect via VPN07 before retesting; its 1000Mbps bandwidth keeps API connections stable.
Q17 How do I configure OpenClaw to use a proxy for API calls?
# In .env — configure HTTP proxy
HTTPS_PROXY=http://proxy-server:8080
HTTP_PROXY=http://proxy-server:8080
NO_PROXY=localhost,127.0.0.1
# Or use SOCKS proxy
HTTPS_PROXY=socks5://127.0.0.1:1080
Better alternative: Instead of configuring per-app proxies, use VPN07 as a system-wide VPN. All OpenClaw traffic (npm, API calls, skill downloads) automatically routes through VPN07's secure 1000Mbps network.
Q18 How do I configure rate limiting to avoid API quota errors?
# In config.json — rate limiting settings
{
  "rateLimit": {
    "requestsPerMinute": 20,
    "tokensPerMinute": 100000,
    "retryOnRateLimit": true,
    "retryDelay": 5000,
    "maxRetries": 3
  }
}
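With the values above, a request that hits a rate limit is retried up to 3 times with a 5-second pause between attempts, so the worst case adds roughly 15 seconds of latency before the request is given up on (assuming the delay is fixed rather than exponential).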
Q19 How do I configure the agent persona and system prompt?
# Edit the persona file
nano ~/.openclaw/persona.md
# Example persona.md content:
---
name: Jarvis
timezone: America/New_York
language: English
---
You are Jarvis, a highly capable personal AI assistant.
Your owner is Alex, a software engineer in New York.
You are proactive, precise, and communicate concisely.
You prefer bullet points for lists and code blocks for code.
The persona.md file hot-reloads — changes take effect immediately without restarting the agent.
Q20 How do I configure logging level and log output destination?
# In .env
LOG_LEVEL=info # Options: debug, info, warn, error
LOG_FILE=~/.openclaw/logs/agent.log
LOG_MAX_SIZE=50MB # Rotate logs at this size
LOG_MAX_FILES=7 # Keep last 7 log files
# Enable debug temporarily without editing .env
openclaw start --debug
# Stream logs in real-time
openclaw logs --follow --level debug
Part 3: API Key Security & Advanced Config (Q21–Q28)
Q21 How do I secure my API keys in the .env file?
# Set strict file permissions (owner read-only)
chmod 600 ~/.openclaw/.env
ls -la ~/.openclaw/.env
# Should show: -rw------- (600)
# Verify no world-readable permissions
stat ~/.openclaw/.env
# NEVER do this (world-readable):
chmod 644 ~/.openclaw/.env # ❌ Dangerous!
Q22 Can I use environment variables instead of the .env file?
# Yes — system env vars override .env file values
export ANTHROPIC_API_KEY="sk-ant-xxxxx"
export AI_PROVIDER="anthropic"
openclaw start
# Useful for: Docker deployments, CI/CD pipelines
# Priority: system env > .env file > config.json defaults
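For example, an exported variable wins over the .env value for that run only, which is handy for one-off tests:
# .env has MODEL=claude-sonnet-4-5; this run uses Haiku instead
export MODEL="claude-haiku-3-5"
openclaw start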
Q23 How do I rotate my API key without downtime?
# Step 1: Generate new key from provider dashboard
# Step 2: Update config while agent is running
openclaw config set ANTHROPIC_API_KEY sk-ant-new-key
# Step 3: Validate new key before restart
openclaw config --validate
# Step 4: Graceful restart
openclaw restart --graceful
# (finishes current task before restarting)
# Step 5: Revoke old key in provider dashboard
Q24 How do I configure OpenClaw for multi-user/team environments?
# In config.json — enable multi-user mode
{
  "multiUser": {
    "enabled": true,
    "allowedUsers": ["telegram:@alice", "discord:bob#1234"],
    "adminUsers": ["telegram:@alice"],
    "separateMemory": true
  }
}
Each authorized user gets their own memory context while sharing the same agent instance.
Q25 How do I configure OpenClaw to run on a custom port?
# In .env
PORT=3000 # HTTP server port (default: 3000)
WEBHOOK_PORT=3001 # Webhook listener port
HOST=0.0.0.0 # Bind to all interfaces
# Or via CLI flags
openclaw start --port 8080
Q26 How do I configure automatic config backups?
# In config.json
{
  "backup": {
    "enabled": true,
    "schedule": "0 2 * * *",
    "destination": "~/openclaw-backups/",
    "keepLast": 30,
    "includeMemory": true,
    "excludeApiKeys": true
  }
}
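The schedule field uses standard cron syntax (0 2 * * * runs daily at 02:00). If you would rather manage backups outside OpenClaw, a plain cron job gives a similar result (a manual sketch, not an OpenClaw feature):
# crontab -e: nightly tarball of the whole config directory (% must be escaped in crontab)
0 2 * * * mkdir -p ~/openclaw-backups && tar czf ~/openclaw-backups/openclaw-$(date +\%Y\%m\%d).tar.gz -C ~ .openclaw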
Q27 How do I configure OpenClaw for a Docker container deployment?
# Pass all config via Docker environment variables
docker run -d \
  --name openclaw \
  -e ANTHROPIC_API_KEY="sk-ant-xxx" \
  -e AI_PROVIDER="anthropic" \
  -e MODEL="claude-sonnet-4-5" \
  -e TELEGRAM_BOT_TOKEN="xxx" \
  -v ~/openclaw-data:/root/.openclaw \
  openclaw/openclaw:latest
# Or use --env-file
docker run -d --env-file ~/.openclaw/.env openclaw/openclaw:latest
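If you prefer Docker Compose over a raw docker run, the equivalent service definition looks roughly like this (a sketch using the same image and volume; place a copy of your .env next to the compose file):
# docker-compose.yml
services:
  openclaw:
    image: openclaw/openclaw:latest
    restart: unless-stopped
    env_file: .env
    volumes:
      - ~/openclaw-data:/root/.openclaw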
Q28 How does VPN configuration interact with OpenClaw?
OpenClaw doesn't have built-in VPN settings — it uses your system's network stack. This means:
System VPN (recommended)
When you connect VPN07, ALL OpenClaw network traffic automatically routes through it — API calls, skill downloads, webhook connections. Zero configuration needed.
Benefits for OpenClaw
• API calls reach Anthropic/OpenAI reliably
• Skill downloads from ClawHub unblocked
• Stable 1000Mbps for 24/7 agent operation
• GitHub access for git clone/updates
The Best VPN for OpenClaw Developers
VPN07 — 10 Years Proven, Industry-Leading Bandwidth
When your OpenClaw agent runs 24/7, you need a VPN that never drops. VPN07's 10-year track record means zero unexpected downtime for your AI automation workflows.