About This Guide: openclaw agent is the CLI command for running direct agent turns — sending prompts to your OpenClaw assistant without needing a chat message. This Q&A covers every single flag: thinking levels, delivery routing, session selection, local vs. gateway mode, JSON output, timeouts, verbose logging, and multi-agent setups. Every option explained with real examples.
Command Structure Overview
# Basic syntax
openclaw agent [session] [delivery] [flags] --message "<prompt>"
# Real examples from the docs
openclaw agent --to +15555550123 --message "status update"
openclaw agent --agent ops --message "Summarize logs"
openclaw agent --session-id 1234 --message "Summarize inbox" --thinking medium
openclaw agent --to +15555550123 --message "Trace logs" --verbose on --json
openclaw agent --to +15555550123 --message "Summon reply" --deliver
openclaw agent --agent ops --message "Generate report" --deliver \
--reply-channel slack --reply-to "#reports"
Session Selection: --to, --agent, --session-id
Q What are the three ways to select a session in openclaw agent?
Session selection determines which conversation context the agent run uses. There are three mutually exclusive options, compared in the table below and in the short sketch that follows it.
| Flag | How It Works | Example |
|---|---|---|
| --agent <name> | Targets a configured agent directly using its main session key | --agent ops |
| --session-id <id> | Reuses an existing session by its exact ID | --session-id 1234 |
| --to <target> | Derives the session key from a phone/channel target. Direct chats collapse to main | --to +15555550123 |
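Since only one of these flags can be used per run, here is a quick side-by-side using the same targets as the table above:
# By agent name
openclaw agent --agent ops --message "Summarize logs"
# By exact session ID
openclaw agent --session-id 1234 --message "Summarize inbox"
# By target address (direct chats collapse to main)
openclaw agent --to +15555550123 --message "status update"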
Q How does --agent work with multi-agent setups?
In multi-agent configurations, --agent <name> pins the run to a specific named agent. If that agent doesn't exist, OpenClaw falls back to the default agent automatically.
# Target a specific named agent
openclaw agent --agent ops --message "Check production queue"
# Target a personal agent
openclaw agent --agent personal --message "Summarize my emails"
# If 'ops' agent doesn't exist, falls back to default
openclaw agent --agent ops --message "Generate report"
Q When should I use --to vs --session-id?
Use --to when you want to simulate sending a message to a specific contact (derives the session the same way inbound messages do). Use --session-id when you have an exact session ID and need to inject into that specific context.
# --to: uses same session as when +1555... messages you
openclaw agent --to +15555550123 --message "status update"
# --session-id: inject into exact session by ID
openclaw agent --session-id abc123xyz --message "Continue analysis"
Thinking Levels: --thinking Flag
Q What are all the thinking levels and what do they mean?
The --thinking flag controls how much extended reasoning the model applies. Only available with GPT-5.2+ and Codex models.
| Level | Use When | Speed |
|---|---|---|
| off | Simple tasks, no reasoning needed | Fastest |
| minimal | Light reasoning, quick responses | Fast |
| low | Basic analytical tasks | Moderate |
| medium | Standard complex tasks | Moderate |
| high | Deep analysis, multi-step problems | Slower |
| xhigh | Maximum reasoning (research, complex code) | Slowest |
# Examples
openclaw agent --message "Send status update" --thinking off
openclaw agent --message "Analyze this log file" --thinking medium
openclaw agent --message "Architect new microservice" --thinking high
openclaw agent --message "Review entire codebase" --thinking xhigh
Important: The --thinking flag persists into the session store. Subsequent agent runs on the same session will use the same thinking level unless you override it again.
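A short sketch of that persistence behavior; the session ID below is illustrative:
# First run pins the thinking level on the session
openclaw agent --session-id abc123 --message "Deep analysis of the logs" --thinking high
# A later run on the same session reuses high without passing the flag
openclaw agent --session-id abc123 --message "Follow-up question"
# Reset explicitly when extended reasoning is no longer needed
openclaw agent --session-id abc123 --message "Quick check" --thinking off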
Delivery Flags: --deliver, --channel, --reply-to
Q What does --deliver do and when should I use it?
By default, openclaw agent prints the reply to stdout. With --deliver, OpenClaw sends the reply back to a chat channel instead. Combine with --channel and --reply-to to control where it goes.
# Deliver reply to WhatsApp (default channel)
openclaw agent --to +15555550123 --message "Morning brief" --deliver
# Deliver to a specific Slack channel
openclaw agent --agent ops --message "Generate report" \
--deliver --channel slack --reply-to "#reports"
# Deliver to Telegram chat
openclaw agent --message "Daily summary" \
--deliver --channel telegram --reply-to "1234567890"
Q What are all the supported --channel values?
The --channel flag specifies which messaging platform to deliver to. Supported values are whatsapp, telegram, discord, slack, signal, and imessage; the default is whatsapp.
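Channels not shown elsewhere in this guide follow the same pattern; the recipient formats below (a Discord channel ID and a Signal number) are illustrative assumptions:
# Deliver to a Discord channel (ID is illustrative)
openclaw agent --message "Build finished" --deliver --channel discord --reply-to "123456789012345678"
# Deliver to a Signal contact (number is illustrative)
openclaw agent --message "Build finished" --deliver --channel signal --reply-to "+15555550123"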
Q How do reply override flags work: --reply-channel, --reply-to, --reply-account?
These three flags override where delivery goes without changing the session context. They're useful when you want to run against one session but deliver the response somewhere else.
# Override delivery channel without changing session
openclaw agent --to +15555550123 --message "Daily brief" \
--deliver --reply-channel telegram --reply-to "987654321"
# Override all three delivery params
openclaw agent --agent ops --message "Alert" --deliver \
--reply-channel slack \
--reply-to "#alerts" \
--reply-account workspace-bot
| Flag | Overrides |
|---|---|
| --reply-channel | Which channel platform to use for delivery |
| --reply-to | The target recipient (phone, channel ID, username) |
| --reply-account | Which account ID to use for sending |
Output & Debug Flags
Q What does --json output?
The --json flag switches output from plain text to a structured JSON payload. This is useful for scripting and piping agent output into other tools.
# JSON output (structured payload + metadata)
openclaw agent --to +15555550123 --message "Trace logs" \
--verbose on --json
# Default (text output + MEDIA: lines)
openclaw agent --message "What is my status?"
# stdout: plain reply text
# stderr: MEDIA: lines for any media attachments
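For scripting, a minimal sketch that assumes the JSON payload exposes the reply text under a .reply field (the same field the jq example later in this guide reads):
# Capture just the reply text into a shell variable
reply=$(openclaw agent --message "What is my status?" --json | jq -r '.reply')
echo "$reply"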
Q How do --verbose and --timeout flags work?
Both flags persist into the session store when set — meaning they affect future runs on the same session too, not just the current one.
# Enable verbose logging (persists to session)
openclaw agent --message "Debug this" --verbose on
# Disable verbose logging
openclaw agent --message "Back to normal" --verbose off
# Override agent timeout (in seconds)
openclaw agent --message "Long task" --timeout 300
# Combine verbose + json + timeout
openclaw agent --message "Complex analysis" \
--verbose on --json --timeout 600 --thinking high
Persistence note: --verbose and --thinking both persist to the session store. To reset, explicitly pass --verbose off or --thinking off.
Local Mode: --local Flag
Q What is --local mode and when should I use it?
--local forces the embedded agent runtime to run directly on your machine, bypassing the Gateway. This requires model provider API keys to be available in your shell environment.
# Run locally (requires API keys in shell)
export ANTHROPIC_API_KEY="sk-ant-..."
openclaw agent --message "Quick test" --local
# If the Gateway is unreachable, the CLI automatically falls back to local behavior
Use --local when (a side-by-side sketch follows these lists):
- Testing without the Gateway
- Quick one-off tasks
- The Gateway is down
- CI/CD pipelines
Use Gateway mode (default) when:
- You need session persistence
- Using channels/delivery
- Multi-agent routing
- Production use
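A minimal side-by-side sketch of the two modes, assuming an Anthropic-backed model for the local run; the key value and delivery targets are placeholders:
# Local: one-off run straight from your shell, no Gateway required
export ANTHROPIC_API_KEY="sk-ant-..."
openclaw agent --message "Sanity-check this config" --local
# Gateway (default): persistent session plus delivery to a channel
openclaw agent --agent ops --message "Nightly summary" \
--deliver --channel slack --reply-to "#reports"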
Complete Flag Reference
| Flag | Type | Description |
|---|---|---|
| --message <text> | Required | The prompt text to send to the agent |
| --agent <name> | Session | Target a configured agent by name |
| --session-id <id> | Session | Reuse an existing session by ID |
| --to <target> | Session | Derive session key from a target address |
| --thinking <level> | Model | off / minimal / low / medium / high / xhigh |
| --deliver | Delivery | Send reply to a chat channel |
| --channel <name> | Delivery | Delivery channel (whatsapp/telegram/discord/slack/signal/imessage) |
| --reply-to <target> | Delivery | Delivery recipient override |
| --reply-channel | Delivery | Delivery channel platform override |
| --reply-account | Delivery | Delivery account ID override |
| --json | Output | Output structured JSON payload + metadata |
| --verbose <on/off> | Debug | Persist verbose level to session |
| --timeout <secs> | Control | Override agent timeout in seconds |
| --local | Runtime | Force embedded local runtime (bypass Gateway) |
Real-World Command Examples
# 1. Quick status to WhatsApp with medium thinking
openclaw agent --to +15555550123 --message "status update" --thinking medium
# 2. Ops agent summarizes logs, delivers to Slack
openclaw agent --agent ops --message "Summarize overnight logs" \
--deliver --channel slack --reply-to "#ops-reports" --thinking high
# 3. Reuse session with JSON output for scripting
openclaw agent --session-id abc123 --message "Analyze this" \
--json --timeout 300 | jq '.reply'
# 4. Local mode for CI/CD pipelines
ANTHROPIC_API_KEY="sk-..." openclaw agent \
--message "Run test suite analysis" --local --json
# 5. Deep analysis with max thinking + verbose debug
openclaw agent --agent research --message "Review architecture" \
--thinking xhigh --verbose on --timeout 900