OpenClaw not working? Common errors and how to fix them
By Linas Valiukas · March 11, 2026
Self-hosting OpenClaw means you're your own sysadmin, your own DevOps team, and your own on-call engineer. When something breaks at 10pm on a Tuesday, you don't open a support ticket. You open a terminal. This guide covers the errors you'll actually hit, with the exact messages you'll see and the commands that fix them.
I've debugged these errors more times than I'd like. They come in waves. You fix one, another shows up a week later. That's the nature of running your own infrastructure. Here's the runbook.
Gateway errors
The gateway is the WebSocket bridge between OpenClaw's core and the outside world. It handles connections to messaging platforms, browser extensions, and paired devices. When it breaks, everything stops. These are the errors you'll see most.
"Gateway not connected" / "Gateway disconnected"
The WebSocket connection between the OpenClaw server and the gateway dropped. This can happen after a network interruption, a server restart, or just because the connection timed out after sitting idle too long.
Fix it:
docker restart openclaw-gateway
docker logs openclaw-gateway --tail 50
If it keeps disconnecting, check that GATEWAY_URL in your .env file matches your actual server address. A mismatch between localhost and your public IP is a common cause. Also make sure ports 443 and 8443 aren't blocked by your firewall or cloud provider security group.
"Disconnected 4008 connect failed"
Error code 4008 means the gateway tried to establish a WebSocket connection and got rejected. The server is running, but it won't accept the connection. Nine times out of ten, this is a configuration mismatch.
- Verify GATEWAY_URL uses wss:// if you're behind a reverse proxy with TLS, or ws:// if you're not.
- Check that your reverse proxy (nginx, Caddy, Traefik) is configured to upgrade WebSocket connections. Missing proxy_set_header Upgrade headers in nginx is extremely common.
- If you recently changed your domain or IP, the gateway config is still pointing at the old one.
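For nginx specifically, the upgrade headers look like this. A minimal sketch: the /gateway path and the 8443 backend port are assumptions, so adjust both to your setup.

```nginx
# Minimal WebSocket upgrade config for nginx (path and port are examples)
location /gateway {
    proxy_pass http://127.0.0.1:8443;
    proxy_http_version 1.1;                      # WebSocket requires HTTP/1.1
    proxy_set_header Upgrade $http_upgrade;      # forward the upgrade request
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_read_timeout 3600s;                    # keep idle sockets open longer
}
```

Run nginx -t to validate the config, then reload nginx.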
"Pairing required" / "Disconnected 1008 pairing required"
Your device's pairing with the OpenClaw server has expired or was invalidated. This happens after server updates, database resets, or when the pairing token TTL runs out.
Go to Settings > Devices in the OpenClaw dashboard and remove the stale device entry. Then re-pair from scratch. If you're getting this on every restart, check that the database volume is actually persisting between container restarts. A missing or misconfigured Docker volume means OpenClaw loses pairing data every time it stops.
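If the volume is missing, a named volume in your compose file fixes the persistence. A sketch, assuming a service called openclaw and a /data data directory (check your image's documentation for the real image name and path):

```yaml
services:
  openclaw:
    image: openclaw/openclaw        # image name is an assumption
    volumes:
      - openclaw-data:/data         # named volume survives docker rm / recreate
volumes:
  openclaw-data:
```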
# Check if your data volume is mounted correctly
docker inspect openclaw | grep -A 5 "Mounts"

"Device signature expired" / "Token mismatch"
The authentication token for a paired device has expired or doesn't match what the server expects. Clear the device's local storage, remove the device from the server's device list, and re-pair. If you're running multiple OpenClaw instances behind a load balancer without shared sessions, that's your problem — each instance has its own device registry.
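If you do need multiple instances, one workaround is to pin each client to a single backend at the load balancer. A sketch for nginx (the backend addresses are placeholders):

```nginx
upstream openclaw {
    ip_hash;                  # same client IP always lands on the same instance
    server 10.0.0.1:3210;
    server 10.0.0.2:3210;
}
```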
"Gateway failed to start" / "Gateway unreachable" / "Restart failed"
The gateway process can't start at all. Check the logs first:
docker logs openclaw-gateway 2>&1 | tail -100
Common causes: the port is already in use by another process, the container ran out of memory (check docker stats), or the gateway binary crashed and Docker's restart policy gave up after too many attempts. Kill any orphaned processes on the port, bump the memory limit, or reset the restart count with docker rm and recreate the container.
Authentication errors
"401 Unauthorized" / "Invalid authentication"
Your LLM API key is wrong, expired, or missing. OpenClaw doesn't generate its own AI responses — it forwards requests to providers like OpenAI or Anthropic using your API key. A 401 means the provider rejected your credentials.
- Check that OPENAI_API_KEY or ANTHROPIC_API_KEY in your .env starts with the correct prefix (sk- for OpenAI).
- Regenerate the key in your provider's dashboard. Keys can be revoked without notice if the provider detects a leak.
- Make sure there are no trailing spaces or newlines in the key. Copy-paste from a terminal sometimes adds invisible characters.
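To actually see those invisible characters, pipe the line through cat -A, which prints $ at each line end and ^M for carriage returns. A self-contained demo with a deliberately broken key (the file and key are made up):

```shell
# Write a key with a stray trailing space, then expose it with cat -A
printf 'OPENAI_API_KEY=sk-demo123 \n' > /tmp/demo.env
grep OPENAI_API_KEY /tmp/demo.env | cat -A
# Output ends in " $": the space before the "$" is the invisible culprit
```

A clean key prints with "$" directly after the last character.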
"Invalid bearer token"
Similar to the 401, but this usually means the token format is wrong rather than the token being expired. Some providers require a "Bearer " prefix in the authorization header, and OpenClaw's config expects just the raw key. Don't include "Bearer " or "sk-" twice. Just the key, nothing else.
Rate limiting
"429 Too Many Requests" / "Rate limit exceeded" / "API rate limit reached"
This isn't an OpenClaw bug. Your LLM provider is telling you to slow down. Every provider has rate limits based on your plan tier — requests per minute, tokens per minute, or both. Free-tier API keys hit these limits fast, sometimes within minutes of normal use.
Check your usage on your provider's dashboard. Then set request throttling in OpenClaw:
# In your .env or OpenClaw settings
MAX_REQUESTS_PER_MINUTE=20
RATE_LIMIT_COOLDOWN_SECONDS=60

If you're on a paid API plan and still hitting limits, you're probably sending too many concurrent requests. Reduce the number of active conversations or switch to a provider with higher limits for your tier.
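When bursts rather than sustained volume trip the limit, backing off between retries helps. A generic shell sketch (the function name and delay schedule are arbitrary):

```shell
# Retry a command with exponential backoff; give up after four attempts
retry_backoff() {
  local delay
  for delay in 1 2 4 8; do
    "$@" && return 0          # command succeeded: stop retrying
    sleep "$delay"            # failed (e.g. a 429): wait, then retry
  done
  return 1
}
# usage: retry_backoff curl -fsS https://api.openai.com/v1/models
```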
"LLM request timed out"
The LLM provider took too long to respond and OpenClaw gave up waiting. This happens during provider outages, when using very large context windows, or when the model is overloaded. Increase the timeout:
LLM_REQUEST_TIMEOUT=120 # seconds, default is usually 60

If it keeps timing out, try a different model or check status.openai.com (or your provider's status page) for ongoing incidents.
"Fetch failed"
A generic network error. OpenClaw tried to reach an external service and couldn't. Could be DNS resolution failure, a firewall blocking outbound HTTPS, or the provider's API being down. Test connectivity from inside the container:
docker exec openclaw curl -I https://api.openai.com/v1/models

If that fails, your container's network is misconfigured. Check Docker's DNS settings and make sure your host can resolve external domains.
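If DNS inside the container is the problem, you can hand Docker explicit resolvers in /etc/docker/daemon.json (the resolver addresses below are examples; restart the Docker daemon after editing):

```json
{
  "dns": ["1.1.1.1", "8.8.8.8"]
}
```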
Installation errors
"npm ERR! code ENOENT" / "npm install failed"
ENOENT means "file not found." npm is looking for a file or directory that doesn't exist. This usually means you're running npm install from the wrong directory, or the package.json is missing or corrupted.
# Make sure you're in the right directory
ls package.json
# Clear npm cache and try again
npm cache clean --force
rm -rf node_modules package-lock.json
npm install
On Linux, permission errors masquerading as ENOENT are common. If you installed Node with sudo, npm's global directory might have root ownership. Use nvm instead.
"openclaw: command not found" / "not on PATH"
The openclaw binary exists on your machine but your shell can't find it. After a global npm install, the binary lands in npm's global bin directory, which isn't always in your PATH.
# Find where npm puts global binaries (npm 9 removed the "npm bin" command)
npm config get prefix
# Global binaries live in bin/ under that prefix. Add it to your PATH (bash):
echo 'export PATH="$(npm config get prefix)/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc
# Verify
which openclaw
If you installed via Docker, there's no openclaw CLI command. You access it through the web UI at http://localhost:3210.
"Port already in use"
Another process is using port 3210 (or whatever port OpenClaw is configured to use). Find and kill it:
# Find what's using the port
lsof -i :3210
# or on Linux without lsof
ss -tlnp | grep 3210
# Kill it (replace PID with the actual process ID)
kill -9 PID
# Or just change OpenClaw's port
PORT=3211 openclaw
If it's a zombie OpenClaw process from a previous crash, docker rm -f openclaw and start fresh.
Messaging platform issues
"Telegram not working" / "Bot not responding"
Three things to check, in this order:
- Is the Telegram plugin enabled? Go to Settings > Plugins in the OpenClaw dashboard. If it says "configured plugin disabled," toggle it on.
- Is your bot token valid? Tokens expire or get revoked. Open @BotFather on Telegram, use /token to check, and regenerate if needed.
- Is the webhook URL correct and reachable? Verify with:

curl https://api.telegram.org/bot<YOUR_TOKEN>/getWebhookInfo

The URL should point to your server's public address. If you see "last_error_message" in the response, that tells you exactly what's wrong. If the URL is stale (say, after an IP change), re-register it with the setWebhook method.
"Discord bot not responding" / "Plugin not available"
Discord bots need two things that people forget: the MESSAGE_CONTENT privileged intent (enabled in the Discord Developer Portal under your application's Bot settings) and the correct bot permissions when generating the invite link. Without the MESSAGE_CONTENT intent, the bot connects but can't read any messages. It just sits there, silent.
If the plugin shows as "not available," your Discord bot token is likely wrong or the bot was removed from the server. Regenerate the token, re-invite the bot, and restart OpenClaw.
"WhatsApp plugin not available" / "Slack plugin not available" / "iMessage plugin not available"
WhatsApp integration requires a Meta Business account and a verified phone number through the WhatsApp Business API. It's not a simple token — there's an entire approval process. If the plugin shows "not available," you haven't completed the Meta Business verification, or your WhatsApp API access has been revoked.
Slack integration requires creating a Slack app with the right OAuth scopes and installing it to your workspace. The bot token needs chat:write, channels:history, and app_mentions:read at minimum.
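In a Slack app manifest, those scopes sit under oauth_config. A minimal excerpt (any extra scopes your workflows need go in the same list):

```yaml
oauth_config:
  scopes:
    bot:
      - chat:write          # send messages
      - channels:history    # read channel messages
      - app_mentions:read   # react to @mentions
```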
iMessage requires running OpenClaw on a Mac with the Messages app configured. There's no way around this. Linux and Windows users can't use the iMessage plugin at all.
Performance issues
"Context overflow" / "Compacting context"
Every LLM has a context window — a maximum amount of text it can process at once. When a conversation gets too long, OpenClaw has to compress older messages to make room for new ones. You'll see "compacting context" in the logs and the response quality drops because the model is working with a lossy summary of earlier messages.
There's no permanent fix. This is how LLMs work. You can mitigate it:
- Start new conversations more often instead of keeping one running for days.
- Use a model with a larger context window (GPT-4o supports 128k tokens, Claude supports 200k).
- Keep your system prompt short. A 2,000-token system prompt eats into every single request.
- Disable file attachments and image analysis if you don't need them — they consume a lot of context.
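A rough way to gauge how much of the window your system prompt eats: English text averages about four characters per token, so character count divided by four is a usable estimate (a heuristic, not an exact tokenizer):

```shell
# Estimate tokens in a prompt file at ~4 characters per token
printf 'You are a helpful assistant that answers in short sentences.' > /tmp/prompt.txt
wc -c < /tmp/prompt.txt | awk '{printf "~%d tokens\n", $1 / 4}'
# prints ~15 tokens for this 60-character example
```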
"No output" / "No response" / "Stops responding"
OpenClaw sends a message to the LLM and gets nothing back. Or it gets a partial response and stops mid-sentence. This is usually one of three things:
- The model is overloaded. Try again in a few minutes, or switch to a different model.
- Your API key ran out of credits. Check your provider's billing page.
- The server ran out of memory. Run docker stats and check if OpenClaw is using all available RAM. If it is, you need a bigger server or you need to reduce concurrent conversations.
OpenClaw is slow / high latency
Response time depends on the model you're using, the length of the conversation, and your server's proximity to the LLM provider's data center. A VPS in Singapore talking to OpenAI's US-East servers adds 200-300ms of network latency to every request. On top of that, model inference itself takes 2-30 seconds depending on the model and prompt length.
If it used to be fast and slowed down, your conversation context has grown too large. Start a new conversation. If it's always been slow, try a smaller/faster model or move your server closer to your LLM provider.
Browser and extension issues
"Browser relay not working" / "Chrome extension not working"
The browser relay lets OpenClaw interact with web pages through a Chrome extension. It's finicky. The extension needs to be paired with your OpenClaw instance, the relay server needs to be running, and the extension needs to be on the same network or connected through a tunnel.
- Reinstall the extension from the Chrome Web Store.
- Re-pair it with your OpenClaw instance (Settings > Browser Relay).
- Make sure the relay port (default 7860) is open and reachable from your browser.
- Check that you're not running an ad blocker or privacy extension that blocks WebSocket connections.
The pattern you'll notice
You fix the gateway error. Next week the API key expires. You regenerate it. Then the Telegram webhook breaks because you changed your server's IP. You fix that. Then the Docker volume fills up and you lose your conversation history. Every fix is temporary. Something else breaks.
At some point you do the math. You've spent 15 hours this month debugging OpenClaw instead of using it. At any reasonable hourly rate, that's more than a hosting service would cost.
That's why we built TryOpenClaw.ai. We handle the gateway connections, the API keys, the plugin configuration, the updates, and the monitoring. You get OpenClaw in your messaging app without the terminal window. If that sounds better than maintaining a runbook like this one, join the waitlist.
Software engineer and founder of TryOpenClaw.ai. Been writing code since age 14.
Try it right now
This is just one example — OpenClaw adapts to whatever you need. Describe any workflow in plain language and it figures out the rest. Pay $1 for a full 24-hour trial, pick your messaging app, and start chatting with your own instance in under 60 seconds. Love it? $39/mo. Not for you? Walk away — we delete everything.
Try OpenClaw for $1. 24h full access. No commitment. Cancel anytime.