Troubleshooting
Common issues and solutions when running Relay
Bot won't start
"No config found" / Setup wizard starts automatically
If no config exists, Relay will start the setup wizard automatically on first run. You can also run it manually:
relay onboard
This creates ~/.relay/config.json with your bot token, user ID, and other settings.
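As a rough sketch, the resulting file might look like the following. Only provider, allowedUserId, opencodeUrl, and groqApiKey appear elsewhere in this guide; the botToken key name and every value shown are placeholders, so check the file the wizard actually writes:

```json
{
  "provider": "opencode",
  "botToken": "123456:ABC-placeholder",
  "allowedUserId": 123456789,
  "opencodeUrl": "http://localhost:4096",
  "groqApiKey": "gsk_..."
}
```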
"Unknown provider: xyz"
Check the provider value in ~/.relay/config.json. The only supported value is opencode.
OpenCode not found
Relay requires OpenCode as its AI backend. Install it:
npm i -g opencode-ai@latest
Verify it's installed: opencode --version. The setup wizard (relay onboard) will detect OpenCode automatically.
Authentication
Bot doesn't respond to messages
Relay only responds to a single authorized user. Verify allowedUserId in ~/.relay/config.json matches your Telegram user ID. Get your ID from @userinfobot.
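For example, if @userinfobot reports your ID as 123456789, the config must contain that exact number (the value shown here is a placeholder):

```json
{
  "allowedUserId": 123456789
}
```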
OpenCode Issues
"Connection refused" in connect mode
The OpenCode server isn't running or isn't reachable. Check:
- The server is running
- opencodeUrl is correct in ~/.relay/config.json
- If remote, the port is open
HTTP warning: If connecting to a remote OpenCode server, use HTTPS in production to encrypt traffic.
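A sketch of the relevant connect-mode fields (the hostname and port are placeholders; prefer https for remote servers as noted above):

```json
{
  "provider": "opencode",
  "opencodeUrl": "https://opencode.example.com:4096"
}
```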
"Failed to start OpenCode server"
Check that OpenCode is installed (npm i -g opencode-ai@latest) and the port isn't already in use.
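To check whether another process already holds the port, one quick probe (4096 is an assumed example port, not a documented default; substitute the port from your config):

```shell
PORT=4096   # assumed example port; use the port from your config
# lsof exits 0 only if something is listening on the port
if lsof -iTCP:"$PORT" -sTCP:LISTEN >/dev/null 2>&1; then
  STATUS="in use"
else
  STATUS="free"
fi
echo "port $PORT is $STATUS"
```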
Voice Messages
"No STT provider available"
Add at least one speech-to-text API key to ~/.relay/config.json:
"groqApiKey": "gsk_..."
MCP Tools
"MCP not supported"
MCP requires OpenCode to be running.
MCP server shows "failed"
Run /mcp to see the error. Common causes: command not found, connection refused (remote), or permission denied.
Use /mcp connect <name> to attempt to reconnect a failed server.
Fetch MCP fails to start
The Fetch MCP requires uvx (Python package runner). Install it:
Linux/macOS:
curl -LsSf https://astral.sh/uv/install.sh | sh
Windows:
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
After installing, restart Relay. The setup wizard checks for uvx when Fetch MCP is selected.
Memory MCP data location
Memory MCP stores its knowledge graph in ~/.relay/memory.jsonl. This file is created automatically when Memory MCP is enabled and grows as the AI stores information.
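To see how much the graph has grown, count its entries (JSONL means one JSON object per line; remember the file exists only after Memory MCP has stored something):

```shell
MEM="$HOME/.relay/memory.jsonl"
if [ -f "$MEM" ]; then
  # awk counts lines even when the last one lacks a trailing newline
  ENTRIES=$(awk 'END{print NR}' "$MEM")
else
  ENTRIES=0
fi
echo "memory entries: $ENTRIES"
```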
Streaming
Messages appear jumpy
Telegram rate-limits message edits. Relay batches updates to stay within limits, but slight delays on slow connections are normal.
Long responses get cut off
Telegram has a 4096-character limit per message. Relay splits long responses automatically. If a response seems incomplete, wait for the stream to finish.
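As a sketch of why a long reply arrives in several parts: with the 4096-character cap, the number of Telegram messages is at least the ceiling of length divided by 4096 (Relay's actual splitting logic may differ, e.g. by breaking on line boundaries):

```shell
LIMIT=4096      # Telegram's per-message character cap
TEXT_LEN=10000  # e.g. a 10,000-character response
CHUNKS=$(( (TEXT_LEN + LIMIT - 1) / LIMIT ))  # ceiling division
echo "$CHUNKS messages"   # prints: 3 messages
```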
Daemon Mode
pm2 fails to install
If relay start can't install pm2 automatically, install it manually:
sudo npm install -g pm2
Daemon won't start
- Check if another instance is already running: relay status
- Check logs for errors: relay logs
Where are daemon logs?
pm2 stores logs in ~/.pm2/logs/. View them with relay logs or access the raw files:
- ~/.pm2/logs/relay-out.log (stdout)
- ~/.pm2/logs/relay-error.log (stderr)
Web Monitoring
"Cannot monitor this URL"
The URL failed upfront validation. Common causes: DNS resolution failed, HTTP 403/401 (authentication or bot blocking), or timeout (> 30 seconds). Check the URL is correct and publicly accessible.
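You can reproduce similar checks by hand with curl; this is an assumed approximation of the validation, not Relay's exact code. A timeout, a 000 status (DNS or connection failure), or a 401/403 here explains the error:

```shell
URL="https://example.com"   # substitute the URL you tried to watch
if command -v curl >/dev/null 2>&1; then
  # -w prints the HTTP status; --max-time mirrors the 30-second ceiling
  CODE=$(curl -sS -o /dev/null -w '%{http_code}' --max-time 30 "$URL") || CODE="000"
else
  CODE="curl not installed"
fi
echo "result: $CODE"
```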
"Page returned very little content"
The page returned fewer than 50 words of text. This usually means a JavaScript-rendered SPA (the page uses React/Vue and renders client-side — plain HTTP only gets the HTML shell) or Cloudflare bot protection ("Just a moment..." challenge page). Try monitoring the underlying API endpoint instead of the rendered page.
Watch never detects changes
Check that the watch is enabled (/watch → verify [ON]). If the page content is thin (SPA or bot protection), every check returns the same near-empty content, so no change is ever detected. Use the "Check Now" button to trigger an immediate check.
Watch auto-disabled
After 5 consecutive fetch errors, the watch is automatically disabled. Re-enable via /watch. If the URL is permanently unreachable, delete the watch.
General
Bot is slow
Check your internet connection, the AI provider's status, and consider using a faster model (/model haiku or /model o4-mini).
"Operation timed out"
The AI took too long. Try simplifying your request, using a faster model, or breaking the task into smaller steps.