Set Up OpenClaw as Your Personal AI Agent in 2026
OpenClaw is the open-source AI agent that hit 356,000 GitHub stars in three months. It connects any LLM — Claude, GPT, Kimi K2.5, local models via Ollama — to your messaging apps and runs as a persistent daemon on your machine. No vendor lock-in, no subscription beyond API costs.
This is a practical OpenClaw setup guide. Install it, configure it, connect it to a messaging platform, and have it running in under 15 minutes.
What OpenClaw actually is
OpenClaw is a self-hosted personal AI assistant. It runs on your computer (or a VPS), connects to 23+ messaging platforms — WhatsApp, Telegram, Discord, Slack, Signal, iMessage — and executes tasks using whichever AI model you choose.
The project started as a weekend WhatsApp relay by Peter Steinberger (founder of PSPDFKit) in late 2025. It launched as “Clawdbot” in November 2025, got renamed to “Moltbot” after a trademark complaint from Anthropic, and settled on “OpenClaw” in January 2026. Steinberger joined OpenAI in February 2026, and the project moved to a non-profit foundation to stay independent.
Key facts:
- Open source (MIT license) — inspect, modify, distribute
- Model agnostic — Claude, GPT, Gemini, Kimi K2.5, DeepSeek, Ollama local models
- Persistent — runs as a daemon, always on, not session-based
- MCP support — connects to Model Context Protocol servers for extended capabilities
- Memory system — three tiers: long-term (MEMORY.md), daily notes, and experimental “Dreaming” consolidation
Prerequisites
Before installing:
- Node.js 22.14+ (Node 24 recommended). Check with `node --version`.
- An API key from at least one provider — Anthropic, OpenAI, Google, or Moonshot.
- A messaging app you want to connect (Telegram is the easiest to start with).
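The version requirement above can be checked with a small shell gate before installing. This sketch assumes only that `node` may be on your PATH and that `sort -V` (GNU coreutils) is available for version comparison:

```shell
# Prerequisite check sketch: verifies Node is present and new enough.
required="22.14.0"
if ! command -v node >/dev/null 2>&1; then
  echo "node not found -- install Node.js 22.14+ first"
else
  have="$(node --version | sed 's/^v//')"
  # sort -V orders versions numerically; if the required version sorts
  # first, the installed version is at least as new.
  lowest="$(printf '%s\n%s\n' "$required" "$have" | sort -V | head -n1)"
  if [ "$lowest" = "$required" ]; then
    echo "Node $have OK"
  else
    echo "Node $have too old; need >= $required"
  fi
fi
```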
Install OpenClaw
```shell
npm install -g openclaw@latest
openclaw onboard --install-daemon
```
The `onboard` command walks you through:
- Choosing your AI provider and entering your API key
- Selecting a messaging platform to connect
- Installing the daemon so OpenClaw runs in the background
After setup, verify it is running:
```shell
openclaw gateway status
```
Open the dashboard to see your agent:
```shell
openclaw dashboard
```
That is the entire install. The daemon starts automatically and reconnects after reboots.
Connect a messaging platform
Telegram is the fastest to set up. You need a Telegram bot token from @BotFather:
- Message @BotFather on Telegram
- Send `/newbot` and follow the prompts
- Copy the bot token
- Add it to your OpenClaw config
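The last step might look like this. Note the file location and field names here are assumptions for illustration — check the Telegram connector page at docs.openclaw.ai for the real schema, and replace the token with the one @BotFather gave you:

```shell
# Hypothetical config shape -- path and key names are assumptions,
# not the documented OpenClaw schema.
mkdir -p ~/.openclaw
cat > ~/.openclaw/config.json <<'EOF'
{
  "channels": {
    "telegram": {
      "botToken": "123456789:AAExampleTokenFromBotFather"
    }
  }
}
EOF
```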
WhatsApp, Discord, Slack, and the other 20+ platforms each have their own connector, documented in the OpenClaw docs at docs.openclaw.ai.
Once connected, message your bot. OpenClaw receives the message, routes it to your chosen AI model, and sends the response back. It handles conversation history, file attachments, and tool calls automatically.
Add MCP servers
OpenClaw supports Model Context Protocol servers natively. MCP servers give your agent access to external tools — GitHub, Notion, databases, APIs.
MCP server definitions live in your OpenClaw config under mcp.servers. When you install an MCP skill, OpenClaw registers those tools and your agent can call them during conversations.
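A server entry might look like the sketch below. The exact OpenClaw schema is an assumption on my part — the shape mirrors common MCP client configs (a command plus args per server), and `@modelcontextprotocol/server-github` is a real reference server. Merge an entry like this under `mcp.servers` in your config:

```shell
# Hypothetical mcp.servers entry, written to a standalone file so you
# can review it before merging into your OpenClaw config.
cat > mcp-servers-example.json <<'EOF'
{
  "github": {
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-github"],
    "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_your_token_here" }
  }
}
EOF
```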
If you are already using MCP servers with Claude Code, the same servers work with OpenClaw. The protocol is standardized — the server does not care which client connects to it.
Memory: how OpenClaw remembers
OpenClaw has a three-tier memory system, all stored locally on your machine:
- MEMORY.md — Long-term facts and preferences. Loaded at the start of every conversation. Think of it as the agent’s permanent knowledge base.
- memory/YYYY-MM-DD.md — Daily notes. Today’s and yesterday’s files are auto-loaded. Good for tracking tasks, decisions, and context that matters this week but not forever.
- DREAMS.md — Experimental feature introduced in v2026.4.9. Performs “REM backfill” — replays historical daily notes and consolidates important patterns into long-term memory automatically.
All memory stays in ~/.openclaw/workspace. Nothing is cloud-synced by default. You own your data.
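Seeding the long-term tier is just writing Markdown to the workspace. The paths come from the tiers described above; the content format is free-form, so what you put in the file is up to you:

```shell
# Illustrative only: append a fact block to the long-term memory file
# and show where today's daily note would live.
mkdir -p ~/.openclaw/workspace/memory
cat >> ~/.openclaw/workspace/MEMORY.md <<'EOF'
## Preferences
- Keep replies short; no emoji.
- Default timezone: Europe/Berlin.
EOF
echo "Today's note: ~/.openclaw/workspace/memory/$(date +%F).md"
```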
This is similar to how I built persistent memory for my Claude Code agent — the same pattern of file-based memory that loads on session start.
Run OpenClaw on a VPS
If you want your agent running 24/7 without keeping your laptop open, deploy to a VPS. Minimum requirements:
- 2 GB RAM (4 GB recommended)
- 10 GB storage
- sudo/root access
The recommended deployment method on Linux is Docker. OpenClaw has official VPS docs at docs.openclaw.ai/vps.
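A minimal Compose file could look like this. The image name and mount path are assumptions — treat docs.openclaw.ai/vps as the source of truth:

```shell
# Sketch of a Docker deployment, assuming an official image named
# openclaw/openclaw exists. No ports are published, so the gateway
# stays reachable only from the host itself.
cat > docker-compose.yml <<'EOF'
services:
  openclaw:
    image: openclaw/openclaw:latest
    restart: unless-stopped
    volumes:
      - ~/.openclaw:/root/.openclaw
EOF
```

Keeping ports unpublished matters for the security notes later in this post: the agent reaches out to messaging platforms, so nothing needs to listen on a public interface.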
Cloud providers with one-click or documented guides:
- DigitalOcean — Marketplace droplet with hardened firewall, non-root execution, Docker isolation
- Hostinger — One-click Docker Manager VPS template
- Oracle Cloud — Free tier (4 ARM CPUs, 24 GB RAM, 200 GB storage) usable for $0/month
A basic VPS from Hetzner or DigitalOcean costs under $10/month. OpenClaw itself is free — you only pay for API calls to your model provider.
OpenClaw vs Claude Code
Both are AI agents. They solve different problems.
Claude Code is a purpose-built coding agent. It runs in your terminal, operates on your codebase, and excels at multi-file edits, test writing, and debugging. It only works with Claude models and requires a Claude Pro or Max subscription. Sessions are terminal-bound — close the terminal, the session ends.
OpenClaw is a general-purpose personal assistant. It connects to messaging apps, runs as a persistent daemon, and works with any model. It handles calendar management, email, web browsing, file organization, and conversation across platforms. It is not as deep on coding tasks as Claude Code.
The overlap is growing. Claude Code now has Channels for Telegram/Discord messaging, and OpenClaw’s coding capabilities are improving. But today, the use cases are distinct:
- Need a coding agent? Claude Code.
- Need an always-on personal assistant across messaging apps? OpenClaw.
- Want both? Run both. They do not conflict.
Security: one thing to know
In January 2026, a remote code execution vulnerability (CVE-2026-25253) was disclosed. Over 135,000 OpenClaw instances were found exposed on the public internet without proper firewall configuration.
If you deploy OpenClaw on a VPS:
- Use a firewall. Block all ports except the ones your messaging connector needs.
- Do not expose the gateway to the public internet without authentication.
- Keep OpenClaw updated. The release cadence is near-daily (current version: v2026.4.11). Security patches ship fast.
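On Ubuntu or Debian, the firewall rule of thumb above translates to a few `ufw` lines. Connectors like Telegram only make outbound connections, so inbound can stay closed except for SSH. This sketch writes the commands to a script so you can review them before running with sudo:

```shell
# Default-deny firewall sketch for a VPS running OpenClaw.
cat > harden.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
ufw default deny incoming
ufw default allow outgoing
ufw allow OpenSSH
ufw --force enable
EOF
chmod +x harden.sh   # review first, then: sudo ./harden.sh
```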
The DigitalOcean Marketplace droplet is the most security-hardened one-click option — it includes rate-limiting and an authenticated gateway token out of the box.
FAQ
Is OpenClaw free?
The software is free and open source (MIT license). You pay only for API calls to your model provider. Running Kimi K2.5 or local Ollama models keeps costs near zero. Claude and GPT API costs vary by usage.
Can I run OpenClaw with local models?
Yes. Connect Ollama as your model provider and run models like Llama, Mistral, or Qwen locally. No API costs, but you need a machine with enough GPU/CPU to run inference.
Does OpenClaw support MCP servers like Claude Code?
Yes. MCP is natively supported. Server definitions go in your config file, and the agent can call MCP tools during conversations. The same MCP servers that work with Claude Code work with OpenClaw.
How is memory different from Claude Code?
Both use file-based memory. Claude Code uses CLAUDE.md and project-level instructions. OpenClaw has MEMORY.md (long-term), daily notes, and the experimental Dreaming feature for automatic memory consolidation. OpenClaw’s memory is more structured for personal assistant use cases.
I’m documenting the full agent stack — OpenClaw, Claude Code, MCP servers, automation pipelines — in my Build & Automate community. Real production setups, not demo projects.
This post was published using Notipo — my Notion-to-WordPress sync tool. Write in Notion, publish to WordPress automatically.