How to Deploy OpenClaw – Autonomous AI Agent Platform

Updated on February 2, 2026
Learn how to deploy OpenClaw and run a self-hosted AI assistant across messaging platforms.

OpenClaw (formerly Moltbot) is a personal AI assistant you can run on your own devices. It connects to popular messaging platforms like WhatsApp, Telegram, Slack, Discord, and many more, providing a unified assistant experience across channels. OpenClaw runs locally and includes a Gateway control plane that manages sessions, routes messages, executes tools, and maintains persistent memory.

This article explains how to deploy OpenClaw using Docker Compose with its interactive setup wizard. It covers model configuration, channel integration, gateway setup, the persistent memory system, and optional integration with Vultr Serverless Inference.

Prerequisites

Before you begin, you need to:

  • Have access to a Linux server or workstation with Docker and Docker Compose installed.
  • Install Git to clone the OpenClaw repository.
  • Have an API key (or OAuth access) for at least one supported model provider, such as Anthropic, OpenAI, Google, or OpenRouter.
  • Have bot tokens or credentials for any messaging platforms you plan to connect, such as Slack, Discord, or Telegram.

Deploy OpenClaw Using Docker Compose

The OpenClaw repository includes a setup script that handles building, onboarding, and starting the gateway. If Docker and Docker Compose are not yet installed, see the official Docker documentation for installation steps.

Interactive Setup (Recommended)

The interactive wizard configures model providers, channel integrations, and security settings in one streamlined flow.

  1. Create the project directory and navigate into it.

    console
    $ mkdir -p ~/openclaw-assistant
    $ cd ~/openclaw-assistant
    
  2. Clone the official OpenClaw repository and switch to the cloned directory.

    console
    $ git clone https://github.com/openclaw/openclaw.git
    $ cd openclaw
    
  3. Run the Docker setup script.

    console
    $ ./docker-setup.sh
    

    The interactive wizard walks you through:

    • Security acknowledgment: Review security best practices for running AI agents.
    • Model provider setup: Select your preferred model provider (Anthropic, OpenAI, Google, OpenRouter, and so on) and enter your API key or authenticate via OAuth.
    • Channel configuration: Choose messaging platforms (Slack, Discord, Telegram, WhatsApp, and so on) and enter the required tokens.
    • Channel allowlists: Configure which channels or users can interact with the bot.
    • Skills setup: Enable optional capabilities like web search, image generation, and more.
    • Gateway startup: Builds the image, generates a gateway token, and starts the service.

    OpenClaw Setup Wizard

  4. Verify the gateway is running by checking the container logs.

    console
    $ docker compose logs openclaw-gateway
    

    On a successful startup, the output includes a line similar to:

    [gateway] listening on ws://0.0.0.0:18789

The wizard writes configuration and workspace data to:

  • ~/.openclaw/: Configuration, credentials, and session data
  • ~/.openclaw/workspace: Agent workspace (memory files, skills, and so on)

Access the OpenClaw Control UI

The Control UI provides a web interface for managing your OpenClaw installation.

Note
For security, the Control UI requires either localhost access or HTTPS. It will not work over plain HTTP from a remote IP.
  1. Retrieve your gateway token from the configuration file.

    console
    $ grep -A1 '"token"' ~/.openclaw/openclaw.json
    

    Output:

    "token": "your-token-here"
  2. Choose one of the following access methods:

    • Local access: On the machine running the gateway, browse to http://localhost:18789 (the port shown in the startup log) and enter your gateway token.
    • SSH tunnel: From a remote workstation, forward the port first, for example ssh -L 18789:localhost:18789 user@your-server, then browse to http://localhost:18789.
    • HTTPS: Serve the gateway behind a reverse proxy with a valid TLS certificate and open it over HTTPS.

  3. Explore the main interface sections:

    • Chat: Interact with the assistant directly through the built-in chat interface.
    • Overview: View gateway status and health information.
    • Channels: Monitor messaging platform connections and their status.
    • Sessions: View active chat sessions across all connected channels.
    • Skills: Browse available agent skills and capabilities.
    • Config: Manage gateway and agent configuration settings.
  4. Use the Chat interface to verify the installation works correctly.

    OpenClaw Control UI Chat Interface
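If your gateway runs on a remote server, one common approach (an assumption here, not something OpenClaw mandates) is SSH local port forwarding, so the browser reaches the UI as localhost. The sketch below builds the local URL from your token; whether the UI accepts the token as a ?token= query parameter is also an assumption, and it may prompt for the token on load instead.

```shell
#!/bin/sh
# Sketch: reach the Control UI on a remote server via SSH port forwarding.
# Run the forwarding command on your workstation and leave it open
# (replace user@your-server; 18789 is the port from the startup log):
#   ssh -N -L 18789:127.0.0.1:18789 user@your-server
# Then build the local URL from your gateway token:
control_ui_url() {
  port="${2:-18789}"
  printf 'http://localhost:%s/?token=%s\n' "$port" "$1"
}
control_ui_url "your-token-here"
```

Paste the printed URL into a browser on your workstation while the tunnel is open.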

Use Vultr Serverless Inference (Optional)

OpenClaw supports custom model providers through OpenAI-compatible endpoints. Configure Vultr Serverless Inference as a model provider to access additional AI models.

Note
OpenClaw requires models that support tool calling (function calling). Currently, Kimi K2 Instruct is the only Vultr Serverless Inference model with tool calling support. Other models like DeepSeek, Qwen, and GPT-OSS do not support tool calling and will return blank responses.
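If you want to confirm tool-calling support yourself before wiring a model in, you can send a minimal chat completion with a tools array to the OpenAI-compatible endpoint. The sketch below only constructs and validates the request payload; the curl invocation is left as a comment because it needs a live API key, and the get_time function is a made-up example.

```shell
#!/bin/sh
# Sketch: minimal tool-calling probe payload for an OpenAI-compatible endpoint.
# Send it with (requires a real key):
#   curl -s https://api.vultrinference.com/v1/chat/completions \
#     -H "Authorization: Bearer $VULTR_API_KEY" \
#     -H "Content-Type: application/json" -d "$PAYLOAD"
PAYLOAD='{
  "model": "kimi-k2-instruct",
  "messages": [{"role": "user", "content": "What time is it in UTC?"}],
  "tools": [{
    "type": "function",
    "function": {
      "name": "get_time",
      "description": "Get the current UTC time",
      "parameters": {"type": "object", "properties": {}}
    }
  }]
}'
# Validate the payload locally before spending an API call on it.
echo "$PAYLOAD" | python3 -m json.tool > /dev/null && echo "payload is valid JSON"
```

A model with tool-calling support responds with a tool_calls entry rather than a blank message.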
  1. Enable Vultr Serverless Inference in the Vultr Customer Portal and copy your API key.

  2. Edit the OpenClaw configuration file.

    console
    $ nano ~/.openclaw/openclaw.json
    

    Add the Vultr provider to the models section. Replace YOUR-VULTR-API-KEY with your Vultr API key.

    json
    "models": {
        "providers": {
            "vultr": {
                "baseUrl": "https://api.vultrinference.com/v1",
                "apiKey": "YOUR-VULTR-API-KEY",
                "api": "openai-completions",
                "models": [
                    { "id": "kimi-k2-instruct", "name": "Kimi K2 Instruct" }
                ]
            }
        }
    }
    

    Save and close the file.

  3. Restart the gateway to apply the new provider.

    console
    $ docker compose restart openclaw-gateway
    
  4. Select the Vultr model in the Control UI or via the chat command.

    ini
    /model vultr/kimi-k2-instruct
    
  5. Verify the model responds correctly by sending a test message.

    Vultr Kimi K2 Model Response
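Because openclaw.json is edited by hand in step 2, a quick parse check before restarting can catch typos. The sketch below uses Python's json.tool as a strict-JSON linter; since the file is JSON5, legal JSON5 features such as comments or trailing commas will fail the check, so treat a failure as a hint rather than proof of a broken config.

```shell
#!/bin/sh
# Sketch: sanity-check a hand-edited config before restarting the gateway.
# Caveat: openclaw.json is JSON5; this check is stricter than the real parser.
check_json() {
  python3 -m json.tool "$1" > /dev/null 2>&1 \
    && echo "parses as strict JSON" \
    || echo "not strict JSON (may still be valid JSON5)"
}
# Demo against a throwaway file rather than the live config:
TMP=$(mktemp)
printf '{"models": {"providers": {}}}\n' > "$TMP"
check_json "$TMP"
rm -f "$TMP"
```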

OpenClaw Memory Database

OpenClaw maintains persistent memory through local file storage, enabling continuous learning across conversations. The system stores session transcripts, conversation history, and long-term memories to provide context-aware assistance that improves over time.

Session Storage

The Gateway stores all session data locally on the host.

  • ~/.openclaw/openclaw.json: Main configuration (JSON5)
  • ~/.openclaw/agents/<agentId>/sessions/sessions.json: Session metadata and state
  • ~/.openclaw/agents/<agentId>/sessions/<sessionId>.jsonl: Full conversation transcripts
  • ~/.openclaw/credentials/: Channel credentials (WhatsApp, Slack, and so on)
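Because transcripts are .jsonl files (one JSON object per line), ordinary shell tools can summarize them. A small sketch, demonstrated against a throwaway fixture; on a real host you would point it at ~/.openclaw/agents:

```shell
#!/bin/sh
# Sketch: count lines (messages/events) per session transcript.
summarize_sessions() {
  find "$1" -name '*.jsonl' 2>/dev/null | sort | while read -r f; do
    printf '%s: %s lines\n' "$f" "$(wc -l < "$f")"
  done
}
# Demo fixture standing in for ~/.openclaw/agents:
DEMO=$(mktemp -d)
mkdir -p "$DEMO/main/sessions"
printf '{"role":"user"}\n{"role":"assistant"}\n' > "$DEMO/main/sessions/abc.jsonl"
summarize_sessions "$DEMO"
rm -rf "$DEMO"
```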

Memory Files

OpenClaw maintains two layers of memory through Markdown files in the agent workspace.

  • Daily Logs (memory/YYYY-MM-DD.md): Append-only logs for day-to-day context. The assistant reads today's and yesterday's logs at the start of a session.
  • Long-term Memory (MEMORY.md): Curated facts, preferences, and decisions that persist across sessions.
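The daily-log naming scheme is easy to reproduce in shell, which is handy when scripting backups or reviewing recent context. A sketch (daily_log_path is an illustrative helper, not part of OpenClaw):

```shell
#!/bin/sh
# Sketch: compute the daily-log paths (memory/YYYY-MM-DD.md) the assistant reads.
daily_log_path() {
  # $1 is a date in YYYY-MM-DD form
  printf 'memory/%s.md\n' "$1"
}
TODAY=$(date +%Y-%m-%d)
# GNU date spells "yesterday" with -d; BSD/macOS date uses -v-1d instead.
YESTERDAY=$(date -d yesterday +%Y-%m-%d 2>/dev/null || date -v-1d +%Y-%m-%d)
daily_log_path "$TODAY"
daily_log_path "$YESTERDAY"
```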

To instruct OpenClaw to remember something, send a message like:

Remember that I prefer dark mode in all applications.

The assistant writes this to the appropriate memory file for future reference.

Session Management Commands

Control OpenClaw sessions directly from any connected chat.

  • /status: View current session status (model, tokens, cost)
  • /new or /reset: Start a fresh session
  • /compact: Summarize and compress session context
  • /think <level>: Adjust thinking depth (off, minimal, low, medium, high)

Backup and Migration

  1. Back up your OpenClaw data.

    console
    $ tar -czvf openclaw-backup.tar.gz ~/.openclaw
    
  2. Restore on a new server.

    console
    $ tar -xzvf openclaw-backup.tar.gz -C ~/
    
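To gain confidence that a backup is restorable before you rely on it, you can round-trip an archive and diff the result. The sketch below does this against a throwaway fixture rather than your live ~/.openclaw directory:

```shell
#!/bin/sh
# Sketch: verify that a tar backup round-trips intact, using a throwaway
# fixture in place of the real ~/.openclaw directory.
verify_backup_roundtrip() {
  SRC=$(mktemp -d); DST=$(mktemp -d)
  mkdir -p "$SRC/.openclaw/workspace"
  printf '{"token":"example"}\n' > "$SRC/.openclaw/openclaw.json"
  tar -C "$SRC" -czf "$DST/openclaw-backup.tar.gz" .openclaw   # back up
  tar -C "$DST" -xzf "$DST/openclaw-backup.tar.gz"             # restore
  if diff -r "$SRC/.openclaw" "$DST/.openclaw" > /dev/null; then
    echo "backup verified"
  else
    echo "backup mismatch"
  fi
  rm -rf "$SRC" "$DST"
}
verify_backup_roundtrip
```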

Use Cases

  • Personal productivity assistant: Track tasks, set reminders, draft emails, and summarize documents directly from Slack, WhatsApp, or any connected channel.
  • Multi-channel unified inbox: Start a conversation on one platform and continue on another; OpenClaw maintains context across all connected channels.
  • Team collaboration bot: Deploy in Slack channels to answer questions, search documentation, and assist with workflows.
  • Development and DevOps helper: Get coding assistance, run commands, and automate routine tasks via tool integrations.
  • Scheduled automation: Use cron jobs, webhooks, and Gmail Pub/Sub to trigger assistant actions automatically.
  • Persistent knowledge base: Build long-term memory that persists across sessions and improves over time.

Conclusion

You have successfully deployed OpenClaw as a personal AI assistant with Docker Compose. The interactive wizard streamlines deployment by configuring your model provider, messaging channels, and gateway in one flow. The memory database ensures continuity across sessions by building a personalized knowledge base that improves over time. With Vultr Serverless Inference, you can access additional AI models like Kimi K2 on Vultr's infrastructure. For more information, refer to the official OpenClaw documentation.
