Overview

This guide walks you through deploying an AI assistant powered by OpenAI Codex using VPS Deployment. You'll configure the provider, set up the deployment environment, and get your assistant running.

Why OpenAI Codex?

  • 128K context window
  • Image generation and understanding
  • Tool/function calling

Why VPS Deployment?

  • Always-on availability
  • Accessible from anywhere
  • Cheap hosting options

Requirements

  • Linux VPS: Ubuntu, Debian, or a similar Linux distribution
  • Node.js 22+: Node.js version 22 or higher installed on the server
  • SSH access: The ability to log in to the server over SSH
  • systemd: Used for service management
  • OpenAI Codex credentials: An API key or other authentication
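
Once you can SSH into the server, a quick way to check which of these are already present (Node.js is installed in Step 2 if it is missing):

# Check the Node.js version, if installed
node --version

# Confirm systemd is available
systemctl --version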

Step 1: Configure OpenAI Codex

Standard API key authentication

  1. Create an account at platform.openai.com
  2. Navigate to API Keys section
  3. Generate a new API key
  4. Set OPENAI_API_KEY environment variable

Environment Variable: OPENAI_API_KEY
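
One way to make the key available both to your interactive shell and to the systemd user service installed in Step 2 is sketched below. The drop-in approach assumes the service reads its environment from systemd; adjust it if OpenClaw loads the key another way, and replace the placeholder key with your own.

# Export the key for interactive shells
echo 'export OPENAI_API_KEY="sk-your-key"' >> ~/.profile

# Expose it to the systemd user service as well (unit name from Step 2)
systemctl --user edit openclaw-gateway.service
# In the editor that opens, add:
#   [Service]
#   Environment=OPENAI_API_KEY=sk-your-key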

Step 2: Prepare VPS Deployment Environment

  1. Provision VPS: Create a VPS instance with your preferred provider (DigitalOcean, Hetzner, etc.)
  2. Install Node.js: Install Node.js 22+ on the server
    curl -fsSL https://deb.nodesource.com/setup_22.x | sudo -E bash - && sudo apt-get install -y nodejs
  3. Install OpenClaw: Install OpenClaw globally
    npm install -g openclaw@latest
  4. Run Onboarding: Complete setup and install systemd service
    openclaw onboard --install-daemon
  5. Enable Service: Enable and start the systemd service
    systemctl --user enable --now openclaw-gateway.service
  6. Configure Remote Access: Set up SSH tunnel or Tailscale for remote access
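
Two points worth noting for always-on use: systemd user services stop when you log out unless lingering is enabled, and with the gateway bound to loopback (see Step 4) the simplest remote access is an SSH tunnel. The commands below are a sketch; the port matches the gateway configuration in Step 4, and the user/host are placeholders.

# Keep the user service running after you log out
sudo loginctl enable-linger $USER

# From your local machine: forward the gateway's loopback port over SSH
ssh -N -L 18789:127.0.0.1:18789 user@your-vps-host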

Step 3: Configuration

Create your openclaw.json configuration:

{
  "agents": {
    "defaults": {
      "model": {
        "primary": "openai-codex/gpt-5.2"
      }
    }
  },
  "models": {
    "providers": {
      "openai-codex": {
        "apiKey": "${OPENAI_API_KEY}"
      }
    }
  }
}
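
Before moving on, it is worth confirming the file parses. A minimal check using the Node.js runtime already on the server (the path is an assumption; point it at wherever you created openclaw.json):

node -e 'JSON.parse(require("fs").readFileSync(process.argv[1], "utf8")); console.log("valid JSON")' ~/.openclaw/openclaw.json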

Step 4: Deploy

Add a gateway section to the same openclaw.json so the service runs in remote mode, binds to the loopback interface on port 18789, and requires token authentication:
{
  "gateway": {
    "mode": "remote",
    "bind": "loopback",
    "port": 18789,
    "auth": {
      "mode": "token"
    }
  }
}
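
After changing openclaw.json, restart the service so the gateway picks up the new settings (assuming the systemd user unit installed in Step 2):

systemctl --user restart openclaw-gateway.service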

Step 5: Verify

# Check deployment status
openclaw status

# View logs
openclaw logs --follow

# Test with a message
openclaw test "Hello, are you working?"
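
If the CLI commands above don't respond, you can also inspect the systemd unit directly. These are standard systemd commands, assuming the user service name from Step 2:

# Check the service state
systemctl --user status openclaw-gateway.service

# Follow the service logs
journalctl --user -u openclaw-gateway.service -f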

Connect to Channels

Now connect your deployed OpenAI Codex assistant to messaging channels: