
Self-Hosting OG Engine with Docker

OG Engine is distributed as a Docker image. Self-hosting gives you full control over infrastructure costs, data residency, and uptime SLAs. The API is identical to the hosted version — no API key required for self-hosted instances unless you configure one.

Pull and run the image with a single command:

```sh
docker run -p 3000:3000 ghcr.io/atypical-consulting/og-engine:latest
```

The server starts on port 3000. Test it:

```sh
curl -X POST http://localhost:3000/render \
  -H "Content-Type: application/json" \
  -d '{"format": "og", "title": "Hello from self-hosted OG Engine"}' \
  --output test.png
```
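If you would rather drive the endpoint from code than curl, here is a minimal Python sketch using only the standard library. The `/render` path, `Content-Type` header, and JSON fields come from the example above; the base URL and function names are illustrative:

```python
import json
import urllib.request

BASE_URL = "http://localhost:3000"  # adjust for your deployment


def build_render_request(fmt: str, title: str) -> urllib.request.Request:
    """Build the POST /render request from the curl example, without sending it."""
    body = json.dumps({"format": fmt, "title": title}).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/render",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def render_to_file(fmt: str, title: str, path: str) -> None:
    """Send the request to a running instance and save the resulting PNG."""
    with urllib.request.urlopen(build_render_request(fmt, title)) as resp:
        with open(path, "wb") as f:
            f.write(resp.read())


# Requires a running instance:
# render_to_file("og", "Hello from self-hosted OG Engine", "test.png")
```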

For a persistent deployment with health checks and restart policy:

docker-compose.yml
```yaml
version: '3.8'

services:
  og-engine:
    image: ghcr.io/atypical-consulting/og-engine:latest
    ports:
      - '3000:3000'
    environment:
      - PORT=3000
      - FONTS_DIR=/app/fonts
    healthcheck:
      test: ['CMD', 'curl', '-f', 'http://localhost:3000/health']
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 10s
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 512M
        reservations:
          memory: 128M
```

Start the service:

```sh
docker compose up -d
```
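Once the stack is up, the healthcheck state is visible through `docker inspect`. A small sketch that extracts it from the inspect JSON (the `State.Health.Status` field is Docker's standard inspect shape; the container name is illustrative):

```python
import json
import subprocess


def health_status(inspect_output: str) -> str:
    """Extract the healthcheck status ("starting", "healthy", "unhealthy")
    from `docker inspect <container>` JSON output."""
    data = json.loads(inspect_output)
    return data[0].get("State", {}).get("Health", {}).get("Status", "unknown")


def current_health(container: str) -> str:
    """Shell out to docker inspect; requires Docker and a running container."""
    out = subprocess.run(
        ["docker", "inspect", container],
        capture_output=True, text=True, check=True,
    ).stdout
    return health_status(out)


# current_health("og-engine-og-engine-1")  # container name is illustrative
```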
The image is configured through environment variables:

| Variable | Default | Description |
| --- | --- | --- |
| `PORT` | `3000` | HTTP port the server listens on |
| `FONTS_DIR` | `/app/fonts` | Directory containing `.ttf` font files |
| `CACHE_SIZE` | `500` | LRU cache size for prepared text (items) |
| `LOG_LEVEL` | `info` | Log verbosity: `debug`, `info`, `warn`, `error` |
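A deployment script can mirror these defaults when it needs to know the effective configuration. The default values come from the table above; the resolution helper itself is illustrative:

```python
import os

# Defaults from the table above
DEFAULTS = {
    "PORT": "3000",
    "FONTS_DIR": "/app/fonts",
    "CACHE_SIZE": "500",
    "LOG_LEVEL": "info",
}


def effective_config(env=None):
    """Resolve each setting from the environment, falling back to the default."""
    env = os.environ if env is None else env
    return {key: env.get(key, default) for key, default in DEFAULTS.items()}
```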

The fonts are bundled inside the Docker image. If you want to add custom fonts, mount a volume over FONTS_DIR:

```yaml
volumes:
  - ./my-fonts:/app/fonts
```

Custom fonts must be in .ttf format. The font family name you reference in style.font is the family name embedded in the font file's metadata.
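Before mounting, it is worth sanity-checking that the directory actually contains .ttf files, since only those are loaded. A quick helper (the path is illustrative):

```python
from pathlib import Path


def list_ttf_fonts(fonts_dir: str) -> list:
    """Return the .ttf filenames the server would pick up from FONTS_DIR."""
    return sorted(p.name for p in Path(fonts_dir).glob("*.ttf"))


# list_ttf_fonts("./my-fonts")  # empty list => the server sees no custom fonts
```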

The /health endpoint is your readiness probe:

```sh
curl http://localhost:3000/health
```

```json
{
  "status": "ok",
  "version": "0.1.0",
  "fonts": ["Outfit", "Inter", "..."],
  "formats": ["og", "twitter", "square", "linkedin", "story"],
  "templates": ["default", "social-card", "blog-hero", "email-banner"]
}
```
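The same endpoint works as a scripted readiness gate. A sketch that evaluates the response shown above (field names come from the example; the URL and the specific readiness criteria are illustrative):

```python
import json
import urllib.request


def is_ready(payload: dict) -> bool:
    """Deploy gate: status ok with at least one font and one format loaded."""
    return (
        payload.get("status") == "ok"
        and bool(payload.get("fonts"))
        and bool(payload.get("formats"))
    )


def probe(url: str = "http://localhost:3000/health") -> bool:
    """Fetch /health from a running instance and evaluate it."""
    with urllib.request.urlopen(url) as resp:
        return is_ready(json.load(resp))
```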

Deploy to Fly.io in three commands:

```sh
# 1. Install flyctl (curl -L https://fly.io/install.sh | sh) and authenticate
fly auth login

# 2. Launch a new app (follow the prompts; accept the auto-detected Docker config)
fly launch --image ghcr.io/atypical-consulting/og-engine:latest

# 3. Scale to two machines for zero-downtime deploys
fly scale count 2
```

Fly.io auto-provisions TLS, a global anycast IP, and health-check-based rolling deployments. Recommended machine size: shared-cpu-1x with 512MB RAM, which comfortably handles roughly 50 simultaneous renders at ~10MB each.

fly.toml
```toml
app = "og-engine-yourapp"
primary_region = "iad"

[build]
  image = "ghcr.io/atypical-consulting/og-engine:latest"

[env]
  PORT = "8080"

[http_service]
  internal_port = 8080
  force_https = true
  auto_stop_machines = true
  auto_start_machines = true
  min_machines_running = 1

  [http_service.concurrency]
    type = "requests"
    hard_limit = 500
    soft_limit = 400

[[vm]]
  cpu_kind = "shared"
  cpus = 1
  memory_mb = 512
```
To deploy on Railway instead:

1. Create a new Railway project.
2. Add a service from the Docker image ghcr.io/atypical-consulting/og-engine:latest.
3. Set the PORT environment variable to $PORT (Railway injects the port at runtime).
4. Add a health check on /health.
A few notes for production deployments on any platform:

- Memory: allocate at least 256MB. Canvas rendering uses ~10MB per concurrent render, so 512MB comfortably handles ~50 simultaneous renders.
- Replicas: run at least two replicas behind a load balancer for zero-downtime deployments.
- Caching: place a CDN (Cloudflare, Fastly) in front of the API if you serve the same OG images repeatedly. Cache on the full request URL + body hash with a 24h TTL.
- Rate limiting: the self-hosted image has no built-in rate limiting; add Nginx or Traefik rate limiting in front if needed.
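The caching note can be made concrete. A sketch of a cache key derived from the full request URL plus a hash of the body, as suggested above (the specific digest scheme is illustrative; any stable hash works):

```python
import hashlib

CACHE_TTL_SECONDS = 24 * 60 * 60  # 24h TTL, as recommended above


def cache_key(url: str, body: bytes) -> str:
    """Key a cached render by full request URL + body hash: identical
    render requests hit the cache, any change in payload misses it."""
    body_hash = hashlib.sha256(body).hexdigest()
    return hashlib.sha256(f"{url}:{body_hash}".encode("utf-8")).hexdigest()
```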

For managed hosting with automatic scaling, CDN, and zero infrastructure work, see Pricing.