Run LeafLock with Docker and Docker Compose. The sections below walk through the architecture, configuration, and tuning tips that matter in development and production.
## 🐳 Multi-Container Architecture

**LeafLock Container Stack:**

- **Frontend:** React app served by Caddy
- **Backend:** Go application with the Fiber framework
- **Database:** PostgreSQL 15 with encrypted storage
- **Cache:** Redis 7 for session management
- **Proxy:** Nginx reverse proxy (optional)

**Benefits:**

- Isolated service boundaries
- Easy scaling and management
- Consistent environments
- Simple backup and recovery
## 🐳 Docker Engine

- Docker Engine 20.10+
- Docker Compose v2.0+
- Minimum 4GB RAM
- 10GB available disk space
## 🌐 Network Requirements

- Internet access for image pulls
- Available ports: 80, 443, 8080, 3000
- Domain name (for production SSL)
## 🚀 Quick Start

```bash
# Clone the repository
git clone <repository-url>

# Copy environment template
# Edit environment variables

# Start the stack
docker compose up -d

# Frontend: http://localhost:3000
# Backend:  http://localhost:8080
```

For production, layer the production compose file on top:

```bash
# Use production compose file
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
```
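Bringing the stack up is asynchronous, so the health endpoints above may take a few seconds to start answering. A small retry helper can gate follow-up steps; this is a sketch, with the endpoint path and ports taken from this guide:

```shell
#!/bin/sh
# wait_healthy RETRIES DELAY CMD...: run CMD until it succeeds,
# retrying up to RETRIES times with DELAY seconds between attempts.
wait_healthy() {
  retries=$1
  delay=$2
  shift 2
  i=0
  while [ "$i" -lt "$retries" ]; do
    if "$@" > /dev/null 2>&1; then
      return 0
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  return 1
}

# Example: block until the backend health endpoint answers
# wait_healthy 30 2 curl -fsS http://localhost:8080/api/v1/health
```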
## ⚛️ React Frontend Container

**Base Image:** `node:18-alpine` → `caddy:2.8-alpine`

**Build Process:**

1. Install dependencies with pnpm
2. Build the React application with Vite
3. Serve with Caddy reverse proxy

**Key Features:**

- Multi-stage build for minimal image size
- Dynamic DNS resolution (fixes 502 errors)
- Automatic security headers
- Gzip/Zstd compression enabled
- Health checks configured
```dockerfile
# Stage 1: build the React app
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json pnpm-lock.yaml ./
RUN npm install -g pnpm && pnpm install
COPY . .
RUN pnpm build

# Stage 2: serve with Caddy
FROM caddy:2.8-alpine
RUN apk --no-cache add ca-certificates curl gettext bind-tools
COPY Caddyfile /etc/caddy/Caddyfile
COPY --chmod=0755 docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
COPY --from=builder /app/dist /usr/share/caddy

HEALTHCHECK --interval=30s --timeout=3s --start-period=30s \
  CMD curl -f http://localhost/health || exit 1

ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
CMD ["caddy", "run", "--config", "/etc/caddy/Caddyfile", "--adapter", "caddyfile"]
```
> **Important:** LeafLock uses Caddy instead of NGINX to avoid 502 Bad Gateway errors caused by DNS caching. Caddy dynamically resolves backend DNS on every request, which is critical for platforms like Railway with dynamic IPv6 IPs.
## ⚡ Go Backend Container

**Base Image:** `golang:1.21-alpine` → `alpine:3.18`

**Build Process:**

1. Download Go modules
2. Build an optimized binary
3. Run in a minimal Alpine container

**Security Features:**

- Non-root user execution
- Minimal attack surface
- Health check endpoints
- Graceful shutdown handling
```dockerfile
# Stage 1: build the Go binary
FROM golang:1.21-alpine AS builder
WORKDIR /app
COPY . .
RUN go mod download
RUN CGO_ENABLED=0 GOOS=linux go build -o app main.go

# Stage 2: minimal runtime image with a non-root user
FROM alpine:3.18
RUN addgroup -g 1001 -S leaflock && \
    adduser -u 1001 -S leaflock -G leaflock
WORKDIR /app
COPY --from=builder /app/app .
USER leaflock

HEALTHCHECK --interval=30s --timeout=3s --start-period=30s \
  CMD wget -qO- http://localhost:8080/api/v1/health || exit 1

CMD ["./app"]
```
Core services in `docker-compose.yml`:

```yaml
services:
  postgres:
    image: postgres:15-alpine
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]

  redis:
    image: redis:7-alpine
    command: redis-server --requirepass ${REDIS_PASSWORD}
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]

  backend:
    environment:
      - DATABASE_URL=postgres://postgres:${POSTGRES_PASSWORD}@postgres:5432/notes?sslmode=disable
      - REDIS_PASSWORD=${REDIS_PASSWORD}
      - JWT_SECRET=${JWT_SECRET}
      - SERVER_ENCRYPTION_KEY=${SERVER_ENCRYPTION_KEY}
      - CORS_ORIGINS=${CORS_ORIGINS}
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy

  frontend:
    environment:
      - VITE_API_URL=${VITE_API_URL}
      - BACKEND_INTERNAL_URL=http://backend:8080
```
Production hardening overrides (`docker-compose.prod.yml`):

```yaml
services:
  postgres:
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - /var/lib/leaflock/postgres:/var/lib/postgresql/data
    ports: [] # Remove external port exposure

  redis:
    volumes:
      - /var/lib/leaflock/redis:/data
    ports: [] # Remove external port exposure

  backend:
    environment:
      - DATABASE_URL=postgres://postgres:${POSTGRES_PASSWORD}@postgres:5432/notes?sslmode=require
    ports:
      - "127.0.0.1:8080:8080" # Bind to localhost only

  frontend:
    environment:
      - BACKEND_INTERNAL_URL=http://backend:8080
```
```bash
# PostgreSQL Configuration
POSTGRES_PASSWORD=your_secure_postgres_password_64_chars_min
DATABASE_URL=postgres://postgres:${POSTGRES_PASSWORD}@postgres:5432/notes?sslmode=disable

# Redis Configuration
REDIS_PASSWORD=your_secure_redis_password_32_chars_min

# JWT Secret (64+ characters)
JWT_SECRET=your_64_character_jwt_secret_base64_encoded_string

# Server Encryption Key (32 characters)
SERVER_ENCRYPTION_KEY=your_32_character_encryption_key

# CORS and Frontend
CORS_ORIGINS=http://localhost:3000,https://yourdomain.com
VITE_API_URL=http://localhost:8080

# Application Environment
ENABLE_DEFAULT_ADMIN=true
DEFAULT_ADMIN_EMAIL=admin@leaflock.app
DEFAULT_ADMIN_PASSWORD=SecureAdminPassword123!
```
```bash
# Generate all required secrets
echo "POSTGRES_PASSWORD=$(openssl rand -base64 32)"
echo "REDIS_PASSWORD=$(openssl rand -base64 32)"
echo "JWT_SECRET=$(openssl rand -base64 64)"
echo "SERVER_ENCRYPTION_KEY=$(openssl rand -base64 32)"
echo "DEFAULT_ADMIN_PASSWORD=$(openssl rand -base64 32)"
```
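Note that `openssl rand -base64 N` emits roughly 4N/3 characters, so 32 random bytes already yield 44 characters of base64. A quick sanity check (a sketch, not part of LeafLock) can confirm each secret meets the minimum lengths stated above:

```shell
#!/bin/sh
# check_min_len NAME VALUE MIN: fail if VALUE is shorter than MIN chars.
check_min_len() {
  name=$1
  value=$2
  min=$3
  if [ "${#value}" -lt "$min" ]; then
    echo "$name is too short (${#value} < $min)" >&2
    return 1
  fi
  echo "$name ok (${#value} chars)"
}

# base64 of 32 random bytes is 44 characters, comfortably over the minimum.
check_min_len REDIS_PASSWORD "$(head -c 32 /dev/urandom | base64 | tr -d '\n')" 32
```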
## 🔒 HTTPS Setup

### Option 1: Caddy Automatic HTTPS (Recommended)

Caddy can automatically obtain and renew SSL certificates:

```caddyfile
yourdomain.com {
    reverse_proxy /api/* backend:8080

    root * /usr/share/caddy
    try_files {path} /index.html
    file_server
}
```

Caddy automatically:

- Obtains Let's Encrypt certificates
- Renews certificates before expiration
- Redirects HTTP to HTTPS
- Sets up OCSP stapling
### Option 2: Manual Certificates with Certbot

```bash
# Install certbot
sudo apt-get install certbot

# Obtain certificate (standalone mode)
sudo certbot certonly --standalone -d yourdomain.com

# Certificates are written to:
# /etc/letsencrypt/live/yourdomain.com/fullchain.pem
# /etc/letsencrypt/live/yourdomain.com/privkey.pem
```

Mount the certificates in docker-compose.yml:

```yaml
    volumes:
      - /etc/letsencrypt:/etc/letsencrypt:ro
```

Then update the Caddyfile:

```caddyfile
yourdomain.com {
    tls /etc/letsencrypt/live/yourdomain.com/fullchain.pem /etc/letsencrypt/live/yourdomain.com/privkey.pem
    # ... rest of configuration
}
```
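Certbot certificates expire after 90 days, so renewal is normally scheduled. A sketch of a crontab entry; the compose file path is a placeholder and the `frontend` service name is assumed from this guide:

```cron
# Attempt renewal twice daily; on success, reload Caddy with the new certs
0 3,15 * * * certbot renew --quiet --deploy-hook "docker compose -f /path/to/docker-compose.yml exec -T frontend caddy reload --config /etc/caddy/Caddyfile"
```

The `--deploy-hook` command only runs when a certificate was actually renewed, so the reload is not triggered on every cron invocation.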
Production compose configuration:

```yaml
services:
  postgres:
    image: postgres:15-alpine
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]

  redis:
    image: redis:7-alpine
    command: redis-server --requirepass ${REDIS_PASSWORD}
    volumes:
      - redis_data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]

  backend:
    environment:
      DATABASE_URL: postgres://postgres:${POSTGRES_PASSWORD}@postgres:5432/notes?sslmode=require
      REDIS_PASSWORD: ${REDIS_PASSWORD}
      JWT_SECRET: ${JWT_SECRET}
      SERVER_ENCRYPTION_KEY: ${SERVER_ENCRYPTION_KEY}
      CORS_ORIGINS: https://yourdomain.com
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy

  frontend:
    environment:
      VITE_API_URL: https://yourdomain.com/api/v1
      BACKEND_INTERNAL_URL: http://backend:8080

volumes:
  postgres_data:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /var/lib/leaflock/postgres
  redis_data:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /var/lib/leaflock/redis
```
Production Caddyfile (served behind the platform's reverse proxy):

```caddyfile
{
    auto_https off # Disable when behind reverse proxy
    servers {
        trusted_proxies static private_ranges
    }
}

:80 {
    # Proxy API requests to backend
    handle /api/* {
        reverse_proxy {$BACKEND_INTERNAL_URL} {
            header_up Host {upstream_hostport}
        }
    }

    # Serve the SPA, falling back to index.html for client-side routes
    handle {
        root * /usr/share/caddy
        try_files {path} {path}/ /index.html
        file_server
    }

    header {
        X-Content-Type-Options "nosniff"
        X-XSS-Protection "1; mode=block"
        Referrer-Policy "strict-origin-when-cross-origin"
    }
}
```

> **Note:** For production with custom domains, enable `auto_https` and configure your domain directly in the Caddyfile. Caddy will automatically obtain SSL certificates.
## 🔄 Service Management

```bash
# Restart specific service
docker compose restart backend

# Scale the backend horizontally
docker compose up --scale backend=3 -d
```

## 📊 Monitoring

```bash
# Follow backend logs
docker compose logs -f backend

# Open a shell in the backend container
docker compose exec backend sh

# Check all services' health
curl http://localhost:3000/health         # Frontend
curl http://localhost:8080/api/v1/health  # Backend
docker compose exec postgres pg_isready -U postgres
docker compose exec redis redis-cli ping
```
## 💾 PostgreSQL Backup Strategy

**Automated Backup:**

```bash
#!/bin/bash
DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="leaflock_backup_$DATE.sql"

docker compose exec -T postgres pg_dump -U postgres notes > "$BACKUP_FILE"
gzip "$BACKUP_FILE"

# Upload to S3 or backup storage
aws s3 cp "$BACKUP_FILE.gz" s3://your-backup-bucket/
```

**Restore Process:**

```bash
docker compose stop backend frontend
gunzip < backup_file.sql.gz | docker compose exec -T postgres psql -U postgres notes
docker compose start backend frontend
```

**Volume Backups:**

```bash
# Backup persistent volumes
docker run --rm -v leaflock_postgres_data:/data -v $(pwd):/backup alpine tar czf /backup/postgres_backup.tar.gz -C /data .
docker run --rm -v leaflock_redis_data:/data -v $(pwd):/backup alpine tar czf /backup/redis_backup.tar.gz -C /data .

# Restore a volume
docker run --rm -v leaflock_postgres_data:/data -v $(pwd):/backup alpine tar xzf /backup/postgres_backup.tar.gz -C /data
```
## 🔌 Container Connectivity

**Issue:** Services can't communicate

**Solutions:**

- Check network configuration
- Verify service names in environment variables
- Ensure containers are on the same network
- Check firewall settings

```bash
docker network inspect leaflock_default
```
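Most "services can't communicate" failures come down to DNS: the backend must resolve `postgres` and `redis` by service name on the compose network. A small probe helper makes the check explicit; this is a sketch meant to be run inside a container via `docker compose exec`:

```shell
#!/bin/sh
# can_resolve HOST: succeed if HOST resolves via the system resolver.
can_resolve() {
  getent hosts "$1" > /dev/null 2>&1
}

# Inside the backend container, the compose service names should resolve:
#   docker compose exec backend sh -c 'getent hosts postgres'
#   docker compose exec backend sh -c 'nc -z postgres 5432'
can_resolve localhost && echo "resolver works"
```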
## 🗄️ Database Issues

**Issue:** Database connection failures

**Solutions:**

- Check PostgreSQL container logs
- Verify database credentials
- Ensure the database is ready before the backend starts
- Check SSL mode configuration

```bash
docker compose logs postgres
docker compose exec postgres psql -U postgres
```
## 🏎️ Container Performance

**Memory Limits:**
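Per-service memory caps can be set via the compose `deploy.resources` keys, which Compose v2 honors outside Swarm mode. The figures below are illustrative assumptions, not LeafLock requirements:

```yaml
services:
  backend:
    deploy:
      resources:
        limits:
          memory: 512M
  postgres:
    deploy:
      resources:
        limits:
          memory: 1G
```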
**Build Optimization:**

```dockerfile
# Multi-stage builds keep final images small:
FROM node:18-alpine AS builder
# ... build steps ...
COPY --from=builder /app/dist /usr/share/caddy
```
## 📈 Metrics Collection

**Prometheus Integration:**

```yaml
# Add to docker-compose.yml
  prometheus:
    image: prom/prometheus
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml

  grafana:
    image: grafana/grafana
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin
```

**Health Check Monitoring:**

```bash
#!/bin/bash
# Custom health check script
services=("frontend" "backend" "postgres" "redis")

for service in "${services[@]}"; do
  status=$(docker compose ps -q $service | xargs docker inspect -f '{{.State.Health.Status}}')
  echo "$service: $status"
done
```
**Centralized Logging (ELK):**

```yaml
# Add to docker-compose.yml
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.14.0
    environment:
      - discovery.type=single-node

  logstash:
    image: docker.elastic.co/logstash/logstash:7.14.0
    volumes:
      - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf

  kibana:
    image: docker.elastic.co/kibana/kibana:7.14.0
```
## 🔒 Security Best Practices

**Container Hardening:**

- Run containers as a non-root user
- Use minimal base images (Alpine)
- Scan images for vulnerabilities
- Keep base images updated
- Use secrets management for sensitive data

**Network Security:**

- Use internal networks for service communication
- Expose only necessary ports
- Implement proper firewall rules
- Use HTTPS for all external communication
```yaml
# Security-focused compose configuration
services:
  backend:
    tmpfs:
      - /tmp:rw,noexec,nosuid,size=100m
```
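The same service definition can carry further hardening keys that Compose supports. A sketch of common options; whether LeafLock's backend tolerates a read-only root filesystem is an assumption to verify:

```yaml
# Hedged sketch: additional hardening for the backend service
  backend:
    read_only: true            # immutable root filesystem
    cap_drop:
      - ALL                    # drop all Linux capabilities
    security_opt:
      - no-new-privileges:true # block privilege escalation via setuid
```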
---

**Docker Production Ready**

This Docker deployment guide provides production-ready configurations with security, performance, and monitoring best practices.