
Docker

Run LeafLock with Docker and Docker Compose. The sections below walk through the architecture, configuration, and tuning tips that matter in development and production.

🐳 Multi-Container Architecture

LeafLock Container Stack:

  • Frontend: React app served by Nginx
  • Backend: Go application with Fiber framework
  • Database: PostgreSQL 15 with encrypted storage
  • Cache: Redis 7 for session management
  • Proxy: Nginx reverse proxy (optional)

Benefits:

  • Isolated service boundaries
  • Easy scaling and management
  • Consistent environments
  • Simple backup and recovery

🐳 Docker Engine

  • Docker Engine 20.10+
  • Docker Compose v2.0+
  • Minimum 4GB RAM
  • 10GB available disk space

🌐 Network Requirements

  • Internet access for image pulls
  • Available ports: 80, 443, 8080, 3000
  • Domain name (for production SSL)
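The version requirements above can be checked with a small preflight script. The `version_ge` helper is a generic sort-based comparison; treat this as a sketch (it relies on GNU `sort -V`):

```shell
#!/bin/sh
# version_ge A B: succeed if version A >= version B (relies on GNU `sort -V`)
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
}

# Preflight: compare the installed Docker Engine version against the 20.10 minimum
docker_version="$(docker version --format '{{.Server.Version}}' 2>/dev/null || echo 0)"
if version_ge "$docker_version" "20.10"; then
  echo "Docker Engine OK ($docker_version)"
else
  echo "Docker Engine 20.10+ required (found: $docker_version)" >&2
fi
```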
```sh
# Clone repository
git clone <repository-url>
cd LeafLock

# Copy environment template
cp .env.example .env

# Edit environment variables
nano .env
```
```sh
# Start all services
docker compose up -d

# View logs
docker compose logs -f

# Access application
# Frontend: http://localhost:3000
# Backend:  http://localhost:8080
```

⚛️ React Frontend Container

Base Images: node:18-alpine (build stage) → caddy:2.8-alpine (runtime)

Build Process:

  1. Install dependencies with pnpm
  2. Build React application with Vite
  3. Serve with Caddy reverse proxy

Key Features:

  • Multi-stage build for minimal image size
  • Dynamic DNS resolution (fixes 502 errors)
  • Automatic security headers
  • Gzip/Zstd compression enabled
  • Health checks configured
```dockerfile
# frontend/Dockerfile
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json pnpm-lock.yaml ./
RUN npm install -g pnpm && pnpm install
COPY . .
RUN pnpm run build

FROM caddy:2.8-alpine
RUN apk --no-cache add ca-certificates curl gettext bind-tools
COPY Caddyfile /etc/caddy/Caddyfile
COPY --chmod=0755 docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
COPY --from=builder /app/dist /usr/share/caddy
EXPOSE 80
HEALTHCHECK --interval=30s --timeout=3s --start-period=30s \
    CMD curl -f http://localhost/health || exit 1
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
CMD ["caddy", "run", "--config", "/etc/caddy/Caddyfile", "--adapter", "caddyfile"]
```

Important: LeafLock uses Caddy instead of NGINX to avoid 502 Bad Gateway errors caused by DNS caching. Caddy dynamically resolves backend DNS on every request, which is critical for platforms like Railway with dynamic IPv6 IPs.
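A minimal Caddyfile matching this setup might look like the following sketch (the `backend` hostname and paths are assumptions based on the compose files in this guide). Because Caddy dials the upstream by hostname on each request, DNS changes are picked up without a restart:

```caddyfile
# Sketch of a dev Caddyfile (service name and paths assumed)
:80 {
    # Health endpoint for the container HEALTHCHECK
    handle /health {
        respond "OK" 200
    }

    # API traffic: the backend hostname is re-resolved on each dial
    handle /api/* {
        reverse_proxy backend:8080
    }

    # Everything else: the built SPA, with a client-side routing fallback
    handle {
        root * /usr/share/caddy
        try_files {path} /index.html
        file_server
    }
}
```

The `handle` blocks are mutually exclusive, which keeps the SPA fallback from rewriting `/api/*` or `/health` requests before they are matched.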

⚡ Go Backend Container

Base Images: golang:1.21-alpine (build stage) → alpine:3.18 (runtime)

Build Process:

  1. Download Go modules
  2. Build optimized binary
  3. Run in minimal Alpine container

Security Features:

  • Non-root user execution
  • Minimal attack surface
  • Health check endpoints
  • Graceful shutdown handling
```dockerfile
# backend/Dockerfile
FROM golang:1.21-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o app main.go

FROM alpine:3.18
RUN addgroup -g 1001 -S leaflock && \
    adduser -u 1001 -S leaflock -G leaflock
WORKDIR /app
COPY --from=builder /app/app .
USER leaflock
EXPOSE 8080
HEALTHCHECK --interval=30s --timeout=3s --start-period=30s \
    CMD wget -qO- http://localhost:8080/api/v1/health || exit 1
CMD ["./app"]
```
Development docker-compose.yml:

```yaml
version: '3.8'

services:
  postgres:
    image: postgres:15-alpine
    environment:
      POSTGRES_DB: notes
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 30s
      timeout: 20s
      retries: 5
      start_period: 30s

  redis:
    image: redis:7-alpine
    command: redis-server --requirepass ${REDIS_PASSWORD}
    volumes:
      - redis_data:/data
    ports:
      - "6379:6379"
    healthcheck:
      test: ["CMD", "redis-cli", "--no-auth-warning", "-a", "${REDIS_PASSWORD}", "ping"]
      interval: 30s
      timeout: 3s
      retries: 5

  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile
    environment:
      - DATABASE_URL=postgres://postgres:${POSTGRES_PASSWORD}@postgres:5432/notes?sslmode=disable
      - REDIS_URL=redis:6379
      - REDIS_PASSWORD=${REDIS_PASSWORD}
      - JWT_SECRET=${JWT_SECRET}
      - SERVER_ENCRYPTION_KEY=${SERVER_ENCRYPTION_KEY}
      - CORS_ORIGINS=${CORS_ORIGINS}
      - PORT=8080
    ports:
      - "8080:8080"
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    restart: unless-stopped

  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile
      args:
        - VITE_API_URL=${VITE_API_URL}
    environment:
      - BACKEND_INTERNAL_URL=http://backend:8080
      - PORT=3000
    ports:
      - "3000:80"
    depends_on:
      - backend
    restart: unless-stopped

volumes:
  postgres_data:
  redis_data:
```

Note that the Redis health check passes the password explicitly: a bare `redis-cli ping` exits successfully even when the server rejects the command with an auth error, which would mask a misconfigured password.
```sh
# PostgreSQL Configuration
POSTGRES_PASSWORD=your_secure_postgres_password_64_chars_min
DATABASE_URL=postgres://postgres:${POSTGRES_PASSWORD}@postgres:5432/notes?sslmode=disable

# Redis Configuration
REDIS_PASSWORD=your_secure_redis_password_32_chars_min
REDIS_URL=redis:6379
```
```sh
# Generate all required secrets
echo "POSTGRES_PASSWORD=$(openssl rand -base64 32)"
echo "REDIS_PASSWORD=$(openssl rand -base64 32)"
echo "JWT_SECRET=$(openssl rand -base64 64)"
echo "SERVER_ENCRYPTION_KEY=$(openssl rand -base64 32)"
echo "DEFAULT_ADMIN_PASSWORD=$(openssl rand -base64 32)"
```
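The encryption key presumably needs to decode to exactly 32 raw bytes (the usual requirement for a 256-bit key, matching `openssl rand -base64 32` above). A quick sanity check, assuming GNU `base64` is available:

```shell
# Sanity-check that a generated key decodes to 32 raw bytes
# (assumption: the backend expects a 32-byte SERVER_ENCRYPTION_KEY)
key="$(openssl rand -base64 32)"
decoded_len="$(printf '%s' "$key" | base64 -d | wc -c)"
if [ "$decoded_len" -eq 32 ]; then
  echo "key length OK ($decoded_len bytes)"
else
  echo "unexpected key length: $decoded_len bytes" >&2
fi
```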

🔒 HTTPS Setup

Option 1: Caddy Automatic HTTPS (Recommended)

Caddy can automatically obtain and renew SSL certificates:

```caddyfile
yourdomain.com {
    handle /api/* {
        reverse_proxy backend:8080
    }
    handle {
        root * /usr/share/caddy
        try_files {path} /index.html
        file_server
    }
}
```

The `handle` blocks keep the SPA fallback from rewriting `/api/*` requests to `/index.html` before the proxy matcher runs.

Caddy automatically:

  • Obtains Let’s Encrypt certificates
  • Renews certificates before expiration
  • Redirects HTTP to HTTPS
  • Sets up OCSP stapling
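Optionally, a global options block can set the ACME contact email used for the Let's Encrypt account (the address below is a placeholder):

```caddyfile
# Global options (sketch): contact email for certificate registration
{
    email admin@yourdomain.com
}

yourdomain.com {
    # site configuration as above
}
```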

Option 2: Manual Certificates with Certbot

```sh
# Install certbot
sudo apt-get install certbot

# Obtain certificate (standalone mode)
sudo certbot certonly --standalone -d yourdomain.com

# Certificates saved to:
# /etc/letsencrypt/live/yourdomain.com/fullchain.pem
# /etc/letsencrypt/live/yourdomain.com/privkey.pem
```

Mount the certificates read-only in docker-compose.yml:

```yaml
volumes:
  - /etc/letsencrypt:/etc/letsencrypt:ro
```

Then update the Caddyfile:

```caddyfile
yourdomain.com {
    tls /etc/letsencrypt/live/yourdomain.com/fullchain.pem /etc/letsencrypt/live/yourdomain.com/privkey.pem
    # ... rest of configuration
}
```
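Let's Encrypt certificates expire after 90 days, so renewal should be automated. One approach (the compose file path and service name here are assumptions) is a root cron entry with a deploy hook that restarts the frontend container so Caddy picks up the new files:

```
# Hypothetical root crontab entry: attempt renewal daily at 03:00, restarting
# the frontend container only when a certificate was actually renewed
0 3 * * * certbot renew --quiet --deploy-hook "docker compose -f /opt/leaflock/docker-compose.yml restart frontend"
```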
Production docker-compose.yml:

```yaml
version: '3.8'

services:
  postgres:
    image: postgres:15-alpine
    environment:
      POSTGRES_DB: notes
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks:
      - internal
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 30s
      timeout: 20s
      retries: 5

  redis:
    image: redis:7-alpine
    command: redis-server --requirepass ${REDIS_PASSWORD}
    volumes:
      - redis_data:/data
    networks:
      - internal
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "redis-cli", "--no-auth-warning", "-a", "${REDIS_PASSWORD}", "ping"]
      interval: 30s
      timeout: 3s
      retries: 5

  backend:
    build: ./backend
    environment:
      DATABASE_URL: postgres://postgres:${POSTGRES_PASSWORD}@postgres:5432/notes?sslmode=require
      REDIS_URL: redis:6379
      REDIS_PASSWORD: ${REDIS_PASSWORD}
      JWT_SECRET: ${JWT_SECRET}
      SERVER_ENCRYPTION_KEY: ${SERVER_ENCRYPTION_KEY}
      CORS_ORIGINS: https://yourdomain.com
      APP_ENV: production
    networks:
      - internal
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    restart: unless-stopped

  frontend:
    build:
      context: ./frontend
      args:
        VITE_API_URL: https://yourdomain.com/api/v1
    environment:
      BACKEND_INTERNAL_URL: http://backend:8080
    ports:
      - "80:80"
      - "443:443"
    networks:
      - internal
      - external
    depends_on:
      - backend
    restart: unless-stopped

networks:
  internal:
    driver: bridge
    internal: true
  external:
    driver: bridge

volumes:
  postgres_data:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /var/lib/leaflock/postgres
  redis_data:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /var/lib/leaflock/redis
```

The bind-mounted volumes assume /var/lib/leaflock/postgres and /var/lib/leaflock/redis already exist on the host with ownership the container users can write to; create them before the first `docker compose up`.

🔄 Service Management

```sh
# Start services
docker compose up -d

# Stop services
docker compose down

# Restart specific service
docker compose restart backend

# Scale services (only works when the service does not bind a fixed host port;
# remove the "8080:8080" mapping first or the replicas will conflict)
docker compose up --scale backend=3 -d
```

📊 Monitoring

```sh
# View logs
docker compose logs -f backend

# Check service status
docker compose ps

# Monitor resource usage
docker stats

# Inspect containers
docker compose exec backend sh
```
```sh
# Check all services health
docker compose ps

# Test health endpoints
curl http://localhost:3000/health        # Frontend
curl http://localhost:8080/api/v1/health # Backend

# Database health
docker compose exec postgres pg_isready -U postgres

# Redis health
docker compose exec redis redis-cli ping
```

💾 PostgreSQL Backup Strategy

Automated Backup:

backup-script.sh:

```bash
#!/bin/bash
set -euo pipefail

DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="leaflock_backup_$DATE.sql"

docker compose exec -T postgres pg_dump -U postgres notes > "$BACKUP_FILE"
gzip "$BACKUP_FILE"

# Upload to S3 or backup storage
aws s3 cp "$BACKUP_FILE.gz" s3://your-backup-bucket/
```
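When backups are also kept locally, a simple retention policy keeps disk usage bounded. The sketch below deletes all but the newest fourteen archives (the filename pattern matches the backup script; the 14-file cutoff is an arbitrary example):

```shell
# Keep only the 14 most recent local backup archives
ls -1t leaflock_backup_*.sql.gz 2>/dev/null | tail -n +15 | xargs -r rm -f
```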

Restore Process:

```sh
# Stop application
docker compose stop backend frontend

# Restore database
gunzip < backup_file.sql.gz | docker compose exec -T postgres psql -U postgres notes

# Restart application
docker compose start backend frontend
```
```sh
# Stop the stack first so the volume contents are consistent
docker compose down

# Backup persistent volumes
docker run --rm -v leaflock_postgres_data:/data -v $(pwd):/backup alpine tar czf /backup/postgres_backup.tar.gz -C /data .
docker run --rm -v leaflock_redis_data:/data -v $(pwd):/backup alpine tar czf /backup/redis_backup.tar.gz -C /data .

# Restore volumes
docker run --rm -v leaflock_postgres_data:/data -v $(pwd):/backup alpine tar xzf /backup/postgres_backup.tar.gz -C /data
```

🔌 Container Connectivity

Issue: Services can’t communicate

Solutions:

  1. Check network configuration
  2. Verify service names in environment variables
  3. Ensure containers are on same network
  4. Check firewall settings
```sh
# Debug network
docker network ls
docker network inspect leaflock_default
```

🗄️ Database Issues

Issue: Database connection failures

Solutions:

  1. Check PostgreSQL container logs
  2. Verify database credentials
  3. Ensure database is ready before backend starts
  4. Check SSL mode configuration
```sh
# Debug database
docker compose logs postgres
docker compose exec postgres psql -U postgres
```

🏎️ Container Performance

Memory Limits:

```yaml
services:
  backend:
    deploy:
      resources:
        limits:
          memory: 512M
        reservations:
          memory: 256M
```

Build Optimization:

```dockerfile
# Use build cache: copy dependency manifests first so cached layers survive code changes
COPY go.mod go.sum ./
RUN go mod download
COPY . .

# Multi-stage builds keep the final image small
FROM node:18-alpine AS builder
# ... build steps
FROM caddy:2.8-alpine
COPY --from=builder /app/dist /usr/share/caddy
```
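With BuildKit enabled, cache mounts can additionally persist the Go module and build caches across image builds. This is an optional technique, not part of the Dockerfiles above; a sketch for the backend:

```dockerfile
# syntax=docker/dockerfile:1
# Sketch: BuildKit cache mounts for Go module downloads and compiler artifacts
COPY go.mod go.sum ./
RUN --mount=type=cache,target=/go/pkg/mod go mod download
COPY . .
RUN --mount=type=cache,target=/go/pkg/mod \
    --mount=type=cache,target=/root/.cache/go-build \
    CGO_ENABLED=0 go build -o app main.go
```

Unlike plain layer caching, the mounted caches survive even when the `COPY . .` layer is invalidated by a source change.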

📈 Metrics Collection

Prometheus Integration:

```yaml
# Add to docker-compose.yml under services:
prometheus:
  image: prom/prometheus
  volumes:
    - ./prometheus.yml:/etc/prometheus/prometheus.yml
  ports:
    - "9090:9090"

grafana:
  image: grafana/grafana
  ports:
    - "3001:3000"
  environment:
    - GF_SECURITY_ADMIN_PASSWORD=admin
```

Health Check Monitoring:

```bash
#!/bin/bash
# Custom health check script
services=("frontend" "backend" "postgres" "redis")
for service in "${services[@]}"; do
  status=$(docker compose ps -q "$service" | xargs docker inspect -f '{{.State.Health.Status}}')
  echo "$service: $status"
done
```
```yaml
# Centralized logging with ELK: add to docker-compose.yml under services:
elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:7.14.0
  environment:
    - discovery.type=single-node

logstash:
  image: docker.elastic.co/logstash/logstash:7.14.0
  volumes:
    - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf

kibana:
  image: docker.elastic.co/kibana/kibana:7.14.0
  ports:
    - "5601:5601"
```

🔒 Security Best Practices

Container Hardening:

  • Run containers as non-root user
  • Use minimal base images (Alpine)
  • Scan images for vulnerabilities
  • Keep base images updated
  • Use secrets management for sensitive data

Network Security:

  • Use internal networks for service communication
  • Expose only necessary ports
  • Implement proper firewall rules
  • Use HTTPS for all external communication
```yaml
# Security-focused compose configuration
version: '3.8'

services:
  app:
    security_opt:
      - no-new-privileges:true
    read_only: true
    tmpfs:
      - /tmp:rw,noexec,nosuid,size=100m
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - DAC_OVERRIDE
```

Docker Production Ready

This Docker deployment guide provides production-ready configurations with security, performance, and monitoring best practices.