Docker and Containerization: Best Practices for Production

By Sanjay Goraniya

Containerization has revolutionized how we deploy applications. Docker makes it easy to package and run applications consistently across environments. But using Docker effectively in production requires more than just docker run. After containerizing dozens of applications, I've learned what actually works.

Why Docker?

Benefits

  • Consistency - Same environment everywhere
  • Isolation - Applications don't interfere
  • Portability - Run anywhere Docker runs
  • Scalability - Easy to scale horizontally
  • Resource efficiency - Better than VMs

Dockerfile Best Practices

Use Multi-Stage Builds

Code
# Bad: Single stage, large image
FROM node:18
WORKDIR /app
COPY . .
RUN npm install
CMD ["node", "index.js"]

# Good: Multi-stage build
# Stage 1: Build
FROM node:18 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: Production
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=builder /app/dist ./dist
CMD ["node", "dist/index.js"]

Benefits: Smaller final image, faster builds, better security

Leverage Layer Caching

Code
# Bad: Changes invalidate cache
FROM node:18
WORKDIR /app
COPY . .
RUN npm install
CMD ["node", "index.js"]

# Good: Copy dependencies first
FROM node:18
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
CMD ["node", "index.js"]

Why: Dependencies change less frequently than code

Use Specific Tags

Code
# Bad: Latest tag
FROM node:latest

# Good: Specific version
FROM node:18.17.0-alpine

Why: Latest can change, breaking builds

Minimize Layers

Code
# Bad: Many layers
RUN apt-get update
RUN apt-get install -y curl
RUN apt-get install -y git
RUN apt-get clean

# Good: Single layer
RUN apt-get update && \
    apt-get install -y curl git && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

Use .dockerignore

Code
node_modules
npm-debug.log
.git
.gitignore
.env
.DS_Store
coverage
*.md

Why: Exclude unnecessary files, faster builds

Security Best Practices

Run as Non-Root User

Code
# Bad: Run as root
FROM node:18
CMD ["node", "index.js"]

# Good: Create non-root user
FROM node:18-alpine
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nodejs -u 1001
USER nodejs
CMD ["node", "index.js"]

Scan for Vulnerabilities

Code
# Scan an image for known vulnerabilities
# (newer Docker versions replace "docker scan" with Docker Scout)
docker scout cves my-image:latest

Don't Store Secrets in Images

Code
# Bad: Secrets in image
ENV API_KEY=secret123

# Good: Use environment variables or secrets
# Pass at runtime
docker run -e API_KEY=secret123 my-image

Use Minimal Base Images

Code
# Bad: Full OS image
FROM ubuntu:22.04

# Good: Minimal image
FROM alpine:3.18

Production Considerations

Health Checks

Code
# Note: curl must exist in the image; minimal bases like alpine
# don't include it, so install it or probe with a small Node script
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD curl -f http://localhost:3000/health || exit 1

Resource Limits

Code
# docker-compose.yml
services:
  app:
    image: my-app:latest
    deploy:
      resources:
        limits:
          cpus: '1'
          memory: 512M
        reservations:
          cpus: '0.5'
          memory: 256M

Logging

Code
# Log to stdout/stderr so Docker captures the output;
# have the application emit one JSON object per log line
CMD ["node", "index.js"]
Code
# Configure log driver
docker run --log-driver json-file \
  --log-opt max-size=10m \
  --log-opt max-file=3 \
  my-image
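
On the application side, writing one JSON object per line to stdout keeps logs parseable by the json-file driver and by downstream log shippers. A minimal sketch; the log helper and its field names are assumptions, not a specific library:

```javascript
// One JSON object per line on stdout: Docker's json-file driver
// wraps each line, and log pipelines can parse the fields.
function log(level, message, fields = {}) {
  const entry = {
    time: new Date().toISOString(),
    level,
    message,
    ...fields,
  };
  console.log(JSON.stringify(entry));
  return entry;
}

log('info', 'server started', { port: 3000 });
```

In practice most teams reach for a library such as pino or winston, which do the same thing with levels and serializers built in.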

Docker Compose Best Practices

Use Version 3+

Code
version: '3.8'
services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
    depends_on:
      - db
  db:
    image: postgres:15-alpine
    environment:
      - POSTGRES_PASSWORD=password

Use Environment Files

Code
# docker-compose.yml
services:
  app:
    env_file:
      - .env
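
A matching .env file might look like this (values are placeholders; keep the file out of version control and out of the build context via .dockerignore):

```
NODE_ENV=production
DATABASE_URL=postgresql://user:pass@db:5432/mydb
API_KEY=replace-me
```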

Network Isolation

Code
services:
  app:
    networks:
      - app-network
  db:
    networks:
      - app-network

networks:
  app-network:
    driver: bridge

Common Pitfalls

1. Not Using .dockerignore

Problem: Large images, slow builds

Solution: Always use .dockerignore

2. Running as Root

Problem: Security risk

Solution: Create and use non-root user

3. Including Secrets

Problem: Secrets in image

Solution: Use environment variables or secrets management

4. Not Using Multi-Stage Builds

Problem: Large production images

Solution: Use multi-stage builds

5. Not Setting Resource Limits

Problem: Containers consume all resources

Solution: Set CPU and memory limits

Real-World Example

Application: Node.js API with PostgreSQL

Dockerfile:

Code
# Build stage
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Production stage
FROM node:18-alpine
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nodejs -u 1001
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=builder /app/dist ./dist
USER nodejs
HEALTHCHECK --interval=30s --timeout=3s \
  CMD node -e "require('http').get('http://localhost:3000/health', (r) => {process.exit(r.statusCode === 200 ? 0 : 1)})"
CMD ["node", "dist/index.js"]

docker-compose.yml:

Code
version: '3.8'
services:
  api:
    build: .
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
      - DATABASE_URL=postgresql://user:pass@db:5432/mydb
    depends_on:
      - db
    deploy:
      resources:
        limits:
          memory: 512M
  db:
    image: postgres:15-alpine
    environment:
      - POSTGRES_DB=mydb
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
    volumes:
      - postgres-data:/var/lib/postgresql/data

volumes:
  postgres-data:

Result:

  • Image size: 150MB (vs 800MB without optimization)
  • Build time: 2 minutes (vs 5 minutes)
  • Security: Non-root user, minimal attack surface

Best Practices Summary

  1. Use multi-stage builds - Smaller images
  2. Leverage layer caching - Faster builds
  3. Use specific tags - Avoid "latest"
  4. Run as non-root - Security
  5. Use .dockerignore - Exclude unnecessary files
  6. Set resource limits - Prevent resource exhaustion
  7. Add health checks - Monitor container health
  8. Scan for vulnerabilities - Security
  9. Use minimal base images - Smaller, more secure
  10. Don't store secrets - Use environment variables

Conclusion

Docker is powerful, but using it effectively requires following best practices. The key is to:

  • Optimize images - Smaller, faster, more secure
  • Follow security practices - Non-root, minimal images
  • Use proper patterns - Multi-stage builds, layer caching
  • Monitor and limit - Health checks, resource limits

Remember: Good Docker practices lead to reliable, scalable, and secure applications. Start with these practices, and your containerized applications will be production-ready.

What Docker challenges have you faced? What practices have worked best for your applications?
