Container Orchestration with Kubernetes: A Practical Guide

By Sanjay Goraniya

Kubernetes has become the de facto standard for container orchestration. But learning Kubernetes can feel overwhelming—there are so many concepts, objects, and configurations. After deploying and managing Kubernetes clusters in production, I've learned what actually matters. Let me share the practical knowledge you need.

Why Kubernetes?

The Problem It Solves

Before Kubernetes, deploying applications meant:

  • Manual server management
  • Inconsistent environments
  • Difficult scaling
  • Complex rollbacks
  • No self-healing

Kubernetes solves these by:

  • Automated deployment - Declarative configuration
  • Consistent environments - Containers everywhere
  • Auto-scaling - Scale based on demand
  • Self-healing - Restart failed containers
  • Rolling updates - Zero-downtime deployments

Core Concepts

Pods

The smallest deployable unit in Kubernetes:

Code
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: app
    image: myapp:1.0.0
    ports:
    - containerPort: 3000
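Assuming the manifest above is saved as pod.yaml (a hypothetical filename), it can be applied and inspected with kubectl:

```shell
# Create the Pod from the manifest
kubectl apply -f pod.yaml

# Verify it is running and view scheduling/startup details
kubectl get pod my-app
kubectl describe pod my-app
```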

Deployments

Manage Pod replicas and updates:

Code
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: myapp:1.0.0
        ports:
        - containerPort: 3000

Services

Expose Pods to the network:

Code
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 3000
  type: LoadBalancer
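You don't need a cloud load balancer to try the Service locally. Assuming the Service above is deployed, port-forwarding gives you a quick way to test it:

```shell
# Forward local port 8080 to the Service's port 80
kubectl port-forward service/my-app-service 8080:80

# In another terminal, hit the app through the forwarded port
curl http://localhost:8080/
```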

Common Patterns

1. Health Checks

Code
containers:
- name: app
  image: myapp:1.0.0
  livenessProbe:
    httpGet:
      path: /health
      port: 3000
    initialDelaySeconds: 30
    periodSeconds: 10
  readinessProbe:
    httpGet:
      path: /ready
      port: 3000
    initialDelaySeconds: 5
    periodSeconds: 5

Why it matters: Kubernetes uses these to know when to restart containers and when they're ready to receive traffic.

2. Resource Limits

Code
containers:
- name: app
  image: myapp:1.0.0
  resources:
    requests:
      memory: "256Mi"
      cpu: "250m"
    limits:
      memory: "512Mi"
      cpu: "500m"

Why it matters: Prevents a single pod from starving its node of CPU and memory, and gives the scheduler accurate sizing information for placing pods.

3. ConfigMaps and Secrets

Code
# ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  database_url: "postgresql://db:5432/mydb"
  api_key: "public-key"

# Secret
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
data:
  password: <base64-encoded>  # values under "data" must be base64-encoded

Usage in Pod:

Code
containers:
- name: app
  image: myapp:1.0.0
  env:
  - name: DATABASE_URL
    valueFrom:
      configMapKeyRef:
        name: app-config
        key: database_url
  - name: PASSWORD
    valueFrom:
      secretKeyRef:
        name: app-secrets
        key: password
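Rather than base64-encoding values by hand, you can create Secrets and ConfigMaps imperatively; kubectl handles the encoding for you. A sketch with placeholder values:

```shell
# kubectl base64-encodes literal values automatically
kubectl create secret generic app-secrets \
  --from-literal=password='example-password'

kubectl create configmap app-config \
  --from-literal=database_url='postgresql://db:5432/mydb'
```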

Deployment Strategies

Rolling Update (Default)

Code
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0

Gradually replaces old pods with new ones. With maxSurge: 1 and maxUnavailable: 0, Kubernetes adds one new pod at a time and never drops below the desired replica count.
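A rolling update is typically triggered by changing the image, and the rollout subcommands let you watch and reverse it:

```shell
# Update the image to trigger a rolling update
kubectl set image deployment/my-app app=myapp:1.1.0

# Watch the rollout progress
kubectl rollout status deployment/my-app

# Roll back to the previous revision if something breaks
kubectl rollout undo deployment/my-app
```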

Blue-Green Deployment

Code
# Deploy new version
kubectl apply -f deployment-v2.yaml

# Switch service
kubectl patch service my-app -p '{"spec":{"selector":{"version":"v2"}}}'

Canary Deployment

Code
# A Service load-balances evenly across all pods that match its
# selector, so a simple canary runs both versions behind one Service
# and controls the split with replica counts (e.g., 9 v1 replicas
# plus 1 v2 replica gives the new version roughly 10% of traffic):
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app  # matches both v1 and v2 pods
  ports:
  - port: 80
    targetPort: 3000

Precise, replica-independent traffic splits require an ingress controller or a service mesh.

Scaling

Manual Scaling

Code
kubectl scale deployment my-app --replicas=5

Horizontal Pod Autoscaler

Code
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70

Scales based on CPU usage.
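Note that the HPA depends on the metrics-server add-on being installed in the cluster; without it, no CPU metrics are available and no scaling happens. The autoscaler's current state can be inspected with:

```shell
# Show current vs. target utilization and replica counts
kubectl get hpa my-app-hpa

# Watch scaling decisions and related events
kubectl describe hpa my-app-hpa
```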

Common Issues and Solutions

Issue 1: Pods Not Starting

Check:

Code
kubectl describe pod <pod-name>
kubectl logs <pod-name>

Common causes:

  • Image pull errors
  • Resource constraints
  • Configuration errors

Issue 2: Services Not Routing

Check:

Code
kubectl get endpoints
kubectl describe service <service-name>

Common causes:

  • Selector mismatch
  • Pods not ready
  • Port mismatch

Issue 3: High Resource Usage

Check:

Code
kubectl top pods
kubectl top nodes

Solutions:

  • Set resource limits
  • Optimize application
  • Scale horizontally

Best Practices

1. Use Namespaces

Code
apiVersion: v1
kind: Namespace
metadata:
  name: production

Organize resources by environment or team.
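Most kubectl commands accept -n/--namespace to target a namespace. For example (deployment.yaml is an assumed filename):

```shell
# Apply a manifest into the production namespace
kubectl apply -f deployment.yaml -n production

# List pods only in that namespace
kubectl get pods -n production
```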

2. Label Everything

Code
metadata:
  labels:
    app: my-app
    environment: production
    team: backend

Makes querying and organizing easier.
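Labels pay off when querying: the -l flag filters any get or delete by label selector, for example:

```shell
# All backend-team pods in production
kubectl get pods -l environment=production,team=backend

# Everything belonging to one app, across resource types
kubectl get all -l app=my-app
```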

3. Use Deployments, Not Pods

Deployments provide:

  • Rolling updates
  • Rollback capability
  • Replica management

4. Set Resource Limits

Always set requests and limits:

Code
resources:
  requests:
    memory: "256Mi"
    cpu: "250m"
  limits:
    memory: "512Mi"
    cpu: "500m"

5. Use Health Checks

Code
livenessProbe:
  httpGet:
    path: /health
    port: 3000
readinessProbe:
  httpGet:
    path: /ready
    port: 3000

Real-World Example

Application: Node.js API serving 100K requests/day

Kubernetes Setup:

Code
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: myapi:1.0.0
        ports:
        - containerPort: 3000
        resources:
          requests:
            memory: "512Mi"
            cpu: "500m"
          limits:
            memory: "1Gi"
            cpu: "1000m"
        livenessProbe:
          httpGet:
            path: /health
            port: 3000
          initialDelaySeconds: 30
        readinessProbe:
          httpGet:
            path: /ready
            port: 3000
          initialDelaySeconds: 5
        env:
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: db-secret
              key: url
---
apiVersion: v1
kind: Service
metadata:
  name: api-service
spec:
  selector:
    app: api
  ports:
  - port: 80
    targetPort: 3000
  type: LoadBalancer
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70

Result:

  • Auto-scales from 3 to 10 pods based on load
  • Self-healing (restarts failed pods)
  • Zero-downtime deployments
  • Handles spikes up to 500K requests/day, 5× the daily baseline

Learning Path

  1. Start local - Minikube or Docker Desktop
  2. Learn basics - Pods, Deployments, Services
  3. Practice - Deploy a simple app
  4. Learn advanced - ConfigMaps, Secrets, HPA
  5. Production patterns - Monitoring, logging, security

Conclusion

Kubernetes is powerful but complex. Start simple, learn incrementally, and practice. The key concepts are:

  • Pods - Your containers
  • Deployments - Managing pods
  • Services - Exposing pods
  • Scaling - Growing with demand
  • Health checks - Keeping things running

Remember: You don't need to know everything. Start with the basics, deploy something, and learn as you go. Kubernetes is a journey, not a destination.

What Kubernetes challenges have you faced? What patterns have worked best for your deployments?
