Containerization has fundamentally changed how we build, ship, and run applications. Docker and Kubernetes have become essential tools in every developer's toolkit. In this practical guide, I'll walk you through the concepts and workflows that have proven most valuable in production environments.
Why Containers?
Before diving into Docker, let's understand the core problem it solves. The classic "it works on my machine" issue stems from environment inconsistencies. Containers package your application with its exact runtime environment, ensuring consistency from development to production.
"Containers don't just solve deployment problems — they change how you think about application architecture."
Docker Essentials
A well-written Dockerfile is the foundation of containerized development. Here's a production-ready multi-stage Dockerfile for a Node.js application:
# Build stage
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Production stage
FROM node:20-alpine AS production
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=builder /app/dist ./dist
EXPOSE 3000
USER node
CMD ["node", "dist/server.js"]
Key Dockerfile Best Practices
- Multi-stage builds — Reduce final image size by separating build and runtime environments
- Layer caching — Order instructions from least to most frequently changing
- Non-root user — Always run as a non-root user for security
- Alpine base — Use minimal base images to reduce attack surface
- .dockerignore — Exclude unnecessary files from the build context
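For that last point, a typical `.dockerignore` for a Node.js project might look like the following (adjust to your repository layout — the entries here are common defaults, not a complete list):

```
node_modules
dist
.git
.env
*.log
Dockerfile
docker-compose.yml
```

Excluding `node_modules` and build output keeps the build context small and prevents host-installed dependencies from leaking into the image.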
Docker Compose for Development
Docker Compose is invaluable for local development with multiple services. Here's a typical setup for a full-stack application:
services:
  app:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - ./src:/app/src
    environment:
      - DATABASE_URL=postgres://user:pass@db:5432/myapp
      - REDIS_URL=redis://cache:6379
    depends_on:
      - db
      - cache
  db:
    image: postgres:16-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=myapp
  cache:
    image: redis:7-alpine

volumes:
  postgres_data:
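One caveat: plain `depends_on` only controls start order, not readiness — the app may start before Postgres accepts connections. If that matters, add a healthcheck and a readiness condition (a sketch to merge into the services above; the `pg_isready` check assumes the credentials shown earlier):

```yaml
services:
  db:
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user -d myapp"]
      interval: 5s
      timeout: 3s
      retries: 5
  app:
    depends_on:
      db:
        condition: service_healthy
```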
Kubernetes: Orchestrating at Scale
Once your application is containerized, Kubernetes (K8s) handles orchestration — managing deployment, scaling, networking, and self-healing across a cluster.
Core Concepts
- Pods — The smallest deployable unit, running one or more containers
- Deployments — Manage replica sets and rolling updates
- Services — Expose pods with stable networking
- Ingress — Route external HTTP traffic to services
- ConfigMaps & Secrets — Externalize configuration and sensitive data
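To make the Service concept concrete, here is a minimal manifest that gives a set of pods a stable cluster-internal address (names are illustrative and match the deployment shown next):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-app
spec:
  selector:
    app: web-app        # routes to pods carrying this label
  ports:
    - port: 80          # port the Service exposes inside the cluster
      targetPort: 3000  # containerPort the pods actually listen on
```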
A Production Deployment
Here's a Kubernetes deployment manifest with health checks, resource limits, and rolling update strategy:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  labels:
    app: web-app
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: registry/web-app:v1.2.0
          ports:
            - containerPort: 3000
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 512Mi
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: 3000
            initialDelaySeconds: 5
CI/CD Pipeline Integration
The real power of containerization shines when integrated into a CI/CD pipeline. A typical workflow looks like:
- Build — Run tests, lint, and build the Docker image
- Push — Tag and push the image to a container registry
- Deploy — Update the Kubernetes deployment with the new image tag
- Verify — Run health checks and smoke tests on the deployment
- Rollback — Automatically revert if health checks fail
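As a sketch, the build/push/deploy steps above might look like this in a GitHub Actions workflow (the registry name and deployment name are placeholders, and registry login plus kubeconfig setup are omitted for brevity):

```yaml
name: deploy
on:
  push:
    branches: [main]
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm test
      - run: docker build -t registry/web-app:${{ github.sha }} .
      - run: docker push registry/web-app:${{ github.sha }}
      - run: kubectl set image deployment/web-app web-app=registry/web-app:${{ github.sha }}
      - run: kubectl rollout status deployment/web-app --timeout=120s
```

If the rollout fails its health checks, `kubectl rollout undo deployment/web-app` reverts to the previous revision.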
Conclusion
Docker and Kubernetes are powerful tools, but they work best when you understand the problems they solve. Start with Docker for local development, add Docker Compose for multi-service setups, and introduce Kubernetes when you need production-grade orchestration. Don't over-engineer — let your scaling needs guide your infrastructure decisions.