This Docker and Kubernetes tutorial takes you from writing your first Dockerfile to running a production-grade deployment with autoscaling, health checks, and rolling updates. Most Docker and Kubernetes tutorials stop at Minikube with a hello-world container. This one does not. The goal is a working deployment on a real cluster: the commands, the YAML, and the reasoning behind each decision.
Docker and Kubernetes solve different problems at different layers. Docker packages your application and its dependencies into a portable container. Kubernetes runs that container at scale: scheduling it across nodes, restarting it when it fails, routing traffic to it, and scaling it up or down based on load. They are not alternatives; they are sequential tools in the same workflow. This tutorial covers both in that sequence.
What You Need Before Starting
This tutorial assumes:
- Docker Desktop installed (Mac/Windows) or Docker Engine on Linux.
- A Kubernetes cluster: either Docker Desktop’s built-in Kubernetes or a cloud cluster (EKS, GKE, AKS).
- kubectl installed and configured.
- Basic familiarity with the command line.
To verify your setup:
```shell
# Verify Docker
docker --version
# Docker version 27.x.x

# Verify kubectl and cluster connection
kubectl cluster-info
kubectl get nodes
```

If kubectl get nodes returns at least one node in the Ready state, you are ready to follow this tutorial end to end.
Part 1 – Docker: Containerizing Your Application
What a Container Actually Is
A container is a process running in isolation. It has its own filesystem, its own network interface, and its own process space. It shares the host kernel but sees nothing outside its own boundaries.
The difference between a container and a virtual machine: a VM virtualizes hardware, including its own kernel. A container shares the host kernel and virtualizes only the user space. Containers start in milliseconds. VMs take seconds to minutes. Containers use megabytes of overhead. VMs use gigabytes.
This is why containers became the standard unit of deployment: they are fast, lightweight, and portable. The same container image runs identically on a developer’s laptop, in CI, and in production.
Writing a Dockerfile
A Dockerfile is the recipe for building a container image. Every instruction creates a layer, and layers are cached: if a layer has not changed, Docker reuses the cached version on the next build.
Here is a production-quality Dockerfile for a Node.js application, with the decisions explained:
```dockerfile
# Use a specific version, never 'latest' in production
# Alpine variant is significantly smaller than the full image
FROM node:20-alpine

# Set working directory inside the container
WORKDIR /app

# Copy dependency files FIRST - before copying source code
# This layer is cached as long as package.json doesn't change
# Source code changes don't invalidate the dependency layer
COPY package*.json ./

# Install only production dependencies
# (--omit=dev replaces the deprecated --only=production flag)
RUN npm ci --omit=dev

# Copy application source after dependencies are installed
COPY . .

# Document which port the app listens on (does not publish it)
EXPOSE 3000

# Run as non-root user - security requirement for production
USER node

# Use exec form (array) not shell form (string)
# Exec form receives signals directly - SIGTERM works correctly
CMD ["node", "src/index.js"]
```

The layer ordering is the most important decision in a Dockerfile for performance. Copying package.json before the source code means that npm install only runs again when dependencies change, not every time you change a line of application code.
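A complementary piece worth adding alongside the Dockerfile: a .dockerignore file keeps node_modules, Git metadata, and local artifacts out of the build context, which speeds up builds and prevents COPY . . from dragging junk (or a local .env) into the image. A typical starting point, to be adjusted for your project:

```
node_modules
.git
dist
*.log
.env
Dockerfile
```

Without it, a large node_modules directory is sent to the Docker daemon on every build, even though the image installs its own dependencies.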
Building and Running the Container
```shell
# Build the image
# -t tags it with a name:version
# . means use the current directory as the build context
docker build -t my-app:1.0.0 .

# Run the container locally
# -p 3000:3000 maps host port to container port
# --rm removes the container when it stops
# -e passes an environment variable
docker run -p 3000:3000 --rm -e NODE_ENV=production my-app:1.0.0

# Verify the app is running
curl http://localhost:3000/health
```

Multi-Stage Builds for Production
Multi-stage builds produce smaller, more secure images by separating the build environment from the runtime environment. The final image contains only what is needed to run, not the build tools, compilers, or test dependencies.
```dockerfile
# Stage 1: Build
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Drop dev dependencies so the runtime stage copies only production modules
RUN npm prune --omit=dev

# Stage 2: Production runtime
# This is the image that gets deployed
FROM node:20-alpine AS production
WORKDIR /app

# Copy only the compiled output and production dependencies
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY package*.json ./

USER node
EXPOSE 3000
CMD ["node", "dist/index.js"]
```

The production image contains no TypeScript compiler, no dev dependencies, no test files. The result is typically 60-80% smaller than a naive single-stage build, and a smaller image means a smaller attack surface.
Pushing to a Container Registry
Kubernetes pulls images from a registry. You need to push your image before Kubernetes can run it.
```shell
# Tag for Docker Hub
docker tag my-app:1.0.0 yourusername/my-app:1.0.0

# Or tag for AWS ECR
docker tag my-app:1.0.0 123456789.dkr.ecr.us-east-1.amazonaws.com/my-app:1.0.0

# For ECR, authenticate first
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin \
  123456789.dkr.ecr.us-east-1.amazonaws.com

# Push
docker push yourusername/my-app:1.0.0
```

Part 2 – Kubernetes: Running Containers at Scale
The Kubernetes Architecture in Plain Terms
Before deploying anything, understand the three-layer architecture:
Control Plane: The brain. It contains the API server (every kubectl command talks to it), etcd (the cluster’s source of truth, a distributed key-value store of all cluster state), the scheduler (decides which node runs each pod), and the controller manager (maintains desired state: if a pod dies, the controller creates a new one).
Nodes: The workers. Each node runs kubelet (the agent that talks to the control plane and manages pods on that node), kube-proxy (handles network routing), and a container runtime (containerd, which actually runs the containers).
Pods: The smallest deployable unit. A pod is one or more containers that share a network namespace and storage. Most pods contain a single container. The pod is the unit Kubernetes schedules, scales, and manages.
The key insight: you do not manage containers directly in Kubernetes. You declare the desired state (I want 3 replicas of this pod running), and Kubernetes continuously works to make reality match that declaration. This is the reconciliation loop, the core principle that makes Kubernetes self-healing.
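The loop itself is simple enough to sketch in pseudocode (the real machinery of watch streams and per-resource controllers is abstracted away here):

```
# The reconciliation loop, heavily simplified
loop forever:
    desired = read desired state from the API server   # e.g. replicas: 3
    actual  = observe the cluster                      # e.g. 2 pods running
    if actual != desired:
        act to converge                                # start 1 more pod
```

Every controller in Kubernetes, from Deployments to autoscalers, is a variation of this loop.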
Namespaces
Namespaces divide a cluster into virtual sub-clusters. Use them to separate environments or teams.
```shell
# Create a namespace for your application
kubectl create namespace my-app

# Set it as the default for subsequent commands
kubectl config set-context --current --namespace=my-app

# Verify
kubectl config view --minify | grep namespace
```

Your First Pod
A Pod manifest declares what container to run:
```yaml
# pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod
  namespace: my-app
  labels:
    app: my-app
spec:
  containers:
    - name: my-app
      image: yourusername/my-app:1.0.0
      ports:
        - containerPort: 3000
      env:
        - name: NODE_ENV
          value: production
```

```shell
kubectl apply -f pod.yaml
kubectl get pods
kubectl logs my-app-pod
kubectl describe pod my-app-pod
```

Do not run standalone pods in production. If a pod dies, nothing creates a replacement. That is what Deployments are for.
Deployments: The Production Unit
A Deployment manages a ReplicaSet, which manages pods. It handles rolling updates, rollbacks, and desired replica count. This is the object you actually use in production:
```yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: my-app
spec:
  replicas: 3
  # Rolling update strategy - zero downtime deployments
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1         # Start 1 new pod before removing old ones
      maxUnavailable: 0   # Never have fewer than replicas available
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: yourusername/my-app:1.0.0
          ports:
            - containerPort: 3000
          # Resource requests and limits - required for scheduling
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "500m"
          # Liveness probe: restart the container if this fails
          livenessProbe:
            httpGet:
              path: /healthz/live
              port: 3000
            initialDelaySeconds: 30
            periodSeconds: 15
            failureThreshold: 3
          # Readiness probe: remove from load balancer if this fails
          readinessProbe:
            httpGet:
              path: /healthz/ready
              port: 3000
            initialDelaySeconds: 10
            periodSeconds: 5
            failureThreshold: 2
          env:
            - name: NODE_ENV
              value: production
```

```shell
kubectl apply -f deployment.yaml
kubectl get deployments
kubectl get pods
kubectl rollout status deployment/my-app
```

The resource requests and limits are not optional. Without requests, the scheduler cannot make informed placement decisions. Without limits, a memory leak in one pod can starve the entire node.
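The units trip people up: cpu: "100m" is 100 millicores, a tenth of one core, and Mi is a binary mebibyte, not a decimal megabyte. The difference is visible with plain shell arithmetic:

```shell
# Bytes in 128Mi (binary powers of two - what Kubernetes means by Mi)
echo $((128 * 1024 * 1024))

# Bytes in 128M (decimal), for comparison
echo $((128 * 1000 * 1000))
```

A pod that requests 128M when you meant 128Mi gets roughly 5% less memory than intended, which is exactly the kind of margin that turns into an OOMKill under load.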
Services: Exposing Your Application
A Service provides a stable network endpoint for a set of pods. Pods come and go, and their IP addresses change with them. A Service gives you a fixed address that automatically routes to healthy pods.
```yaml
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  namespace: my-app
spec:
  selector:
    app: my-app        # Routes to pods with this label
  ports:
    - port: 80         # Service port
      targetPort: 3000 # Container port
  type: ClusterIP      # Internal only - use Ingress for external traffic
```

```shell
kubectl apply -f service.yaml
kubectl get services

# Test from inside the cluster
kubectl run test --image=curlimages/curl --rm -it --restart=Never -- \
  curl http://my-app.my-app.svc.cluster.local/health
```

The DNS pattern service-name.namespace.svc.cluster.local is how services find each other inside a Kubernetes cluster. Your application code uses this address, not IP addresses.
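To make the pattern concrete, here is the address assembled from its parts (the service and namespace names match the manifests in this guide):

```shell
# service-name.namespace.svc.cluster.local
service=my-app
namespace=my-app
url="http://${service}.${namespace}.svc.cluster.local"
echo "$url"
# → http://my-app.my-app.svc.cluster.local
```

From a pod in the same namespace, the short form http://my-app also resolves; cluster DNS search domains fill in the rest.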
Ingress: External Traffic
ClusterIP services are internal. To expose your application to external traffic, use an Ingress, which requires an Ingress controller (nginx-ingress is the most common):
```shell
# Install nginx ingress controller (if not already installed)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.10.0/deploy/static/provider/cloud/deploy.yaml
```

```yaml
# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  namespace: my-app
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```

```shell
kubectl apply -f ingress.yaml
kubectl get ingress
```

ConfigMaps and Secrets
Never hardcode configuration or credentials in container images. Use ConfigMaps for non-sensitive configuration and Secrets for credentials:
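One point before the manifests: the values under data: in a Secret are base64 encoded, which is an encoding, not encryption. Anyone with read access to the Secret object can decode them, which is why RBAC on Secrets matters. The encoding itself is a one-liner (using the same placeholder connection string as the secret.yaml example):

```shell
# Encode a value for a Secret manifest
encoded=$(printf '%s' 'postgres://user:password@db/myapp' | base64)
echo "$encoded"
# → cG9zdGdyZXM6Ly91c2VyOnBhc3N3b3JkQGRiL215YXBw

# Decode to verify the round trip
printf '%s' "$encoded" | base64 -d
# → postgres://user:password@db/myapp
```

Use printf rather than echo so no trailing newline sneaks into the encoded value, a classic source of mysterious authentication failures.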
```yaml
# configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-config
  namespace: my-app
data:
  LOG_LEVEL: "info"
  API_BASE_URL: "https://api.example.com"
```

```yaml
# secret.yaml - base64 encoded values
apiVersion: v1
kind: Secret
metadata:
  name: my-app-secrets
  namespace: my-app
type: Opaque
data:
  DATABASE_URL: cG9zdGdyZXM6Ly91c2VyOnBhc3N3b3JkQGRiL215YXBw # base64
```

```shell
# Create secret from literal (no base64 required)
kubectl create secret generic my-app-secrets \
  --from-literal=DATABASE_URL=postgres://user:password@db/myapp \
  -n my-app
```

Reference them in your Deployment:
```yaml
spec:
  containers:
    - name: my-app
      envFrom:
        - configMapRef:
            name: my-app-config
      env:
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: my-app-secrets
              key: DATABASE_URL
```

Part 3 – Production Patterns
Horizontal Pod Autoscaler
HPA automatically scales the number of pods based on CPU or memory utilization:
```yaml
# hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
  namespace: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

```shell
kubectl apply -f hpa.yaml
kubectl get hpa

# Watch it scale in real time
kubectl get hpa -w
```

HPA requires the metrics server to be installed in the cluster. On managed clusters (EKS, GKE, AKS) it is typically pre-installed. On Minikube: minikube addons enable metrics-server.
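Not covered above, but worth knowing: the autoscaling/v2 API also exposes a behavior field that controls how aggressively the HPA scales. A common tweak, sketched here as a fragment of the HPA spec, is lengthening the scale-down stabilization window so brief dips in load do not cause replica flapping:

```yaml
# Fragment for the HPA spec above: wait 10 minutes of sustained
# low utilization before removing replicas (the default is 300s)
behavior:
  scaleDown:
    stabilizationWindowSeconds: 600
```

Scale-up is deliberately fast by default; it is scale-down you usually want to slow.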
Rolling Updates and Rollbacks
```shell
# Update the image version
kubectl set image deployment/my-app my-app=yourusername/my-app:1.1.0 -n my-app

# Watch the rollout
kubectl rollout status deployment/my-app -n my-app

# Check rollout history
kubectl rollout history deployment/my-app -n my-app

# Rollback to previous version
kubectl rollout undo deployment/my-app -n my-app

# Rollback to a specific revision
kubectl rollout undo deployment/my-app --to-revision=2 -n my-app
```

The maxUnavailable: 0 and maxSurge: 1 settings in the deployment strategy ensure zero-downtime rollouts: Kubernetes starts a new pod, waits for it to pass the readiness probe, then terminates an old pod.
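Rollbacks work because Kubernetes keeps old ReplicaSets around, 10 by default. If that history is more than you need, you can bound it in the Deployment spec (a fragment, not a complete manifest):

```yaml
# deployment.yaml fragment: keep only 5 old ReplicaSets for rollback
spec:
  revisionHistoryLimit: 5
```

Setting it to 0 disables rollback entirely, so leave at least a few revisions.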
Debugging Commands
```shell
# Get all resources in a namespace
kubectl get all -n my-app

# Describe a pod (shows events, which reveal why pods fail to start)
kubectl describe pod <pod-name> -n my-app

# Get logs
kubectl logs <pod-name> -n my-app
kubectl logs <pod-name> -n my-app --previous  # Logs from crashed container

# Execute a command inside a running container
kubectl exec -it <pod-name> -n my-app -- sh

# Port-forward for local debugging
kubectl port-forward pod/<pod-name> 8080:3000 -n my-app

# Watch pods in real time
kubectl get pods -n my-app -w

# Check resource usage
kubectl top pods -n my-app
kubectl top nodes
```

The describe command is the most useful debugging tool in this guide. The Events section at the bottom shows exactly what happened: image pull failures, OOMKills, readiness probe failures, scheduling failures.
The Complete File Structure
By the end of this tutorial, your deployment files should look like this:
```
k8s/
├── namespace.yaml
├── configmap.yaml
├── secret.yaml      # Do not commit to Git - use sealed-secrets or external-secrets
├── deployment.yaml
├── service.yaml
├── ingress.yaml
└── hpa.yaml
```

Apply in order:
```shell
kubectl apply -f k8s/namespace.yaml
kubectl apply -f k8s/configmap.yaml
kubectl apply -f k8s/secret.yaml
kubectl apply -f k8s/deployment.yaml
kubectl apply -f k8s/service.yaml
kubectl apply -f k8s/ingress.yaml
kubectl apply -f k8s/hpa.yaml

# Or apply the entire directory
kubectl apply -f k8s/
```

What Comes Next
This tutorial covers the fundamentals for running applications in production. The natural next steps are:
Helm: Package your Kubernetes manifests into reusable charts with templating. Helm is how most production teams manage multi-environment deployments.
GitOps with ArgoCD or Flux: Instead of running kubectl apply manually, commits to a Git repository trigger automatic deployments. The cluster continuously reconciles itself with the Git state.
Monitoring: Connect your cluster to Prometheus and Grafana. See our Prometheus Alertmanager setup guide for the configuration that takes you from raw metrics to actionable alerts.
Kubernetes best practices: See our Kubernetes deployment best practices guide for the patterns (pod disruption budgets, pod anti-affinity, network policies) that move you from working to production-hardened.
At The Good Shell we design and operate Kubernetes infrastructure for startups that need production-grade reliability without building a dedicated platform team. See our DevOps and infrastructure services or our case studies.
For the authoritative reference on Kubernetes concepts, the official Kubernetes documentation is updated with each release and covers every object in this guide in depth.