This helm chart tutorial takes you from zero Helm knowledge to deploying a production-ready application with multi-environment values, dependency management, hooks, and CI/CD integration. Helm is used by over 80% of Kubernetes adopters according to the CNCF annual survey, and for good reason: without it, deploying a single application to Kubernetes means writing and managing a Deployment, Service, ConfigMap, Secret, Ingress, and HorizontalPodAutoscaler as separate YAML files and maintaining different versions of each file for dev, staging, and production.
Helm solves this by packaging all of those manifests into a single versioned chart with configurable values. The same chart deploys to every environment. The only thing that changes is the values file.
One timing note: Helm 4.0.0 was released at KubeCon 2025, the first major version bump in six years. The latest stable release as of early 2026 is Helm 4.1.3, which introduces server-side apply, a WebAssembly-based plugin system, and local content-based caching. Helm 3 charts are fully compatible with Helm 4; no migration of chart templates is required. This helm chart tutorial uses Helm 4 syntax throughout.
What You Need Before Starting
- A Kubernetes cluster (Docker Desktop, Minikube, or a cloud cluster: EKS, GKE, AKS).
- kubectl installed and configured.
- Helm 4 installed.
Installing Helm 4:
# macOS
brew install helm
# Linux (official script)
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-4 | bash
# Windows (Chocolatey)
choco install kubernetes-helm
# Verify installation
helm version
# version.BuildInfo{Version:"v4.1.3", ...}
What Is a Helm Chart?
A Helm chart is a package of Kubernetes manifests with Go templating and a values layer. The relationship is simple:
Chart = Templates + Values → Rendered YAML → Kubernetes Resources
When you run helm install, Helm takes your templates, injects the values from values.yaml (and any override files you pass), renders the result into standard Kubernetes YAML, and applies it to the cluster. It then tracks that deployment as a release with a revision history, which is what enables upgrades and rollbacks.
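The render step can be sketched with one line of shell substitution. This is a toy stand-in, not Helm's engine: the `{{REPLICAS}}` placeholder is invented for illustration (a real chart would write `{{ .Values.replicaCount }}` and let Helm's Go templating do the work):

```shell
# Toy version of the render step: a template plus a value produces plain YAML.
# {{REPLICAS}} is an illustrative placeholder, not Helm syntax.
template='replicas: {{REPLICAS}}'
replicaCount=3
rendered=$(printf '%s' "$template" | sed "s/{{REPLICAS}}/$replicaCount/")
echo "$rendered"   # -> replicas: 3
```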
Three core concepts this helm chart tutorial builds on:
Chart: the package. A directory with a specific structure containing your templates, values, and metadata.
Release: an installed instance of a chart. You can install the same chart multiple times with different release names. Each installation is an independent release with its own revision history.
Repository: a server hosting packaged charts, like Artifact Hub. Used to distribute and consume charts others have published (Prometheus, PostgreSQL, Redis, Nginx-ingress).
Helm Chart Structure
Every Helm chart follows the same directory structure. Understanding it is the foundation of this helm chart tutorial:
my-app/
├── Chart.yaml # Chart metadata: name, version, description
├── values.yaml # Default configuration values
├── values-dev.yaml # Dev environment overrides
├── values-staging.yaml # Staging environment overrides
├── values-prod.yaml # Production environment overrides
├── charts/ # Chart dependencies (sub-charts)
├── templates/ # Kubernetes manifest templates
│ ├── deployment.yaml
│ ├── service.yaml
│ ├── ingress.yaml
│ ├── configmap.yaml
│ ├── hpa.yaml
│ ├── serviceaccount.yaml
│ ├── _helpers.tpl # Reusable template functions
│ └── NOTES.txt # Post-install notes shown to the user
└── .helmignore # Files to exclude from packaging
Generate this structure automatically:
helm create my-app
helm create scaffolds a complete working chart for a web application. Most of the time, you modify the generated templates rather than writing from scratch. Clean up the generated chart to start fresh:
# Remove the default templates to start with a clean slate
rm -rf my-app/templates/*
rm my-app/values.yaml
touch my-app/values.yaml
Chart.yaml: The Chart’s Identity
Chart.yaml contains the metadata that identifies your chart:
# my-app/Chart.yaml
apiVersion: v2 # Helm 3+ API version (use v2 for Helm 4)
name: my-app
description: A production-ready Node.js application
type: application # 'application' or 'library'
version: 1.2.0 # Chart version - bump this when chart structure changes
appVersion: "2.4.1" # Application version - the app inside the chart
maintainers:
- name: Platform Team
email: [email protected]
keywords:
- nodejs
- api
- production
The distinction between version and appVersion is important and frequently confused. version is the chart’s own version; bump it when you change templates or default values. appVersion is the version of the application the chart deploys, typically the container image tag. Helm uses version for release tracking; appVersion is informational.
values.yaml: The Configuration Layer
values.yaml is where all environment-specific configuration lives. Templates reference values from this file. The goal: templates should be generic and reusable, values should be environment-specific.
# my-app/values.yaml - production defaults
replicaCount: 3
image:
repository: your-registry/my-app
tag: "2.4.1"
pullPolicy: IfNotPresent
service:
type: ClusterIP
port: 80
targetPort: 3000
ingress:
enabled: true
className: nginx
host: myapp.example.com
tls: true
tlsSecretName: myapp-tls
resources:
requests:
cpu: 150m
memory: 256Mi
limits:
cpu: 500m
memory: 512Mi
autoscaling:
enabled: true
minReplicas: 3
maxReplicas: 10
targetCPUUtilizationPercentage: 70
env:
NODE_ENV: production
LOG_LEVEL: info
API_BASE_URL: https://api.example.com
secrets:
# Secrets are referenced from Kubernetes Secrets, not stored here
databaseUrlSecretName: my-app-db-secret
databaseUrlSecretKey: DATABASE_URL
serviceAccount:
create: true
name: my-app
nodeSelector: {}
tolerations: []
affinity: {}
The multi-environment values pattern:
# my-app/values-dev.yaml - only overrides that differ from defaults
replicaCount: 1
image:
tag: "latest"
resources:
requests:
cpu: 50m
memory: 128Mi
limits:
cpu: 200m
memory: 256Mi
autoscaling:
enabled: false
ingress:
host: myapp-dev.example.com
tls: false
env:
NODE_ENV: development
LOG_LEVEL: debug
# my-app/values-staging.yaml
replicaCount: 2
image:
tag: "2.4.1-rc1"
ingress:
host: myapp-staging.example.com
env:
NODE_ENV: staging
LOG_LEVEL: info
Do not modify values.yaml for different environments. Keep defaults at production level (the environment that matters most) and override only the differences in environment-specific files. This prevents the common mistake of configuring staging in values.yaml and then forgetting to override for production.
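The override semantics can be sanity-checked with a flat key=value sketch. This is an approximation: Helm performs a deep merge of nested maps, but the later-source-wins rule is the same. The file paths and keys below are illustrative:

```shell
# Defaults (values.yaml) loaded first, environment overrides loaded last.
cat > /tmp/defaults.env <<'EOF'
replicaCount=3
logLevel=info
nodeEnv=production
EOF
cat > /tmp/dev-overrides.env <<'EOF'
replicaCount=1
logLevel=debug
EOF
# Last assignment wins, mirroring "-f values.yaml -f values-dev.yaml" order.
merged=$(awk -F= '{v[$1]=$2} END {for (k in v) print k "=" v[k]}' \
  /tmp/defaults.env /tmp/dev-overrides.env | sort)
echo "$merged"
```

Keys absent from the override file (nodeEnv here) keep their default, which is why the environment files only need the values that differ.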
Templates: Go Templating in Practice
Helm templates are Kubernetes YAML with Go template syntax. The {{ and }} delimiters mark template expressions. Here is a complete, production-quality Deployment template:
# my-app/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "my-app.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "my-app.labels" . | nindent 4 }}
spec:
{{- if not .Values.autoscaling.enabled }}
replicas: {{ .Values.replicaCount }}
{{- end }}
selector:
matchLabels:
{{- include "my-app.selectorLabels" . | nindent 6 }}
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
template:
metadata:
labels:
{{- include "my-app.selectorLabels" . | nindent 8 }}
annotations:
# Force pod restart when ConfigMap changes
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
spec:
serviceAccountName: {{ .Values.serviceAccount.name }}
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- containerPort: {{ .Values.service.targetPort }}
envFrom:
- configMapRef:
name: {{ include "my-app.fullname" . }}-config
env:
- name: DATABASE_URL
valueFrom:
secretKeyRef:
name: {{ .Values.secrets.databaseUrlSecretName }}
key: {{ .Values.secrets.databaseUrlSecretKey }}
resources:
{{- toYaml .Values.resources | nindent 12 }}
livenessProbe:
httpGet:
path: /healthz/live
port: {{ .Values.service.targetPort }}
initialDelaySeconds: 30
periodSeconds: 15
failureThreshold: 3
readinessProbe:
httpGet:
path: /healthz/ready
port: {{ .Values.service.targetPort }}
initialDelaySeconds: 10
periodSeconds: 5
failureThreshold: 2
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
Key template patterns explained:
{{ include "my-app.fullname" . }} – calls a named template from _helpers.tpl. The dot (.) passes the current context. Always use named templates for values that appear in multiple manifests (the release name, labels, selectors) to ensure consistency.
{{- if not .Values.autoscaling.enabled }} – conditional blocks. When HPA is enabled, the Deployment should not set replicas; the HPA manages that. The {{- trims the whitespace before the expression, preventing empty lines in the rendered YAML.
{{- toYaml .Values.resources | nindent 12 }} – converts a values map to YAML and indents it correctly. This is the standard pattern for values that are themselves YAML blocks (resources, nodeSelector, tolerations, affinity).
checksum/config annotation – forces pod restart when the ConfigMap changes. Without this, updating a ConfigMap does not restart the pods reading it. This is one of the most commonly missed patterns in helm chart tutorials.
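You can reproduce the mechanism locally: hash two versions of some stand-in config content and compare. In the chart, the hash is taken over the rendered configmap.yaml template, so any value change produces a new annotation value, a new pod template, and therefore a rollout:

```shell
# Any change in the rendered ConfigMap changes the checksum annotation,
# which changes the pod template spec and triggers a rolling restart.
sum_before=$(printf 'LOG_LEVEL: info\n' | sha256sum | cut -d' ' -f1)
sum_after=$(printf 'LOG_LEVEL: debug\n' | sha256sum | cut -d' ' -f1)
if [ "$sum_before" != "$sum_after" ]; then
  echo "config changed: new checksum triggers a rollout"
fi
```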
_helpers.tpl: Reusable Template Functions
_helpers.tpl defines the named templates used across all manifests. Always create this file; it is the mechanism that keeps labels, names, and selectors consistent across every resource in your chart:
# my-app/templates/_helpers.tpl
{{/*
Expand the name of the chart.
*/}}
{{- define "my-app.name" -}}
{{- .Chart.Name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a fully qualified name.
Format: release-name-chart-name (truncated to 63 chars)
*/}}
{{- define "my-app.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name .Chart.Name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{/*
Common labels - applied to every resource
*/}}
{{- define "my-app.labels" -}}
helm.sh/chart: {{ printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" }}
{{ include "my-app.selectorLabels" . }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels - used in matchLabels and pod template labels
These must be identical in Deployment spec.selector and pod template
*/}}
{{- define "my-app.selectorLabels" -}}
app.kubernetes.io/name: {{ include "my-app.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
The 63-character limit on Kubernetes resource names is enforced by trunc 63. Release names concatenated with chart names can exceed this limit without truncation.
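The same truncation can be reproduced in plain shell. The release and chart names below are invented to force an overlong result; `cut -c1-63` stands in for trunc 63 and `${truncated%-}` for trimSuffix "-":

```shell
release="customer-billing-platform-production-eu-west-1"
chart="payments-api-backend-service"
full="${release}-${chart}"                     # printf "%s-%s" .Release.Name .Chart.Name
truncated=$(printf '%s' "$full" | cut -c1-63)  # trunc 63
truncated="${truncated%-}"                     # trimSuffix "-"
echo "${#full} chars -> ${#truncated} chars: $truncated"
```

Without the truncation, Kubernetes would reject the resource name outright, which is why the helper applies it unconditionally.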
The Complete Template Set
With the patterns from the deployment above, here are the remaining templates for a complete application:
Service:
# my-app/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
name: {{ include "my-app.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "my-app.labels" . | nindent 4 }}
spec:
type: {{ .Values.service.type }}
selector:
{{- include "my-app.selectorLabels" . | nindent 4 }}
ports:
- port: {{ .Values.service.port }}
targetPort: {{ .Values.service.targetPort }}
protocol: TCP
ConfigMap:
# my-app/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "my-app.fullname" . }}-config
namespace: {{ .Release.Namespace }}
labels:
{{- include "my-app.labels" . | nindent 4 }}
data:
{{- range $key, $val := .Values.env }}
{{ $key }}: {{ $val | quote }}
{{- end }}
HorizontalPodAutoscaler:
# my-app/templates/hpa.yaml
{{- if .Values.autoscaling.enabled }}
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: {{ include "my-app.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "my-app.labels" . | nindent 4 }}
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: {{ include "my-app.fullname" . }}
minReplicas: {{ .Values.autoscaling.minReplicas }}
maxReplicas: {{ .Values.autoscaling.maxReplicas }}
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
{{- end }}
Ingress:
# my-app/templates/ingress.yaml
{{- if .Values.ingress.enabled }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ include "my-app.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "my-app.labels" . | nindent 4 }}
spec:
ingressClassName: {{ .Values.ingress.className }}
{{- if .Values.ingress.tls }}
tls:
- hosts:
- {{ .Values.ingress.host }}
secretName: {{ .Values.ingress.tlsSecretName }}
{{- end }}
rules:
- host: {{ .Values.ingress.host }}
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: {{ include "my-app.fullname" . }}
port:
number: {{ .Values.service.port }}
{{- end }}
Installing, Upgrading, and Rolling Back:
# Render templates locally without deploying - essential for debugging
helm template my-app ./my-app -f values-dev.yaml
# Validate chart syntax
helm lint ./my-app
helm lint ./my-app -f values-prod.yaml
# Dry run - simulates install without applying
helm install my-app ./my-app \
-f values-dev.yaml \
--namespace dev \
--create-namespace \
--dry-run --debug
# Install
helm install my-app ./my-app \
-f values-dev.yaml \
--namespace dev \
--create-namespace
# Upgrade - use upgrade --install for idempotent CI/CD
# --atomic: automatically roll back if the deployment fails
# --wait: wait until all pods are ready
helm upgrade --install my-app ./my-app \
-f values-prod.yaml \
--namespace production \
--create-namespace \
--atomic \
--wait \
--timeout 5m
# Check release status
helm status my-app -n production
helm list -n production
# Release history
helm history my-app -n production
# REVISION STATUS DESCRIPTION
# 1 superseded Install complete
# 2 superseded Upgrade complete
# 3 deployed Upgrade complete
# Rollback to previous revision
helm rollback my-app -n production
# Rollback to specific revision
helm rollback my-app 2 -n production
# Uninstall
helm uninstall my-app -n production
--atomic is the most important production flag in this helm chart tutorial. It means: if the deployment fails (pods do not reach Ready state within the timeout), Helm automatically rolls back to the previous successful revision. Without it, a failed upgrade leaves the release in a failed state that requires manual intervention.
Chart Dependencies
Charts can depend on other charts. A backend service depending on PostgreSQL and Redis is a common pattern:
# my-app/Chart.yaml - add dependencies section
dependencies:
- name: postgresql
version: "15.5.x"
repository: https://charts.bitnami.com/bitnami
condition: postgresql.enabled # Only install if postgresql.enabled is true
- name: redis
version: "19.x.x"
repository: https://charts.bitnami.com/bitnami
condition: redis.enabled
# my-app/values.yaml - dependency configuration
postgresql:
enabled: true
auth:
database: myapp
username: myapp
existingSecret: myapp-postgres-secret
primary:
persistence:
size: 20Gi
redis:
enabled: true
auth:
enabled: true
existingSecret: myapp-redis-secret
master:
persistence:
size: 5Gi
# Download dependencies into charts/ directory
helm dependency update ./my-app
# This creates:
# my-app/charts/postgresql-15.5.3.tgz
# my-app/charts/redis-19.2.1.tgz
# my-app/Chart.lock
Always commit Chart.lock. It pins the exact dependency versions, the same way package-lock.json pins npm dependencies. Without it, two engineers running helm dependency update at different times might get different dependency versions.
Hooks: Lifecycle Events
Helm hooks let you run Jobs at specific points in the release lifecycle. The two most common use cases are database migrations before a deployment and smoke tests after:
# my-app/templates/pre-upgrade-migration.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: {{ include "my-app.fullname" . }}-migration
namespace: {{ .Release.Namespace }}
annotations:
"helm.sh/hook": pre-upgrade,pre-install
"helm.sh/hook-weight": "-5" # Lower = runs first
"helm.sh/hook-delete-policy": hook-succeeded # Delete after success
spec:
template:
spec:
restartPolicy: Never
containers:
- name: migration
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
command: ["node", "src/migrations/run.js"]
env:
- name: DATABASE_URL
valueFrom:
secretKeyRef:
name: {{ .Values.secrets.databaseUrlSecretName }}
key: {{ .Values.secrets.databaseUrlSecretKey }}
# my-app/templates/post-install-test.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: {{ include "my-app.fullname" . }}-smoke-test
namespace: {{ .Release.Namespace }}
annotations:
"helm.sh/hook": post-install,post-upgrade
"helm.sh/hook-delete-policy": hook-succeeded
spec:
template:
spec:
restartPolicy: Never
containers:
- name: smoke-test
image: curlimages/curl
command:
- /bin/sh
- -c
- |
set -e
curl -f http://{{ include "my-app.fullname" . }}.{{ .Release.Namespace }}.svc/healthz/ready
echo "Smoke test passed"
Hook execution order: pre-install → main install → post-install. If a pre-upgrade hook Job fails, Helm aborts the upgrade. This is the mechanism that prevents a release from deploying when the database migration fails: Helm catches the failed Job and does not proceed.
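Within a single hook event, helm.sh/hook-weight is a plain numeric sort, lowest first. A one-liner makes the ordering concrete (the hook names below are hypothetical):

```shell
# Helm runs hooks of the same event in ascending weight order.
printf '%s\n' \
  '0 seed-data' \
  '-5 db-migration' \
  '5 warm-cache' | sort -n
```

This is why the migration Job above uses weight -5: it must run before any other pre-upgrade hooks at the default weight of 0.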
Helm in CI/CD
The production helm chart tutorial pattern for GitHub Actions:
# .github/workflows/deploy.yml
name: Deploy
on:
push:
branches: [main]
env:
HELM_VERSION: v4.1.3
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Install Helm
uses: azure/setup-helm@v4
with:
version: ${{ env.HELM_VERSION }}
- name: Configure kubectl
uses: azure/k8s-set-context@v4
with:
kubeconfig: ${{ secrets.KUBECONFIG }}
- name: Lint chart
run: |
helm lint ./charts/my-app
helm lint ./charts/my-app -f charts/my-app/values-prod.yaml
- name: Deploy to staging
run: |
helm upgrade --install my-app ./charts/my-app \
-f charts/my-app/values-staging.yaml \
--set image.tag=${{ github.sha }} \
--namespace staging \
--create-namespace \
--atomic \
--wait \
--timeout 5m
- name: Deploy to production
if: github.ref == 'refs/heads/main'
run: |
helm upgrade --install my-app ./charts/my-app \
-f charts/my-app/values-prod.yaml \
--set image.tag=${{ github.sha }} \
--namespace production \
--create-namespace \
--atomic \
--wait \
--timeout 10m
--set image.tag=${{ github.sha }} is the standard pattern for injecting the image tag at deploy time. The chart values.yaml contains a default tag. The CI pipeline overrides it with the current commit SHA, which is the exact image built and pushed in the previous CI step. This creates an unambiguous link between the deployed image and the Git commit.
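Constructing the tag is trivial shell. The SHA below is a made-up stand-in for ${{ github.sha }}; some teams shorten it to 12 characters before tagging, which is the variant sketched here:

```shell
# Build the override flag a CI job would pass to helm upgrade.
sha="3f9a1c2de4b5a6c7d8e9f0a1b2c3d4e5f6a7b8c9"   # stand-in for the real commit SHA
tag=$(printf '%s' "$sha" | cut -c1-12)            # optional 12-char short SHA
echo "--set image.tag=${tag}"
```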
Publishing Charts to OCI Registries
Helm 4 fully supports OCI (Open Container Initiative) registries for chart distribution, the same registries that store container images (ECR, GCR, Docker Hub, GHCR):
# Package the chart into a .tgz
helm package ./my-app
# Creates: my-app-1.2.0.tgz
# Push to GHCR (GitHub Container Registry)
helm push my-app-1.2.0.tgz oci://ghcr.io/your-org/charts
# Push to AWS ECR
aws ecr create-repository --repository-name charts/my-app --region us-east-1
helm push my-app-1.2.0.tgz oci://123456789.dkr.ecr.us-east-1.amazonaws.com/charts
# Install directly from OCI registry
helm install my-app oci://ghcr.io/your-org/charts/my-app --version 1.2.0
# Show chart metadata from the registry
helm show chart oci://ghcr.io/your-org/charts/my-app
OCI registries eliminate the need to maintain a separate Helm chart repository (ChartMuseum, GitHub Pages). If you are already using ECR or GCR for container images, use the same registry for charts. One authentication mechanism, one access control model, one audit trail.
Essential Debugging Commands
# Render templates without connecting to cluster
helm template my-app ./my-app -f values-prod.yaml
# Validate chart - catches YAML syntax errors and missing required values
helm lint ./my-app -f values-prod.yaml
# Dry run with full debug output — shows exact YAML that would be applied
helm install my-app ./my-app -f values-prod.yaml \
--dry-run --debug 2>&1 | head -100
# Show computed values (what Helm sees after merging all values sources)
helm get values my-app -n production
helm get values my-app -n production --all # Includes defaults
# Show rendered manifests of a deployed release
helm get manifest my-app -n production
# Get release status
helm status my-app -n production
# Check history
helm history my-app -n production
helm get values --all is the most useful debugging command when a release is not behaving as expected. It shows the final merged values including defaults, which reveals configuration conflicts between your override files and the chart defaults.
What Comes After This Helm Chart Tutorial
Helm is the foundation. The natural next step is GitOps: instead of running helm upgrade from a CI pipeline, ArgoCD watches a Git repository and automatically reconciles the cluster state with whatever is in Git. See our Kubernetes deployment best practices guide for the production patterns that underpin well-structured charts, and our GitHub Actions CI/CD pipeline tutorial for the complete pipeline structure that this helm chart tutorial’s CI section builds on.
At The Good Shell we design and operate Kubernetes infrastructure for startups, including Helm chart architecture, GitOps pipelines, and multi-environment deployment patterns. See our DevOps and infrastructure services or our case studies.
For the authoritative Helm 4 reference, the official Helm documentation covers every command and template function with complete examples.
