GitOps Kubernetes: The Essential Guide to ArgoCD and Flux in 2026

GitOps Kubernetes is no longer a pattern that forward-thinking teams adopt early; it is the default delivery model for production Kubernetes in 2026. GitOps adoption has reached 64% of enterprises as the primary delivery mechanism, leading to measurable gains in infrastructure reliability and rollback velocity. The question engineering teams are asking is not whether to adopt GitOps, but which tool to use, how to structure their repositories, and how to handle the failure modes that do not appear in getting-started tutorials.

This guide covers GitOps Kubernetes end to end: what the GitOps model actually means in practice and how it differs from traditional CI/CD, ArgoCD versus Flux compared on the dimensions that matter in production, the repository structure patterns that scale, the App of Apps pattern for managing many applications, multi-environment promotion, drift detection and self-healing, secrets management, and the production configuration decisions that generic tutorials skip.

What GitOps Actually Means

GitOps is an operational model where Git is the single source of truth for your desired infrastructure and application state. A controller running inside the Kubernetes cluster continuously compares the live state against the declared state in Git. When it detects drift (someone applied a manifest directly with kubectl, a pod restarted with different configuration, a resource was manually deleted), it reconciles, pulling the cluster back into alignment automatically.

The GitOps Kubernetes workflow in three steps:

1. Developer commits a change to the Git repository
   (manifest update, Helm values change, image tag bump)

2. GitOps controller detects the commit
   (either by polling the repository every N seconds
   or by receiving a webhook notification)

3. Controller applies the delta to the cluster
   and continues monitoring for further drift
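The loop in step 3 can be simulated in a few lines of shell. This is purely illustrative: two temp files stand in for Git (desired state) and the cluster (actual state), and three loop "ticks" stand in for a controller that in reality runs forever.

```shell
# Illustrative simulation of a GitOps reconciliation loop.
# Files stand in for Git and the cluster; no real cluster involved.
desired=$(mktemp)
actual=$(mktemp)
echo "replicas=3" > "$desired"   # state declared in Git
echo "replicas=5" > "$actual"    # live state after a manual kubectl edit

for tick in 1 2 3; do            # a real controller loops indefinitely
  if ! diff -q "$desired" "$actual" > /dev/null; then
    echo "tick $tick: drift detected, reconciling"
    cp "$desired" "$actual"      # "apply" the desired state
  else
    echo "tick $tick: in sync"
  fi
done
```

The first tick detects and corrects the drift; every subsequent tick finds the states in agreement. That continuous comparison, not the one-time apply, is what distinguishes GitOps from pipeline-driven deployment.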

The properties this produces that traditional push-based CI/CD cannot guarantee on their own:

Complete audit trail. Every change to production is a Git commit with an author, timestamp, and message. Reproducing exactly what was running six months ago is a git checkout. Identifying who changed a configuration and why is git blame.

Automatic drift detection. Manual kubectl changes in production are immediately detected and reverted. The cluster cannot silently drift from its declared state without the GitOps controller flagging it.

Rollback is a Git revert. Rolling back a bad deployment is a git revert followed by a push; the GitOps controller handles the rest. No manual helm rollback, no kubectl apply of old manifests.
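Rollback-by-revert can be demonstrated end to end in a scratch repository. The tags and file names below are illustrative, not from a real deployment:

```shell
# Self-contained demo: roll back a bad promotion with git revert.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "demo"

# Promote v2.4.1, then v2.5.0
echo "newTag: v2.4.1" > kustomization.yaml
git add kustomization.yaml
git commit -qm "chore: promote v2.4.1"
echo "newTag: v2.5.0" > kustomization.yaml
git commit -qam "chore: promote v2.5.0"

# v2.5.0 turns out to be bad: revert the promotion commit
git revert --no-edit HEAD > /dev/null

cat kustomization.yaml   # newTag: v2.4.1 again
```

In a real setup the only extra step is `git push`; the GitOps controller sees the revert commit and converges the cluster back to v2.4.1.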

Separation of CI and CD. CI builds the container image and pushes it to a registry. CD (the GitOps controller) watches Git and deploys. Engineers with CI pipeline access cannot deploy directly to production; they commit to Git and the controller handles promotion. This is the access control model that makes GitOps Kubernetes appealing to security-conscious organizations.

GitOps Kubernetes Is Not Just CI/CD With Git

The most common misconception about GitOps Kubernetes is that it is simply “storing your Kubernetes manifests in Git and running kubectl apply from a pipeline.” That is not GitOps. That is CI/CD with a Git step.

The critical distinction is the reconciliation loop. A CI/CD pipeline applies changes when triggered by a push event and then stops. If something changes in the cluster between pipeline runs (a manual change, a pod restart with different configuration), the pipeline does not react. The cluster drifts silently.

A GitOps controller runs continuously. It is always comparing desired state (Git) against actual state (cluster). It does not wait for a trigger. When drift occurs, it acts automatically within seconds to minutes of detection.

This is the operational property that makes GitOps Kubernetes reliable for regulated industries, large teams, and environments where manual changes to production are a compliance risk.

ArgoCD vs Flux: The Tool Decision

By 2026 the question is no longer whether to adopt GitOps, but which tool to use. Both ArgoCD and Flux are CNCF Graduated projects powering production Kubernetes worldwide.

The 2025 CNCF End User Survey shows 60% of Kubernetes clusters use ArgoCD for application delivery, with 97% of respondents using it in production, up from 93% in 2023.

That adoption data does not make Flux the wrong choice. It reflects ArgoCD’s stronger onboarding experience for teams new to GitOps, not superior technical capability. Both tools implement the GitOps model faithfully. The differences are in architecture and operator experience.

ArgoCD:

ArgoCD renders manifests first (whether they originate as Helm charts, Kustomize bases, or plain YAML) and applies them using its internal GitOps engine. The application controller maintains a full in-memory graph of every managed resource, which powers its real-time UI and enables the resource tree visualization that makes it easy to see exactly what is deployed and why.

ArgoCD is the better choice if you want a rich web UI for visualizing application state, need multi-tenancy with fine-grained RBAC, or prefer a centralized management plane for multiple clusters.

In benchmarking, ArgoCD consumes roughly twice the CPU and memory of Flux during initial synchronization. This gap narrows during steady-state reconciliation but remains meaningful.

Flux:

Instead of one monolithic controller, Flux is built from specialized controllers: source-controller, kustomize-controller, helm-controller, notification-controller, and image-automation-controller. There is no built-in UI; Flux is designed to be operated entirely through Kubernetes custom resources and the Flux CLI.

Flux is the better choice if you prefer a lightweight, CLI-driven approach, want tighter integration with cloud-native tooling like Flagger for progressive delivery, or need native image automation without external tooling.

The decision framework:

Need                                 | ArgoCD                | Flux
Web UI for deployment visibility     | Yes                   | No
Multi-cluster management             | Native                | Via Cluster API
Multi-tenancy and RBAC               | Native, fine-grained  | Kubernetes RBAC
Resource footprint                   | Higher                | Lower
Image automation (auto-update tags)  | Via external tool     | Native controller
Kustomize integration                | Supported             | Native, tighter
Helm integration                     | Renders as manifests  | Native Helm releases
Air-gapped environments              | Harder                | Designed for it
Team preference                      | UI-driven             | CLI/code-driven

Many organizations use both: ArgoCD for application delivery, Flux for cluster bootstrapping. This is not a compromise; it is a deliberate architecture where each tool does what it does best.

This guide uses ArgoCD for the implementation examples. The patterns apply equally to Flux with equivalent Flux CRD syntax.

Installing ArgoCD

# Create the ArgoCD namespace
kubectl create namespace argocd

# Install ArgoCD (stable release)
kubectl apply -n argocd \
  -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# Wait for ArgoCD to be ready
kubectl wait --for=condition=available deployment \
  --all -n argocd --timeout=120s

# Get the initial admin password
kubectl get secret argocd-initial-admin-secret \
  -n argocd \
  -o jsonpath="{.data.password}" | base64 -d

# Access the UI via port-forward
kubectl port-forward svc/argocd-server -n argocd 8080:443
# Open https://localhost:8080 - login: admin / [password from above]

# Install the ArgoCD CLI (macOS)
brew install argocd

# Login via CLI
argocd login localhost:8080 \
  --username admin \
  --password [password] \
  --insecure

# Change the admin password
argocd account update-password

For production, expose ArgoCD via Ingress rather than port-forward:

# argocd-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server
  namespace: argocd
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  rules:
    - host: argocd.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: argocd-server
                port:
                  number: 443
  tls:
    - hosts:
        - argocd.example.com
      secretName: argocd-tls

Repository Structure: The Foundation of GitOps Kubernetes

How you structure your Git repository determines how cleanly your GitOps Kubernetes workflow scales. The most important principle: separate application code from manifests. Keep Kubernetes manifests in a dedicated repository (or a dedicated monorepo directory), not in the application source repository.

Pattern 1 – Monorepo (recommended for startups):

infrastructure/
├── apps/
│   ├── payment-api/
│   │   ├── base/
│   │   │   ├── deployment.yaml
│   │   │   ├── service.yaml
│   │   │   └── kustomization.yaml
│   │   └── overlays/
│   │       ├── dev/
│   │       │   ├── patch-replicas.yaml
│   │       │   └── kustomization.yaml
│   │       ├── staging/
│   │       │   └── kustomization.yaml
│   │       └── production/
│   │           ├── patch-replicas.yaml
│   │           ├── patch-resources.yaml
│   │           └── kustomization.yaml
│   └── user-service/
│       ├── base/
│       └── overlays/
├── clusters/
│   ├── dev/
│   │   └── argocd-apps.yaml
│   ├── staging/
│   │   └── argocd-apps.yaml
│   └── production/
│       └── argocd-apps.yaml
└── infrastructure/
    ├── cert-manager/
    ├── ingress-nginx/
    └── monitoring/

Pattern 2 – Multi-repo (recommended for larger teams):

# Separate repos:
org/app-manifests-payment-api    # Manifests for payment service
org/app-manifests-user-service   # Manifests for user service
org/cluster-config               # Cluster-level GitOps config
org/infrastructure               # Platform components (cert-manager, monitoring)

The multi-repo pattern provides team isolation: the payments team controls their own manifests repository without touching the user service team's repository. The monorepo pattern is simpler to start with and provides full visibility across all services in one place.

The branch-per-environment anti-pattern:

Use overlays for environments: Kustomize overlays or Helm value files for dev/staging/production, never branch-per-environment.

Branch-per-environment was common before Kustomize and Helm value files were widely adopted. It requires cherry-picking changes between branches, creates divergence between environments, and makes it difficult to see the full set of changes going to production. Overlays solve this correctly: a single base that all environments derive from, with environment-specific patches applied on top.

Your First ArgoCD Application

An ArgoCD Application is a CRD that tells ArgoCD where to find your manifests and where to deploy them:

# apps/payment-api-production.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: payment-api-production
  namespace: argocd
  labels:
    team: payments
    environment: production
spec:
  project: default

  source:
    repoURL: https://github.com/your-org/infrastructure.git
    targetRevision: main
    path: apps/payment-api/overlays/production

  destination:
    server: https://kubernetes.default.svc
    namespace: production

  syncPolicy:
    automated:
      prune: true       # Remove resources deleted from Git
      selfHeal: true    # Revert manual changes in the cluster
      allowEmpty: false # Never sync to an empty state (safety)
    syncOptions:
      - CreateNamespace=true
      - ServerSideApply=true  # Required for Helm charts, avoids annotation limits
    retry:
      limit: 5
      backoff:
        duration: 5s
        factor: 2
        maxDuration: 3m

# Apply the Application
kubectl apply -f apps/payment-api-production.yaml

# Check sync status
argocd app get payment-api-production

# Watch the sync
argocd app sync payment-api-production
argocd app wait payment-api-production --health

Critical settings explained:

prune: true removes resources from the cluster when they are deleted from Git. Without this, old resources linger indefinitely: Deployments, Services, and ConfigMaps that no longer exist in Git remain in the cluster consuming resources and potentially creating security exposure.

selfHeal: true reverts manual kubectl changes. This is the setting that makes the cluster truly immutable from a GitOps perspective. Without it, manual changes persist until the next Git change triggers a sync.

ServerSideApply=true avoids annotation size limits and handles field ownership correctly. Required for most Helm charts and large manifests. Enable it globally.

targetRevision: main should be targetRevision: v2.3.1 or a specific Git SHA for production. Pin versions explicitly (a specific tag or SHA, not HEAD or main) for production apps. Pointing at main means a bad commit syncs straight to production without any gate.

The App of Apps Pattern

As your GitOps Kubernetes deployment grows beyond three or four applications, managing individual ArgoCD Application manifests becomes unwieldy. The App of Apps pattern solves this: a single root Application manages all other Applications.

# clusters/production/root-app.yaml
# This is the only Application you create manually
# Everything else is managed by GitOps from here
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root-production
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/infrastructure.git
    targetRevision: main
    path: clusters/production
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

# clusters/production/ - ArgoCD watches this directory
clusters/production/
├── root-app.yaml           # Applied manually once
├── payment-api.yaml        # Application manifest
├── user-service.yaml       # Application manifest
├── auth-service.yaml       # Application manifest
└── infrastructure/
    ├── cert-manager.yaml   # Application for cert-manager
    ├── ingress-nginx.yaml  # Application for nginx ingress
    └── monitoring.yaml     # Application for Prometheus stack

When you add a new application, you commit a new Application manifest to clusters/production/. ArgoCD’s root application detects the new file, syncs it, and ArgoCD starts managing the new application automatically. No manual kubectl apply required after the initial setup.
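The workflow can be sketched in a scratch repository. Everything here is illustrative: billing-service is a hypothetical service name, and the repoURL is the placeholder used throughout this guide.

```shell
# Scratch-repo sketch: adding a new app under the App of Apps pattern.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "demo"
mkdir -p clusters/production

# New Application manifest for a hypothetical billing-service
cat > clusters/production/billing-service.yaml <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: billing-service-production
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/infrastructure.git
    targetRevision: main
    path: apps/billing-service/overlays/production
  destination:
    server: https://kubernetes.default.svc
    namespace: production
EOF

git add clusters/production/billing-service.yaml
git commit -qm "feat: add billing-service to production"
# After a push, the root application's next sync deploys billing-service
```

No kubectl, no CI changes: the root application notices the new file in clusters/production/ and starts managing the new Application.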

ApplicationSets for scaling further:

When you have dozens of applications with the same structure, ApplicationSets generate Application manifests from a template:

# applicationset.yaml - generates one Application per directory in apps/
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: all-apps-production
  namespace: argocd
spec:
  generators:
    - git:
        repoURL: https://github.com/your-org/infrastructure.git
        revision: main
        directories:
          - path: apps/*/overlays/production
  template:
    metadata:
      name: '{{path[1]}}-production'  # path[1] = app directory name; path.basename would resolve to "production"
      labels:
        environment: production
    spec:
      project: production
      source:
        repoURL: https://github.com/your-org/infrastructure.git
        targetRevision: main
        path: '{{path}}'
      destination:
        server: https://kubernetes.default.svc
        namespace: production
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
          - ServerSideApply=true

Adding a new service is now as simple as creating apps/new-service/overlays/production/. The ApplicationSet generates the Application manifest automatically.

Multi-Environment Promotion with Kustomize

The production GitOps Kubernetes pattern uses Kustomize overlays for environment promotion. The base contains the shared configuration. Each overlay applies environment-specific patches.

# apps/payment-api/base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - deployment.yaml
  - service.yaml
  - configmap.yaml
  - hpa.yaml

commonLabels:
  app: payment-api
  managed-by: argocd

# apps/payment-api/overlays/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: production

resources:   # "bases" is deprecated in current Kustomize
  - ../../base

patches:
  - path: patch-replicas.yaml
  - path: patch-resources.yaml

images:
  - name: your-registry/payment-api
    newTag: v2.4.1   # This is what you update to promote a new version

# apps/payment-api/overlays/production/patch-replicas.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-api
spec:
  replicas: 5   # Production: 5 replicas

# apps/payment-api/overlays/dev/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: dev

resources:   # "bases" is deprecated in current Kustomize
  - ../../base

images:
  - name: your-registry/payment-api
    newTag: latest   # Dev tracks latest

Promotion workflow:

# Promote v2.5.0 from staging to production
# 1. Update the image tag in the production overlay
cd apps/payment-api/overlays/production
kustomize edit set image your-registry/payment-api:v2.5.0

# 2. Commit and push
git add kustomization.yaml
git commit -m "chore: promote payment-api v2.5.0 to production"
git push origin main

# 3. ArgoCD detects the change and syncs automatically
# Monitor the rollout:
argocd app get payment-api-production
argocd app wait payment-api-production --health --timeout 300

The entire promotion is a Git commit. The audit trail shows who promoted what version and when. Rolling back is git revert on that commit.

Secrets Management in GitOps Kubernetes

Committing Kubernetes Secrets to Git is not GitOps; it is a security incident. Base64 encoding is not encryption. The three patterns that solve this correctly:

Pattern 1 – Sealed Secrets (simplest, no external dependency):

# Install Sealed Secrets controller
helm repo add sealed-secrets https://bitnami-labs.github.io/sealed-secrets
helm install sealed-secrets sealed-secrets/sealed-secrets \
  --namespace kube-system

# Install kubeseal CLI
brew install kubeseal

# Seal a secret (output is safe to commit to Git)
kubectl create secret generic db-secret \
  --from-literal=DATABASE_URL=postgres://user:pass@db/myapp \
  --dry-run=client -o yaml | \
  kubeseal \
  --controller-name=sealed-secrets \
  --controller-namespace=kube-system \
  --format yaml > sealed-db-secret.yaml

# Commit the sealed secret - it is encrypted and safe
git add sealed-db-secret.yaml
git commit -m "feat: add database sealed secret"

The SealedSecret can only be decrypted by the controller in your cluster. Nobody with Git access can decrypt it without the controller’s private key.

Pattern 2 – External Secrets Operator (best for teams already using Vault or AWS Secrets Manager):

# external-secret.yaml - references an external secret store
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-secret
  namespace: production
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secrets-manager
    kind: ClusterSecretStore
  target:
    name: db-secret  # Creates this Kubernetes Secret
    creationPolicy: Owner
  data:
    - secretKey: DATABASE_URL
      remoteRef:
        key: production/payment-api/db
        property: url

The External Secrets Operator fetches the secret from AWS Secrets Manager, Vault, or GCP Secret Manager and creates a Kubernetes Secret. What is committed to Git is the ExternalSecret reference, not the secret value. Rotation in the secret store automatically propagates to the cluster.
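The secretStoreRef in the ExternalSecret above points at a ClusterSecretStore that must be defined separately. A minimal sketch for AWS Secrets Manager, assuming IRSA-style service account authentication on EKS (the store name matches the reference above; the region, service account name, and namespace are illustrative):

```yaml
# cluster-secret-store.yaml - illustrative ClusterSecretStore for AWS
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: aws-secrets-manager
spec:
  provider:
    aws:
      service: SecretsManager
      region: us-east-1
      auth:
        jwt:
          serviceAccountRef:
            name: external-secrets
            namespace: external-secrets
```

Because it is cluster-scoped, one store definition serves ExternalSecrets in every namespace; access can still be restricted per-secret via the IAM policy attached to the service account role.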

Pattern 3 – SOPS with Flux (native to Flux, requires AGE or GPG key):

# Encrypt a secret file with SOPS + AGE
sops --age=age1... --encrypt secret.yaml > secret.enc.yaml

# Commit the encrypted file
git add secret.enc.yaml
git commit -m "feat: add encrypted database secret"

# Flux decrypts automatically at reconciliation time
# using the AGE key stored as a Kubernetes Secret

Never commit plain Secrets to Git; use Sealed Secrets, External Secrets Operator, or Vault. Base64 is not encryption.

Drift Detection and Alerting

A GitOps Kubernetes setup without drift detection alerting is operating blind. ArgoCD may detect and correct drift automatically, but teams need to know when drift occurs, especially if manual changes indicate a security event or a misconfigured deployment process.

ArgoCD metrics for Prometheus:

# Prometheus ServiceMonitor for ArgoCD metrics
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: argocd-metrics
  namespace: argocd
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: argocd-metrics
  endpoints:
    - port: metrics

Critical ArgoCD alerts:

# prometheus-rules-argocd.yaml
groups:
- name: argocd
  rules:
  - alert: ArgoCDAppOutOfSync
    expr: |
      argocd_app_info{sync_status="OutOfSync"} == 1
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "ArgoCD application {{ $labels.name }} is out of sync"
      description: "Application {{ $labels.name }} in namespace {{ $labels.dest_namespace }} has been out of sync for more than 5 minutes"

  - alert: ArgoCDAppDegraded
    expr: |
      argocd_app_info{health_status="Degraded"} == 1
    for: 2m
    labels:
      severity: critical
    annotations:
      summary: "ArgoCD application {{ $labels.name }} is degraded"
      description: "Application {{ $labels.name }} is unhealthy; immediate attention required"

  - alert: ArgoCDSyncError
    expr: |
      argocd_app_info{sync_status="Unknown"} == 1
    for: 10m
    labels:
      severity: warning
    annotations:
      summary: "ArgoCD cannot determine sync status for {{ $labels.name }}"

A sync failure in ArgoCD or Flux may not surface in your existing alerting unless you instrument it specifically. The reconciler might fail to apply a manifest due to a webhook validation rejection, a resource quota limit, or a CRD version mismatch. Without alerts on these conditions, your deployment appears to succeed from the CI perspective while the cluster quietly ignores the change.

Progressive Delivery: Canary and Blue-Green in GitOps

GitOps establishes the mechanism for getting a desired state into a cluster. Progressive delivery extends that with controls over how much traffic a new version receives and how fast that traffic shifts.

Flagger is the standard integration for progressive delivery with both ArgoCD and Flux. It automates canary releases based on Prometheus metrics.

# canary.yaml - Flagger Canary for the payment API
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: payment-api
  namespace: production
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: payment-api
  service:
    port: 80
    targetPort: 3000
  analysis:
    interval: 30s
    threshold: 5      # Max failed checks before rollback
    maxWeight: 50     # Max traffic to canary: 50%
    stepWeight: 10    # Increase by 10% per interval
    metrics:
      - name: request-success-rate
        thresholdRange:
          min: 99     # Rollback if success rate drops below 99%
        interval: 30s
      - name: request-duration
        thresholdRange:
          max: 500    # Rollback if p99 latency exceeds 500ms
        interval: 30s

When ArgoCD syncs a new image tag, Flagger intercepts the rollout and starts the canary analysis: 10% of traffic goes to the new version, with success rate and latency checked every 30 seconds. If metrics stay healthy, traffic shifts to 20%, then 30%, up to 50%. At 50% healthy traffic, Flagger promotes the canary to full production. If metrics degrade at any point (more than 5 failed checks), Flagger rolls back automatically. The GitOps commit that triggered the deployment remains in place; the rollback is a Flagger decision, not a Git revert, and the next push of a fixed version restarts the canary analysis.

Conclusion

GitOps Kubernetes is not a tool choice; it is an operational model. Git as the single source of truth, continuous reconciliation, automatic drift detection, and rollback-by-revert are the properties that make production Kubernetes reliable and auditable at scale. The 64% adoption rate reflects organizations that have made the shift and found the reliability improvement measurable.

The implementation choices (ArgoCD or Flux, monorepo or multi-repo, Sealed Secrets or External Secrets Operator) matter less than having the fundamental model in place. Pick the tool that fits your team's preference and start with one application. The App of Apps pattern and ApplicationSets handle the growth.

At The Good Shell we implement GitOps Kubernetes pipelines for startups and platform engineering teams. See our DevOps and infrastructure services or our case studies.

For the tooling, the ArgoCD documentation and the Flux documentation cover every pattern in this guide with up-to-date configuration reference.