ArgoCD
ArgoCD is a declarative, GitOps continuous delivery tool for Kubernetes that automates application deployment and lifecycle management.
Overview
- Namespace: argocd
- Helm Chart: argoproj/argo-cd
- Chart Version: 9.4.2
- App Version: v3.3.0
- Deployment: Self-managed via ArgoCD
- Sync Wave: -50 (first application to deploy)
- Sync Options: ServerSideApply=true
- URL: https://argocd.k8s.n37.ca
Upgraded from chart 9.2.4 to 9.4.1. Server-Side Apply enabled (PR #376) for better handling of large CRDs and reduced sync conflicts.
Purpose
ArgoCD serves as the foundation of the GitOps workflow by:
- Monitoring git repositories for configuration changes
- Automatically syncing desired state to the cluster
- Providing visualization of application health
- Managing application lifecycle and rollbacks
- Enforcing declarative infrastructure as code
Architecture
Self-Management
ArgoCD manages its own deployment through a bootstrap Application manifest. This creates a self-healing, self-upgrading system where ArgoCD's configuration is version-controlled in git.
Components
Application Controller:
- Monitors git repositories
- Compares desired state (git) vs actual state (cluster)
- Synchronizes resources
- Replicas: 2 (for high availability)
Repo Server:
- Clones git repositories
- Renders Helm charts and Kustomize configurations
- Caches rendered manifests
API Server:
- Provides web UI and API
- Handles authentication and authorization
- Service Type: ClusterIP
Dex (Optional):
- OAuth2/OIDC authentication provider
- Supports GitHub, Google, LDAP integration
- Currently disabled (can be enabled for SSO)
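If SSO is ever needed, Dex can be switched on through the chart values. A minimal sketch for a GitHub connector follows; the client ID/secret references and the org name are placeholders, not values from this repo:

```yaml
# Sketch only: enabling Dex with a GitHub OAuth connector via chart values.
# clientID/clientSecret are resolved from argocd-secret; org name is a placeholder.
dex:
  enabled: true
configs:
  cm:
    dex.config: |
      connectors:
        - type: github
          id: github
          name: GitHub
          config:
            clientID: $dex.github.clientID
            clientSecret: $dex.github.clientSecret
            orgs:
              - name: my-github-org
```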
Deployment Configuration
Application Manifest
Location: manifests/applications/argocd.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: argocd
  namespace: argocd
  annotations:
    argocd.argoproj.io/sync-wave: "-50"
spec:
  project: infrastructure
  sources:
    - chart: argo-cd
      repoURL: https://argoproj.github.io/argo-helm
      targetRevision: 9.4.2
      helm:
        releaseName: argocd
        valueFiles:
          - $argocd/manifests/base/argocd/argocd-config.yaml
    - path: manifests/base/argocd
      repoURL: git@github.com:imcbeth/homelab.git
      targetRevision: HEAD
      ref: argocd
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
      - ServerSideApply=true
Configuration Values
Location: manifests/base/argocd/argocd-config.yaml
server:
  service:
    type: ClusterIP
configs:
  cm:
    url: https://argocd.k8s.n37.ca
controller:
  replicas: 2
Access and Authentication
Web UI Access
External URL: https://argocd.k8s.n37.ca
Port Forward (for local access):
kubectl port-forward svc/argocd-server -n argocd 8080:443
Then open: https://localhost:8080
CLI Access
Login:
argocd login argocd.k8s.n37.ca --grpc-web
Get Initial Admin Password:
kubectl get secret argocd-initial-admin-secret -n argocd \
-o jsonpath="{.data.password}" | base64 -d
Change Admin Password:
argocd account update-password --grpc-web
Projects
ArgoCD organizes applications into projects for access control and resource management.
Infrastructure Project
Name: infrastructure
Purpose: Core cluster infrastructure and platform services
Applications:
- argocd (self-management)
- ingress-nginx
- metal-lb
- synology-csi
- cert-manager
- kube-prometheus-stack
- unipoller
- gatekeeper, gatekeeper-policies
- istio-base, istiod, istio-cni, istio-ztunnel
- tigera-operator
- sealed-secrets
- external-dns
- loki, promtail
- argo-workflows
- velero
- falco
- trivy-operator
- metrics-server
- network-policies
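Projects are themselves AppProject resources. A minimal sketch of what the infrastructure project could look like (the actual manifest in this repo may differ; allowed repos and destinations here are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: infrastructure
  namespace: argocd
spec:
  description: Core cluster infrastructure and platform services
  # Repos that Applications in this project may pull from
  sourceRepos:
    - git@github.com:imcbeth/homelab.git
    - https://argoproj.github.io/argo-helm
  # Where Applications in this project may deploy
  destinations:
    - server: https://kubernetes.default.svc
      namespace: "*"
  # Infrastructure apps need cluster-scoped resources (CRDs, ClusterRoles, etc.)
  clusterResourceWhitelist:
    - group: "*"
      kind: "*"
```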
Applications Project
Name: applications (default)
Purpose: User-facing applications and services
Applications:
- localstack
Sync Waves
ArgoCD uses sync waves to control deployment order. Applications are deployed in ascending order by sync wave annotation.
Current Sync Wave Configuration:
-100: tigera-operator (CNI foundation)
-50: ArgoCD (self-management)
-45: istio-base (mesh CRDs)
-44: istiod (mesh control plane)
-42: istio-cni, istio-ztunnel (mesh data plane)
-40: network-policies (must be in place before workloads)
-35: MetalLB (networking layer)
-30: synology-csi (storage), ingress-nginx (ingress controller)
-25: sealed-secrets (decrypt before other apps)
-20: unipoller (metrics collection)
-15: kube-prometheus-stack (monitoring)
-12: loki (log aggregation)
-11: promtail (log collection)
-10: cert-manager, external-dns, metrics-server
-8: argo-workflows (CI/CD automation)
-7: localstack (S3 mock for Velero)
-6: gatekeeper (admission control + ConstraintTemplates)
-5: gatekeeper-policies, velero, falco
0: (default) most applications
Why This Matters:
- Storage must be ready before applications request PVCs
- Load balancer must be ready before services request LoadBalancer IPs
- Monitoring should deploy early to track other applications
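Sync waves are set per Application with a single annotation; lower values sync first. For example, a storage-layer app pinned to wave -30 carries:

```yaml
metadata:
  annotations:
    # Lower values sync first; this app waits until all waves below -30 are healthy.
    argocd.argoproj.io/sync-wave: "-30"
```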
Automated Sync Policies
All applications have automated sync enabled with these policies:
prune: true
- Automatically removes resources deleted from git
- Keeps cluster in sync with repository
selfHeal: true
- Automatically reverts manual changes to resources
- Enforces git as the single source of truth
CreateNamespace=true
- Automatically creates target namespace if it doesn't exist
Common Operations
List All Applications
# CLI
argocd app list --grpc-web
# kubectl
kubectl get applications -n argocd
Get Application Status
# Detailed status
argocd app get <app-name> --grpc-web
# YAML manifest
kubectl get application <app-name> -n argocd -o yaml
Manually Sync Application
# Sync specific application
argocd app sync <app-name> --grpc-web
# Sync with prune
argocd app sync <app-name> --prune --grpc-web
argocd app sync --force is incompatible with ServerSideApply=true (used by most apps in this cluster). It will fail with "error validating options: --force cannot be used with --server-side". Use normal sync or Replace=true instead.
Refresh Application
# Refresh (re-compare git vs cluster)
argocd app get <app-name> --refresh --grpc-web
# Hard refresh (clear cache)
argocd app get <app-name> --hard-refresh --grpc-web
View Application Logs
# Sync logs
argocd app logs <app-name> --grpc-web
# Follow logs
argocd app logs <app-name> --follow --grpc-web
Git Repository Configuration
Repository Secret
Location: secrets/argocd-git-access.yaml (git-crypt encrypted)
Contains SSH private key for accessing the private homelab repository.
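ArgoCD discovers repository credentials from Secrets labeled argocd.argoproj.io/secret-type: repository. The encrypted file above likely follows this standard shape (sketch with a placeholder key, never a real one):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: homelab-repo          # name is illustrative
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: git@github.com:imcbeth/homelab.git
  sshPrivateKey: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    <placeholder - keep the real key git-crypt encrypted>
    -----END OPENSSH PRIVATE KEY-----
```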
Apply Secret:
kubectl apply -f secrets/argocd-git-access.yaml
Repository Connection
# List connected repositories
argocd repo list --grpc-web
# Add new repository (if needed)
argocd repo add git@github.com:imcbeth/homelab.git \
--ssh-private-key-path ~/.ssh/id_rsa --grpc-web
Troubleshooting
Application Won't Sync
Check sync status:
kubectl get application <name> -n argocd -o yaml | grep -A 20 "status:"
Common issues:
- Invalid YAML syntax in manifests
- Missing dependencies (e.g., storage class doesn't exist)
- Resource conflicts
- CRD not installed
Force refresh and sync:
argocd app get <name> --refresh --grpc-web
argocd app sync <name> --grpc-web
Out of Sync Resources
View diff:
argocd app diff <name> --grpc-web
Manual kubectl apply:
kubectl apply -f manifests/applications/<app-name>.yaml
Files in manifests/applications/ are NOT auto-deployed by ArgoCD self-management. After merging changes to Application manifests, you must run kubectl apply -f manifests/applications/<app>.yaml to update the Application spec in-cluster.
ServerSideApply Drift (ignoreDifferences)
When ServerSideApply=true is enabled, Kubernetes populates default values on resources that aren't in the Helm chart template (e.g., imagePullPolicy, revisionHistoryLimit, readiness probe defaults, dnsPolicy, restartPolicy, schedulerName, etc.). This causes perpetual OutOfSync in ArgoCD.
Solution: Add comprehensive ignoreDifferences with jqPathExpressions and enable RespectIgnoreDifferences=true:
spec:
  ignoreDifferences:
    - group: apps
      kind: DaemonSet
      jqPathExpressions:
        - .metadata.labels
        - .metadata.annotations
        - .spec.revisionHistoryLimit
        - .spec.template.spec.containers[].imagePullPolicy
        # ... all K8s-defaulted fields
  syncPolicy:
    syncOptions:
      - ServerSideApply=true
      - RespectIgnoreDifferences=true
Affected applications (as of 2026-02-05):
- istio-ztunnel - DaemonSet K8s-defaulted fields (PR #379, #380)
- tigera-operator - Installation CR operator-populated defaults (PR #381)
- kube-prometheus-stack - Grafana secret checksum drift
ArgoCD Pods Not Running
Check pod status:
kubectl get pods -n argocd
View logs:
kubectl logs -n argocd deployment/argocd-server
kubectl logs -n argocd statefulset/argocd-application-controller
kubectl logs -n argocd deployment/argocd-repo-server
Git Repository Connection Issues
Test SSH key:
ssh -T git@github.com
Verify repository secret:
kubectl get secret -n argocd | grep repo
Self-Healing Not Working
Check sync policy:
kubectl get application <name> -n argocd -o yaml | grep -A 5 "syncPolicy:"
Ensure selfHeal is enabled:
syncPolicy:
  automated:
    selfHeal: true
Updating ArgoCD
Helm Chart Updates
To update to a newer ArgoCD version:
- Check ArgoCD releases for changes
- Update targetRevision in manifests/applications/argocd.yaml
- Review the CHANGELOG for breaking changes
- Commit and push the change
- Apply the updated Application manifest with kubectl apply -f manifests/applications/argocd.yaml (files in manifests/applications/ are not auto-deployed); ArgoCD then upgrades itself
Example:
targetRevision: 9.5.0 # Update from 9.4.2
Configuration Changes
To modify ArgoCD settings:
- Edit manifests/base/argocd/argocd-config.yaml
- Commit and push
- ArgoCD syncs and applies the changes automatically
Best Practices
Application Organization
- Use descriptive application names
- Set appropriate sync waves
- Use projects for access control
- Document sync dependencies
Git Workflow
- Always use pull requests for main branch
- Test changes in feature branches
- Use meaningful commit messages
- Tag releases for rollback capability
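Tagging makes rollback a one-line change in the Application source: pin targetRevision to a tag instead of HEAD, then revert the line to roll back. Sketch (the tag name is hypothetical):

```yaml
sources:
  - path: manifests/base/argocd
    repoURL: git@github.com:imcbeth/homelab.git
    targetRevision: v2026.02.01   # hypothetical tag; revert this line to roll back
```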
Sync Policies
- Enable automated sync for stable applications
- Use manual sync for critical changes
- Enable prune for complete state management
- Use selfHeal for production stability
Monitoring
- Watch ArgoCD UI for sync failures
- Set up alerts for out-of-sync applications
- Monitor ArgoCD resource usage
- Review sync history regularly
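Since kube-prometheus-stack is already in the cluster, out-of-sync alerting can be driven by ArgoCD's argocd_app_info metric (assuming controller metrics and ServiceMonitors are enabled in the chart values). A sketch; the rule name, duration, and severity are illustrative:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: argocd-alerts
  namespace: argocd
spec:
  groups:
    - name: argocd
      rules:
        - alert: ArgoCDAppOutOfSync
          # Fires when an application has reported OutOfSync for 15 minutes
          expr: argocd_app_info{sync_status="OutOfSync"} == 1
          for: 15m
          labels:
            severity: warning
          annotations:
            summary: "ArgoCD application {{ $labels.name }} is OutOfSync"
```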
Resource Usage
Application Controller:
- CPU: ~100-200m under normal load
- Memory: ~256-512Mi
Repo Server:
- CPU: ~50-100m
- Memory: ~128-256Mi
API Server:
- CPU: ~50-100m
- Memory: ~128-256Mi
Total: Minimal overhead for powerful automation capabilities on the Raspberry Pi cluster.