Set up ArgoCD using Terraform and the ALB Controller


This is a guide to getting started with ArgoCD using Terraform and the AWS Load Balancer Controller as the ingress. You can see my guide here on how to set up and use the ALB controller. I’m going to go through the Terraform code and a demo app to deploy to your K8s cluster through ArgoCD. All the referenced Terraform code can be obtained here.

These are the providers we’ll be using in the environment. You may need to adjust how the helm and kubectl providers obtain the cluster endpoint, CA certificate, and token for your environment.

Providers/Versions

providers.tf

locals {
  env    = "sandbox"
  region = "us-east-1"
}

provider "aws" {
  region = local.region
  default_tags {
    tags = {
      env       = local.env
      terraform = true
    }
  }
}

provider "helm" {
  kubernetes {
    host                   = module.eks-cluster.endpoint
    cluster_ca_certificate = base64decode(module.eks-cluster.certificate)
    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      # This requires the awscli to be installed locally where Terraform is executed
      args        = ["eks", "get-token", "--cluster-name", module.eks-cluster.name]
      command     = "aws"
    }
  }
}

provider "kubectl" {
  apply_retry_count      = 5
  host                   = module.eks-cluster.endpoint
  cluster_ca_certificate = base64decode(module.eks-cluster.certificate)
  load_config_file       = false

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    # This requires the awscli to be installed locally where Terraform is executed
    args = ["eks", "get-token", "--cluster-name", module.eks-cluster.name]
  }
}
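
The module.eks-cluster outputs above (endpoint, certificate, name) are specific to my EKS module. If your cluster isn’t managed in the same Terraform state, a data-source-based lookup is a reasonable substitute; this is a minimal sketch, assuming a hypothetical cluster named "sandbox":

# Look up an existing EKS cluster instead of referencing module outputs
data "aws_eks_cluster" "this" {
  name = "sandbox" # hypothetical cluster name
}

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.this.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      command     = "aws"
      args        = ["eks", "get-token", "--cluster-name", data.aws_eks_cluster.this.name]
    }
  }
}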

versions.tf

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    kubectl = {
      source  = "alekc/kubectl"
      version = "~> 2.0.3"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "~> 2.11.0"
    }
  }
  required_version = "~> 1.5.7"
}

Module

Initialize the module where needed. Here we’re installing ArgoCD to your K8s cluster through Helm and providing a values file through the templatefile function so we can have variable substitution. In this demo, I’m using a public LB; however, if possible, stick it behind an internal LB with access over VPN.

1module "argocd" {
2  source            = "../../modules/argocd"
3  name              = "argocd"
4  env               = local.env
5  region            = local.region
6  argocd_version    = "3.35.4"
7  loadbalancer_dns  = module.public_loadbalancer.dns_name
8  fqdn              = "argocd.sandbox.demo"
9}

Module files

main.tf

 1resource "helm_release" "argocd" {
 2  namespace        = "argocd"
 3  create_namespace = true
 4  name             = "argo-cd"
 5  repository       = "https://argoproj.github.io/argo-helm"
 6  chart            = "argo-cd"
 7  version          = var.argocd_version
 8  values = ["${templatefile("../../modules/argocd/files/values.yaml", {
 9    ENV     = var.env
10    FQDN    = var.fqdn
11    LB_NAME = "${var.env}-public-application"
12  })}"]
13}

In this values file, we’re running a basic HA setup and using the ALB controller for the ingress. This example uses a shared LB by setting the “group.name” annotation and creates two LB rules for HTTP and gRPC traffic, as recommended by the ArgoCD docs when using an ALB. Also take note of the node affinity to my core node group, since we don’t want these pods shifted to nodes managed by Karpenter.

values.yaml

redis-ha:
  enabled: true
controller:
  replicas: 1
server:
  replicas: 2
  ingress:
    enabled: true
    ingressClassName: alb
    hosts:
      - ${FQDN}
    annotations:
      alb.ingress.kubernetes.io/backend-protocol: HTTPS
      alb.ingress.kubernetes.io/group.name: ${LB_NAME}
      alb.ingress.kubernetes.io/healthcheck-interval-seconds: "30"
      alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
      alb.ingress.kubernetes.io/load-balancer-attributes: routing.http2.enabled=true
      alb.ingress.kubernetes.io/load-balancer-name: ${LB_NAME}
      alb.ingress.kubernetes.io/scheme: internet-facing
      alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-FS-1-2-2019-08
      alb.ingress.kubernetes.io/tags: "env=${ENV},terraform=true"
      alb.ingress.kubernetes.io/target-type: ip
  ingressGrpc:
    enabled: true
    isAWSALB: true
    awsALB:
      serviceType: ClusterIP
      backendProtocolVersion: GRPC
    hosts:
      - ${FQDN}
repoServer:
  replicas: 2
applicationSet:
  replicas: 2
global:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: role
            operator: In
            values:
            - core
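
Once the chart is applied, you can check that the ALB controller picked up both ingresses and joined them to the shared group. The resource names depend on the Helm release name (argo-cd in this example), so adjust as needed:

# List the ingresses created by the chart (HTTP and gRPC)
kubectl get ingress -n argocd

# Inspect the annotations and the ALB address assigned by the controller
kubectl describe ingress -n argocd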

Change this to match your DNS provider; this example uses a Cloudflare record.

dns.tf

1resource "cloudflare_record" "argocd" {
2  zone_id         = "your_zone_id"
3  name            = "argocd.${var.env}"
4  value           = var.loadbalancer_dns
5  type            = "CNAME"
6  ttl             = 3600
7  allow_overwrite = true
8}
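
One caveat: versions.tf above only pins the aws, kubectl, and helm providers. If you keep this Cloudflare record, the module also needs the Cloudflare provider declared; a minimal sketch, assuming the v4 provider (which still uses the "value" attribute):

terraform {
  required_providers {
    cloudflare = {
      source  = "cloudflare/cloudflare"
      version = "~> 4.0" # assumed constraint; match your environment
    }
  }
}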

variables.tf

 1variable "argocd_version" {
 2  type = string
 3}
 4variable "env" {
 5  type = string
 6}
 7variable "fqdn" {
 8  type = string
 9}
10variable "loadbalancer_dns" {
11  type = string
12}
13variable "name" {
14  type = string
15}
16variable "region" {
17  type = string
18}

Demo App

Once it’s installed to your K8s cluster, you should be able to reach the ArgoCD login page. The admin password is generated during the install and saved to a K8s secret, which can be retrieved with the command below. For security, it’s recommended to delete the secret once you have the password.

kubectl get secret argocd-initial-admin-secret -n argocd -o jsonpath='{.data.password}' | base64 --decode
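
Once you’ve stored the password somewhere safe (or changed it), the secret can be removed:

kubectl delete secret argocd-initial-admin-secret -n argocd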

This demo app is a Helm chart in a GitHub repo, and we’re going to use Terraform to apply the Application manifest for ArgoCD to manage.

main.tf

1resource "kubectl_manifest" "argocd_app" {
2  yaml_body = templatefile("../../modules/guestbook/files/app_manifest.yaml", {
3    ENV = var.env
4  })
5}
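
Note that this manifest can only apply once the ArgoCD CRDs exist in the cluster. If the demo app and ArgoCD live in the same Terraform configuration, an explicit dependency on the module call avoids ordering problems; a sketch, assuming a hypothetical guestbook module wrapping the resource above:

module "guestbook" {
  source = "../../modules/guestbook"
  env    = local.env

  # Ensure ArgoCD (and its Application CRD) is installed first
  depends_on = [module.argocd]
}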

app_manifest.yaml

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps.git
    path: helm-guestbook
    targetRevision: HEAD
  destination:
    server: "https://kubernetes.default.svc"
    namespace: ${ENV}
  syncPolicy:
    automated:
      prune: false
      selfHeal: true
    syncOptions:
      - CreateNamespace=false
      - Validate=true

After you apply it via Terraform, you should see the app in the ArgoCD dashboard almost immediately. ArgoCD will then install it to the local K8s cluster and sync any changes made in the project repo.
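
If you’d rather verify from the CLI than the dashboard, the Application status is visible through kubectl as well:

# Check sync and health status of the demo app
kubectl get application guestbook -n argocd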