Simulate Kubernetes Multi-Cluster Deployments Locally With ArgoCD: Part I

TL;DR

This write-up reflects my tinkering experience playing with GitOps on Kubernetes, using the ApplicationSet object with the Cluster Generator approach, together with Kind (Kubernetes in Docker), vCluster, and Argo CD itself.

With the trend of adopting Kubernetes for primary container workloads, startups, unicorns, and big enterprises sometimes treat multiple Kubernetes clusters as a fleet, i.e. a collection or group of Kubernetes clusters.

One of the main aspects of dealing with a fleet is having a way to manage things like application deployment and, possibly, some configuration management capabilities. Why? Because, in my personal opinion, a fleet follows the classic "cattle, not pets" analogy: rather than tending to each cluster individually, you manage them all from one source of truth and roll changes out to every cluster at once.

The ApplicationSet controller is the main resource that helps with application deployment and everything manifest-related across Kubernetes cluster configuration. It introduces the Generators feature, including the Cluster generator, which can push deployments and configurations to multiple Kubernetes clusters.
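
To give a feel for where this is heading, here is a minimal ApplicationSet sketch using the Cluster generator. The application name, repo URL, path, and the env: production selector are placeholder assumptions, not taken from my repo.

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: guestbook
  namespace: argocd
spec:
  generators:
    # the Cluster generator iterates over every cluster registered
    # in Argo CD whose labels match the selector
    - clusters:
        selector:
          matchLabels:
            env: production
  template:
    metadata:
      # {{name}} is substituted with each matched cluster's name
      name: 'guestbook-{{name}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/example/apps.git
        targetRevision: HEAD
        path: guestbook
      destination:
        # {{server}} resolves to the matched cluster's API server URL
        server: '{{server}}'
        namespace: guestbook

One Application gets stamped out per matched cluster, which is exactly the mechanism this playground builds towards.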

There are many ways to run a Kubernetes cluster locally, but in this case I will stick with Kind as the "management-cluster" and the place where Argo CD will be deployed. To simulate a multi-cluster approach, I will use vCluster to virtually replicate multiple clusters. It is really more of a multi-tenant setup, but I guess it works fine 😄. Last but worth mentioning, I will use MetalLB as the load-balancer tool to expose each vCluster API as a LoadBalancer service type, which makes setup life easier ✌️.

Simple High-Level View

graph LR;
  subgraph kubernetes cluster
    subgraph argocd-namespace
      argocd_ns[argocd]
    end
    subgraph vcluster
      style vcluster stroke-dasharray: 5 5
      vcluster_c1[c1 namespace]
      vcluster_c2[c2 namespace]
    end
    subgraph metallb-namespace
      metallb_1[metallb]
    end
  end
  trigger/commit-->git_rep[git repository]-->argocd_ns
  argocd_ns-- appset-dst-cluster -->vcluster
  metallb_1-.advertise-ip.->vcluster

The setup

At this point, I have already created a Makefile to simplify the process; the explanation is provided below. You can find everything in this Git repository: multicluster-play.

Prepare Kind Cluster

Preparing the Kind cluster spawns a Kind cluster, then installs some base dependencies: metrics-server (it will be very useful when playing with o11y in the Kind cluster in future playgrounds) and MetalLB, along with its configuration. The one thing to ensure is to check the Docker network used by Kind, so we can borrow part of that network to act as LoadBalancer external IPs. Kind usually creates a Docker network named "kind" on your host (cmiiw), and you can check its subnet as shown below.
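
A quick way to look up that subnet, using nothing but the plain Docker CLI:

# print the subnet(s) of the "kind" docker network
docker network inspect kind -f '{{range .IPAM.Config}}{{.Subnet}} {{end}}'

On my host the network sits on 172.18.0.0/16, so the example MetalLB pool below reserves IPs ranging from .50 to .60.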

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: example
  namespace: metallb-system
spec:
  addresses:
    # docker containers get sequential IPs, e.g.:
    # 172.18.0.1 == gateway
    # 172.18.0.2 == container app 1/kind-1
    # 172.18.0.3 == container app 2/kind-2, and so on...
    - 172.18.0.50-172.18.0.60
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: empty
  namespace: metallb-system

Installing Argo CD

Simply install it using the Helm package. At the time of writing, this article uses Argo CD v2.10. For those who want to see my values, you can check them under the bootstrap directory.

# makefile
make install_argocd

# or manually with
helm upgrade --install -n argocd --create-namespace argocd 0-bootstrap/argocd/ -f 0-bootstrap/argocd/values.yaml
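
Once the release is installed, a quick sanity check before moving on (plain kubectl, nothing repo-specific):

# confirm the Argo CD pods come up healthy
kubectl -n argocd get pods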

Spawn vCluster

For this part, ensure that you already have the vCluster CLI at the latest stable version (v0.19, since they are still developing v0.20, which is in beta). The vCluster values that I used are shown below.

vcluster:
  # idk why I still like using k8s v1.27 ¯\_(ツ)_/¯
  image: rancher/k3s:v1.27.11-k3s1

telemetry:
  disabled: "true"

Apply it with:

# makefile
make start_vcluster

# manually
vcluster create c1 -n c1 --connect=false -f vcluster-bootstrap/c1-values.yaml --expose
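
I spawn the second child cluster, c2, the same way; the c2-values.yaml path is my assumption of a values file analogous to c1's:

vcluster create c2 -n c2 --connect=false -f vcluster-bootstrap/c2-values.yaml --expose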

Register vCluster to Argo CD

This is the interesting part. There are two ways to add child clusters to Argo CD. First, we can add them via the Argo CD CLI, after logging in through the CLI. Second, we can add clusters with declarative Secret objects. I will describe both ways below.

Register with Argo CD CLI:

  1. Since I haven’t exposed Argo CD externally via an Ingress, we can expose it via port-forwarding, then grab the initial admin password:
    kubectl port-forward -n argocd svc/argocd-server 8080:80
    kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
    
    # makefile ways
    make forward_argocd
    make get_secret_argocd
    
  2. After grabbing the login password and exposing the server, you can log in via:
    argocd login localhost:8080 --username admin --plaintext --insecure
    
  3. Grab the kubeconfig for the c1 cluster from the vCluster CLI, so we can add it directly via its kubeconfig context.
    vcluster connect c1 -n c1
    
  4. Register the c1 cluster with the Argo CD CLI session you logged in with earlier.
    argocd cluster add vcluster_c1_c1_kind-kind-infra-mgmt
    
  5. Then you can check it via the Argo CD dashboard under Settings –> Clusters by opening localhost:8080 in a browser with the same credentials as above.
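
Besides the dashboard, you can also verify the registration from the CLI:

# list every cluster currently registered in Argo CD
argocd cluster list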

Register with Declarative Secret:

  1. Write a Secret object manifest that the Argo CD controller can discover by adding a metadata label. Here is an example for adding the c2 cluster.
    apiVersion: v1
    kind: Secret
    metadata:
      name: c2-cluster
      namespace: argocd
      labels:
        # important label! it ensures argocd can discover the cluster
        argocd.argoproj.io/secret-type: cluster
        # you can also add common labels like these
        # to help with querying/filtering later
        env: production
        project: lalayeye
    type: Opaque
    # using stringData for better readability #iykwim 😛
    stringData:
      # you can rename the cluster below; this name is the destination
      # the ApplicationSet will target later.
      # name: c2-cluster
      name: vcluster_c2_c2_kind-kind-infra-mgmt
      # in this example my c2 cluster got IP .51
      server: https://172.18.0.51
      config: |
        {
          "tlsClientConfig": {
             "insecure": false,
             "certData": "--- cluster.client-certificate-data ---",
             "keyData": "--- cluster.client-key-data ---",
             "caData": "--- cluster.certificate-authority-data ---"
          }
        }    
    
  2. Save it, apply it, and check it under the Argo CD dashboard. A sketch for extracting the certificate data from the vCluster kubeconfig follows below.
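
Filling in certData, keyData, and caData by hand is tedious, so here is a minimal sketch for pulling them out of the kubeconfig that vCluster generates. It assumes the vCluster CLI's --print flag and that yq (v4) is installed; the c2-kubeconfig.yaml file name is my own choice:

# dump the kubeconfig for the c2 virtual cluster
vcluster connect c2 -n c2 --print > c2-kubeconfig.yaml

# base64-encoded material for the Secret's tlsClientConfig block
yq '.users[0].user.client-certificate-data' c2-kubeconfig.yaml           # certData
yq '.users[0].user.client-key-data' c2-kubeconfig.yaml                   # keyData
yq '.clusters[0].cluster.certificate-authority-data' c2-kubeconfig.yaml  # caData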

(Not yet) Closing Thought

Well, while writing the Register vCluster to Argo CD section, I thought about splitting this into something like a series 😄. This Part I discussed the background of the multi-cluster deployment approach on Kubernetes with Argo CD, then explored vCluster as a "simulator" of "child clusters", and covered some basic Argo CD operations: logging in and adding the clusters that will serve as destinations under the ApplicationSet object in the next part.

Thank You.