Testing GitOps on Virtual Kubernetes Clusters with ArgoCD
Piotr's TechBlog, 29 Jun 2023
https://piotrminkowski.com/2023/06/29/testing-gitops-on-virtual-kubernetes-clusters-with-argocd/

In this article, you will learn how to test and verify the GitOps configuration managed by ArgoCD on virtual Kubernetes clusters. Assuming that we are fully managing the cluster in the GitOps way, it is crucial to verify each change in the Git repository before applying it to the target cluster. In order to test it, we need to provision a new Kubernetes cluster on demand. Fortunately, we may take advantage of virtual clusters using the Loft vcluster solution. In this approach, we are just "simulating" another cluster on the existing Kubernetes cluster. Once we are done with the tests, we can remove it. I have already introduced vcluster in one of my previous articles, about multicluster management with ArgoCD, available here.

Source Code

If you would like to try it by yourself, you can always take a look at my source code. In order to do that, you need to clone my GitHub repository. Then just follow my instructions.
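The configuration repository used throughout this article (the same one referenced later in the Application manifests) can be cloned as follows:

```shell
# Clone the configuration repository used in this article
git clone https://github.com/piomin/kubernetes-config-argocd.git
cd kubernetes-config-argocd
```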

Architecture

There are several branches in the Git configuration repository. However, ArgoCD automatically applies the configuration from the master branch. Before merging other branches, we want to test the current state of the repository on a newly created cluster. Thanks to that, we can be sure that we won't break anything on the main cluster. Moreover, we will make sure it is possible to apply the current configuration to any new, real cluster.

The virtual cluster creation process is triggered by Loft. Once we create a new virtual Kubernetes cluster, Loft adds it as a managed cluster to ArgoCD (thanks to the integration between Loft and ArgoCD). The name of the virtual cluster should be the same as the name of the tested branch in the configuration repository. When ArgoCD detects a new cluster, it automatically creates an Application to manage it. This is possible thanks to the ApplicationSet and its cluster generator. Then the Application automatically synchronizes the Git repository from the selected branch into the target vcluster. Here's the diagram that illustrates our architecture.

[Figure: architecture of testing GitOps on virtual Kubernetes clusters with ArgoCD]

Install ArgoCD on Kubernetes

In the first step, we are going to install ArgoCD on the management Kubernetes cluster. We can do it using the latest version of the argo-cd Helm chart available in the repository. Assuming you have the Helm CLI installed on your laptop, add the argo-helm repository with the following command:

$ helm repo add argo https://argoproj.github.io/argo-helm

Then you can install ArgoCD in the argocd namespace by executing the following command:

$ helm install argocd argo/argo-cd -n argocd --create-namespace

In order to access the ArgoCD UI outside Kubernetes, we can configure an Ingress object or enable port forwarding as shown below:

$ kubectl port-forward service/argocd-server -n argocd 8080:443

Now, the UI is available under the https://localhost:8080 address. You also need to obtain the admin password:

$ kubectl -n argocd get secret argocd-initial-admin-secret \
    -o jsonpath="{.data.password}" | base64 -d
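The decoding step in the command above is plain base64. In isolation it works like this (the encoded value below is a made-up example for illustration; the real password is generated randomly during installation):

```shell
# Decode a base64-encoded secret value the same way kubectl's output
# is decoded above (the encoded string is a made-up example)
ENCODED="c3VwZXJzZWNyZXQ="
printf '%s' "$ENCODED" | base64 -d
```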

Install Loft vcluster on Kubernetes

In the next step, we are going to install Loft vcluster on Kubernetes. If you just want to create and manage virtual clusters, you can simply install the vcluster CLI on your laptop; here's the documentation with the installation instructions. However, with the loft CLI we can also install the web UI on Kubernetes and take advantage of features such as the built-in integration with ArgoCD. Here are the installation instructions. Once you install the CLI on your laptop, you can use the loft command to install Loft on your Kubernetes cluster:

$ loft start

Here’s the result screen after the installation:

If your installation finished successfully you should have two pods running in the loft namespace:

$ kubectl get po -n loft
NAME                        READY   STATUS    RESTARTS   AGE
loft-77875f8946-xp8v2       1/1     Running   0          5h27m
loft-agent-58c96f88-z6bzw   1/1     Running   0          5h25m

The Loft UI is available under the https://localhost:9898 address. We can log in there and create our first project:

[Figure: creating the first project in the Loft UI]

For that project, we have to enable the integration with our instance of ArgoCD. Go to "Project Settings" and select the "Argo CD" menu item. Then click "Enable Argo CD Integration". Our instance of ArgoCD is running in the argocd namespace. Don't forget to save the changes.

As you can see, we also need to configure the loftHost in the Admin > Config section. It should point to the internal address of the loft Kubernetes Service from the loft namespace.

Creating Configuration for ArgoCD with Helm

In today's exercise, we will install cert-manager on Kubernetes and then use its CRDs to create issuers and certificates. In order to be able to run it on any cluster, we will create simple, parametrized templates with Helm. Here's the structure of our configuration repository:

├── apps
│   └── cert-manager
│       ├── certificates.yaml
│       └── cluster-issuer.yaml
└── bootstrap
    ├── Chart.yaml
    ├── templates
    │   ├── cert-manager.yaml
    │   └── projects.yaml
    └── values.yaml
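The bootstrap directory is a regular Helm chart, so its Chart.yaml is just standard chart metadata. A minimal version might look like this (the exact name, version, and description are assumptions, not taken from the repository):

```yaml
apiVersion: v2
name: bootstrap
version: 0.1.0
description: Bootstrap chart generating the ArgoCD AppProject and Applications
```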

Let's analyze the content stored in this configuration repository. Since we create a virtual cluster per repository branch, each of them uses a dedicated ArgoCD Project.

apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: {{ .Values.project }}
  namespace: argocd
  annotations:
    argocd.argoproj.io/sync-wave: "1"
spec:
  clusterResourceWhitelist:
    - group: '*'
      kind: '*'
  destinations:
    - name: '*'
      namespace: '*'
      server: '*'
  sourceRepos:
    - '*'

The next activities are based on the ArgoCD "app of apps" pattern. Our first Application installs cert-manager using the official Helm chart (1). The destination Kubernetes cluster (2) and the ArgoCD project (3) are set using Helm parameters.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cert-manager
  namespace: argocd
  annotations:
    argocd.argoproj.io/sync-wave: "2"
spec:
  syncPolicy:
    automated: {}
    syncOptions:
      - CreateNamespace=true
  source:
    repoURL: 'https://charts.jetstack.io'
    targetRevision: v1.12.2
    chart: cert-manager # (1)
  destination:
    namespace: cert-manager
    server: {{ .Values.server }} # (2)
  project: {{ .Values.project }} # (3)

Our second ArgoCD Application is responsible for applying the cert-manager CRD objects from the apps directory. We parametrize not only the target cluster and the ArgoCD Project, but also the source branch in the configuration repository (1).

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: certs-config
  namespace: argocd
  annotations:
    argocd.argoproj.io/sync-wave: "3"
spec:
  destination:
    namespace: certs
    server: {{ .Values.server }}
  project: {{ .Values.project }}
  source:
    path: apps/cert-manager
    repoURL: https://github.com/piomin/kubernetes-config-argocd.git
    targetRevision: {{ .Values.project }} # (1)
  syncPolicy:
    automated: {}
    syncOptions:
      - CreateNamespace=true

In the apps directory, we are storing cert-manager CRDs. Here’s the ClusterIssuer object:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: ss-clusterissuer
spec:
  selfSigned: {}

Here's the CRD object responsible for generating a certificate. It refers to the previously shown ClusterIssuer object.

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: secure-caller-cert
spec:
  keystores:
    jks:
      passwordSecretRef:
        name: jks-password-secret
        key: password
      create: true
  issuerRef:
    name: ss-clusterissuer
    group: cert-manager.io
    kind: ClusterIssuer
  privateKey:
    algorithm: ECDSA
    size: 256
  dnsNames:
    - localhost
    - secure-caller
  secretName: secure-caller-cert
  commonName: localhost
  duration: 1h
  renewBefore: 5m

Create ArgoCD ApplicationSet for Virtual Clusters

Assuming we have already prepared the configuration in the Git repository, we may proceed to the ArgoCD settings. We will create an ArgoCD ApplicationSet with the cluster generator. It loads all the remote clusters managed by ArgoCD and creates a corresponding Application for each of them (1). It uses the name of the vcluster to generate the name of the ArgoCD Application (2). The name of the virtual cluster is generated by Loft during the creation process: Loft automatically adds some prefixes to the name we set when creating a new virtual Kubernetes cluster in the Loft UI. The proper name, without any prefixes, is stored in the loft.sh/vcluster-instance-name label of the ArgoCD cluster Secret. We will use the value of that label to set the name of the source branch (3), as well as the Helm params in the "app of apps" pattern.

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: config
  namespace: argocd
spec:
  generators:
    - clusters: # (1)
        selector:
          matchLabels:
            argocd.argoproj.io/secret-type: cluster
  template:
    metadata:
      name: '{{name}}-config-test' # (2)
    spec:
      destination:
        namespace: argocd
        server: https://kubernetes.default.svc
      project: default
      source:
        helm:
          parameters:
            - name: server
              value: '{{server}}'
            - name: project 
              value: '{{metadata.labels.loft.sh/vcluster-instance-name}}'
        path: bootstrap
        repoURL: https://github.com/piomin/kubernetes-config-argocd.git
        # (3)
        targetRevision: '{{metadata.labels.loft.sh/vcluster-instance-name}}'
      syncPolicy:
        automated: {}
        syncOptions:
          - CreateNamespace=true

Let's assume we are testing the initial version of our configuration in the abc branch before merging it to the master branch. In order to do that, we need to create a virtual cluster with the same name as the branch: abc.
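In the article the cluster is created through the Loft UI, but assuming the vcluster CLI is installed, the equivalent step could be sketched like this (illustrative only):

```shell
# Create a virtual cluster named after the tested branch (illustrative)
vcluster create abc --connect=false
```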

After creating a cluster, we need to enable integration with ArgoCD.

Then, Loft adds a new cluster to the instance of ArgoCD set in our project settings.

The ArgoCD ApplicationSet detects a new managed cluster and creates a dedicated app for managing it. This app automatically synchronizes the configuration from the abc branch. As a result, there is also one Application responsible for installing cert-manager using the Helm chart and another one for applying the cert-manager CRD objects. As you can see, it doesn't work properly…

[Figure: ArgoCD Applications generated for the virtual cluster]

Let's see what happened. We didn't install the CRDs together with cert-manager.

[Figure: ArgoCD sync errors caused by the missing cert-manager CRDs]

There is also a problem with the ephemeral storage used by cert-manager. With this configuration, a single pod deployed by cert-manager consumes all the ephemeral storage available on the node.

[Figure: cert-manager pods evicted due to ephemeral storage usage]

We will fix those problems in the configuration of the cert-manager Helm chart. In order to install the CRDs, we need to set the installCRDs Helm parameter to true. Since I'm running cert-manager locally, I also have to set limits on ephemeral storage usage for several components. Here's the final configuration. You will find this version in the b branch of the Git repository.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cert-manager
  namespace: argocd
  annotations:
    argocd.argoproj.io/sync-wave: "2"
spec:
  syncPolicy:
    automated: {}
    syncOptions:
      - CreateNamespace=true
  destination:
    namespace: cert-manager
    server: {{ .Values.server }}
  project: {{ .Values.project }}
  source:
    repoURL: 'https://charts.jetstack.io'
    targetRevision: v1.12.2
    chart: cert-manager
    helm:
      parameters:
        - name: installCRDs
          value: 'true'
        - name: webhook.limits.ephemeral-storage
          value: '500Mi'
        - name: cainjector.enabled
          value: 'false'
        - name: startupapicheck.limits.ephemeral-storage
          value: '500Mi'
        - name: resources.limits.ephemeral-storage
          value: '500Mi'

So, now let's create another virtual Kubernetes cluster named b.

[Figure: the b virtual cluster in the Loft UI]

There were also some other minor problems, but I tested and fixed them in the b branch. Each fix got immediate verification, since ArgoCD synchronized the Git branch to the target virtual Kubernetes cluster.

Finally, I achieved the desired state on the virtual Kubernetes cluster. Now, I’m safe to merge the changes into the master branch and apply them to the main cluster 🙂

Here we go 🙂

[Figure: the pull request merging the tested branch into master]

Final Thoughts

If you are using the GitOps approach to manage your whole Kubernetes cluster, testing updated configuration becomes very important. With Kubernetes virtual clusters, we may simplify the process of testing configuration managed by ArgoCD. Loft provides built-in integration with ArgoCD. We can easily define multiple projects and multiple ArgoCD instances for managing different aspects of the Kubernetes cluster.

Manage Multiple Kubernetes Clusters with ArgoCD
Piotr's TechBlog, 09 Dec 2022
https://piotrminkowski.com/2022/12/09/manage-multiple-kubernetes-clusters-with-argocd/

In this article, you will learn how to deploy the same app across multiple Kubernetes clusters with ArgoCD. In order to easily test the solution, we will run several virtual Kubernetes clusters on a single management cluster with the vcluster tool. Since this is the first article where I'm using vcluster, I'm going to do a quick introduction in the next section. As usual, we will use Helm for installing the required components and creating an app template. I will also show you how we can leverage Kyverno in this scenario. But first things first: let's discuss our architecture for the current article.

Introduction

If I want to easily test a scenario with multiple Kubernetes clusters, I usually use kind for that. You can find examples in some of my previous articles. For example, here is the article about Cilium cluster mesh. Or another one about mirroring traffic between multiple clusters with Istio. This time I'm going to try a slightly different solution – vcluster. It allows us to run virtual Kubernetes clusters inside the namespaces of another cluster. Those virtual clusters have a separate API server and a separate data store. We can easily interact with them the same way as with "real" clusters, through the Kube context on the local machine. The vcluster and all of its workloads are hosted in a single underlying host namespace. Once we delete that namespace, we remove the whole virtual cluster with all its workloads.

How may vcluster help in our exercise? First of all, it creates all the resources on the "hosting" Kubernetes cluster. There is a dedicated namespace that contains a Secret with a certificate and private key. Based on that Secret, we can automatically add a newly created cluster to the clusters managed by Argo CD. I'll show you how we can leverage a Kyverno ClusterPolicy for that. It will trigger on Secret creation in the virtual cluster namespace, and then generate a new Secret in the Argo CD namespace containing the cluster details.

Here is the diagram that illustrates our architecture. ArgoCD manages multiple Kubernetes clusters and deploys the app across those clusters using the ApplicationSet object. Once a new cluster is created, it is automatically included in the list of clusters managed by Argo CD. This is possible thanks to the Kyverno policy that generates a new Secret with the argocd.argoproj.io/secret-type: cluster label in the argocd namespace.

[Figure: architecture of managing multiple Kubernetes clusters with ArgoCD, vcluster, and Kyverno]

Prerequisites

Of course, you need to have a Kubernetes cluster. In this exercise, I'm using Kubernetes on Docker Desktop, but you can as well use any other local distribution like minikube, or a cloud-hosted instance. No matter which distribution you choose, you also need to have:

  1. Helm CLI – used to install Argo CD, Kyverno, and vcluster on the "hosting" Kubernetes cluster
  2. vcluster CLI – used to interact with virtual Kubernetes clusters. We can also use it to create a virtual cluster, although we can do that directly with the Helm chart as well. The vcluster CLI installation instructions are available here.

Running Virtual Clusters on Kubernetes

Let's create our first virtual cluster on Kubernetes. In this approach, we can use the vcluster create command. Additionally, we need to sign the cluster certificate for the internal DNS name consisting of the name of the Service and the target namespace. Assuming that the name of the cluster is vc1, the default namespace name is vcluster-vc1. Therefore, the API server certificate should be signed for the vc1.vcluster-vc1 domain. Here is the appropriate values.yaml file that overrides the default chart properties.

syncer:
  extraArgs:
  - --tls-san=vc1.vcluster-vc1

Then, we can install the first virtual cluster in the vcluster-vc1 namespace. By default, vcluster uses the k3s distribution (to decrease resource consumption), so we will switch to vanilla k8s using the distro parameter:

$ vcluster create vc1 --upgrade --connect=false \
  --distro k8s \
  -f values.yaml 
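For a multi-cluster setup we will need more than one such cluster, and the creation step can be repeated in a loop. Here is a sketch; generating one values file per cluster inline is my own convenience, not part of the original repo:

```shell
# Sketch: create three virtual clusters, each with a TLS SAN matching
# its in-cluster DNS name (<name>.vcluster-<name>). The generated
# values-<name>.yaml file names are illustrative.
for name in vc1 vc2 vc3; do
  cat > "values-${name}.yaml" <<EOF
syncer:
  extraArgs:
  - --tls-san=${name}.vcluster-${name}
EOF
  vcluster create "$name" --upgrade --connect=false \
    --distro k8s \
    -f "values-${name}.yaml"
done
```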

We need to create another two virtual clusters, named vc2 and vc3. So you should repeat the same steps using a values.yaml and a vcluster create command dedicated to each of them. After completing the required steps, we can display the list of running virtual clusters:

[Figure: the list of running virtual clusters]

Each cluster has a dedicated namespace that contains all the pods required by the k8s distribution.

$ kubectl get pod -n vcluster-vc1
NAME                                           READY   STATUS    RESTARTS   AGE
coredns-586cbcd49f-pkn5q-x-kube-system-x-vc1   1/1     Running   0          20m
vc1-7985c794d6-7pqln                           1/1     Running   0          21m
vc1-api-6564bf7bbf-lqqxv                       1/1     Running   0          39s
vc1-controller-9f98c7f9c-87tqb                 1/1     Running   0          23s
vc1-etcd-0                                     1/1     Running   0          21m

Now, we can switch to the newly created Kube context using the vcluster connect command. Under the hood, vcluster creates a Kube context with the vcluster_vc1_vcluster-vc1_docker-desktop name and exposes the API outside of the cluster using a NodePort Service.

For example, we can display the list of namespaces. As you can see, it is different from the list on the "hosting" cluster.

$ kubectl get ns   
NAME              STATUS   AGE
default           Active   25m
kube-node-lease   Active   25m
kube-public       Active   25m
kube-system       Active   25m

In order to switch back to the “hosting” cluster just run the following command:

$ vcluster disconnect

Installing Argo CD on Kubernetes

In the next step, we will install Argo CD on Kubernetes. To do that, we will use an official Argo CD Helm chart. First, let’s add the following Helm repo:

$ helm repo add argo https://argoproj.github.io/argo-helm

Then we can install the latest version of Argo CD in the selected namespace. For us, it is the argocd namespace.

$ helm install argocd argo/argo-cd -n argocd --create-namespace

After a while, Argo CD should be installed. We will use the UI dashboard to interact with Argo CD, so let's expose it outside the cluster using the port-forward command for the argocd-server Service. After that, we can access the dashboard under the local port 8080:

$ kubectl port-forward svc/argocd-server 8080:80 -n argocd

The default username is admin. The ArgoCD Helm chart generates the password automatically during the installation. You will find it inside the argocd-initial-admin-secret Secret.

$ kubectl get secret argocd-initial-admin-secret \
  --template="{{.data.password}}" \
  -n argocd | base64 -d

Automatically Adding Argo CD Clusters with Kyverno

The main goal here is to automatically add a newly created virtual Kubernetes cluster to the clusters managed by Argo CD. Argo CD stores the details about each managed cluster inside a Kubernetes Secret labeled with argocd.argoproj.io/secret-type: cluster. On the other hand, vcluster stores the cluster credentials in a Secret inside the namespace dedicated to the particular cluster. The name of that Secret is the name of the cluster prefixed with vc-. For example, the Secret name for the vc1 cluster is vc-vc1.
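You can verify this naming convention on the hosting cluster; among the data keys you should find the certificate-authority, client-certificate, and client-key entries that the policy below reads:

```shell
# Inspect the credentials Secret created by vcluster for the vc1 cluster
kubectl get secret vc-vc1 -n vcluster-vc1 -o yaml
```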

Probably, there are several ways to achieve the goal described above. However, for me, the simplest one is a Kyverno ClusterPolicy. Kyverno can not only validate resources, it can also create additional resources when a resource is created or updated. Before we start, we need to install Kyverno on Kubernetes. As usual, we will use a Helm chart for that. First, let's add the required Helm repository:

$ helm repo add kyverno https://kyverno.github.io/kyverno/

Then, we can install it for example in the kyverno namespace with the following command:

$ helm install kyverno kyverno/kyverno -n kyverno --create-namespace

That's all – we may create our Kyverno policy. Let's discuss the ClusterPolicy fields step by step. By default, the policy is not applied to resources that already exist when it is installed. To change this behavior, we need to set the generateExistingOnPolicyUpdate parameter to true (1). Now it will also be applied to existing resources (our virtual clusters are already running). The policy triggers for any existing or newly created Secret with a name starting with vc- (2). It sets several variables using the context field (3).

The policy has access to the source Secret fields, so it is able to get the API server CA (4), the client certificate (5), and the private key (6). Finally, it generates a new Secret with the same name as the cluster name (8). We can derive the name of the cluster from the namespace of the source Secret (7). The generated Secret should contain the label argocd.argoproj.io/secret-type: cluster (10) and should be placed in the argocd namespace (9). We fill all the required fields of the Secret using variables (11). ArgoCD can access the vcluster internally using the Kubernetes Service with the same name as the vcluster (12).

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: sync-secret
spec:
  generateExistingOnPolicyUpdate: true # (1)
  rules:
  - name: sync-secret
    match:
      any:
      - resources: # (2)
          names:
          - "vc-*"
          kinds:
          - Secret
    exclude:
      any:
      - resources:
          namespaces:
          - kube-system
          - default
          - kube-public
          - kyverno
    context: # (3)
    - name: namespace
      variable:
        value: "{{ request.object.metadata.namespace }}"
    - name: name
      variable:
        value: "{{ request.object.metadata.name }}"
    - name: ca # (4)
      variable: 
        value: "{{ request.object.data.\"certificate-authority\" }}"
    - name: cert # (5)
      variable: 
        value: "{{ request.object.data.\"client-certificate\" }}"
    - name: key # (6)
      variable: 
        value: "{{ request.object.data.\"client-key\" }}"
    - name: vclusterName # (7)
      variable:
        value: "{{ replace_all(namespace, 'vcluster-', '') }}"
        jmesPath: 'to_string(@)'
    generate:
      kind: Secret
      apiVersion: v1
      name: "{{ vclusterName }}" # (8)
      namespace: argocd # (9)
      synchronize: true
      data:
        kind: Secret
        metadata:
          labels:
            argocd.argoproj.io/secret-type: cluster # (10)
        stringData: # (11)
          name: "{{ vclusterName }}"
          server: "https://{{ vclusterName }}.{{ namespace }}:443" # (12)
          config: |
            {
              "tlsClientConfig": {
                "insecure": false,
                "caData": "{{ ca }}",
                "certData": "{{ cert }}",
                "keyData": "{{ key }}"
              }
            }

Once you have created the policy, you can display its status with the following command:

$ kubectl get clusterpolicy
NAME          BACKGROUND   VALIDATE ACTION   READY
sync-secret   true         audit             true

Finally, you should see the three corresponding Secrets (vc1, vc2, and vc3) inside the argocd namespace.
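A quick way to check this is to filter the Secrets by the label the policy sets:

```shell
# List the cluster Secrets generated by the Kyverno policy for Argo CD
kubectl get secret -n argocd \
  -l argocd.argoproj.io/secret-type=cluster
```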

Deploy the App Across Multiple Kubernetes Clusters with ArgoCD

We can easily deploy the same app across multiple Kubernetes clusters with the ArgoCD ApplicationSet object. The ApplicationSet controller is automatically installed by the ArgoCD Helm chart, so we don't have to do anything additional to use it. ApplicationSet does a very simple thing: based on the defined criteria, it generates several ArgoCD Applications. There are several types of criteria (generators) available. One of them is based on the list of Kubernetes clusters managed by ArgoCD.

In order to create an Application per managed cluster, we need to use the "Cluster Generator". The ApplicationSet visible below automatically uses all clusters managed by ArgoCD (1). It provides several parameter values to the Application template. We can use them to generate a unique name (2) or set the target cluster (4). In this exercise, we will deploy a simple Spring Boot app that exposes some endpoints over HTTP. The configuration is stored in the following GitHub repo inside the apps/simple path (3). The target namespace name is demo (5). The app is synchronized automatically with the configuration stored in the Git repo (6).

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: sample-spring-boot
  namespace: argocd
spec:
  generators:
  - clusters: {} # (1)
  template:
    metadata:
      name: '{{name}}-sample-spring-boot' # (2)
    spec:
      project: default
      source: # (3)
        repoURL: https://github.com/piomin/openshift-cluster-config.git
        targetRevision: HEAD
        path: apps/simple
      destination:
        server: '{{server}}' # (4)
        namespace: demo # (5)
      syncPolicy: # (6)
        automated:
          selfHeal: true
        syncOptions:
          - CreateNamespace=true

Let's switch to the ArgoCD dashboard. We have four clusters managed by ArgoCD: three virtual clusters and the single "real" cluster, in-cluster.

[Figure: the list of clusters managed by ArgoCD]

Therefore, you should have four ArgoCD Applications generated and automatically synchronized. It means that our Spring Boot app is currently running on all the clusters.

[Figure: the ArgoCD Applications generated by the ApplicationSet]

Let’s connect with the vc1 virtual cluster:

$ vcluster connect vc1

We can display the list of running pods inside the demo namespace. Of course, you can repeat the same steps for the other two virtual clusters.
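With the vc1 context active, listing the pods might look like this (pod names will differ in your environment):

```shell
# List the app pods deployed by ArgoCD into the virtual cluster
kubectl get pod -n demo
```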

We can access the app's HTTP endpoints through the Kubernetes Service just by running the following command:

$ kubectl port-forward svc/sample-spring-kotlin 8080:8080 -n demo

The app exposes Swagger UI with the list of available endpoints. You can access it under the /swagger-ui.html path.

Final Thoughts

In this article, I focused on simplifying deployment across multiple Kubernetes clusters as much as possible. We deployed our sample app across all running clusters using a single ApplicationSet CRD. We were able to automatically add managed clusters with a Kyverno policy. Finally, we performed the whole exercise using a single "real" cluster, which hosted several virtual Kubernetes clusters created with the vcluster tool. There is also a very interesting solution dedicated to a similar challenge, based on OpenShift GitOps and Advanced Cluster Management for Kubernetes. You can read more about it in my previous article.
