kubernetes multicluster Archives - Piotr's TechBlog
https://piotrminkowski.com/tag/kubernetes-multicluster/

OpenShift Multicluster with Advanced Cluster Management for Kubernetes and Submariner
https://piotrminkowski.com/2024/01/15/openshift-multicluster-with-advanced-cluster-management-for-kubernetes-and-submariner/
Mon, 15 Jan 2024 08:55:03 +0000

This article will teach you how to connect multiple OpenShift clusters with Submariner and Advanced Cluster Management for Kubernetes. Submariner allows us to configure direct networking between pods and services in different Kubernetes clusters, whether on-premises or in the cloud. It operates at the L3 layer, establishing a secure tunnel between clusters and providing service discovery. I have already described how to install and manage it on Kubernetes with the subctl CLI in the following article.

Today we will focus on the integration between Submariner and OpenShift through Advanced Cluster Management for Kubernetes (ACM). ACM is a tool dedicated to OpenShift. It allows you to control clusters and applications from a single console, with built-in security policies. You can find several articles about it on my blog. For example, the following one describes how to use ACM together with Argo CD in the GitOps approach.

Source Code

This time we won’t work much with source code. However, if you would like to try it yourself, you can always take a look at my source code. In order to do that, you need to clone my GitHub repository. After that, you should follow my further instructions.

Architecture

Our architecture consists of three OpenShift clusters: a single hub cluster and two managed clusters. The hub cluster is responsible for creating the managed clusters and establishing a secure connection between them using Submariner. So, in the initial state, there is just a hub cluster with the Advanced Cluster Management for Kubernetes (ACM) operator installed on it. With ACM we will create two new OpenShift clusters on the target infrastructure (AWS) and install Submariner on them. Finally, we are going to deploy two sample Spring Boot apps. The callme-service app exposes a single GET /callme/ping endpoint and runs on ocp2. We will expose it through Submariner to the ocp1 cluster. On the ocp1 cluster, there is a second app, caller-service, that invokes the endpoint exposed by the callme-service app. Here’s the diagram of our architecture.

openshift-submariner-arch

Install Advanced Cluster Management on OpenShift

In the first step, we must install the Advanced Cluster Management for Kubernetes (ACM) on OpenShift using an operator. The default installation namespace is open-cluster-management. We won’t change it.

Once the operator is installed, we need to initialize ACM by creating the MultiClusterHub object, once again in the open-cluster-management namespace. Here’s the object declaration. We don’t need to specify any more advanced settings.

apiVersion: operator.open-cluster-management.io/v1
kind: MultiClusterHub
metadata:
  name: multiclusterhub
  namespace: open-cluster-management
spec: {}

We can do the same thing graphically in the OpenShift Dashboard. Just click the “Create MultiClusterHub” button and then accept the action on the next page. The installation will probably take some time to complete, since there are several pods to start.

openshift-submariner-acm

Once the installation is completed, you will see a new menu item at the top of the dashboard allowing you to switch to the “All Clusters” view. Let’s do it. After that, we can proceed to the next step.

Create OpenShift Clusters with ACM

Advanced Cluster Management for Kubernetes allows us to import existing clusters or create new ones on the target infrastructure. In this exercise, you will see how to leverage a cloud provider account for that. Let’s just click the “Connect your cloud provider” tile on the welcome screen.

Provide Cloud Credentials

I’m using my existing AWS account. ACM will ask us to provide the appropriate credentials for it. In the first form, we should provide the name and namespace of our secret with credentials and a default base DNS domain.

openshift-submariner-cluster-create

Then, the ACM wizard will redirect us to the next steps. We have to provide the AWS access key ID and secret, the OpenShift pull secret, and also the SSH private/public keys. Of course, we can create the required Kubernetes Secret without the wizard, just by applying a similar YAML manifest:

apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: aws
  namespace: open-cluster-management
  labels:
    cluster.open-cluster-management.io/type: aws
    cluster.open-cluster-management.io/credentials: ""
stringData:
  aws_access_key_id: AKIAXBLSZLXZJWT3KFPM
  aws_secret_access_key: "********************"
  baseDomain: sandbox2746.opentlc.com
  pullSecret: "********************"
  ssh-privatekey: "********************"
  ssh-publickey: "********************"
  httpProxy: ""
  httpsProxy: ""
  noProxy: ""
  additionalTrustBundle: ""

Provision the Cluster

After that, we can prepare the ACM cluster set. The cluster set feature allows us to group OpenShift clusters. It is a required prerequisite for the Submariner installation. Here’s the ManagedClusterSet object. The name is arbitrary; we can set it e.g. to submariner.

apiVersion: cluster.open-cluster-management.io/v1beta2
kind: ManagedClusterSet
metadata:
  name: submariner
spec: {}

Finally, we can create two OpenShift clusters on AWS from the ACM dashboard. Go to the Infrastructure -> Clusters -> Cluster list and click the “Create cluster” button. Then, let’s choose the “Amazon Web Services” tile with already created credentials.

In the “Cluster Details” form we should set the name (ocp1 and then ocp2 for the second cluster) and version of the OpenShift cluster (the “Release image” field). We should also assign it to the submariner cluster set.

Let’s take a look at the “Networking” form. We intentionally won’t change anything here. We will keep the same IP address ranges for both the ocp1 and ocp2 clusters. By default, Submariner requires non-overlapping Pod and Service CIDRs between the interconnected clusters. This approach prevents routing conflicts. We are going to break that rule, which results in conflicting internal IP addresses between the ocp1 and ocp2 clusters. We will see how Submariner helps to resolve such an issue.
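For reference, since we keep the defaults, both clusters end up with the standard OpenShift networking settings. A sketch of the relevant install-config fragment (values are the well-known OpenShift defaults):

```yaml
networking:
  clusterNetwork:          # Pod CIDR - identical on ocp1 and ocp2
    - cidr: 10.128.0.0/14
      hostPrefix: 23
  serviceNetwork:          # Service CIDR - identical on ocp1 and ocp2
    - 172.30.0.0/16
```

These overlapping ranges are exactly what forces us to enable the Globalnet feature later on.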

It will take around 30-40 minutes to create both clusters. ACM will connect directly to our AWS account and create all the required resources there. As a result, our environment is ready. Let’s take a look at how it looks from the ACM dashboard perspective:

openshift-submariner-clusters

There is a single management (hub) cluster and two managed clusters. Both managed clusters are assigned to the submariner cluster set. If you have the same result as me, you can proceed to the next step.
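Under the hood, membership in a cluster set is just a label on the ManagedCluster object. If you prefer to assign clusters declaratively instead of through the wizard, you can do it like this (a sketch for the ocp1 cluster):

```yaml
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  name: ocp1
  labels:
    # Assigns the cluster to our "submariner" ManagedClusterSet
    cluster.open-cluster-management.io/clusterset: submariner
spec:
  hubAcceptsClient: true
```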

Enable Submariner for OpenShift clusters with ACM

Install in the Target Managed Cluster Set

Submariner is available on OpenShift in the form of an add-on to ACM. As I mentioned before, it requires the ACM ManagedClusterSet object for grouping the clusters that should be connected. In order to enable Submariner for a specific cluster set, we need to view its details and switch to the “Submariner add-ons” tab. Then, we need to click the “Install Submariner add-ons” button. In the installation form, we have to choose the target clusters and enable the “Globalnet” feature to resolve the issue related to the overlapping Pod and Service CIDRs. The default value of the “Globalnet” CIDR is 242.0.0.0/8. If it’s fine for us, we can leave the text field empty and proceed to the next step.

openshift-submariner-install

In the next form, we configure the Submariner installation for each OpenShift cluster. We don’t have to change any value there. ACM will create an additional node on each OpenShift cluster using the c5d.large VM type. It will use that node for installing Multus CNI. Multus is a CNI plugin for Kubernetes that enables attaching multiple network interfaces to pods. It is responsible for enabling the Submariner “Globalnet” feature, giving each cluster a subnet from the virtual Global Private Network, configured as the new cluster parameter GlobalCIDR. We will run a single instance of the Submariner gateway and leave the default libreswan cable driver.

Of course, we can also provide that configuration as YAML manifests. With that approach, we need to create the ManagedClusterAddOn and SubmarinerConfig objects for both the ocp1 and ocp2 clusters through the ACM engine. The Submariner Broker object has to be created on the hub cluster.

apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
  name: submariner
  namespace: ocp2
spec:
  installNamespace: submariner-operator
---
apiVersion: submarineraddon.open-cluster-management.io/v1alpha1
kind: SubmarinerConfig
metadata:
  name: submariner
  namespace: ocp2
spec:
  gatewayConfig:
    gateways: 1
    aws:
      instanceType: c5d.large
  IPSecNATTPort: 4500
  airGappedDeployment: false
  NATTEnable: true
  cableDriver: libreswan
  globalCIDR: ""
  credentialsSecret:
    name: ocp2-aws-creds
---
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
  name: submariner
  namespace: ocp1
spec:
  installNamespace: submariner-operator
---
apiVersion: submarineraddon.open-cluster-management.io/v1alpha1
kind: SubmarinerConfig
metadata:
  name: submariner
  namespace: ocp1
spec:
  gatewayConfig:
    gateways: 1
    aws:
      instanceType: c5d.large
  IPSecNATTPort: 4500
  airGappedDeployment: false
  NATTEnable: true
  cableDriver: libreswan
  globalCIDR: ""
  credentialsSecret:
    name: ocp1-aws-creds
---
apiVersion: submariner.io/v1alpha1
kind: Broker
metadata:
  name: submariner-broker
  namespace: submariner-broker
  labels:
    cluster.open-cluster-management.io/backup: submariner
spec:
  globalnetEnabled: true
  globalnetCIDRRange: 242.0.0.0/8

Verify the Status of Submariner Network

After installing the Submariner Add-on in the target cluster set, you should see the same statuses for both ocp1 and ocp2 clusters.

openshift-submariner-status

Assuming that you are logged in to all the clusters with the oc CLI, we can check the detailed status of the Submariner network with the subctl CLI. In order to do that, we should execute the following command:

$ subctl show all

It examines all the clusters one after the other and prints all the key Submariner components installed there. Let’s begin with the command output for the hub cluster. As you can see, it runs the Submariner Broker component in the submariner-broker namespace.

Here’s the output for the ocp1 managed cluster. The global CIDR for that cluster is 242.1.0.0/16. This IP range will be used for exposing services to other clusters inside the same Submariner network.

On the other hand, here’s the output for the ocp2 managed cluster. The global CIDR for that cluster is 242.0.0.0/16. The connection between ocp1 and ocp2 clusters is established. Therefore we can proceed to the last step in our exercise. Let’s run the sample apps on our OpenShift clusters!

Export App to the Remote Cluster

Since we have already installed Submariner on both OpenShift clusters, we can deploy our sample applications. Let’s begin with caller-service. We will run it in the demo-apps namespace. Make sure you are in the ocp1 Kube context. Here’s the YAML manifest with the Deployment and Service definitions for our app:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: caller-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: caller-service
  template:
    metadata:
      name: caller-service
      labels:
        app: caller-service
    spec:
      containers:
      - name: caller-service
        image: piomin/caller-service
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
        env:
          - name: VERSION
            value: "v1"
---
apiVersion: v1
kind: Service
metadata:
  name: caller-service
  labels:
    app: caller-service
spec:
  type: ClusterIP
  ports:
    - port: 8080
      name: http
  selector:
    app: caller-service

Then go to the caller-service directory and deploy the application using Skaffold as shown below. We can also expose the service outside the cluster using the OpenShift Route object:

$ cd caller-service
$ oc project demo-apps
$ skaffold run
$ oc expose svc/caller-service

Let’s switch to the callme-service app. Make sure you are in the ocp2 Kube context. Here’s the YAML manifest with the Deployment and Service definitions for our second app:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: callme-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: callme-service
  template:
    metadata:
      labels:
        app: callme-service
    spec:
      containers:
        - name: callme-service
          image: piomin/callme-service
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          env:
            - name: VERSION
              value: "v1"
---
apiVersion: v1
kind: Service
metadata:
  name: callme-service
  labels:
    app: callme-service
spec:
  type: ClusterIP
  ports:
  - port: 8080
    name: http
  selector:
    app: callme-service

Once again, we can deploy the app on OpenShift using Skaffold.

$ cd callme-service
$ oc project demo-apps
$ skaffold run

This time, instead of exposing the service outside of the cluster, we will export it to the Submariner network. Thanks to that, the caller-service app will be able to call it directly through the IPsec tunnel established between the clusters. We can do it using the subctl CLI command:

$ subctl export service callme-service

That command creates the ServiceExport object, a CRD provided by the Submariner operator. We can apply the following YAML definition as well:

apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: callme-service
  namespace: demo-apps

We can verify if everything turned out okay by checking the ServiceExport object status.

Submariner creates an additional Kubernetes Service with an IP address from the “Globalnet” CIDR pool to avoid Service IP overlapping between the clusters.

Then, let’s switch to the ocp1 cluster. After exporting the Service from the ocp2 cluster, Submariner automatically creates the ServiceImport object on the connected clusters.

apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceImport
metadata:
  name: callme-service
  namespace: demo-apps
spec:
  ports:
    - name: http
      port: 8080
      protocol: TCP
  type: ClusterSetIP
status:
  clusters:
    - cluster: ocp2

Submariner exposes services on the clusterset.local domain. So, our service is now available under the URL callme-service.demo-apps.svc.clusterset.local. We can verify it by executing a curl command inside the caller-service container. As you can see, it uses the external IP address allocated by Submariner within the “Globalnet” subnet.
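If you don’t want to exec into the application container, a throwaway pod on the ocp1 cluster works just as well. Here’s a sketch (the curlimages/curl image is my assumption; any image containing curl will do):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: curl-test
  namespace: demo-apps
spec:
  restartPolicy: Never
  containers:
    - name: curl
      image: curlimages/curl
      # Calls the service exported by Submariner on the clusterset.local domain
      args: ["-s", "http://callme-service.demo-apps.svc.clusterset.local:8080/callme/ping"]
```

After the pod completes, its logs should contain the response from callme-service running on ocp2.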

Here’s the implementation of the @RestController responsible for handling requests coming to the caller-service app. As you can see, it uses the Spring RestTemplate client to call the remote service using the callme-service.demo-apps.svc.clusterset.local URL provided by Submariner. Note that Spring Boot does not auto-configure a RestTemplate bean, so the application has to declare one itself.

import java.util.Optional;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.info.BuildProperties;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

@RestController
@RequestMapping("/caller")
public class CallerController {

   private static final Logger LOGGER = LoggerFactory
      .getLogger(CallerController.class);

   @Autowired
   Optional<BuildProperties> buildProperties;
   @Autowired
   RestTemplate restTemplate;
   @Value("${VERSION}")
   private String version;

   @GetMapping("/ping")
   public String ping() {
      LOGGER.info("Ping: name={}, version={}",
         buildProperties.map(BuildProperties::getName).orElse("-"), version);
      String response = restTemplate
         .getForObject("http://callme-service.demo-apps.svc.clusterset.local:8080/callme/ping", String.class);
      LOGGER.info("Calling: response={}", response);
      return "I'm caller-service " + version + ". Calling... " + response;
   }
}

Let’s just make a final test using the OpenShift caller-service Route and the GET /caller/ping endpoint. As you can see, it calls the callme-service app successfully through the Submariner tunnel.

openshift-submariner-test

Final Thoughts

In this article, we analyzed a scenario where we interconnect two OpenShift clusters with overlapping CIDRs. I also showed you how to leverage the ACM dashboard to simplify the installation and configuration of Submariner on the managed clusters. It is worth mentioning that there are other ways to interconnect multiple OpenShift clusters. For example, we can use Red Hat Service Interconnect, based on the open-source Skupper project. In order to read more about it, you can refer to the following article on my blog.

Handle Traffic Bursts with Ephemeral OpenShift Clusters
https://piotrminkowski.com/2023/10/06/handle-traffic-bursts-with-ephemeral-openshift-clusters/
Fri, 06 Oct 2023 18:11:03 +0000

This article will teach you how to handle temporary traffic bursts with ephemeral OpenShift clusters provisioned in the public cloud. Such a solution should work in a fully automated way. Once we deal with unexpected or sudden peaks in network traffic volume, we must forward part of that traffic to another cluster. Such a cluster is called “ephemeral” since it works just for a specified period, until the unexpected situation ends. Of course, we should be able to use the ephemeral OpenShift cluster as soon as possible after the event occurs. But on the other hand, we don’t want to pay for it when it’s not needed.

In this article, I’ll show how you can achieve all of this with the GitOps (Argo CD) approach and several tools around OpenShift/Kubernetes like Kyverno or Red Hat Service Interconnect (the open-source Skupper project). We will also use Advanced Cluster Management for Kubernetes (ACM) to create and handle “ephemeral” OpenShift clusters. If you need an introduction to the GitOps approach in a multicluster OpenShift environment, read the following article. It is also worth familiarizing yourself with the idea behind multicluster communication through the Skupper project. In order to do that, you can read the article about multicluster load balancing with Skupper on my blog.

Source Code

If you would like to try it by yourself, you can always take a look at my source code. In order to do that, you need to clone my GitHub repository. It contains several YAML manifests that allow us to manage OpenShift clusters in a GitOps way. For this exercise, we will use the manifests under the clusterpool directory. There are two subdirectories there: hub and managed. The manifests inside the hub directory should be applied to the management cluster, while those inside the managed directory go to the managed cluster. In our traffic bursts scenario, a single OpenShift cluster acts as both the hub and a managed cluster, and it creates another managed (ephemeral) cluster.

Prerequisites

In order to start the exercise, we need a running OpenShift cluster that acts as the management cluster. It will create and configure the ephemeral cluster on AWS used to handle traffic volume peaks. In the first step, we need to install two operators on the management cluster: “OpenShift GitOps” and “Advanced Cluster Management for Kubernetes”.

traffic-bursts-openshift-operators

After that, we have to create the MultiClusterHub object, which runs and configures ACM:

kind: MultiClusterHub
apiVersion: operator.open-cluster-management.io/v1
metadata:
  name: multiclusterhub
  namespace: open-cluster-management
spec: {}

We also need to install Kyverno. Since there is no official operator for it, we have to leverage the Helm chart. Firstly, let’s add the following Helm repository:

$ helm repo add kyverno https://kyverno.github.io/kyverno/

Then, we can install the latest version of Kyverno in the kyverno namespace using the following command:

$ helm install my-kyverno kyverno/kyverno -n kyverno --create-namespace

By the way, the OpenShift Console provides built-in support for Helm. In order to use it, you need to switch to the Developer perspective. Then, click the Helm menu and choose the Create -> Repository option. Once you do that, you will be able to create a new Helm release of Kyverno.

Using OpenShift Cluster Pool

With ACM we can create a pool of OpenShift clusters. That pool contains running or hibernated clusters. While a running cluster is just ready to work, a hibernated cluster needs to be resumed by ACM. We define a pool size and the number of running clusters inside that pool. Once we create the ClusterPool object, ACM starts to provision new clusters on AWS. In our case, the pool size is 1, but the number of running clusters is 0. The object declaration also contains everything required to create a new cluster, like the installation template (the aws-install-config Secret) or the AWS account credentials reference (the aws-aws-creds Secret). Each cluster within that pool is automatically assigned to the interconnect ManagedClusterSet. The cluster set approach allows us to group multiple OpenShift clusters.

apiVersion: hive.openshift.io/v1
kind: ClusterPool
metadata:
  name: aws
  namespace: aws
  labels:
    cloud: AWS
    cluster.open-cluster-management.io/clusterset: interconnect
    region: us-east-1
    vendor: OpenShift
spec:
  baseDomain: sandbox449.opentlc.com
  imageSetRef:
    name: img4.12.36-multi-appsub
  installConfigSecretTemplateRef:
    name: aws-install-config
  platform:
    aws:
      credentialsSecretRef:
        name: aws-aws-creds
      region: us-east-1
  pullSecretRef:
    name: aws-pull-secret
  size: 1

So, as a result, there is only one cluster in the pool. ACM keeps that cluster in the hibernated state. It means that all the VMs with master and worker nodes are stopped. In order to resume the hibernated cluster, we need to create a ClusterClaim object that refers to the ClusterPool. It is equivalent to clicking the Claim cluster link visible below. However, we don’t want to create that object directly, but as a reaction to a Kubernetes event.

traffic-bursts-openshift-cluster-pool

Before we proceed, let’s take a look at the list of virtual machines on AWS related to our cluster. As you can see, they are not running.
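For reference, the ClusterClaim that resumes a cluster from our pool is tiny. It’s essentially what we will let Kyverno generate for us in the next section:

```yaml
apiVersion: hive.openshift.io/v1
kind: ClusterClaim
metadata:
  name: aws
  namespace: aws
spec:
  # References the ClusterPool defined above
  clusterPoolName: aws
```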

Claim Cluster From the Pool on Scaling Event

Now, the question is: what kind of event should result in getting a cluster from the pool? A single app could rely on a scaling event. So, once the number of deployment pods exceeds the assumed threshold, we will resume a hibernated cluster and run the app there. With Kyverno, we can react to such scaling events by creating a ClusterPolicy object. As you can see, our policy monitors the Deployment/scale resource. The assumed maximum number of pods allowed for our app on the main cluster is 4. We need to put such a value in the preconditions together with the Deployment name. Once all the conditions are met, Kyverno generates a new Kubernetes resource. That resource is the ClusterClaim, which refers to the ClusterPool we created in the previous section. It will result in getting a hibernated cluster from the pool and resuming it.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: aws
spec:
  background: true
  generateExisting: true
  rules:
    - generate:
        apiVersion: hive.openshift.io/v1
        data:
          spec:
            clusterPoolName: aws
        kind: ClusterClaim
        name: aws
        namespace: aws
        synchronize: true
      match:
        any:
          - resources:
              kinds:
                - Deployment/scale
      preconditions:
        all:
          - key: '{{request.object.spec.replicas}}'
            operator: Equals
            value: 4
          - key: '{{request.object.metadata.name}}'
            operator: Equals
            value: sample-kotlin-spring
  validationFailureAction: Audit

Kyverno requires additional permissions to create the ClusterClaim object. We can easily grant them by creating a properly labeled ClusterRole:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kyverno:create-claim
  labels:
    app.kubernetes.io/component: background-controller
    app.kubernetes.io/instance: kyverno
    app.kubernetes.io/part-of: kyverno
rules:
  - verbs:
      - create
      - patch
      - update
      - delete
    apiGroups:
      - hive.openshift.io
    resources:
      - clusterclaims

Once the cluster is ready, we are going to assign it to the interconnect group represented by the ManagedClusterSet object. This group of clusters is managed by our instance of Argo CD from the openshift-gitops namespace. In order to achieve this, we need to apply the following objects to the management OpenShift cluster:

apiVersion: cluster.open-cluster-management.io/v1beta2
kind: ManagedClusterSetBinding
metadata:
  name: interconnect
  namespace: openshift-gitops
spec:
  clusterSet: interconnect
---
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: interconnect
  namespace: openshift-gitops
spec:
  predicates:
    - requiredClusterSelector:
        labelSelector:
          matchExpressions:
            - key: vendor
              operator: In
              values:
                - OpenShift
---
apiVersion: apps.open-cluster-management.io/v1beta1
kind: GitOpsCluster
metadata:
  name: argo-acm-importer
  namespace: openshift-gitops
spec:
  argoServer:
    argoNamespace: openshift-gitops
    cluster: openshift-gitops
  placementRef:
    apiVersion: cluster.open-cluster-management.io/v1beta1
    kind: Placement
    name: interconnect
    namespace: openshift-gitops

After applying the manifests visible above, you should see that the openshift-gitops Argo CD instance is managing the interconnect cluster group.

Automatically Sync Configuration for a New Cluster with Argo CD

In Argo CD we can define an ApplicationSet with the “Cluster Decision Resource Generator” (1). You can read more details about that type of generator in the docs. It will create an Argo CD Application for each OpenShift cluster in the interconnect group (2). Then, the newly created Argo CD Application will automatically apply the manifests responsible for creating our sample Deployment. Of course, those manifests are available in the same repository, inside the clusterpool/managed directory (3).

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: cluster-init
  namespace: openshift-gitops
spec:
  generators:
    - clusterDecisionResource: # (1)
        configMapRef: acm-placement
        labelSelector:
          matchLabels:
            cluster.open-cluster-management.io/placement: interconnect # (2)
        requeueAfterSeconds: 180
  template:
    metadata:
      name: 'cluster-init-{{name}}'
    spec:
      ignoreDifferences:
        - group: apps
          kind: Deployment
          jsonPointers:
            - /spec/replicas
      destination:
        server: '{{server}}'
        namespace: interconnect
      project: default
      source:
        path: clusterpool/managed # (3)
        repoURL: 'https://github.com/piomin/openshift-cluster-config.git'
        targetRevision: master
      syncPolicy:
        automated:
          selfHeal: true
        syncOptions:
          - CreateNamespace=true

Here’s the YAML manifest that contains the Deployment object and the OpenShift Route definition. Pay attention to the three skupper.io/* annotations. We will let Skupper generate the Kubernetes Service that load balances between all running pods of our app, and finally between the pods spread across the two OpenShift clusters.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/instance: sample-kotlin-spring
  annotations:
    skupper.io/address: sample-kotlin-spring
    skupper.io/port: '8080'
    skupper.io/proxy: http
  name: sample-kotlin-spring
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sample-kotlin-spring
  template:
    metadata:
      labels:
        app: sample-kotlin-spring
    spec:
      containers:
        - image: 'quay.io/pminkows/sample-kotlin-spring:1.4.39'
          name: sample-kotlin-spring
          ports:
            - containerPort: 8080
          resources:
            limits:
              cpu: 1000m
              memory: 1024Mi
            requests:
              cpu: 100m
              memory: 128Mi
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  labels:
    app: sample-kotlin-spring
    app.kubernetes.io/component: sample-kotlin-spring
    app.kubernetes.io/instance: sample-spring-kotlin
  name: sample-kotlin-spring
spec:
  port:
    targetPort: port8080
  to:
    kind: Service
    name: sample-kotlin-spring
    weight: 100
  wildcardPolicy: None

Let’s check out how it works. I won’t simulate real traffic bursts on OpenShift. However, you can easily imagine that our app is autoscaled with HPA (Horizontal Pod Autoscaler) and is therefore able to react to a traffic volume peak. I will just manually scale the app up to 4 pods.
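Any mechanism that sets the replica count works here. For example, this merge-patch fragment applied to the sample-kotlin-spring Deployment in the interconnect namespace raises it to the threshold watched by the Kyverno policy:

```yaml
# Merge patch for the sample-kotlin-spring Deployment;
# replicas=4 matches the precondition in our ClusterPolicy
spec:
  replicas: 4
```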

Now, let’s switch to the All Clusters view. As you can see, Kyverno sent a cluster claim to the aws ClusterPool. The claim stays in the Pending status until the cluster is resumed. In the meantime, ACM creates a new cluster to fill up the pool.

traffic-bursts-openshift-cluster-claim

Once the cluster is ready you will see it in the Clusters view.

ACM automatically adds a cluster from the aws pool to the interconnect group (ManagedClusterSet). Therefore, Argo CD sees the new cluster and adds it as a managed cluster.

Finally, Argo CD generates the Application for a new cluster to automatically install all required Kubernetes objects.

traffic-bursts-openshift-argocd

Using Red Hat Service Interconnect

In order to enable Skupper for our apps, we first need to install the Red Hat Service Interconnect operator. We can also do it in the GitOps way. We need to define the Subscription object as shown below (1). The operator has to be installed on both the hub and managed clusters. Once we install the operator, we need to enable Skupper in a particular namespace. In order to do that, we need to define a ConfigMap there with the skupper-site name (2). Those manifests are also applied by the Argo CD Application described in the previous section.

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: skupper-operator
  namespace: openshift-operators
  annotations:
    argocd.argoproj.io/sync-wave: "2"
spec:
  channel: alpha
  installPlanApproval: Automatic
  name: skupper-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: skupper-site

Here’s the result of synchronization for the managed cluster.

We can switch to the OpenShift Console of the new cluster. The Red Hat Service Interconnect operator is ready.

Finally, we arrive at the last phase of our exercise. Both our clusters are running. We have already installed our sample app and the Skupper operator on both of them. Now, we need to link the apps running on the different clusters into a single Skupper network. In order to do that, we need to let Skupper generate a connection token. Here’s the Secret object responsible for that. It doesn’t contain any data – just the skupper.io/type label with the connection-token-request value. Argo CD has already applied it to the management cluster in the interconnect namespace.

apiVersion: v1
kind: Secret
metadata:
  labels:
    skupper.io/type: connection-token-request
  name: token-req
  namespace: interconnect

As a result, Skupper fills the Secret object with certificates and a private key. It also overrides the value of the skupper.io/type label.

So, now our goal is to copy that Secret to the managed cluster. We won’t do it directly in the GitOps way, since the object was dynamically generated on OpenShift. However, we may use the SelectorSyncSet object provided by ACM. It can copy Secrets between the hub and managed clusters.

apiVersion: hive.openshift.io/v1
kind: SelectorSyncSet
metadata:
  name: skupper-token-sync
spec:
  clusterDeploymentSelector:
    matchLabels:
      cluster.open-cluster-management.io/clusterset: interconnect
  secretMappings:
    - sourceRef:
        name: token-req
        namespace: interconnect
      targetRef:
        name: token-req
        namespace: interconnect

Once the token is copied into the managed cluster, it should connect to the Skupper network existing on the main cluster. We can verify that everything works fine with the skupper CLI. The following command prints all the pods from the Skupper network. As you can see, we have 4 pods on the main (local) cluster and 2 pods on the managed (linked) cluster.

traffic-bursts-openshift-skupper
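A related check from the command line is the network status subcommand (assuming the skupper CLI is installed locally and the current context points at the main cluster):

```shell
# List the linked sites and the services exposed in the Skupper network
skupper network status
```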

Let’s display the route of our service:

$ oc get route sample-kotlin-spring

Now, we can make a final test. Here’s the siege command for my route and cluster domain. It will send 10k requests via the Route. After running it, you can check the logs to see whether the traffic reaches all six pods spread across our two clusters.

$ siege -r 1000 -c 10  http://sample-kotlin-spring-interconnect.apps.jaipxwuhcp.eastus.aroapp.io/persons

Final Thoughts

Handling traffic bursts is one of the more interesting scenarios for a hybrid-cloud environment with OpenShift. With the approach described in this article, we can dynamically provision clusters and redirect traffic from on-prem to the cloud. We can do it in a fully automated, GitOps-based way. The features and tools around OpenShift allow us to cut down cloud costs and speed up cluster startup. This also reduces system downtime in case of failures or other unexpected situations.

The post Handle Traffic Bursts with Ephemeral OpenShift Clusters appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2023/10/06/handle-traffic-bursts-with-ephemeral-openshift-clusters/feed/ 2 14560
Kubernetes Multicluster Load Balancing with Skupper https://piotrminkowski.com/2023/08/04/kubernetes-multicluster-load-balancing-with-skupper/ https://piotrminkowski.com/2023/08/04/kubernetes-multicluster-load-balancing-with-skupper/#respond Fri, 04 Aug 2023 00:03:25 +0000 https://piotrminkowski.com/?p=14372 In this article, you will learn how to leverage Skupper for load balancing between app instances running on several Kubernetes clusters. We will create some Kubernetes clusters locally with Kind. Then we will connect them using Skupper. Skupper cluster interconnection works in Layer 7 (application layer). It means there is no need to create any […]

The post Kubernetes Multicluster Load Balancing with Skupper appeared first on Piotr's TechBlog.

]]>
In this article, you will learn how to leverage Skupper for load balancing between app instances running on several Kubernetes clusters. We will create some Kubernetes clusters locally with Kind. Then we will connect them using Skupper.

Skupper cluster interconnection works at Layer 7 (the application layer). It means there is no need to create any VPNs or special firewall rules. Skupper works according to the Virtual Application Network (VAN) approach. Thanks to that, it can connect different Kubernetes clusters and guarantee communication between services without exposing them to the Internet. You can read more about the concept behind it in the Skupper docs.

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that, you need to clone my GitHub repository. This time we will do almost everything using a command-line tool (the skupper CLI). The repository contains just a sample Spring Boot app with Kubernetes Deployment manifests and a Skaffold config. You will find instructions here on how to deploy the app with Skaffold, but you can use another tool as well. As always, follow my instructions for the details 🙂

Create Kubernetes clusters with Kind

In the first step, we will create three Kubernetes clusters with Kind. We need to give them different names: c1, c2 and c3. Accordingly, they are available under the context names: kind-c1, kind-c2 and kind-c3.

$ kind create cluster --name c1
$ kind create cluster --name c2
$ kind create cluster --name c3

In this exercise, we will switch between the clusters a few times. Personally, I’m using kubectx to switch between different Kubernetes contexts and kubens to switch between namespaces.

By default, Skupper exposes itself as a Kubernetes LoadBalancer Service. Therefore, we need to enable the load balancer on Kind. In order to do that, we can install MetalLB. You can find the full installation instructions in the Kind docs here. Firstly, let’s switch to the c1 cluster:

$ kubectx kind-c1

Then, we have to apply the following YAML manifest:

$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml

You should repeat the same procedure for the other two clusters: c2 and c3. However, that is not all. We also need to set up the address pool used by the load balancers. To do that, let’s first check the range of IP addresses on the Docker network used by Kind. For me, it is 172.19.0.0/16 (with the gateway 172.19.0.1).

$ docker network inspect -f '{{.IPAM.Config}}' kind
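For illustration, the address range for a pool can be derived from the reported subnet with plain shell (a sketch; substitute the SUBNET value with the actual output of the docker network inspect command above):

```shell
# Derive a MetalLB address range inside the kind Docker network (assumes a /16 subnet)
SUBNET="172.19.0.0/16"                       # from: docker network inspect ... kind
PREFIX=$(echo "${SUBNET}" | cut -d. -f1-2)   # first two octets, e.g. 172.19
echo "${PREFIX}.255.200-${PREFIX}.255.250"   # pool range for the c1 cluster
```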

Based on the result, we need to choose the right IP address ranges for all three Kind clusters. Then we have to create the IPAddressPool object, which contains the IP range. Here’s the YAML manifest for the c1 cluster:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: example
  namespace: metallb-system
spec:
  addresses:
  - 172.19.255.200-172.19.255.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: empty
  namespace: metallb-system

Here’s the pool configuration for the c2 cluster. It is important that the address range does not conflict with the ranges in the two other Kind clusters.

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: example
  namespace: metallb-system
spec:
  addresses:
  - 172.19.255.150-172.19.255.199
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: empty
  namespace: metallb-system

Finally, the configuration for the c3 cluster:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: example
  namespace: metallb-system
spec:
  addresses:
  - 172.19.255.100-172.19.255.149
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: empty
  namespace: metallb-system

After applying the YAML manifests with the kubectl apply -f command we can proceed to the next section.

Install Skupper on Kubernetes

We can install and manage Skupper on Kubernetes in two different ways: with the CLI or through YAML manifests. Most of the examples in the Skupper documentation use the CLI, so I guess it is the preferred approach. Consequently, before we start with Kubernetes, we need to install the CLI. You can find the installation instructions in the Skupper docs here. Once you have installed it, verify that it works with the following command:

$ skupper version

After that, we can proceed with the Kubernetes clusters. We will create the same namespace, interconnect, inside all three clusters. To simplify the upcoming exercise, we can also set a default namespace for each context (alternatively, you can do it with the kubectl config set-context --current --namespace interconnect command).

$ kubectl create ns interconnect
$ kubens interconnect
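Since the namespace has to exist in all three clusters, the two steps above can be repeated per context with plain kubectl, e.g.:

```shell
# Create the interconnect namespace in each Kind cluster and make it the
# default namespace for that context (no kubens required)
for ctx in kind-c1 kind-c2 kind-c3; do
  kubectl --context "${ctx}" create namespace interconnect
  kubectl config set-context "${ctx}" --namespace interconnect
done
```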

Then, let’s switch to the kind-c1 cluster. We will stay in this context until the end of our exercise 🙂

$ kubectx kind-c1

Finally, we will install Skupper on our Kubernetes clusters. In order to do that, we have to execute the skupper init command. Fortunately, it allows us to set the target Kubernetes context with the -c parameter. Inside the kind-c1 cluster, we will also enable the Skupper UI dashboard (the --enable-console parameter, together with --enable-flow-collector). With the Skupper console, we may e.g. visualize the traffic volume for all targets in the Skupper network.

$ skupper init --enable-console --enable-flow-collector
$ skupper init -c kind-c2
$ skupper init -c kind-c3

Let’s verify the status of the Skupper installation:

$ skupper status
$ skupper status -c kind-c2
$ skupper status -c kind-c3

Here’s the status for Skupper running in the kind-c1 cluster:

kubernetes-skupper-status

We can also display a list of running Skupper pods in the interconnect namespace:

$ kubectl get po
NAME                                          READY   STATUS    RESTARTS   AGE
skupper-prometheus-867f57b89-dc4lq            1/1     Running   0          3m36s
skupper-router-55bbb99b87-k4qn5               2/2     Running   0          3m40s
skupper-service-controller-6bf57595dd-45hvw   2/2     Running   0          3m37s

Now, our goal is to connect both the c2 and c3 Kind clusters with the c1 cluster. In the Skupper nomenclature, we have to create a link between a namespace in the source cluster and one in the target cluster. Before we create a link, we need to generate a secret token that grants permission to create the link. The token also carries the link details. We generate two tokens on the target cluster. Each token is stored as a YAML file. The first of them is for the kind-c2 cluster (skupper-c2-token.yaml), and the second for the kind-c3 cluster (skupper-c3-token.yaml).

$ skupper token create skupper-c2-token.yaml
$ skupper token create skupper-c3-token.yaml

We will consider several scenarios where we create a link using different parameters. Before that, let’s deploy our sample app on the kind-c2 and kind-c3 clusters.

Running the sample app on Kubernetes with Skaffold

After cloning the sample app repository go to the main directory. You can easily build and deploy the app to both kind-c2 and kind-c3 with the following commands:

$ skaffold dev --kube-context=kind-c2
$ skaffold dev --kube-context=kind-c3

After deploying the app, Skaffold automatically prints all the logs as shown below. This will be helpful in the next steps of our exercise.

Our app is deployed under the sample-spring-kotlin-microservice name.

Load balancing with Skupper – scenarios

Scenario 1: the same number of pods and link cost

Let’s start with the simplest scenario. We have a single pod of our app running on each of the kind-c2 and kind-c3 clusters. In Skupper, we can also assign a cost to each link to influence the traffic flow. By default, the cost is set to 1 for a new link. In a service network, the routing algorithm attempts to use the path with the lowest total cost from the client to the target server. For now, we will leave the default value. Here’s a visualization of the first scenario:

Let’s create links to the c1 Kind cluster using the previously generated tokens.

$ skupper link create skupper-c2-token.yaml -c kind-c2
$ skupper link create skupper-c3-token.yaml -c kind-c3

If everything goes fine you should see a similar message:

We can also verify the status of links by executing the following commands:

$ skupper link status -c kind-c2
$ skupper link status -c kind-c3

It means that the c2 and c3 Kind clusters now “work” in the same Skupper network as the c1 cluster. The next step is to expose our app running in both the c2 and c3 clusters to the c1 cluster. Skupper works at Layer 7 and, by default, it doesn’t connect apps unless we enable that feature for the particular app. In order to expose our apps to the c1 cluster, we need to run the following command on both the c2 and c3 clusters.

$ skupper expose deployment/sample-spring-kotlin-microservice \
  --port 8080 -c kind-c2
$ skupper expose deployment/sample-spring-kotlin-microservice \
  --port 8080 -c kind-c3

Let’s take a look at what happened on the target (kind-c1) cluster. As you can see, Skupper created the sample-spring-kotlin-microservice Kubernetes Service that forwards traffic to the skupper-router pod. The Skupper router is responsible for load balancing requests across the pods that are part of the Skupper network.

To simplify our exercise, we will enable port-forwarding for the Service visible above.

$ kubectl port-forward svc/sample-spring-kotlin-microservice 8080:8080

Thanks to that, we don’t have to configure a Kubernetes Ingress to call the service. Now, we can send some test requests over localhost, e.g. with siege.

$ siege -r 200 -c 5 http://localhost:8080/persons/1

We can easily verify that the traffic is coming to pods running on the kind-c2 and kind-c3 by looking at the logs. Alternatively, we can go to the Skupper console and see the traffic visualization:

kubernetes-skupper-diagram-first
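The log check above can be scripted as well (a sketch; it assumes the app logs each incoming request to the /persons endpoint, as seen in the Skaffold output):

```shell
# Count handled requests in each cluster's app logs; the grep pattern is a
# hypothetical example - adjust it to whatever your app actually logs
for ctx in kind-c2 kind-c3; do
  echo -n "${ctx}: "
  kubectl --context "${ctx}" -n interconnect \
    logs deployment/sample-spring-kotlin-microservice | grep -c "persons"
done
```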

Scenario 2: different number of pods and same link cost

In the next scenario, we won’t change anything in the Skupper network configuration. We will just run a second pod of the app in the kind-c3 cluster. So now, there is a single pod running in the kind-c2 cluster, and two pods running in the kind-c3 cluster. Here’s our architecture.

Once again, we can send some requests to the previously tested Kubernetes Service with the siege command:

$ siege -r 200 -c 5 http://localhost:8080/persons/2

Let’s take a look at traffic visualization in the Skupper dashboard. We can switch between all available pods. Here’s the diagram for the pod running in the kind-c2 cluster.

kubernetes-skupper-diagram

Here’s the same diagram for the pod running in the kind-c3 cluster. As you can see, it receives only ~50% (or even less, depending on which pod we visualize) of the traffic received by the pod in the kind-c2 cluster. That’s because there are two pods running in the kind-c3 cluster, while Skupper still balances requests across the clusters equally.

Scenario 3: only one pod and different link costs

In the current scenario, there is a single pod of the app running on the c2 Kind cluster. At the same time, there are no pods on the c3 cluster (the Deployment exists but it has been scaled down to zero instances). Here’s the visualization of our scenario.

kubernetes-skupper-arch2

The important thing here is that the c3 cluster is preferred by Skupper since the link to it has a lower cost (2) than the link to the c2 cluster (4). So now, we need to remove the previous link, and then create a new one with the following commands:

$ skupper link create skupper-c2-token.yaml --cost 4 -c kind-c2
$ skupper link create skupper-c3-token.yaml --cost 2 -c kind-c3

In order to create a Skupper link once again you first need to delete the previous one with the skupper link delete link1 command. Then you have to generate new tokens with the skupper token create command as we did before.
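Spelled out, the preparation steps might look like this (a sketch; it assumes the original links were created with the default name link1):

```shell
# Remove the old links before re-linking with new costs
skupper link delete link1 -c kind-c2
skupper link delete link1 -c kind-c3
# Generate fresh tokens on the kind-c1 cluster (the current context)
skupper token create skupper-c2-token.yaml
skupper token create skupper-c3-token.yaml
```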

Let’s take a look at the Skupper network status:

kubernetes-skupper-network-status

Let’s send some test requests to the exposed service. It works without any errors. Since there is only a single running pod, the whole traffic goes there:

Scenario 4: more pods in one cluster and different link costs

Finally, the last scenario in our exercise. We will use the same Skupper configuration as in Scenario 3. However, this time we will run two pods in the kind-c3 cluster.

kubernetes-skupper-arch-1

We can switch once again to the Skupper dashboard. Now, as you see, all the pods receive a very similar amount of traffic. Here’s the diagram for the pod running on the kind-c2 cluster.

kubernetes-skupper-equal-traffic

Here’s a similar diagram for the pod running on the kind-c3 cluster. After setting the link cost according to the number of pods running on each cluster, I was able to split traffic equally between all the pods across both clusters. It works. However, it is not a perfect way of load balancing. I would expect at least an option for enabling round-robin between all the pods working in the same Skupper network. The solution presented in this scenario will work as expected unless we enable auto-scaling for the app.

Final Thoughts

Skupper introduces an interesting approach to Kubernetes multicluster connectivity based fully on Layer 7. You can compare it to other solutions operating at different layers, like Submariner or the Cilium cluster mesh. I described both of them in my previous articles. If you want to read more about Submariner, visit the following post. If you are interested in Cilium, read that article.

The post Kubernetes Multicluster Load Balancing with Skupper appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2023/08/04/kubernetes-multicluster-load-balancing-with-skupper/feed/ 0 14372
Manage Multiple Kubernetes Clusters with ArgoCD https://piotrminkowski.com/2022/12/09/manage-multiple-kubernetes-clusters-with-argocd/ https://piotrminkowski.com/2022/12/09/manage-multiple-kubernetes-clusters-with-argocd/#comments Fri, 09 Dec 2022 11:54:52 +0000 https://piotrminkowski.com/?p=13774 In this article, you will learn how to deploy the same app across multiple Kubernetes clusters with ArgoCD. In order to easily test the solution we will run several virtual Kubernetes clusters on the single management cluster with the vcluster tool. Since that’s the first article where I’m using vcluster, I’m going to do a […]

The post Manage Multiple Kubernetes Clusters with ArgoCD appeared first on Piotr's TechBlog.

]]>
In this article, you will learn how to deploy the same app across multiple Kubernetes clusters with ArgoCD. In order to easily test the solution we will run several virtual Kubernetes clusters on the single management cluster with the vcluster tool. Since that’s the first article where I’m using vcluster, I’m going to do a quick introduction in the next section. As usual, we will use Helm for installing the required components and creating an app template. I will also show you, how we can leverage Kyverno in this scenario. But first things first – let’s discuss our architecture for the current article.

Introduction

If I want to easily test a scenario with multiple Kubernetes clusters I usually use kind for that. You can find examples in some of my previous articles. For example, here is the article about Cilium cluster mesh. Or another one about mirroring traffic between multiple clusters with Istio. This time I’m going to try a slightly different solution – vcluster. It allows us to run virtual Kubernetes clusters inside the namespaces of other clusters. Those virtual clusters have a separate API server and a separate data store. We can easily interact with them the same as with the “real” clusters through the Kube context on the local machine. The vcluster and all of its workloads will be hosted in a single underlying host namespace. Once we delete a namespace we will remove the whole virtual cluster with all workloads.

How may vcluster help in our exercise? First of all, it creates all the resources on the “hosting” Kubernetes cluster. There is a dedicated namespace that contains a Secret with a certificate and private key. Based on that Secret, we can automatically add a newly created cluster to the clusters managed by Argo CD. I’ll show you how we can leverage a Kyverno ClusterPolicy for that. It will trigger on new Secret creation in the virtual cluster namespace, and then generate a new Secret in the Argo CD namespace containing the cluster details.

Here is the diagram that illustrates our architecture. ArgoCD is managing multiple Kubernetes clusters and deploying the app across those clusters using the ApplicationSet object. Once a new cluster is created it is automatically included in the list of clusters managed by Argo CD. It is possible thanks to Kyverno policy that generates a new Secret with the argocd.argoproj.io/secret-type: cluster label in the argocd namespace.

multiple-kubernetes-clusters-argocd-arch

Prerequisites

Of course, you need to have a Kubernetes cluster. In this exercise, I’m using Kubernetes on Docker Desktop. But you can as well use any other local distribution like minikube or a cloud-hosted instance. No matter which distribution you choose you also need to have:

  1. Helm CLI – used to install Argo CD, Kyverno and vcluster on the “hosting” Kubernetes cluster
  2. vcluster CLI – used to interact with virtual Kubernetes clusters. We can use it to create a virtual cluster, although we can also do that directly using the Helm chart. The vcluster CLI installation instructions are available here.

Running Virtual Clusters on Kubernetes

Let’s create our first virtual cluster on Kubernetes. In this approach, we can use the vcluster create command. Additionally, we need to sign the cluster certificate for the internal DNS name containing the name of the Service and the target namespace. Assuming that the name of the cluster is vc1, the default namespace name is vcluster-vc1. Therefore, the API server certificate should be signed for the vc1.vcluster-vc1 domain. Here is the appropriate values.yaml file that overrides the default chart properties.

syncer:
  extraArgs:
  - --tls-san=vc1.vcluster-vc1

Then, we can install the first virtual cluster in the vcluster-vc1 namespace. By default, vcluster uses the k3s distribution (to decrease resource consumption), so we will switch to vanilla k8s using the --distro parameter:

$ vcluster create vc1 --upgrade --connect=false \
  --distro k8s \
  -f values.yaml 

We need to create another two virtual clusters, named vc2 and vc3. You should repeat the same steps, using a values.yaml file and a vcluster create command dedicated to each of them. After completing the required steps, we can display the list of running virtual clusters:
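The per-cluster steps could be scripted as follows (a sketch; it follows the same <name>.vcluster-<name> TLS SAN pattern described earlier):

```shell
# Create the vc2 and vc3 virtual clusters, each with its own values file
for vc in vc2 vc3; do
  cat > "values-${vc}.yaml" <<EOF
syncer:
  extraArgs:
  - --tls-san=${vc}.vcluster-${vc}
EOF
  vcluster create "${vc}" --upgrade --connect=false \
    --distro k8s -f "values-${vc}.yaml"
done
```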

multiple-kubernetes-clusters-argocd-vclusters

Each cluster has a dedicated namespace that contains all the required pods for k8s distribution.

$ kubectl get pod -n vcluster-vc1
NAME                                           READY   STATUS    RESTARTS   AGE
coredns-586cbcd49f-pkn5q-x-kube-system-x-vc1   1/1     Running   0          20m
vc1-7985c794d6-7pqln                           1/1     Running   0          21m
vc1-api-6564bf7bbf-lqqxv                       1/1     Running   0          39s
vc1-controller-9f98c7f9c-87tqb                 1/1     Running   0          23s
vc1-etcd-0                                     1/1     Running   0          21m

Now, we can switch to the newly created Kube context using the vcluster connect command. Under the hood, vcluster creates a Kube context with the vcluster_vc1_vcluster-vc1_docker-desktop name and exposes the API outside of the cluster using a NodePort Service.

For example, we can display a list of namespaces. As you can see, it differs from the list on the “hosting” cluster.

$ kubectl get ns   
NAME              STATUS   AGE
default           Active   25m
kube-node-lease   Active   25m
kube-public       Active   25m
kube-system       Active   25m

In order to switch back to the “hosting” cluster just run the following command:

$ vcluster disconnect

Installing Argo CD on Kubernetes

In the next step, we will install Argo CD on Kubernetes. To do that, we will use an official Argo CD Helm chart. First, let’s add the following Helm repo:

$ helm repo add argo https://argoproj.github.io/argo-helm

Then we can install the latest version of Argo CD in the selected namespace. For us, it is the argocd namespace.

$ helm install argocd argo/argo-cd -n argocd --create-namespace

After a while, Argo CD should be installed. We will use the UI dashboard to interact with it. Therefore, let’s expose it outside the cluster using the port-forward command for the argocd-server Service. After that, we can access the dashboard under the local port 8080:

$ kubectl port-forward svc/argocd-server 8080:80 -n argocd

The default username is admin. The Argo CD Helm chart generates the password automatically during the installation. You will find it inside the argocd-initial-admin-secret Secret.

$ kubectl get secret argocd-initial-admin-secret \
  --template={{.data.password}} \
  -n argocd | base64 -D

Automatically Adding Argo CD Clusters with Kyverno

The main goal here is to automatically add a newly created virtual Kubernetes cluster to the clusters managed by Argo CD. Argo CD stores the details of each managed cluster inside a Kubernetes Secret labeled with argocd.argoproj.io/secret-type: cluster. On the other hand, vcluster stores cluster credentials in a Secret inside the namespace dedicated to the particular cluster. The name of the Secret is the name of the cluster prefixed with vc-. For example, the Secret name for the vc1 cluster is vc-vc1.

Probably, there are several ways to achieve the goal described above. However, for me, the simplest way is through a Kyverno ClusterPolicy. Kyverno is able not only to validate resources; it can also create additional resources when a resource is created or updated. Before we start, we need to install Kyverno on Kubernetes. As usual, we will use a Helm chart for that. First, let’s add the required Helm repository:

$ helm repo add kyverno https://kyverno.github.io/kyverno/

Then, we can install it for example in the kyverno namespace with the following command:

$ helm install kyverno kyverno/kyverno -n kyverno --create-namespace

That’s all – we may create our Kyverno policy. Let’s discuss the ClusterPolicy fields step by step. By default, the policy will not be applied to existing resources when it is installed. To change this behavior, we need to set the generateExistingOnPolicyUpdate parameter to true (1). Now it will also be applied to existing resources (our virtual clusters are already running). The policy triggers for any existing or newly created Secret with a name starting with vc- (2). It sets several variables using the context field (3).

The policy has access to the source Secret fields, so it is able to get the API server CA (4), client certificate (5), and private key (6). Finally, it generates a new Secret with the same name as the cluster name (8). We can retrieve the name of the cluster from the namespace of the source Secret (7). The generated Secret should contain the label argocd.argoproj.io/secret-type: cluster (10) and should be placed in the argocd namespace (9). We fill all the required fields of the Secret using variables (11). Argo CD can access the vcluster internally using the Kubernetes Service with the same name as the vcluster (12).

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: sync-secret
spec:
  generateExistingOnPolicyUpdate: true # (1)
  rules:
  - name: sync-secret
    match:
      any:
      - resources: # (2)
          names:
          - "vc-*"
          kinds:
          - Secret
    exclude:
      any:
      - resources:
          namespaces:
          - kube-system
          - default
          - kube-public
          - kyverno
    context: # (3)
    - name: namespace
      variable:
        value: "{{ request.object.metadata.namespace }}"
    - name: name
      variable:
        value: "{{ request.object.metadata.name }}"
    - name: ca # (4)
      variable: 
        value: "{{ request.object.data.\"certificate-authority\" }}"
    - name: cert # (5)
      variable: 
        value: "{{ request.object.data.\"client-certificate\" }}"
    - name: key # (6)
      variable: 
        value: "{{ request.object.data.\"client-key\" }}"
    - name: vclusterName # (7)
      variable:
        value: "{{ replace_all(namespace, 'vcluster-', '') }}"
        jmesPath: 'to_string(@)'
    generate:
      kind: Secret
      apiVersion: v1
      name: "{{ vclusterName }}" # (8)
      namespace: argocd # (9)
      synchronize: true
      data:
        kind: Secret
        metadata:
          labels:
            argocd.argoproj.io/secret-type: cluster # (10)
        stringData: # (11)
          name: "{{ vclusterName }}"
          server: "https://{{ vclusterName }}.{{ namespace }}:443" # (12)
          config: |
            {
              "tlsClientConfig": {
                "insecure": false,
                "caData": "{{ ca }}",
                "certData": "{{ cert }}",
                "keyData": "{{ key }}"
              }
            }

Once you have created the policy, you can display its status with the following command:

$ kubectl get clusterpolicy
NAME          BACKGROUND   VALIDATE ACTION   READY
sync-secret   true         audit             true

Finally, you should see the three following secrets inside the argocd namespace:
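To list just the cluster Secrets generated by the policy, we can filter on the label it sets:

```shell
# Cluster Secrets carry the argocd.argoproj.io/secret-type=cluster label
kubectl get secret -n argocd -l argocd.argoproj.io/secret-type=cluster
```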

Deploy the App Across Multiple Kubernetes Clusters with ArgoCD

We can easily deploy the same app across multiple Kubernetes clusters with the Argo CD ApplicationSet object. The ApplicationSet controller is automatically installed by the Argo CD Helm chart, so we don’t have to do anything additional to use it. ApplicationSet does a very simple thing: based on defined criteria, it generates several Argo CD Applications. There are several types of criteria (generators) available. One of them is the list of Kubernetes clusters managed by Argo CD.

In order to create an Application per managed cluster, we need to use the “Cluster Generator”. The ApplicationSet visible below automatically uses all clusters managed by Argo CD (1). It provides several parameter values to the Application template. We can use them to generate a unique name (2) or set the target cluster (4). In this exercise, we will deploy a simple Spring Boot app that exposes some endpoints over HTTP. The configuration is stored in the following GitHub repo inside the apps/simple path (3). The target namespace name is demo (5). The app is synchronized automatically with the configuration stored in the Git repo (6).

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: sample-spring-boot
  namespace: argocd
spec:
  generators:
  - clusters: {} # (1)
  template:
    metadata:
      name: '{{name}}-sample-spring-boot' # (2)
    spec:
      project: default
      source: # (3)
        repoURL: https://github.com/piomin/openshift-cluster-config.git
        targetRevision: HEAD
        path: apps/simple
      destination:
        server: '{{server}}' # (4)
        namespace: demo # (5)
      syncPolicy: # (6)
        automated:
          selfHeal: true
        syncOptions:
          - CreateNamespace=true

Let’s switch to the ArgoCD dashboard. We have four clusters managed by ArgoCD: three virtual clusters and a single “real” cluster in-cluster.

multiple-kubernetes-clusters-argocd-clusters

Therefore, you should have four Argo CD Applications generated and automatically synchronized. It means that our Spring Boot app is currently running on all the clusters.

multiple-kubernetes-clusters-argocd-apps

Let’s connect with the vc1 virtual cluster:

$ vcluster connect vc1

We can display a list of running pods inside the demo namespace. Of course, you can repeat the same steps for the other two virtual clusters.
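For example (assuming the sync has completed and the Deployment name from the repo is sample-spring-kotlin):

```shell
$ kubectl get pods -n demo
$ kubectl get deployment sample-spring-kotlin -n demo
```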

We can access the app over HTTP through the Kubernetes Service just by running the following command:

$ kubectl port-forward svc/sample-spring-kotlin 8080:8080 -n demo

The app exposes Swagger UI with the list of available endpoints. You can access it under the /swagger-ui.html path.
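With the port-forward from the previous step still running, a quick check might look like this (just a sketch; the exact response depends on the app version):

```shell
$ curl -sI http://localhost:8080/swagger-ui.html
```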

Final Thoughts

In this article, I focused on simplifying deployment across multiple Kubernetes clusters as much as possible. We deployed our sample app across all running clusters using a single ApplicationSet CRD. We were able to add managed clusters automatically with a Kyverno policy. Finally, we performed the whole exercise using a single “real” cluster, which hosted several virtual Kubernetes clusters created with the vcluster tool. There is also a very interesting solution dedicated to a similar challenge based on OpenShift GitOps and Advanced Cluster Management for Kubernetes. You can read more about it in my previous article.

The post Manage Multiple Kubernetes Clusters with ArgoCD appeared first on Piotr's TechBlog.

GitOps with Advanced Cluster Management for Kubernetes https://piotrminkowski.com/2022/10/24/gitops-with-advanced-cluster-management-for-kubernetes/ https://piotrminkowski.com/2022/10/24/gitops-with-advanced-cluster-management-for-kubernetes/#respond Mon, 24 Oct 2022 10:41:40 +0000 https://piotrminkowski.com/?p=13656 In this article, you will learn how to manage multiple clusters with Argo CD and Advanced Cluster Management for Kubernetes. Advanced Cluster Management (ACM) for Kubernetes is a tool provided by Red Hat based on a community-driven project Open Cluster Management. I’ll show you how to use it with OpenShift to implement gitops approach for […]

The post GitOps with Advanced Cluster Management for Kubernetes appeared first on Piotr's TechBlog.

In this article, you will learn how to manage multiple clusters with Argo CD and Advanced Cluster Management for Kubernetes. Advanced Cluster Management (ACM) for Kubernetes is a tool provided by Red Hat, based on the community-driven project Open Cluster Management. I’ll show you how to use it with OpenShift to implement the GitOps approach for running apps across multiple clusters. However, you can also deploy the community-driven version on Kubernetes.

If you are not familiar with Argo CD you can read my article about Kubernetes CI/CD with Tekton and ArgoCD available here.

Prerequisites

To run this exercise, you need at least two OpenShift clusters. I’ll run both my clusters on cloud providers: Azure and GCP. The cluster running on Azure will act as a management cluster. It means that we will install ACM and Argo CD there for managing both the local and the remote cluster. We can easily do it using operators provided by Red Hat. Here’s a list of required operators:

advanced-cluster-management-kubernetes-operators

After installing the Advanced Cluster Management for Kubernetes operator, we also need to create the MultiClusterHub CRD object. The OpenShift Console provides a simplified way of creating such objects. We need to go to the operator details and then switch to the “MultiClusterHub” tab. There is a “Create MultiClusterHub” button in the right corner of the page. Just click it and create the object with the default settings. It may take some time until ACM is ready to use.

ACM provides a dashboard UI. We can easily access it through the OpenShift Route (an object similar to the Kubernetes Ingress). Just switch to the open-cluster-management namespace and display a list of routes. The address of the dashboard is https://multicloud-console.apps.<YOUR_DOMAIN>.
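For example, with the oc CLI (the exact route names depend on the ACM version):

```shell
$ oc get routes -n open-cluster-management
```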

The OpenShift GitOps operator automatically creates the Argo CD instance during installation. By default, it is located in the openshift-gitops namespace.

Same as before, we can access the Argo CD dashboard UI through the OpenShift Route.

Add a Remote Cluster

In this section, we will use the ACM dashboard UI to import an existing remote cluster. In order to do that, go to the “Clusters” menu item and then click the “Import cluster” button. There are three different modes of importing a remote cluster. We will choose the method of entering the server URL with an API token. The name of our cluster on GCP is remote-gcp. It will act as a production environment, so we will add the env=prod label.

advanced-cluster-management-kubernetes-import-cluster

You can get the server URL and API token from your OpenShift login command:
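For example, with the oc CLI:

```shell
$ oc whoami --show-server
$ oc whoami -t
```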

After importing a cluster we can display a list of managed clusters in the UI. The local-cluster is labelled with env=test, while the remote-gcp with env=prod.

advanced-cluster-management-kubernetes-list-clusters

All the clusters are represented by ManagedCluster CRD objects. Let’s display them using the oc CLI:

$ oc get managedcluster              
NAME            HUB ACCEPTED   MANAGED CLUSTER URLS                                     JOINED   AVAILABLE   AGE
local-cluster   true           https://api.zyyrwsdd.eastus.aroapp.io:6443               True     True        25h
remote-gcp      true           https://api.cluster-rhf5a9.gcp.redhatworkshops.io:6443   True     True        25h

We can manage clusters individually or organize them as groups. The ManagedClusterSet object can contain many managed clusters. Our sample set contains two clusters used in this article: local-cluster and remote-gcp. Firstly, we need to create the ManagedClusterSet object.

apiVersion: cluster.open-cluster-management.io/v1alpha1
kind: ManagedClusterSet
metadata:
  name: demo
spec: {}

Then we need to label each managed cluster with the cluster.open-cluster-management.io/clusterset label containing the name of the ManagedClusterSet it should belong to. A single ManagedCluster can be a member of only a single ManagedClusterSet.

$ oc label managedcluster remote-gcp cluster.open-cluster-management.io/clusterset=demo
$ oc label managedcluster local-cluster cluster.open-cluster-management.io/clusterset=demo
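We can then verify the membership by filtering on that label:

```shell
$ oc get managedcluster -l cluster.open-cluster-management.io/clusterset=demo
```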

Integrate ACM with Argo CD

In order to integrate Advanced Cluster Management for Kubernetes with OpenShift GitOps, we need to create some CRD objects. In the first step, we need to bind the managed clusters to the target namespace where Argo CD is deployed. The ManagedClusterSetBinding object assigns a ManagedClusterSet to a particular namespace. Our target namespace is openshift-gitops.

apiVersion: cluster.open-cluster-management.io/v1beta1
kind: ManagedClusterSetBinding
metadata:
  name: demo
  namespace: openshift-gitops
spec:
  clusterSet: demo

After that, we need to create the Placement object in the same namespace. A Placement determines which managed clusters each subscription, policy or other definition affects. It can filter by cluster sets or other predicates, for example labels. In our scenario, the placement selects all the managed clusters included in the demo cluster set that have the env label set to test or prod.

apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: demo-gitops-placement
  namespace: openshift-gitops
spec:
  clusterSets:
    - demo
  predicates:
    - requiredClusterSelector:
        labelSelector:
          matchExpressions:
            - key: env
              operator: In
              values:
                - test
                - prod

Let’s verify if the object has been successfully created. Our placement should select two managed clusters.

$ oc get placement -n openshift-gitops
NAME                    SUCCEEDED   REASON                  SELECTEDCLUSTERS
demo-gitops-placement   True        AllDecisionsScheduled   2
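Under the hood, the placement controller also creates a PlacementDecision object listing the selected clusters. We can inspect it as follows (the decision name is generated, so we just list them):

```shell
$ oc get placementdecision -n openshift-gitops
$ oc get placementdecision -n openshift-gitops \
    -o jsonpath='{.items[0].status.decisions}'
```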

Finally, we can create the GitOpsCluster object. It assigns managed clusters to the target instance of Argo CD based on the predicates defined in the demo-gitops-placement Placement.

apiVersion: apps.open-cluster-management.io/v1beta1
kind: GitOpsCluster
metadata:
  name: demo-gitops-cluster
  namespace: openshift-gitops
spec:
  argoServer:
    cluster: local-cluster
    argoNamespace: openshift-gitops
  placementRef:
    kind: Placement
    apiVersion: cluster.open-cluster-management.io/v1beta1
    name: demo-gitops-placement

If everything worked fine you should see the following status message in your GitOpsCluster object:

$ oc get gitopscluster demo-gitops-cluster -n openshift-gitops \
    -o=jsonpath='{.status.message}'
Added managed clusters [local-cluster remote-gcp] to gitops namespace openshift-gitops

Now, we can switch to the Argo CD dashboard. We don’t have any applications there yet, but we just want to verify the list of managed clusters. In the dashboard, choose the “Settings” menu item, then go to the “Clusters” tile. You should see a list similar to the one shown below.

advanced-cluster-management-kubernetes-argocd-clusters

Now, we can manage application deployment across multiple OpenShift clusters with Argo CD. Let’s deploy our first app there.

Deploy App Across Multiple Clusters

As the example, we will use that GitHub repository. It contains several configuration files, but our deployment manifests are available inside the apps/simple directory. We will deploy a simple Spring Boot app from my Docker registry. Here’s our deployment manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-spring-kotlin
spec:
  selector:
    matchLabels:
      app: sample-spring-kotlin
  template:
    metadata:
      labels:
        app: sample-spring-kotlin
    spec:
      containers:
      - name: sample-spring-kotlin
        image: piomin/sample-spring-kotlin:1.4.8
        ports:
        - containerPort: 8080
          name: http

There is also a Kubernetes Service definition in this directory:

apiVersion: v1
kind: Service
metadata:
  name: sample-spring-kotlin
spec:
  type: ClusterIP
  selector:
    app: sample-spring-kotlin
  ports:
  - port: 8080
    name: http

To generate Argo CD Applications for multiple clusters, we will use a feature called the Cluster Decision Resource generator. It is one of the available generators in the Argo CD ApplicationSet project. After we created the Placement object, ACM automatically created the following ConfigMap in the openshift-gitops namespace:

kind: ConfigMap
apiVersion: v1
metadata:
  name: acm-placement
  namespace: openshift-gitops
data:
  apiVersion: cluster.open-cluster-management.io/v1beta1
  kind: placementdecisions
  matchKey: clusterName
  statusListKey: decisions

The ApplicationSet generator will read the resource kind placementdecisions with the apiVersion cluster.open-cluster-management.io/v1beta1, as defined in the ConfigMap shown above. It will attempt to extract the list of clusters from the key decisions (1). Then, it validates the actual cluster name as defined in Argo CD against the value of the key clusterName in each of the elements in the list. The ClusterDecisionResource generator passes the name, server and any other key/value pairs in the resource’s status list as parameters into the ApplicationSet template. Thanks to that, we can use those parameters to set the name of the Argo CD Application (2) and the target cluster (3). The target namespace for our sample app is demo. Let’s create it on all the clusters automatically with Argo CD (4).

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: sample-spring-boot
  namespace: openshift-gitops
spec:
  generators:
    - clusterDecisionResource:
        configMapRef: acm-placement # (1)
        labelSelector:
          matchLabels:
            cluster.open-cluster-management.io/placement: demo-gitops-placement
        requeueAfterSeconds: 180
  template:
    metadata:
      name: sample-spring-boot-{{name}} # (2)
    spec:
      project: default
      source:
        repoURL: https://github.com/piomin/openshift-cluster-config.git
        targetRevision: master
        path: apps/simple
      destination:
        namespace: demo
        server: "{{server}}" # (3)
      syncPolicy:
        automated:
          selfHeal: false
        syncOptions:
          - CreateNamespace=true # (4)

Once you create the ApplicationSet object, Argo CD will create Applications based on the managed clusters list. Since there are two clusters managed by Argo CD, we have two Applications. Here’s the current view of applications in the Argo CD dashboard. Both of them are automatically synchronized to the target clusters.

advanced-cluster-management-kubernetes-argocd-apps

Also, we can switch to the ACM dashboard once again. Then, we should choose the “Applications” menu item. It displays the full list of all running applications. We can filter the applications by name. Here’s the view for our sample-spring-boot application.

advanced-cluster-management-kubernetes-acm-app

We can see the details of each application.

We can also see the topology view:

And, for example, display the logs of pods from the remote clusters in a single place – the ACM dashboard.

Final Thoughts

Many organizations already operate multiple Kubernetes clusters in multiple regions. Unfortunately, operating a distributed, multi-cluster, multi-cloud environment is not a simple task. We need to use the right tools to simplify it. Advanced Cluster Management for Kubernetes is Red Hat’s proposition of such a tool. In this article, I focused on showing you how to apply the GitOps approach to managing application deployment across several clusters. With Red Hat, you can integrate Argo CD with ACM to manage multiple clusters following the GitOps pattern. OpenShift organizes that process from the beginning to the end. I think it shows the added value of OpenShift as an enterprise platform in your organization in comparison to vanilla Kubernetes.

The post GitOps with Advanced Cluster Management for Kubernetes appeared first on Piotr's TechBlog.

Create and Manage Kubernetes Clusters with Cluster API and ArgoCD https://piotrminkowski.com/2021/12/03/create-kubernetes-clusters-with-cluster-api-and-argocd/ https://piotrminkowski.com/2021/12/03/create-kubernetes-clusters-with-cluster-api-and-argocd/#comments Fri, 03 Dec 2021 15:10:50 +0000 https://piotrminkowski.com/?p=10285 In this article, you will learn how to create and manage multiple Kubernetes clusters using Kubernetes Cluster API and ArgoCD. We will create a single, local cluster with Kind. On that cluster, we will provision the process of other Kubernetes clusters creation. In order to perform that process automatically, we will use ArgoCD. Thanks to […]

The post Create and Manage Kubernetes Clusters with Cluster API and ArgoCD appeared first on Piotr's TechBlog.

In this article, you will learn how to create and manage multiple Kubernetes clusters using Kubernetes Cluster API and ArgoCD. We will create a single, local cluster with Kind. On that cluster, we will manage the process of creating other Kubernetes clusters. In order to perform that process automatically, we will use ArgoCD. Thanks to it, we can handle the whole process from a single Git repository. Before we start, let’s do a theoretical brief.

If you are interested in topics related to the Kubernetes multi-clustering you may also read some other articles about it:

  1. Kubernetes Multicluster with Kind and Cilium
  2. Multicluster Traffic Mirroring with Istio and Kind
  3. Kubernetes Multicluster with Kind and Submariner

Introduction

Did you hear about the project called Kubernetes Cluster API? It provides declarative APIs and tools to simplify provisioning, upgrading, and managing multiple Kubernetes clusters. In fact, it is a very interesting concept. We create a single Kubernetes cluster that manages the lifecycle of other clusters. On this cluster, we install Cluster API. And then we just define new workload clusters by creating Cluster API objects. Looks simple? That’s because it is.

Cluster API provides a set of CRDs extending the Kubernetes API. Each of them represents a customization of a Kubernetes cluster installation. I will not get into the details, but if you are interested, you may read more about it here. What is important for us is that it provides a CLI that handles the lifecycle of a Cluster API management cluster. It also allows creating clusters on multiple infrastructures, including AWS, GCP, or Azure. However, today we are going to run the whole infrastructure locally on Docker and Kind. This is also possible with Kubernetes Cluster API, since it supports Docker as an infrastructure provider.

We will use the Cluster API CLI just to initialize the management cluster and generate YAML templates. The whole process will be managed by ArgoCD installed on the management cluster. Argo CD perfectly fits our scenario, since it supports multi-cluster deployments. The instance installed on a single cluster can manage many other clusters that it is able to connect to.

Finally, the last tool used today – Kind. Thanks to it, we can run multiple Kubernetes clusters on the same machine using Docker container nodes. Let’s take a look at the architecture of our solution described in this article.

Architecture with Kubernetes Cluster API and ArgoCD

Here’s a picture of our architecture. The whole infrastructure runs locally on Docker. We install Kubernetes Cluster API and ArgoCD on the management cluster. Then, using both those tools, we create new clusters with Kind. After that, we apply some Kubernetes objects to the workload clusters (c1, c2), like Namespace, ResourceQuota or LimitRange. Of course, the whole process is managed by the Argo CD instance, and the configuration is stored in the Git repository.

kubernetes-cluster-api-argocd-arch

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. After that, you should just follow my instructions. Let’s begin.

Create Management Cluster with Kind and Cluster API

In the first step, we are going to create a management cluster on Kind. You need to have Docker, kubectl and kind installed on your machine to do this exercise by yourself. Because we use the Docker infrastructure to run Kubernetes workload clusters, Kind must have access to the Docker host. Here’s the definition of the Kind cluster. Let’s say the name of the file is mgmt-cluster-config.yaml:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
    - hostPath: /var/run/docker.sock
      containerPath: /var/run/docker.sock

Now, let’s just apply the configuration visible above when creating a new cluster with Kind:

$ kind create cluster --config mgmt-cluster-config.yaml --name mgmt

If everything goes fine you should see a similar output. After that, your Kubernetes context is automatically set to kind-mgmt.

Then, we need to initialize the management cluster. In other words, we have to install Cluster API on our Kind cluster. In order to do that, we first need to install the Cluster API CLI on the local machine. On macOS, I can use the brew install clusterctl command. Once clusterctl has been successfully installed, I can run the following command:

$ clusterctl init --infrastructure docker

The result should be similar to the following. Maybe, without this timeout 🙂 I’m not sure why it happens, but it doesn’t have any negative impact on the next steps.

kubernetes-cluster-api-argocd-init-mgmt

Once we have successfully initialized the management cluster, we may verify it. Let’s display, e.g., a list of namespaces. There are five new namespaces created by the Cluster API.

$ kubectl get ns
NAME                                STATUS   AGE
capd-system                         Active   3m37s
capi-kubeadm-bootstrap-system       Active   3m42s
capi-kubeadm-control-plane-system   Active   3m40s
capi-system                         Active   3m44s
cert-manager                        Active   4m8s
default                             Active   12m
kube-node-lease                     Active   12m
kube-public                         Active   12m
kube-system                         Active   12m
local-path-storage                  Active   12m

Also, let’s display a list of all pods. All the pods created inside the new namespaces should be in the Running state.

$ kubectl get pod -A

We can also display a list of installed CRDs. Anyway, the Kubernetes Cluster API is running on the management cluster, and we can proceed to the next steps.
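For example, we can filter the Cluster API CRDs by their API group:

```shell
$ kubectl get crds | grep cluster.x-k8s.io
```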

Install Argo CD on the management Kubernetes cluster

I will install Argo CD in the default namespace, but you can also create an argocd namespace and install it there (following the Argo CD documentation).

$ kubectl apply -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

Then, let’s just verify the installation:

$ kubectl get pod
NAME                                  READY   STATUS    RESTARTS   AGE
argocd-application-controller-0       1/1     Running   0          63s
argocd-dex-server-6dcf645b6b-6dlk9    1/1     Running   0          63s
argocd-redis-5b6967fdfc-vg5k6         1/1     Running   0          63s
argocd-repo-server-7598bf5999-96mh5   1/1     Running   0          63s
argocd-server-79f9bc9b44-d6c8q        1/1     Running   0          63s

As you probably know, Argo CD provides a web UI for management. To access it on the local port (8080), I will run the kubectl port-forward command:

$ kubectl port-forward svc/argocd-server 8080:80

Now, the UI is available under http://localhost:8080. To log in there, you need to find the Kubernetes Secret argocd-initial-admin-secret and decode the password. The username is admin. You can easily decode secrets using, for example, Lens – an advanced Kubernetes IDE. For now, just log in there. We will come back to the Argo CD UI later.
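If you prefer the CLI over Lens, you can decode the initial password with kubectl and base64 (the secret lives in the namespace where Argo CD was installed, which is the default namespace in our case):

```shell
$ kubectl get secret argocd-initial-admin-secret \
    -o jsonpath='{.data.password}' | base64 -d
```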

Create Kubernetes cluster with Cluster API and ArgoCD

We will use the clusterctl CLI to generate YAML manifests with a declaration of a new Kubernetes cluster. To do that we need to run the following command. It will generate and save the manifest into the c1-clusterapi.yaml file.

$ clusterctl generate cluster c1 --flavor development \
  --infrastructure docker \
  --kubernetes-version v1.21.1 \
  --control-plane-machine-count=3 \
  --worker-machine-count=3 \
  > c1-clusterapi.yaml

Our c1 cluster consists of three master and three worker nodes. Following the Cluster API documentation, we would have to apply the generated manifests to the management cluster. However, we are going to use ArgoCD to automatically apply the Cluster API manifests stored in the Git repository to Kubernetes. So, let’s create a manifest with Cluster API objects in the Git repository. To simplify the process, I will use Helm templates. Because there are two clusters to create, we have two Argo CD applications that use the same template with different parameters. Ok, so here’s the Helm template based on the manifest generated in the previous step. You can find it in our sample Git repository under the path /mgmt/templates/cluster-api-template.yaml.

apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: {{ .Values.cluster.name }}
  namespace: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - 192.168.0.0/16
    serviceDomain: cluster.local
    services:
      cidrBlocks:
        - 10.128.0.0/12
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: {{ .Values.cluster.name }}-control-plane
    namespace: default
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: DockerCluster
    name: {{ .Values.cluster.name }}
    namespace: default
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: DockerCluster
metadata:
  name: {{ .Values.cluster.name }}
  namespace: default
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: DockerMachineTemplate
metadata:
  name: {{ .Values.cluster.name }}-control-plane
  namespace: default
spec:
  template:
    spec:
      extraMounts:
        - containerPath: /var/run/docker.sock
          hostPath: /var/run/docker.sock
---
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: {{ .Values.cluster.name }}-control-plane
  namespace: default
spec:
  kubeadmConfigSpec:
    clusterConfiguration:
      apiServer:
        certSANs:
          - localhost
          - 127.0.0.1
      controllerManager:
        extraArgs:
          enable-hostpath-provisioner: "true"
    initConfiguration:
      nodeRegistration:
        criSocket: /var/run/containerd/containerd.sock
        kubeletExtraArgs:
          cgroup-driver: cgroupfs
          eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
    joinConfiguration:
      nodeRegistration:
        criSocket: /var/run/containerd/containerd.sock
        kubeletExtraArgs:
          cgroup-driver: cgroupfs
          eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
  machineTemplate:
    infrastructureRef:
      apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
      kind: DockerMachineTemplate
      name: {{ .Values.cluster.name }}-control-plane
      namespace: default
  replicas: {{ .Values.cluster.masterNodes }}
  version: {{ .Values.cluster.version }}
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: DockerMachineTemplate
metadata:
  name: {{ .Values.cluster.name }}-md-0
  namespace: default
spec:
  template:
    spec: {}
---
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
  name: {{ .Values.cluster.name }}-md-0
  namespace: default
spec:
  template:
    spec:
      joinConfiguration:
        nodeRegistration:
          kubeletExtraArgs:
            cgroup-driver: cgroupfs
            eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: {{ .Values.cluster.name }}-md-0
  namespace: default
spec:
  clusterName: {{ .Values.cluster.name }}
  replicas: {{ .Values.cluster.workerNodes }}
  selector:
    matchLabels: null
  template:
    spec:
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: KubeadmConfigTemplate
          name: {{ .Values.cluster.name }}-md-0
          namespace: default
      clusterName: {{ .Values.cluster.name }}
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: DockerMachineTemplate
        name: {{ .Values.cluster.name }}-md-0
        namespace: default
      version: {{ .Values.cluster.version }}

We can parameterize four properties related to cluster creation: name of the cluster, number of master and worker nodes, or a version of Kubernetes. Since we use Helm for that, we just need to create the values.yaml file containing values of those parameters in YAML format. Here’s the values.yaml file for the first cluster. You can find it in the sample Git repository under the path /mgmt/values-c1.yaml.

cluster:
  name: c1
  masterNodes: 3
  workerNodes: 3
  version: v1.21.1

Here’s the same configuration for the second cluster. As you see, there is a single master node and a single worker node. You can find it in the sample Git repository under the path /mgmt/values-c2.yaml.

cluster:
  name: c2
  masterNodes: 1
  workerNodes: 1
  version: v1.21.1

Create Argo CD applications

Since Argo CD supports Helm, we just need to set the right values.yaml file in the configuration of the ArgoCD application. Besides that, we also need to set the address of our Git configuration repository and the directory with the manifests inside the repository. All the configuration for the management cluster is stored inside the mgmt directory.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: c1-cluster-create
spec:
  destination:
    name: ''
    namespace: ''
    server: 'https://kubernetes.default.svc'
  source:
    path: mgmt
    repoURL: 'https://github.com/piomin/sample-kubernetes-cluster-api-argocd.git'
    targetRevision: HEAD
    helm:
      valueFiles:
        - values-c1.yaml
  project: default

Here’s a similar declaration for the second cluster:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: c2-cluster-create
spec:
  destination:
    name: ''
    namespace: ''
    server: 'https://kubernetes.default.svc'
  source:
    path: mgmt
    repoURL: 'https://github.com/piomin/sample-kubernetes-cluster-api-argocd.git'
    targetRevision: HEAD
    helm:
      valueFiles:
        - values-c2.yaml
  project: default

Argo CD requires privileges to manage Cluster API objects. Just to simplify, let’s add the cluster-admin role to the argocd-application-controller ServiceAccount used by Argo CD.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-admin-argocd-contoller
subjects:
  - kind: ServiceAccount
    name: argocd-application-controller
    namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin

After creating applications in Argo CD you may synchronize them manually (or enable the auto-sync option). It begins the process of creating workload clusters by the Cluster API tool.

kubernetes-cluster-api-argocd-ui

Verify Kubernetes clusters using Cluster API CLI

After performing synchronization with Argo CD we can verify a list of available Kubernetes clusters. To do that just use the following kind command:

$ kind get clusters
c1
c2
mgmt

As you see, there are three running clusters! Kubernetes Cluster API installed on the management cluster has created two other clusters based on the configuration applied by Argo CD. To check if everything went fine, we may use the clusterctl describe command. After executing this command, you would probably get a result similar to the one visible below.

The control plane is not ready: this behavior is described in the Cluster API documentation. We need to install a CNI provider on our workload clusters in the next step. The Cluster API documentation suggests installing Calico as the CNI plugin. We will do it, but before that we need to switch to the kind-c1 and kind-c2 contexts. Of course, they were not created on our local machine by the Cluster API, so first we need to export them to our Kubeconfig file. Let’s do that for both workload clusters.

$ kind export kubeconfig --name c1
$ kind export kubeconfig --name c2

I’m not sure why, but it exports contexts with 0.0.0.0 as the address of the clusters. So in the next step, I also had to edit my Kubeconfig file and change this address to 127.0.0.1. Now, I can connect to both clusters using kubectl from my local machine.
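The replacement can be scripted with sed. Below is a minimal sketch run against a simulated kubeconfig fragment (the file path and port number are hypothetical); against the real file you would edit ~/.kube/config in place instead:

```shell
# Simulate the relevant fragment of the exported kubeconfig.
cat > /tmp/sample-kubeconfig <<'EOF'
clusters:
- cluster:
    server: https://0.0.0.0:52856
  name: kind-c1
EOF

# Rewrite the 0.0.0.0 address to 127.0.0.1 and show the result.
sed 's/0\.0\.0\.0/127.0.0.1/' /tmp/sample-kubeconfig > /tmp/sample-kubeconfig.fixed
grep server /tmp/sample-kubeconfig.fixed
```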

And install Calico CNI on both clusters.

$ kubectl apply -f https://docs.projectcalico.org/v3.20/manifests/calico.yaml --context kind-c1
$ kubectl apply -f https://docs.projectcalico.org/v3.20/manifests/calico.yaml --context kind-c2

I could also automate that step in Argo CD. But for now, I just want to finish the installation. In the next section, I’m going to describe how to manage both these clusters using Argo CD running on the management cluster. Now, if you verify the status of both clusters using the clusterctl describe command, it looks perfectly fine.

kubernetes-cluster-api-argocd-cli

Managing workload clusters with ArgoCD

In the previous section, we successfully created two Kubernetes clusters using the Cluster API tool and ArgoCD. To clarify, all the Kubernetes objects required to perform that operation were created on the management cluster. Now, we would like to apply a simple configuration, visible below, to both our workload clusters. Of course, we will also use the same instance of Argo CD for it.

apiVersion: v1
kind: Namespace
metadata:
  name: demo
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: demo-quota
  namespace: demo
spec:
  hard:
    pods: '10'
    requests.cpu: '1'
    requests.memory: 1Gi
    limits.cpu: '2'
    limits.memory: 4Gi
---
apiVersion: v1
kind: LimitRange
metadata:
  name: demo-limitrange
  namespace: demo
spec:
  limits:
    - default:
        memory: 512Mi
        cpu: 500m
      defaultRequest:
        cpu: 100m
        memory: 128Mi
      type: Container
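To illustrate what the LimitRange above does: any container created in the demo namespace without a resources section gets the default requests and limits injected at admission time. A minimal sketch (the image name is just an example):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
  namespace: demo
spec:
  containers:
    - name: app
      image: nginx:alpine   # example image, not part of the original setup
      # no resources section: the LimitRange injects
      #   requests: cpu=100m, memory=128Mi
      #   limits:   cpu=500m, memory=512Mi
      # and the pod counts against the demo-quota ResourceQuota
```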

Unfortunately, there is no built-in integration between Argo CD and the Kubernetes Cluster API. Although Cluster API creates a secret containing the Kubeconfig file for each created cluster, Argo CD is not able to recognize it and automatically add such a cluster to its list of managed clusters. If you are interested in more details, there is an interesting discussion about it here. Anyway, the goal for now is to add both our workload clusters to the list of clusters managed by the global instance of Argo CD running on the management cluster. To do that, we first need to log in to Argo CD, using the same credentials and URL as for the web UI.

$ argocd login localhost:8080

Now, we just need to run the following commands, assuming we have already exported both Kubernetes contexts to our local Kubeconfig file:

$ argocd cluster add kind-c1
$ argocd cluster add kind-c2

If you run Docker on macOS or Windows, it is not quite that simple. You need to use the internal Docker address of your cluster. Cluster API creates secrets containing the Kubeconfig file for all created clusters, so we can use them to find the internal address of the Kubernetes API. Here's a list of secrets for our workload clusters:

$ kubectl get secrets | grep kubeconfig
c1-kubeconfig                               cluster.x-k8s.io/secret               1      85m
c2-kubeconfig                               cluster.x-k8s.io/secret               1      57m

We can obtain the internal address after decoding a particular secret. For example, the internal address of my c1 cluster is 172.20.0.3.
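Assuming the Cluster API kubeconfig secret stores the file under the value key, the server address can be extracted with a one-liner. Against the management cluster it would look like the commented command below; the rest of the snippet simulates just the decoding step with a sample payload, so it can be run anywhere.

```shell
# Against the management cluster (requires kubectl access):
#   kubectl get secret c1-kubeconfig -o jsonpath='{.data.value}' | base64 -d | grep server
# Simulated version of the same decode step with a sample payload:
ENCODED=$(printf 'server: https://172.20.0.3:6443\n' | base64)
echo "$ENCODED" | base64 -d | grep server
```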

Under the hood, Argo CD creates a secret for each of the managed clusters. It is recognized based on the label argocd.argoproj.io/secret-type: cluster.

apiVersion: v1
kind: Secret
metadata:
  name: c1-cluster-secret
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: c1
  server: https://172.20.0.3:6443
  config: |
    {
      "tlsClientConfig": {
        "insecure": false,
        "caData": "<base64 encoded certificate>",
        "certData": "<base64 encoded certificate>",
        "keyData": "<base64 encoded key>"
      }
    }

If you added all your clusters successfully, you should see the following list in the Clusters section on your Argo CD instance.

Create Argo CD application for workload clusters

Finally, let’s create Argo CD applications for managing configuration on both workload clusters.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: c1-cluster-config
spec:
  project: default
  source:
    repoURL: 'https://github.com/piomin/sample-kubernetes-cluster-api-argocd.git'
    path: workload
    targetRevision: HEAD
  destination:
    server: 'https://172.20.0.3:6443'

And similarly to apply the configuration on the second cluster:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: c2-cluster-config
spec:
  project: default
  source:
    repoURL: 'https://github.com/piomin/sample-kubernetes-cluster-api-argocd.git'
    path: workload
    targetRevision: HEAD
  destination:
    server: 'https://172.20.0.10:6443'

Once you have created both applications in Argo CD, synchronize them.

And finally, let's verify that the configuration has been successfully applied to the target clusters.

The post Create and Manage Kubernetes Clusters with Cluster API and ArgoCD appeared first on Piotr's TechBlog.
