OpenShift Multicluster with Advanced Cluster Management for Kubernetes and Submariner

This article will teach you how to connect multiple OpenShift clusters with Submariner and Advanced Cluster Management for Kubernetes. Submariner allows us to configure direct networking between pods and services in different Kubernetes clusters, whether on-premises or in the cloud. It operates at the L3 layer, establishing a secure tunnel between clusters and providing service discovery. I have already described how to install and manage it on Kubernetes, mostly with the subctl CLI, in the following article.

Today we will focus on the integration between Submariner and OpenShift through Advanced Cluster Management for Kubernetes (ACM). ACM is a tool dedicated to OpenShift. It allows us to control clusters and applications from a single console, with built-in security policies. You can find several articles about it on my blog. For example, the following one describes how to use ACM together with Argo CD in the GitOps approach.

Source Code

This time we won’t work much with source code. However, if you would like to try it out yourself, you can always take a look at my source code. In order to do that, you need to clone my GitHub repository. After that, you should follow my further instructions.

Architecture

Our architecture consists of three OpenShift clusters: a single hub cluster and two managed clusters. The hub cluster is responsible for creating the managed clusters and establishing a secure connection between them using Submariner. So, in the initial state, there is just a hub cluster with the Advanced Cluster Management for Kubernetes (ACM) operator installed on it. With ACM we will create two new OpenShift clusters on the target infrastructure (AWS) and install Submariner on them. Finally, we are going to deploy two sample Spring Boot apps. The callme-service app exposes a single GET /callme/ping endpoint and runs on ocp2. We will expose it through Submariner to the ocp1 cluster. On the ocp1 cluster, there is a second app, caller-service, that invokes the endpoint exposed by the callme-service app. Here’s the diagram of our architecture.

[Image: openshift-submariner-arch]

Install Advanced Cluster Management on OpenShift

In the first step, we must install Advanced Cluster Management for Kubernetes (ACM) on OpenShift using an operator. The default installation namespace is open-cluster-management. We won’t change it.

Once we install the operator, we have to initialize ACM by creating the MultiClusterHub object. Once again, we will use the open-cluster-management namespace for that. Here’s the object declaration. We don’t need to specify any more advanced settings.

apiVersion: operator.open-cluster-management.io/v1
kind: MultiClusterHub
metadata:
  name: multiclusterhub
  namespace: open-cluster-management
spec: {}

We can do the same thing graphically in the OpenShift Dashboard. Just click the “Create MultiClusterHub” button and then accept the action on the next page. It will probably take some time to complete the installation since there are several pods to run.
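We can also apply the manifest and track the installation progress from the CLI. A minimal sketch, assuming the declaration above is saved as multiclusterhub.yaml (a hypothetical file name):

$ oc apply -f multiclusterhub.yaml
$ oc get multiclusterhub -n open-cluster-management

Once the installation completes, the object should report the Running phase.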

[Image: openshift-submariner-acm]

Once the installation is completed, you will see a new menu item at the top of the dashboard that allows you to switch to the “All Clusters” view. Let’s do it. After that, we can proceed to the next step.

Create OpenShift Clusters with ACM

Advanced Cluster Management for Kubernetes allows us to import existing clusters or create new ones on the target infrastructure. In this exercise, you will see how to leverage a cloud provider account for that. Let’s just click the “Connect your cloud provider” tile on the welcome screen.

Provide Cloud Credentials

I’m using my already existing account on AWS. ACM will ask us to provide the appropriate credentials for the AWS account. In the first form, we should provide the name and namespace of our credentials Secret and a default base DNS domain.

[Image: openshift-submariner-cluster-create]

Then, the ACM wizard will redirect us to the next steps. We have to provide the AWS access key ID and secret, the OpenShift pull secret, and also the SSH private/public keys. Of course, we can create the required Kubernetes Secret without the wizard, just by applying a similar YAML manifest:

apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: aws
  namespace: open-cluster-management
  labels:
    cluster.open-cluster-management.io/type: aws
    cluster.open-cluster-management.io/credentials: ""
stringData:
  aws_access_key_id: AKIAXBLSZLXZJWT3KFPM
  aws_secret_access_key: "********************"
  baseDomain: sandbox2746.opentlc.com
  pullSecret: "********************"
  ssh-privatekey: "********************"
  ssh-publickey: "********************"
  httpProxy: ""
  httpsProxy: ""
  noProxy: ""
  additionalTrustBundle: ""

Provision the Cluster

After that, we can prepare the ACM cluster set. The cluster set feature allows us to group OpenShift clusters. It is a required prerequisite for the Submariner installation. Here’s the ManagedClusterSet object. The name is arbitrary; we can set it, e.g., to submariner.

apiVersion: cluster.open-cluster-management.io/v1beta2
kind: ManagedClusterSet
metadata:
  name: submariner
spec: {}

Finally, we can create two OpenShift clusters on AWS from the ACM dashboard. Go to Infrastructure -> Clusters -> Cluster list and click the “Create cluster” button. Then, let’s choose the “Amazon Web Services” tile with the previously created credentials.

In the “Cluster Details” form we should set the name (ocp1 and then ocp2 for the second cluster) and version of the OpenShift cluster (the “Release image” field). We should also assign it to the submariner cluster set.

Let’s take a look at the “Networking” form. We intentionally won’t change anything here. We will set the same IP address ranges for both the ocp1 and ocp2 clusters. By default, Submariner requires non-overlapping Pod and Service CIDRs between the interconnected clusters. This approach prevents routing conflicts. We are going to break those rules, which results in conflicts between the internal IP addresses of the ocp1 and ocp2 clusters. We will see how Submariner helps to resolve such an issue.

It will take around 30-40 minutes to create both clusters. ACM will connect directly to our AWS account and create all the required resources there. As a result, our environment is ready. Let’s take a look at how it looks from the ACM dashboard perspective:

[Image: openshift-submariner-clusters]

There is a single management (hub) cluster and two managed clusters. Both managed clusters are assigned to the submariner cluster set. If you have the same result as me, you can proceed to the next step.

Enable Submariner for OpenShift clusters with ACM

Install in the Target Managed Cluster Set

Submariner is available on OpenShift in the form of an add-on to ACM. As I mentioned before, it requires ACM ManagedClusterSet objects for grouping the clusters that should be connected. In order to enable Submariner for a specific cluster set, we need to view its details and switch to the “Submariner add-ons” tab. Then, we need to click the “Install Submariner add-ons” button. In the installation form, we have to choose the target clusters and enable the “Globalnet” feature to resolve the issue related to the overlapping Pod and Service CIDRs. The default value of the “Globalnet” CIDR is 242.0.0.0/8. If that’s fine for us, we can leave the text field empty and proceed to the next step.

[Image: openshift-submariner-install]

In the next form, we are configuring the Submariner installation in each OpenShift cluster. We don’t have to change any value there. ACM will create an additional node on each OpenShift cluster using the c5d.large VM type. It will use that node for installing Multus CNI. Multus is a CNI plugin for Kubernetes that enables attaching multiple network interfaces to pods. It is responsible for enabling the Submariner “Globalnet” feature and assigning a subnet from this virtual global private network, configured as a new cluster parameter, GlobalCIDR. We will run a single instance of the Submariner gateway and leave the default libreswan cable driver.

Of course, we can also provide that configuration as YAML manifests. With that approach, we need to create the ManagedClusterAddOn and SubmarinerConfig objects for both the ocp1 and ocp2 clusters (in their namespaces on the hub) through the ACM engine. The Submariner Broker object has to be created on the hub cluster.

apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
  name: submariner
  namespace: ocp2
spec:
  installNamespace: submariner-operator
---
apiVersion: submarineraddon.open-cluster-management.io/v1alpha1
kind: SubmarinerConfig
metadata:
  name: submariner
  namespace: ocp2
spec:
  gatewayConfig:
    gateways: 1
    aws:
      instanceType: c5d.large
  IPSecNATTPort: 4500
  airGappedDeployment: false
  NATTEnable: true
  cableDriver: libreswan
  globalCIDR: ""
  credentialsSecret:
    name: ocp2-aws-creds
---
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
  name: submariner
  namespace: ocp1
spec:
  installNamespace: submariner-operator
---
apiVersion: submarineraddon.open-cluster-management.io/v1alpha1
kind: SubmarinerConfig
metadata:
  name: submariner
  namespace: ocp1
spec:
  gatewayConfig:
    gateways: 1
    aws:
      instanceType: c5d.large
  IPSecNATTPort: 4500
  airGappedDeployment: false
  NATTEnable: true
  cableDriver: libreswan
  globalCIDR: ""
  credentialsSecret:
    name: ocp1-aws-creds
---
apiVersion: submariner.io/v1alpha1
kind: Broker
metadata:
  name: submariner-broker
  namespace: submariner-broker
  labels:
    cluster.open-cluster-management.io/backup: submariner
spec:
  globalnetEnabled: true
  globalnetCIDRRange: 242.0.0.0/8

Verify the Status of Submariner Network

After installing the Submariner Add-on in the target cluster set, you should see the same statuses for both ocp1 and ocp2 clusters.

[Image: openshift-submariner-status]

Assuming that you are logged in to all the clusters with the oc CLI, we can check the detailed status of the Submariner network with the subctl CLI. In order to do that, we should execute the following command:

$ subctl show all

It examines all the clusters one after the other and prints all the key Submariner components installed there. Let’s begin with the command output for the hub cluster. As you see, it runs the Submariner Broker component in the submariner-broker namespace.

Here’s the output for the ocp1 managed cluster. The global CIDR for that cluster is 242.1.0.0/16. This IP range will be used for exposing services to other clusters inside the same Submariner network.

On the other hand, here’s the output for the ocp2 managed cluster. The global CIDR for that cluster is 242.0.0.0/16. The connection between the ocp1 and ocp2 clusters is established, so we can proceed to the last step of our exercise. Let’s run the sample apps on our OpenShift clusters!

Export App to the Remote Cluster

Since we have already installed Submariner on both OpenShift clusters, we can deploy our sample applications. Let’s begin with caller-service. We will run it in the demo-apps namespace. Make sure you are in the ocp1 Kube context. Here’s the YAML manifest with the Deployment and Service definitions for our app:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: caller-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: caller-service
  template:
    metadata:
      name: caller-service
      labels:
        app: caller-service
    spec:
      containers:
      - name: caller-service
        image: piomin/caller-service
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
        env:
          - name: VERSION
            value: "v1"
---
apiVersion: v1
kind: Service
metadata:
  name: caller-service
  labels:
    app: caller-service
spec:
  type: ClusterIP
  ports:
    - port: 8080
      name: http
  selector:
    app: caller-service

Then go to the caller-service directory and deploy the application using Skaffold as shown below. We can also expose the service outside the cluster using the OpenShift Route object:

$ cd caller-service
$ oc project demo-apps
$ skaffold run
$ oc expose svc/caller-service

Let’s switch to the callme-service app. Make sure you are in the ocp2 Kube context. Here’s the YAML manifest with the Deployment and Service definitions for our second app:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: callme-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: callme-service
  template:
    metadata:
      labels:
        app: callme-service
    spec:
      containers:
        - name: callme-service
          image: piomin/callme-service
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          env:
            - name: VERSION
              value: "v1"
---
apiVersion: v1
kind: Service
metadata:
  name: callme-service
  labels:
    app: callme-service
spec:
  type: ClusterIP
  ports:
  - port: 8080
    name: http
  selector:
    app: callme-service

Once again, we can deploy the app on OpenShift using Skaffold.

$ cd callme-service
$ oc project demo-apps
$ skaffold run

This time, instead of exposing the service outside of the cluster, we will export it to the Submariner network. Thanks to that, the caller-service app will be able to call it directly through the IPsec tunnel established between the clusters. We can do it using the subctl CLI command:

$ subctl export service callme-service
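If your current Kube context is not already set to the demo-apps namespace, subctl also accepts an explicit namespace flag, e.g.:

$ subctl export service callme-service --namespace demo-apps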

That command creates the ServiceExport object, a CRD provided by the Submariner operator. We can apply the following YAML definition as well:

apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: callme-service
  namespace: demo-apps

We can verify that everything turned out okay by checking the ServiceExport object status.
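The original status screenshot is not reproduced here, but a quick way to inspect it from the CLI (assuming the demo-apps namespace) is:

$ oc get serviceexport callme-service -n demo-apps -o yaml

The status conditions should indicate that the export was valid and synced to the broker.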

Submariner creates an additional Kubernetes Service with an IP address from the “Globalnet” CIDR pool to avoid overlapping Service IPs.

Then, let’s switch to the ocp1 cluster. After exporting the Service from the ocp2 cluster, Submariner automatically creates the ServiceImport object on the connected clusters.

apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceImport
metadata:
  name: callme-service
  namespace: demo-apps
spec:
  ports:
    - name: http
      port: 8080
      protocol: TCP
  type: ClusterSetIP
status:
  clusters:
    - cluster: ocp2

Submariner exposes services in the clusterset.local domain. So, our service is now available at callme-service.demo-apps.svc.clusterset.local. We can verify it by executing a curl command inside the caller-service container. As you see, it uses the external IP address allocated by Submariner within the “Globalnet” subnet.
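The exact command is not included in the original post; a minimal sketch, assuming curl is available in the caller-service image:

$ oc exec deploy/caller-service -n demo-apps -- \
    curl -s http://callme-service.demo-apps.svc.clusterset.local:8080/callme/ping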

Here’s the implementation of the @RestController responsible for handling requests coming to the caller-service app. As you see, it uses the Spring RestTemplate client to call the remote service using the callme-service.demo-apps.svc.clusterset.local URL provided by Submariner.

@RestController
@RequestMapping("/caller")
public class CallerController {

   private static final Logger LOGGER = LoggerFactory
      .getLogger(CallerController.class);

   @Autowired
   Optional<BuildProperties> buildProperties;
   @Autowired
   RestTemplate restTemplate;
   @Value("${VERSION}")
   private String version;

   @GetMapping("/ping")
   public String ping() {
      LOGGER.info("Ping: name={}, version={}", buildProperties.or(Optional::empty), version);
      String response = restTemplate
         .getForObject("http://callme-service.demo-apps.svc.clusterset.local:8080/callme/ping", String.class);
      LOGGER.info("Calling: response={}", response);
      return "I'm caller-service " + version + ". Calling... " + response;
   }
}

Let’s just make a final test using the OpenShift caller-service Route and the GET /caller/ping endpoint. As you can see, it calls the callme-service app successfully through the Submariner tunnel.

[Image: openshift-submariner-test]

Final Thoughts

In this article, we analyzed a scenario where we interconnect two OpenShift clusters with overlapping CIDRs. I also showed you how to leverage the ACM dashboard to simplify the installation and configuration of Submariner on the managed clusters. It is worth mentioning that there are some other ways to interconnect multiple OpenShift clusters. For example, we can use Red Hat Service Interconnect, based on the open-source Skupper project. In order to read more about it, you can refer to the following article on my blog.

Handle Traffic Bursts with Ephemeral OpenShift Clusters

This article will teach you how to handle temporary traffic bursts with ephemeral OpenShift clusters provisioned in the public cloud. Such a solution should work in a fully automated way. Once we deal with unexpected or sudden peaks in network traffic volume, we must forward part of that traffic to another cluster. Such a cluster is called “ephemeral” since it works just for a specified period, until the unexpected situation ends. Of course, we should be able to use the ephemeral OpenShift cluster as soon as possible after the event occurs. On the other hand, we don’t want to pay for it when it is unnecessary.

In this article, I’ll show how you can achieve all the described things with the GitOps (Argo CD) approach and several tools around OpenShift/Kubernetes, like Kyverno or Red Hat Service Interconnect (the open-source Skupper project). We will also use Advanced Cluster Management for Kubernetes (ACM) to create and handle “ephemeral” OpenShift clusters. If you need an introduction to the GitOps approach in a multicluster OpenShift environment, read the following article. It is also worth familiarizing yourself with the idea behind multicluster communication through the Skupper project. In order to do that, you can read the article about multicluster load balancing with Skupper on my blog.

Source Code

If you would like to try it by yourself, you can always take a look at my source code. In order to do that, you need to clone my GitHub repository. It contains several YAML manifests that allow us to manage OpenShift clusters in a GitOps way. For this exercise, we will use the manifests under the clusterpool directory. There are two subdirectories there: hub and managed. The manifests inside the hub directory should be applied to the management cluster, while the manifests inside the managed directory go to the managed cluster. In our traffic bursts scenario, a single OpenShift cluster acts as both the hub and a managed cluster, and it creates another managed (ephemeral) cluster.

Prerequisites

In order to start the exercise, we need a running OpenShift cluster that acts as the management cluster. It will create and configure the ephemeral cluster on AWS used to handle traffic volume peaks. In the first step, we need to install two operators on the management cluster: “OpenShift GitOps” and “Advanced Cluster Management for Kubernetes”.

[Image: traffic-bursts-openshift-operators]

After that, we have to create the MultiClusterHub object, which runs and configures ACM:

kind: MultiClusterHub
apiVersion: operator.open-cluster-management.io/v1
metadata:
  name: multiclusterhub
  namespace: open-cluster-management
spec: {}

We also need to install Kyverno. Since there is no official operator for it, we have to leverage the Helm chart. Firstly, let’s add the following Helm repository:

$ helm repo add kyverno https://kyverno.github.io/kyverno/

Then, we can install the latest version of Kyverno in the kyverno namespace using the following command:

$ helm install my-kyverno kyverno/kyverno -n kyverno --create-namespace
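To confirm the release is up before moving on, a quick check with standard Helm and oc commands:

$ helm list -n kyverno
$ oc get pods -n kyverno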

By the way, the OpenShift Console provides built-in support for Helm. In order to use it, you need to switch to the Developer perspective. Then, click the Helm menu and choose the Create -> Repository option. Once you do it, you will be able to create a new Helm release of Kyverno.

Using OpenShift Cluster Pool

With ACM we can create a pool of OpenShift clusters. That pool contains running or hibernated clusters. While a running cluster is just ready to work, a hibernated cluster needs to be resumed by ACM. We define a pool size and the number of running clusters inside that pool. Once we create the ClusterPool object, ACM starts to provision new clusters on AWS. In our case, the pool size is 1, but the number of running clusters is 0. The object declaration also contains all the things required to create a new cluster, like the installation template (the aws-install-config Secret) or the AWS account credentials reference (the aws-aws-creds Secret). Each cluster within that pool is automatically assigned to the interconnect ManagedClusterSet. The cluster set approach allows us to group multiple OpenShift clusters.

apiVersion: hive.openshift.io/v1
kind: ClusterPool
metadata:
  name: aws
  namespace: aws
  labels:
    cloud: AWS
    cluster.open-cluster-management.io/clusterset: interconnect
    region: us-east-1
    vendor: OpenShift
spec:
  baseDomain: sandbox449.opentlc.com
  imageSetRef:
    name: img4.12.36-multi-appsub
  installConfigSecretTemplateRef:
    name: aws-install-config
  platform:
    aws:
      credentialsSecretRef:
        name: aws-aws-creds
      region: us-east-1
  pullSecretRef:
    name: aws-pull-secret
  size: 1

So, as a result, there is only one cluster in the pool. ACM keeps that cluster in the hibernated state. It means that all the VMs with master and worker nodes are stopped. In order to resume the hibernated cluster, we need to create a ClusterClaim object that refers to the ClusterPool. It is equivalent to clicking the Claim cluster link visible below. However, we don’t want to create that object directly, but as a reaction to a Kubernetes event.
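For reference, the ClusterClaim that Kyverno will generate later in this exercise is equivalent to applying a manifest like this one manually (a sketch derived from the policy shown below):

apiVersion: hive.openshift.io/v1
kind: ClusterClaim
metadata:
  name: aws
  namespace: aws
spec:
  clusterPoolName: aws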

[Image: traffic-bursts-openshift-cluster-pool]

Before we proceed, let’s just take a look at the list of virtual machines on AWS related to our cluster. As you see, they are not running.

Claim Cluster From the Pool on Scaling Event

Now, the question is: what kind of event should result in getting a cluster from the pool? A single app could rely on the scaling event. So, once the number of deployment pods exceeds the assumed threshold, we will resume a hibernated cluster and run the app there. With Kyverno we can react to such scaling events by creating a ClusterPolicy object. As you see, our policy monitors the Deployment/scale resource. The assumed maximum number of allowed pods for our app on the main cluster is 4. We need to put such a value in the preconditions together with the Deployment name. Once all the conditions are met, we generate a new Kubernetes resource. That resource is the ClusterClaim, which refers to the ClusterPool we created in the previous section. It will result in getting a hibernated cluster from the pool and resuming it.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: aws
spec:
  background: true
  generateExisting: true
  rules:
    - generate:
        apiVersion: hive.openshift.io/v1
        data:
          spec:
            clusterPoolName: aws
        kind: ClusterClaim
        name: aws
        namespace: aws
        synchronize: true
      match:
        any:
          - resources:
              kinds:
                - Deployment/scale
      preconditions:
        all:
          - key: '{{request.object.spec.replicas}}'
            operator: Equals
            value: 4
          - key: '{{request.object.metadata.name}}'
            operator: Equals
            value: sample-kotlin-spring
  validationFailureAction: Audit

Kyverno requires additional permissions to create the ClusterClaim object. We can easily achieve this by creating a properly labeled ClusterRole:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kyverno:create-claim
  labels:
    app.kubernetes.io/component: background-controller
    app.kubernetes.io/instance: kyverno
    app.kubernetes.io/part-of: kyverno
rules:
  - verbs:
      - create
      - patch
      - update
      - delete
    apiGroups:
      - hive.openshift.io
    resources:
      - clusterclaims

Once the cluster is ready, we are going to assign it to the interconnect group represented by the ManagedClusterSet object. This group of clusters is managed by our instance of Argo CD from the openshift-gitops namespace. In order to achieve that, we need to apply the following objects to the management OpenShift cluster:

apiVersion: cluster.open-cluster-management.io/v1beta2
kind: ManagedClusterSetBinding
metadata:
  name: interconnect
  namespace: openshift-gitops
spec:
  clusterSet: interconnect
---
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: interconnect
  namespace: openshift-gitops
spec:
  predicates:
    - requiredClusterSelector:
        labelSelector:
          matchExpressions:
            - key: vendor
              operator: In
              values:
                - OpenShift
---
apiVersion: apps.open-cluster-management.io/v1beta1
kind: GitOpsCluster
metadata:
  name: argo-acm-importer
  namespace: openshift-gitops
spec:
  argoServer:
    argoNamespace: openshift-gitops
    cluster: openshift-gitops
  placementRef:
    apiVersion: cluster.open-cluster-management.io/v1beta1
    kind: Placement
    name: interconnect
    namespace: openshift-gitops

After applying the manifests visible above, you should see that the openshift-gitops Argo CD instance is managing the interconnect cluster group.

Automatically Sync Configuration for a New Cluster with Argo CD

In Argo CD we can define an ApplicationSet with the “Cluster Decision Resource Generator” (1). You can read more details about that type of generator in the docs. It will create an Argo CD Application for each OpenShift cluster in the interconnect group (2). Then, the newly created Argo CD Application will automatically apply the manifests responsible for creating our sample Deployment. Of course, those manifests are available in the same repository, inside the clusterpool/managed directory (3).

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: cluster-init
  namespace: openshift-gitops
spec:
  generators:
    - clusterDecisionResource: # (1)
        configMapRef: acm-placement
        labelSelector:
          matchLabels:
            cluster.open-cluster-management.io/placement: interconnect # (2)
        requeueAfterSeconds: 180
  template:
    metadata:
      name: 'cluster-init-{{name}}'
    spec:
      ignoreDifferences:
        - group: apps
          kind: Deployment
          jsonPointers:
            - /spec/replicas
      destination:
        server: '{{server}}'
        namespace: interconnect
      project: default
      source:
        path: clusterpool/managed # (3)
        repoURL: 'https://github.com/piomin/openshift-cluster-config.git'
        targetRevision: master
      syncPolicy:
        automated:
          selfHeal: true
        syncOptions:
          - CreateNamespace=true

Here’s the YAML manifest that contains the Deployment object and the OpenShift Route definition. Pay attention to the three skupper.io/* annotations. We will let Skupper generate the Kubernetes Service that load balances between all running pods of our app. Finally, it will allow us to load balance between the pods spread across two OpenShift clusters.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/instance: sample-kotlin-spring
  annotations:
    skupper.io/address: sample-kotlin-spring
    skupper.io/port: '8080'
    skupper.io/proxy: http
  name: sample-kotlin-spring
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sample-kotlin-spring
  template:
    metadata:
      labels:
        app: sample-kotlin-spring
    spec:
      containers:
        - image: 'quay.io/pminkows/sample-kotlin-spring:1.4.39'
          name: sample-kotlin-spring
          ports:
            - containerPort: 8080
          resources:
            limits:
              cpu: 1000m
              memory: 1024Mi
            requests:
              cpu: 100m
              memory: 128Mi
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  labels:
    app: sample-kotlin-spring
    app.kubernetes.io/component: sample-kotlin-spring
    app.kubernetes.io/instance: sample-spring-kotlin
  name: sample-kotlin-spring
spec:
  port:
    targetPort: port8080
  to:
    kind: Service
    name: sample-kotlin-spring
    weight: 100
  wildcardPolicy: None

Let’s check out how it works. I won’t simulate real traffic bursts on OpenShift. However, you can easily imagine that our app is autoscaled with HPA (Horizontal Pod Autoscaler) and is therefore able to react to a traffic volume peak. I will just manually scale the app up to 4 pods:
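The scaling command is not shown in the original post; a minimal sketch, assuming the app runs in the interconnect namespace used throughout this exercise:

$ oc scale deployment sample-kotlin-spring --replicas=4 -n interconnect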

Now, let’s switch to the All Clusters view. As you see, Kyverno sent a cluster claim to the aws ClusterPool. The claim stays in the Pending status until the cluster is resumed. In the meantime, ACM creates a new cluster to fill up the pool.

[Image: traffic-bursts-openshift-cluster-claim]

Once the cluster is ready you will see it in the Clusters view.

ACM automatically adds a cluster from the aws pool to the interconnect group (ManagedClusterSet). Therefore, Argo CD sees the new cluster and adds it as a managed one.

Finally, Argo CD generates the Application for a new cluster to automatically install all required Kubernetes objects.

[Image: traffic-bursts-openshift-argocd]

Using Red Hat Service Interconnect

In order to enable Skupper for our apps, we first need to install the Red Hat Service Interconnect operator. We can also do it in the GitOps way. We need to define the Subscription object as shown below (1). The operator has to be installed on both the hub and managed clusters. Once we install the operator, we need to enable Skupper in the particular namespace. In order to do that, we need to define a ConfigMap there with the skupper-site name (2). Those manifests are also applied by the Argo CD Application described in the previous section.

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: skupper-operator
  namespace: openshift-operators
  annotations:
    argocd.argoproj.io/sync-wave: "2"
spec:
  channel: alpha
  installPlanApproval: Automatic
  name: skupper-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: skupper-site

Here’s the result of synchronization for the managed cluster.

We can switch to the OpenShift Console of the new cluster. The Red Hat Service Interconnect operator is ready.

Finally, we are at the last phase of our exercise. Both our clusters are running. We have already installed our sample app and the Skupper operator on both of them. Now, we need to link the apps running on different clusters into a single Skupper network. In order to do that, we need to let Skupper generate a connection token. Here’s the Secret object responsible for that. It doesn’t contain any data, just the skupper.io/type label with the connection-token-request value. Argo CD has already applied it to the management cluster in the interconnect namespace.

apiVersion: v1
kind: Secret
metadata:
  labels:
    skupper.io/type: connection-token-request
  name: token-req
  namespace: interconnect

As a result, Skupper fills the Secret object with certificates and a private key. It also overrides the value of the skupper.io/type label.
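We can inspect the generated token on the management cluster, e.g.:

$ oc get secret token-req -n interconnect -o yaml

After Skupper processes the request, the skupper.io/type label should change to connection-token and the data section should contain the generated certificates.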

So, now our goal is to copy that Secret to the managed cluster. We won’t do that in the GitOps way directly, since the object was dynamically generated on OpenShift. However, we may use the SelectorSyncSet object provided by ACM. It can copy the secrets between the hub and managed clusters.

apiVersion: hive.openshift.io/v1
kind: SelectorSyncSet
metadata:
  name: skupper-token-sync
spec:
  clusterDeploymentSelector:
    matchLabels:
      cluster.open-cluster-management.io/clusterset: interconnect
  secretMappings:
    - sourceRef:
        name: token-req
        namespace: interconnect
      targetRef:
        name: token-req
        namespace: interconnect

Once the token is copied to the managed cluster, it connects to the Skupper network existing on the main cluster. We can verify that everything works fine with the skupper CLI. The following command prints all the pods from the Skupper network. As you see, we have 4 pods on the main (local) cluster and 2 pods on the managed (linked) cluster.

[Image: traffic-bursts-openshift-skupper]

Let’s display the route of our service:

$ oc get route sample-kotlin-spring

Now, we can make a final test. Here’s the siege request for my route and cluster domain. It will send 10k requests via the Route. After running it, you can verify the logs to see whether the traffic reaches all six pods spread across our two clusters.

$ siege -r 1000 -c 10  http://sample-kotlin-spring-interconnect.apps.jaipxwuhcp.eastus.aroapp.io/persons

Final Thoughts

Handling traffic bursts is one of the more interesting scenarios for a hybrid-cloud environment with OpenShift. With the approach described in this article, we can dynamically provision clusters and redirect traffic from on-prem to the cloud. We can do it in a fully automated, GitOps-based way. The features and tools around OpenShift allow us to cut down cloud costs and speed up cluster startup. Therefore, it reduces system downtime in case of failures or unexpected situations.

GitOps with Advanced Cluster Management for Kubernetes

In this article, you will learn how to manage multiple clusters with Argo CD and Advanced Cluster Management for Kubernetes. Advanced Cluster Management (ACM) for Kubernetes is a tool provided by Red Hat, based on the community-driven project Open Cluster Management. I’ll show you how to use it with OpenShift to implement the GitOps approach for running apps across multiple clusters. However, you can also deploy the community-driven version on Kubernetes.

If you are not familiar with Argo CD you can read my article about Kubernetes CI/CD with Tekton and ArgoCD available here.

Prerequisites

To be able to run this exercise, you need at least two OpenShift clusters. I’ll run both of my clusters on cloud providers: Azure and GCP. The cluster running on Azure will act as the management cluster. It means that we will install ACM and Argo CD there for managing both the local and remote clusters. We can easily do it using operators provided by Red Hat. Here’s a list of the required operators:

[Image: advanced-cluster-management-kubernetes-operators]

After installing the Advanced Cluster Management for Kubernetes operator, we also need to create the MultiClusterHub CRD object. The OpenShift Console provides a simplified way of creating such objects. We need to go to the operator details and then switch to the “MultiClusterHub” tab. There is a “Create MultiClusterHub” button in the right corner of the page. Just click it and then create the object with the default settings. It will probably take some time until ACM is ready to use.

ACM provides a dashboard UI. We can easily access it through an OpenShift Route (an object similar to a Kubernetes Ingress). Just switch to the open-cluster-management namespace and display the list of routes. The address of the dashboard is https://multicloud-console.apps.<YOUR_DOMAIN>.

The OpenShift GitOps operator automatically creates an Argo CD instance during installation. By default, it is located in the openshift-gitops namespace.

Same as before, we can access the Argo CD dashboard UI through an OpenShift Route.

Add a Remote Cluster

In this section, we will use the ACM dashboard UI for importing an existing remote cluster. In order to do that, go to the “Clusters” menu item, and then click the “Import cluster” button. There are three different modes of importing a remote cluster. We will choose the method of entering the server URL with an API token. The name of our cluster on GCP is remote-gcp. It will act as a production environment, so we will add the env=prod label.

[Image: advanced-cluster-management-kubernetes-import-cluster]

You can get the server URL and API token from your OpenShift login command.
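Alternatively, both values can be read with the standard oc CLI flags shown below:

$ oc whoami --show-server
$ oc whoami --show-token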

After importing the cluster, we can display the list of managed clusters in the UI. The local-cluster is labelled with env=test, while remote-gcp is labelled with env=prod.

[Image: advanced-cluster-management-kubernetes-list-clusters]

All the clusters are represented by ManagedCluster CRD objects. Let’s display them using the oc CLI:

$ oc get managedcluster              
NAME            HUB ACCEPTED   MANAGED CLUSTER URLS                                     JOINED   AVAILABLE   AGE
local-cluster   true           https://api.zyyrwsdd.eastus.aroapp.io:6443               True     True        25h
remote-gcp      true           https://api.cluster-rhf5a9.gcp.redhatworkshops.io:6443   True     True        25h

We can manage clusters individually or organize them into groups. The ManagedClusterSet object can contain many managed clusters. Our sample set contains the two clusters used in this article: local-cluster and remote-gcp. First, we need to create the ManagedClusterSet object.

apiVersion: cluster.open-cluster-management.io/v1alpha1
kind: ManagedClusterSet
metadata:
  name: demo
spec: {}

Then we need to label the managed clusters with the cluster.open-cluster-management.io/clusterset label containing the name of the ManagedClusterSet they should belong to. A single ManagedCluster can be a member of only a single ManagedClusterSet.

$ oc label managedcluster remote-gcp cluster.open-cluster-management.io/clusterset=demo
$ oc label managedcluster local-cluster cluster.open-cluster-management.io/clusterset=demo
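To confirm the assignment, the label can be displayed as an extra column (the -L flag is standard kubectl/oc behavior):

$ oc get managedcluster -L cluster.open-cluster-management.io/clusterset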

Integrate ACM with Argo CD

In order to integrate Advanced Cluster Management for Kubernetes with OpenShift GitOps, we need to create a few CRD objects. In the first step, we need to bind the managed clusters to the target namespace where Argo CD is deployed. The ManagedClusterSetBinding object assigns a ManagedClusterSet to a particular namespace. Our target namespace is openshift-gitops.

apiVersion: cluster.open-cluster-management.io/v1beta1
kind: ManagedClusterSetBinding
metadata:
  name: demo
  namespace: openshift-gitops
spec:
  clusterSet: demo

After that, we need to create the Placement object in the same namespace. A Placement determines which managed clusters each subscription, policy, or other definition affects. It can filter by cluster sets or other predicates, for example, labels. In our scenario, the placement selects all the managed clusters included in the demo managed cluster set that have the env label with the test or prod value.

apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: demo-gitops-placement
  namespace: openshift-gitops
spec:
  clusterSets:
    - demo
  predicates:
    - requiredClusterSelector:
        labelSelector:
          matchExpressions:
            - key: env
              operator: In
              values:
                - test
                - prod

Let’s verify that the object has been successfully created. Our placement should select two managed clusters.

$ oc get placement -n openshift-gitops
NAME                    SUCCEEDED   REASON                  SELECTEDCLUSTERS
demo-gitops-placement   True        AllDecisionsScheduled   2

Finally, we can create the GitOpsCluster object. It assigns managed clusters to the target instance of Argo CD based on the predicates defined in the demo-gitops-placement Placement.

apiVersion: apps.open-cluster-management.io/v1beta1
kind: GitOpsCluster
metadata:
  name: demo-gitops-cluster
  namespace: openshift-gitops
spec:
  argoServer:
    cluster: local-cluster
    argoNamespace: openshift-gitops
  placementRef:
    kind: Placement
    apiVersion: cluster.open-cluster-management.io/v1beta1
    name: demo-gitops-placement

If everything worked fine you should see the following status message in your GitOpsCluster object:

$ oc get gitopscluster demo-gitops-cluster -n openshift-gitops \
    -o=jsonpath='{.status.message}'
Added managed clusters [local-cluster remote-gcp] to gitops namespace openshift-gitops

Now, we can switch to the Argo CD dashboard. We don’t have any applications there yet, but we just want to verify the list of managed clusters. In the dashboard, choose the “Settings” menu item, then go to the “Clusters” tile. You should see a list similar to the one shown below.

[Image: advanced-cluster-management-kubernetes-argocd-clusters]

Now, we can manage application deployment across multiple OpenShift clusters with Argo CD. Let’s deploy our first app there.

Deploy App Across Multiple Clusters

As the example, we will use that GitHub repository. It contains several configuration files, but our deployment manifests are available inside the apps/simple directory. We will deploy a simple Spring Boot app from my Docker registry.
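The repository URL also appears in the ApplicationSet later in this article; cloning it locally to inspect the manifests looks like this:

$ git clone https://github.com/piomin/openshift-cluster-config.git
$ ls openshift-cluster-config/apps/simple

Here’s our deployment manifest: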

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-spring-kotlin
spec:
  selector:
    matchLabels:
      app: sample-spring-kotlin
  template:
    metadata:
      labels:
        app: sample-spring-kotlin
    spec:
      containers:
      - name: sample-spring-kotlin
        image: piomin/sample-spring-kotlin:1.4.8
        ports:
        - containerPort: 8080
          name: http

There is also a Kubernetes Service definition in this directory:

apiVersion: v1
kind: Service
metadata:
  name: sample-spring-kotlin
spec:
  type: ClusterIP
  selector:
    app: sample-spring-kotlin
  ports:
  - port: 8080
    name: http

For generating Argo CD applications for multiple clusters, we will use a feature called the Cluster Decision Resource Generator. It is one of the available generators in the Argo CD ApplicationSet project. After we created the Placement object, ACM automatically created the following ConfigMap in the openshift-gitops namespace:

kind: ConfigMap
apiVersion: v1
metadata:
  name: acm-placement
  namespace: openshift-gitops
data:
  apiVersion: cluster.open-cluster-management.io/v1beta1
  kind: placementdecisions
  matchKey: clusterName
  statusListKey: decisions

The ApplicationSet generator will read the placementdecisions kind with an apiVersion of cluster.open-cluster-management.io/v1beta1, as configured in the ConfigMap above. It will attempt to extract the list of clusters from the decisions key (1). Then, it validates the actual cluster name as defined in Argo CD against the value of the clusterName key in each of the elements in the list. The ClusterDecisionResource generator passes the name, server, and any other key/value in the resource’s status list as parameters into the ApplicationSet template. Thanks to that, we can use those parameters to set the name of the Argo CD Application (2) and the target cluster (3). The target namespace for our sample app is demo. Let’s create it on all clusters automatically with Argo CD (4).

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: sample-spring-boot
  namespace: openshift-gitops
spec:
  generators:
    - clusterDecisionResource:
        configMapRef: acm-placement # (1)
        labelSelector:
          matchLabels:
            cluster.open-cluster-management.io/placement: demo-gitops-placement
        requeueAfterSeconds: 180
  template:
    metadata:
      name: sample-spring-boot-{{name}} # (2)
    spec:
      project: default
      source:
        repoURL: https://github.com/piomin/openshift-cluster-config.git
        targetRevision: master
        path: apps/simple
      destination:
        namespace: demo
        server: "{{server}}" # (3)
      syncPolicy:
        automated:
          selfHeal: false
        syncOptions:
          - CreateNamespace=true # (4)

Once you create the ApplicationSet object, Argo CD will create Applications based on the managed clusters list. Since there are two clusters managed by Argo CD, we have two applications. Here’s the current view of applications in the Argo CD dashboard. Both of them are automatically synchronized to the target clusters.
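We can also list the generated Application objects from the CLI (the Application CRD lives in the Argo CD namespace):

$ oc get applications.argoproj.io -n openshift-gitops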

[Image: advanced-cluster-management-kubernetes-argocd-apps]

Also, we can switch to the ACM dashboard once again. Then, we should choose the “Applications” menu item. It displays the full list of all running applications. We can filter the applications by name. Here’s the view for our sample-spring-boot application.

[Image: advanced-cluster-management-kubernetes-acm-app]

We can see the details of each application. We can also see the topology view. And, for example, we can display the logs of pods from the remote clusters in a single place: the ACM dashboard.

Final Thoughts

Many organizations already operate multiple Kubernetes clusters across multiple regions. Unfortunately, operating a distributed, multi-cluster, multi-cloud environment is not a simple task. We need to use the right tools to simplify it. Advanced Cluster Management for Kubernetes is Red Hat’s proposition of such a tool. In this article, I focused on showing you how to apply the GitOps approach for managing application deployment across several clusters. With Red Hat, you can integrate Argo CD with ACM to manage multiple clusters following the GitOps pattern. OpenShift organizes that process from beginning to end. I think it shows the added value of OpenShift as an enterprise platform in your organization in comparison to vanilla Kubernetes.
