Handle Traffic Bursts with Ephemeral OpenShift Clusters

This article will teach you how to handle temporary traffic bursts with ephemeral OpenShift clusters provisioned in the public cloud. The solution should work in a fully automated way: once we face an unexpected or sudden peak in network traffic, we forward part of that traffic to another cluster. Such a cluster is called "ephemeral" since it runs only for a limited period, until the unexpected situation ends. Of course, the ephemeral OpenShift cluster should be available as soon as possible after the event occurs. On the other hand, we don't want to pay for it when it is not needed.

In this article, I'll show how you can achieve all of that with the GitOps (Argo CD) approach and several tools around OpenShift/Kubernetes, such as Kyverno and Red Hat Service Interconnect (the open-source Skupper project). We will also use Advanced Cluster Management for Kubernetes (ACM) to create and handle "ephemeral" OpenShift clusters. If you need an introduction to the GitOps approach in a multicluster OpenShift environment, read the following article. It is also worth familiarizing yourself with the idea behind multicluster communication through the Skupper project. To do that, you can read the article about multicluster load balancing with Skupper on my blog.

Source Code

If you would like to try it yourself, you can always take a look at my source code. In order to do that, you need to clone my GitHub repository. It contains several YAML manifests that allow us to manage OpenShift clusters in a GitOps way. For this exercise, we will use the manifests under the clusterpool directory. There are two subdirectories there: hub and managed. The manifests inside the hub directory should be applied to the management cluster, while the manifests inside the managed directory go to the managed cluster. In our traffic bursts scenario, a single OpenShift cluster acts as both the hub and a managed cluster, and it creates another managed (ephemeral) cluster.

Prerequisites

In order to start the exercise, we need a running OpenShift cluster that acts as the management cluster. It will create and configure the ephemeral cluster on AWS used to handle traffic volume peaks. In the first step, we need to install two operators on the management cluster: "OpenShift GitOps" and "Advanced Cluster Management for Kubernetes".

traffic-bursts-openshift-operators
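If you prefer to install those operators declaratively rather than through the console, Subscription objects roughly like the ones below should work. Treat the channel values as assumptions to be verified in OperatorHub, and remember that the ACM operator additionally needs its target namespace and an OperatorGroup.

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-gitops-operator
  namespace: openshift-operators
spec:
  channel: latest # assumed channel; verify in OperatorHub
  installPlanApproval: Automatic
  name: openshift-gitops-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: advanced-cluster-management
  namespace: open-cluster-management
spec:
  channel: release-2.8 # assumed channel; verify in OperatorHub
  installPlanApproval: Automatic
  name: advanced-cluster-management
  source: redhat-operators
  sourceNamespace: openshift-marketplace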

After that, we have to create the MultiClusterHub object, which runs and configures ACM:

kind: MultiClusterHub
apiVersion: operator.open-cluster-management.io/v1
metadata:
  name: multiclusterhub
  namespace: open-cluster-management
spec: {}

We also need to install Kyverno. Since there is no official operator for it, we have to leverage the Helm chart. Firstly, let’s add the following Helm repository:

$ helm repo add kyverno https://kyverno.github.io/kyverno/

Then, we can install the latest version of Kyverno in the kyverno namespace using the following command:

$ helm install my-kyverno kyverno/kyverno -n kyverno --create-namespace

By the way, the OpenShift Console provides built-in support for Helm. In order to use it, you need to switch to the Developer perspective. Then, click the Helm menu and choose the Create -> Repository option. Once you do that, you will be able to create a new Helm release of Kyverno.
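Under the hood, adding a repository this way creates a HelmChartRepository custom resource. A sketch of such an object for Kyverno might look like this (the resource name is arbitrary):

apiVersion: helm.openshift.io/v1beta1
kind: HelmChartRepository
metadata:
  name: kyverno
spec:
  name: kyverno
  connectionConfig:
    url: 'https://kyverno.github.io/kyverno/'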

Using OpenShift Cluster Pool

With ACM we can create a pool of OpenShift clusters. That pool contains running or hibernated clusters. While a running cluster is just ready to work, a hibernated cluster needs to be resumed by ACM. We define a pool size and the number of running clusters inside that pool. Once we create the ClusterPool object, ACM starts to provision new clusters on AWS. In our case, the pool size is 1, but the number of running clusters is 0. The object declaration also contains everything required to create a new cluster, such as the installation template (the aws-install-config Secret) and the AWS account credentials reference (the aws-aws-creds Secret). Each cluster within that pool is automatically assigned to the interconnect ManagedClusterSet. The cluster set approach allows us to group multiple OpenShift clusters.

apiVersion: hive.openshift.io/v1
kind: ClusterPool
metadata:
  name: aws
  namespace: aws
  labels:
    cloud: AWS
    cluster.open-cluster-management.io/clusterset: interconnect
    region: us-east-1
    vendor: OpenShift
spec:
  baseDomain: sandbox449.opentlc.com
  imageSetRef:
    name: img4.12.36-multi-appsub
  installConfigSecretTemplateRef:
    name: aws-install-config
  platform:
    aws:
      credentialsSecretRef:
        name: aws-aws-creds
      region: us-east-1
  pullSecretRef:
    name: aws-pull-secret
  size: 1

So, as a result, there is only one cluster in the pool. ACM keeps that cluster in the hibernated state. This means that all the VMs hosting the master and worker nodes are stopped. In order to resume the hibernated cluster, we need to create the ClusterClaim object that refers to the ClusterPool. It is similar to clicking the Claim cluster link visible below. However, we don't want to create that object directly, but as a reaction to a Kubernetes event.

traffic-bursts-openshift-cluster-pool
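For reference, claiming a cluster manually boils down to creating an object like the one below. It is exactly the resource that our Kyverno policy will generate in the next step:

apiVersion: hive.openshift.io/v1
kind: ClusterClaim
metadata:
  name: aws
  namespace: aws
spec:
  clusterPoolName: aws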

Before we proceed, let's just take a look at the list of virtual machines on AWS related to our cluster. As you can see, they are not running.

Claim a Cluster From the Pool on a Scaling Event

Now, the question is: what kind of event should result in getting a cluster from the pool? A single app could rely on a scaling event. So once the number of deployment pods exceeds the assumed threshold, we will resume a hibernated cluster and run the app there. With Kyverno we can react to such scaling events by creating the ClusterPolicy object. As you can see, our policy monitors the Deployment/scale resource. The assumed maximum number of pods allowed for our app on the main cluster is 4. We put that value in the preconditions together with the Deployment name. Once all the conditions are met, we may generate a new Kubernetes resource. That resource is the ClusterClaim, which refers to the ClusterPool we created in the previous section. It will result in getting a hibernated cluster from the pool and resuming it.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: aws
spec:
  background: true
  generateExisting: true
  rules:
    - generate:
        apiVersion: hive.openshift.io/v1
        data:
          spec:
            clusterPoolName: aws
        kind: ClusterClaim
        name: aws
        namespace: aws
        synchronize: true
      match:
        any:
          - resources:
              kinds:
                - Deployment/scale
      preconditions:
        all:
          - key: '{{request.object.spec.replicas}}'
            operator: Equals
            value: 4
          - key: '{{request.object.metadata.name}}'
            operator: Equals
            value: sample-kotlin-spring
  validationFailureAction: Audit

Kyverno requires additional permissions to create the ClusterClaim object. We can easily grant them by creating a properly labeled ClusterRole:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kyverno:create-claim
  labels:
    app.kubernetes.io/component: background-controller
    app.kubernetes.io/instance: kyverno
    app.kubernetes.io/part-of: kyverno
rules:
  - verbs:
      - create
      - patch
      - update
      - delete
    apiGroups:
      - hive.openshift.io
    resources:
      - clusterclaims

Once the cluster is ready, we are going to assign it to the interconnect group represented by the ManagedClusterSet object. This group of clusters is managed by our Argo CD instance from the openshift-gitops namespace. In order to achieve that, we need to apply the following objects to the management OpenShift cluster:

apiVersion: cluster.open-cluster-management.io/v1beta2
kind: ManagedClusterSetBinding
metadata:
  name: interconnect
  namespace: openshift-gitops
spec:
  clusterSet: interconnect
---
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: interconnect
  namespace: openshift-gitops
spec:
  predicates:
    - requiredClusterSelector:
        labelSelector:
          matchExpressions:
            - key: vendor
              operator: In
              values:
                - OpenShift
---
apiVersion: apps.open-cluster-management.io/v1beta1
kind: GitOpsCluster
metadata:
  name: argo-acm-importer
  namespace: openshift-gitops
spec:
  argoServer:
    argoNamespace: openshift-gitops
    cluster: openshift-gitops
  placementRef:
    apiVersion: cluster.open-cluster-management.io/v1beta1
    kind: Placement
    name: interconnect
    namespace: openshift-gitops

After applying the manifests visible above, you should see that the openshift-gitops instance is managing the interconnect cluster group.

Automatically Sync Configuration for a New Cluster with Argo CD

In Argo CD, we can define an ApplicationSet with the "Cluster Decision Resource Generator" (1). You can read more details about that type of generator in the docs. It will create an Argo CD Application for each OpenShift cluster in the interconnect group (2). Then, the newly created Argo CD Application will automatically apply the manifests responsible for creating our sample Deployment. Of course, those manifests are available in the same repository, inside the clusterpool/managed directory (3).

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: cluster-init
  namespace: openshift-gitops
spec:
  generators:
    - clusterDecisionResource: # (1)
        configMapRef: acm-placement
        labelSelector:
          matchLabels:
            cluster.open-cluster-management.io/placement: interconnect # (2)
        requeueAfterSeconds: 180
  template:
    metadata:
      name: 'cluster-init-{{name}}'
    spec:
      ignoreDifferences:
        - group: apps
          kind: Deployment
          jsonPointers:
            - /spec/replicas
      destination:
        server: '{{server}}'
        namespace: interconnect
      project: default
      source:
        path: clusterpool/managed # (3)
        repoURL: 'https://github.com/piomin/openshift-cluster-config.git'
        targetRevision: master
      syncPolicy:
        automated:
          selfHeal: true
        syncOptions:
          - CreateNamespace=true

Here's the YAML manifest that contains the Deployment object and the OpenShift Route definition. Pay attention to the three skupper.io/* annotations. We will let Skupper generate a Kubernetes Service that load balances across all running pods of our app. Ultimately, this will allow us to load balance between the pods spread across two OpenShift clusters.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/instance: sample-kotlin-spring
  annotations:
    skupper.io/address: sample-kotlin-spring
    skupper.io/port: '8080'
    skupper.io/proxy: http
  name: sample-kotlin-spring
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sample-kotlin-spring
  template:
    metadata:
      labels:
        app: sample-kotlin-spring
    spec:
      containers:
        - image: 'quay.io/pminkows/sample-kotlin-spring:1.4.39'
          name: sample-kotlin-spring
          ports:
            - containerPort: 8080
          resources:
            limits:
              cpu: 1000m
              memory: 1024Mi
            requests:
              cpu: 100m
              memory: 128Mi
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  labels:
    app: sample-kotlin-spring
    app.kubernetes.io/component: sample-kotlin-spring
    app.kubernetes.io/instance: sample-spring-kotlin
  name: sample-kotlin-spring
spec:
  port:
    targetPort: port8080
  to:
    kind: Service
    name: sample-kotlin-spring
    weight: 100
  wildcardPolicy: None

Let's check out how it works. I won't simulate real traffic bursts on OpenShift. However, you can easily imagine that our app is autoscaled with HPA (Horizontal Pod Autoscaler) and is therefore able to react to traffic volume peaks. I will just manually scale the app up to 4 pods:
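A command like the following should do the trick (assuming the Deployment runs in the interconnect namespace):

$ oc scale deployment/sample-kotlin-spring --replicas=4 -n interconnect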

Now, let's switch to the All Clusters view. As you can see, Kyverno sent a cluster claim to the aws ClusterPool. The claim stays in the Pending status until the cluster is resumed. In the meantime, ACM creates a new cluster to refill the pool.

traffic-bursts-openshift-cluster-claim

Once the cluster is ready, you will see it in the Clusters view.

ACM automatically adds a cluster from the aws pool to the interconnect group (ManagedClusterSet). Therefore, Argo CD sees the new cluster and adds it as a managed cluster.

Finally, Argo CD generates an Application for the new cluster to automatically install all the required Kubernetes objects.

traffic-bursts-openshift-argocd

Using Red Hat Service Interconnect

In order to enable Skupper for our apps, we first need to install the Red Hat Service Interconnect operator. We can also do it in the GitOps way. We need to define the Subscription object as shown below (1). The operator has to be installed on both the hub and managed clusters. Once we install the operator, we need to enable Skupper in a particular namespace. In order to do that, we define a ConfigMap named skupper-site there (2). Those manifests are also applied by the Argo CD Application described in the previous section.

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: skupper-operator
  namespace: openshift-operators
  annotations:
    argocd.argoproj.io/sync-wave: "2"
spec:
  channel: alpha
  installPlanApproval: Automatic
  name: skupper-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: skupper-site

Here’s the result of synchronization for the managed cluster.

We can switch to the OpenShift Console of the new cluster. The Red Hat Service Interconnect operator is ready.

We have now reached the final phase of our exercise. Both our clusters are running. We have already installed our sample app and the Skupper operator on both of them. Now, we need to link the apps running on the two clusters into a single Skupper network. In order to do that, we need to let Skupper generate a connection token. Here's the Secret object responsible for that. It doesn't contain any data, just the skupper.io/type label with the connection-token-request value. Argo CD has already applied it to the management cluster in the interconnect namespace.

apiVersion: v1
kind: Secret
metadata:
  labels:
    skupper.io/type: connection-token-request
  name: token-req
  namespace: interconnect

As a result, Skupper fills the Secret object with certificates and a private key. It also overrides the value of the skupper.io/type label.
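The exact contents depend on the Skupper version, but the filled Secret looks roughly like the sketch below (values shortened; the data keys shown here reflect how Skupper tokens typically look and are not copied from the cluster):

apiVersion: v1
kind: Secret
metadata:
  labels:
    skupper.io/type: connection-token
  name: token-req
  namespace: interconnect
data:
  ca.crt: <base64-encoded CA certificate>
  tls.crt: <base64-encoded client certificate>
  tls.key: <base64-encoded private key>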

So, now our goal is to copy that Secret to the managed cluster. We won't do that in the GitOps way directly, since the object was dynamically generated on OpenShift. However, we can use the SelectorSyncSet object provided by ACM. It can copy Secrets between the hub and managed clusters.

apiVersion: hive.openshift.io/v1
kind: SelectorSyncSet
metadata:
  name: skupper-token-sync
spec:
  clusterDeploymentSelector:
    matchLabels:
      cluster.open-cluster-management.io/clusterset: interconnect
  secretMappings:
    - sourceRef:
        name: token-req
        namespace: interconnect
      targetRef:
        name: token-req
        namespace: interconnect

Once the token is copied into the managed cluster, it should connect to the Skupper network existing on the main cluster. We can verify that everything works fine with the skupper CLI. The following command prints all the pods in the Skupper network. As you can see, we have 4 pods on the main (local) cluster and 2 pods on the managed (linked) cluster.
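The exact subcommand depends on the skupper CLI version you have installed; with a recent 1.x CLI, it is something along these lines:

$ skupper network status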

traffic-bursts-openshift-skupper

Let’s display the route of our service:

$ oc get route sample-kotlin-spring

Now, we can run a final test. Here's the siege command for my route and cluster domain. It will send 10k requests via the Route. After running it, you can check the logs to verify that the traffic reaches all six pods spread across our two clusters.

$ siege -r 1000 -c 10  http://sample-kotlin-spring-interconnect.apps.jaipxwuhcp.eastus.aroapp.io/persons

Final Thoughts

Handling traffic bursts is one of the more interesting scenarios for a hybrid-cloud environment with OpenShift. With the approach described in this article, we can dynamically provision clusters and redirect traffic from on-prem to the cloud. We can do it in a fully automated, GitOps-based way. The features and tools around OpenShift allow us to cut down cloud costs and speed up cluster startup. This also reduces system downtime in case of failures or other unexpected situations.

Kubernetes Multicluster Load Balancing with Skupper
In this article, you will learn how to leverage Skupper for load balancing between app instances running on several Kubernetes clusters. We will create some Kubernetes clusters locally with Kind. Then we will connect them using Skupper.

Skupper cluster interconnection works at Layer 7 (the application layer). This means there is no need to create any VPNs or special firewall rules. Skupper follows the Virtual Application Network (VAN) approach. Thanks to that, it can connect different Kubernetes clusters and enable communication between services without exposing them to the Internet. You can read more about the concept behind it in the Skupper docs.

Source Code

If you would like to try it yourself, you can always take a look at my source code. In order to do that, you need to clone my GitHub repository. This time we will do almost everything using a command-line tool (the skupper CLI). The repository contains just a sample Spring Boot app with Kubernetes Deployment manifests and a Skaffold config. You will find instructions below on how to deploy the app with Skaffold, but you can use another tool as well. As always, follow my instructions for the details 🙂

Create Kubernetes clusters with Kind

In the first step, we will create three Kubernetes clusters with Kind. We need to give them different names: c1, c2 and c3. Accordingly, they are available under the context names: kind-c1, kind-c2 and kind-c3.

$ kind create cluster --name c1
$ kind create cluster --name c2
$ kind create cluster --name c3

In this exercise, we will switch between the clusters a few times. Personally, I'm using kubectx to switch between Kubernetes contexts and kubens to switch between namespaces.

By default, Skupper exposes itself as a Kubernetes LoadBalancer Service. Therefore, we need to enable load balancer support on Kind. In order to do that, we can install MetalLB. You can find the full installation instructions in the Kind docs here. First, let's switch to the c1 cluster:

$ kubectx kind-c1

Then, we have to apply the following YAML manifest:

$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml

You should repeat the same procedure for the other two clusters: c2 and c3. However, that is not all. We also need to set up the address pools used by the load balancers. To do that, let's first check the range of IP addresses on the Docker network used by Kind. For me, the subnet is 172.19.0.0/16 (gateway 172.19.0.1).

$ docker network inspect -f '{{.IPAM.Config}}' kind

Based on the result, we need to choose suitable IP address ranges for all three Kind clusters. Then we have to create the IPAddressPool object, which contains the IP range. Here's the YAML manifest for the c1 cluster:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: example
  namespace: metallb-system
spec:
  addresses:
  - 172.19.255.200-172.19.255.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: empty
  namespace: metallb-system

Here's the pool configuration for the c2 cluster. It is important that the address range does not conflict with the ranges in the two other Kind clusters.

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: example
  namespace: metallb-system
spec:
  addresses:
  - 172.19.255.150-172.19.255.199
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: empty
  namespace: metallb-system

Finally, the configuration for the c3 cluster:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: example
  namespace: metallb-system
spec:
  addresses:
  - 172.19.255.100-172.19.255.149
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: empty
  namespace: metallb-system

After applying the YAML manifests with the kubectl apply -f command, we can proceed to the next section.

Install Skupper on Kubernetes

We can install and manage Skupper on Kubernetes in two different ways: with the CLI or through YAML manifests. Most of the examples in the Skupper documentation use the CLI, so I guess it is the preferred approach. Consequently, before we start with Kubernetes, we need to install the CLI. You can find installation instructions in the Skupper docs here. Once you install it, just verify that it works with the following command:

$ skupper version

After that, we can proceed with the Kubernetes clusters. We will create the same namespace, interconnect, inside all three clusters. To simplify the upcoming exercise, we can also set it as the default namespace for each context (alternatively, you can do it with the kubectl config set-context --current --namespace interconnect command).

$ kubectl create ns interconnect
$ kubens interconnect

Then, let’s switch to the kind-c1 cluster. We will stay in this context until the end of our exercise 🙂

$ kubectx kind-c1

Finally, we will install Skupper on our Kubernetes clusters. In order to do that, we have to execute the skupper init command. Fortunately, it allows us to set the target Kubernetes context with the -c parameter. Inside the kind-c1 cluster, we will also enable the Skupper UI dashboard (the --enable-console parameter). With the Skupper console, we can, for example, visualize the traffic volume for all targets in the Skupper network.

$ skupper init --enable-console --enable-flow-collector
$ skupper init -c kind-c2
$ skupper init -c kind-c3

Let’s verify the status of the Skupper installation:

$ skupper status
$ skupper status -c kind-c2
$ skupper status -c kind-c3

Here’s the status for Skupper running in the kind-c1 cluster:

kubernetes-skupper-status

We can also display a list of running Skupper pods in the interconnect namespace:

$ kubectl get po
NAME                                          READY   STATUS    RESTARTS   AGE
skupper-prometheus-867f57b89-dc4lq            1/1     Running   0          3m36s
skupper-router-55bbb99b87-k4qn5               2/2     Running   0          3m40s
skupper-service-controller-6bf57595dd-45hvw   2/2     Running   0          3m37s

Now, our goal is to connect both the c2 and c3 Kind clusters with the c1 cluster. In Skupper nomenclature, we have to create a link between the namespaces in the source and target clusters. Before we create a link, we need to generate a secret token that grants permission to create the link. The token also carries the link details. We generate two tokens on the target cluster. Each token is stored as a YAML file. The first of them is for the kind-c2 cluster (skupper-c2-token.yaml), and the second for the kind-c3 cluster (skupper-c3-token.yaml).

$ skupper token create skupper-c2-token.yaml
$ skupper token create skupper-c3-token.yaml

We will consider several scenarios where we create a link using different parameters. Before that, let’s deploy our sample app on the kind-c2 and kind-c3 clusters.

Running the sample app on Kubernetes with Skaffold

After cloning the sample app repository, go to its main directory. You can easily build and deploy the app to both kind-c2 and kind-c3 with the following commands:

$ skaffold dev --kube-context=kind-c2
$ skaffold dev --kube-context=kind-c3

After deploying the app, Skaffold automatically prints all the logs, as shown below. This will be helpful in the next steps of our exercise.

Our app is deployed under the sample-spring-kotlin-microservice name.

Load balancing with Skupper – scenarios

Scenario 1: the same number of pods and link cost

Let's start with the simplest scenario. We have a single pod of our app running on each of the kind-c2 and kind-c3 clusters. In Skupper, we can also assign a cost to each link to influence the traffic flow. By default, the cost is set to 1 for a new link. In a service network, the routing algorithm attempts to use the path with the lowest total cost from the client to the target server. For now, we will leave the default value. Here's a visualization of the first scenario:

Let’s create links to the c1 Kind cluster using the previously generated tokens.

$ skupper link create skupper-c2-token.yaml -c kind-c2
$ skupper link create skupper-c3-token.yaml -c kind-c3

If everything goes fine, you should see a similar message:

We can also verify the status of links by executing the following commands:

$ skupper link status -c kind-c2
$ skupper link status -c kind-c3

It means that the c2 and c3 Kind clusters are now working in the same Skupper network as the c1 cluster. The next step is to expose our app running in the c2 and c3 clusters to the c1 cluster. Skupper works at Layer 7 and, by default, it doesn't connect apps unless we enable that feature for a particular app. In order to expose our apps to the c1 cluster, we need to run the following commands on both the c2 and c3 clusters.

$ skupper expose deployment/sample-spring-kotlin-microservice \
  --port 8080 -c kind-c2
$ skupper expose deployment/sample-spring-kotlin-microservice \
  --port 8080 -c kind-c3

Let's take a look at what happened on the target (kind-c1) cluster. As you can see, Skupper created the sample-spring-kotlin-microservice Kubernetes Service, which forwards traffic to the skupper-router pod. The Skupper Router is responsible for load-balancing requests across the pods that are part of the Skupper network.
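To see it, we can list that Service in the interconnect namespace of the kind-c1 cluster:

$ kubectl get svc sample-spring-kotlin-microservice -n interconnect --context kind-c1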

To simplify our exercise, we will enable port-forwarding for the Service visible above.

$ kubectl port-forward svc/sample-spring-kotlin-microservice 8080:8080

Thanks to that, we don't have to configure a Kubernetes Ingress to call the service. Now, we can send some test requests over localhost, e.g. with siege.

$ siege -r 200 -c 5 http://localhost:8080/persons/1

We can easily verify that the traffic reaches the pods running on kind-c2 and kind-c3 by looking at the logs. Alternatively, we can go to the Skupper console and see the traffic visualization:

kubernetes-skupper-diagram-first

Scenario 2: different number of pods and same link cost

In the next scenario, we won't change anything in the Skupper network configuration. We will just run a second pod of the app in the kind-c3 cluster. So now, there is a single pod running in the kind-c2 cluster and two pods running in the kind-c3 cluster. Here's our architecture.
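A second replica can be started, for example, by scaling the Deployment on the kind-c3 cluster (assuming the interconnect namespace we set as the default earlier):

$ kubectl scale deployment/sample-spring-kotlin-microservice --replicas=2 --context kind-c3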

Once again, we can send some requests to the previously tested Kubernetes Service with the siege command:

$ siege -r 200 -c 5 http://localhost:8080/persons/2

Let’s take a look at traffic visualization in the Skupper dashboard. We can switch between all available pods. Here’s the diagram for the pod running in the kind-c2 cluster.

kubernetes-skupper-diagram

Here's the same diagram for the pod running in the kind-c3 cluster. As you can see, it receives only ~50% (or even less, depending on which pod we visualize) of the traffic received by the pod in the kind-c2 cluster. That's because there are two pods running in the kind-c3 cluster, while Skupper still balances requests equally across the clusters.

Scenario 3: only one pod and different link costs

In the current scenario, there is a single pod of the app running on the c2 Kind cluster. At the same time, there are no pods on the c3 cluster (the Deployment exists but it has been scaled down to zero instances). Here’s the visualization of our scenario.

kubernetes-skupper-arch2

The important thing here is that the c3 cluster is preferred by Skupper, since the link to it has a lower cost (2) than the link to the c2 cluster (4). So now, we need to remove the previous links and then create new ones with the following commands:

$ skupper link create skupper-c2-token.yaml --cost 4 -c kind-c2
$ skupper link create skupper-c3-token.yaml --cost 2 -c kind-c3

In order to create a Skupper link once again, you first need to delete the previous one with the skupper link delete link1 command. Then you have to generate new tokens with the skupper token create command, as we did before.
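Putting it all together, the cleanup and token regeneration might look roughly like this (the default link name is usually link1, but verify it with skupper link status), followed by the skupper link create commands with the --cost parameter shown above:

$ skupper link delete link1 -c kind-c2
$ skupper link delete link1 -c kind-c3
$ skupper token create skupper-c2-token.yaml
$ skupper token create skupper-c3-token.yaml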

Let’s take a look at the Skupper network status:

kubernetes-skupper-network-status

Let's send some test requests to the exposed service. It works without any errors. Since there is only a single running pod, all the traffic goes there:

Scenario 4: more pods in one cluster and different link costs

Finally, the last scenario in our exercise. We will use the same Skupper configuration as in Scenario 3. However, this time we will run two pods in the kind-c3 cluster.

kubernetes-skupper-arch-1

We can switch once again to the Skupper dashboard. Now, as you see, all the pods receive a very similar amount of traffic. Here’s the diagram for the pod running on the kind-c2 cluster.

kubernetes-skupper-equal-traffic

Here's a similar diagram for the pod running on the kind-c3 cluster. After setting the link cost according to the number of pods running on each cluster, I was able to split traffic equally between all the pods across both clusters. It works. However, it is not a perfect way of load balancing. I would expect at least an option to enable round-robin between all the pods working in the same Skupper network. The solution presented in this scenario works as expected unless we enable auto-scaling for the app.

Final Thoughts

Skupper introduces an interesting approach to Kubernetes multicluster connectivity, based entirely on Layer 7. You can compare it to other solutions that work at different layers, such as Submariner or Cilium Cluster Mesh. I described both of them in my previous articles. If you want to read more about Submariner, visit the following post. If you are interested in Cilium, read that article.
