Kubernetes Testing with CircleCI, Kind, and Skaffold

Piotr's TechBlog, 28 Nov 2023
In this article, you will learn how to use tools like Kind or Skaffold to build integration tests on CircleCI for apps running on Kubernetes. Our main goal in this exercise is to build the app image and verify the Deployment on Kubernetes in the CircleCI pipeline. Skaffold and Jib Maven plugin build the image from the source and deploy it on Kind using YAML manifests. Finally, we will run some load tests on the deployed app using the Grafana k6 tool and its integration with CircleCI.

If you want to build and run tests against Kubernetes, you can read my article about integration tests with JUnit. On the other hand, if you are looking for other tools for testing in a Kubernetes-native environment, you can refer to my article about Testkube.

Introduction

Before we start, let’s do a brief introduction. There are three simple Spring Boot apps that communicate with each other. The first-service app calls the endpoint exposed by the caller-service app, and then the caller-service app calls the endpoint exposed by the callme-service app. The diagram visible below illustrates that architecture.

kubernetes-circleci-arch

So in short, our goal is to deploy all the sample apps on Kind during the CircleCI build and then test the communication by calling the endpoint exposed by the first-service through the Kubernetes Service.

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. It contains three apps: first-service, caller-service, and callme-service. The main Skaffold config manifest is available in the project root directory. Required Kubernetes YAML manifests are always placed inside the k8s directory. Once you take a look at the source code, you should just follow my instructions. Let’s begin.

Our sample Spring Boot apps are very simple. Each exposes a single "ping" endpoint over HTTP; the first two also call the "ping" endpoint of the next app in the chain. Here's the @RestController in the first-service app:

@RestController
@RequestMapping("/first")
public class FirstController {

   private static final Logger LOGGER = LoggerFactory
      .getLogger(FirstController.class);

   @Autowired
   Optional<BuildProperties> buildProperties;
   @Autowired
   RestTemplate restTemplate;
   @Value("${VERSION}")
   private String version;

   @GetMapping("/ping")
   public String ping() {
      LOGGER.info("Ping: name={}, version={}", buildProperties.isPresent() 
         ? buildProperties.get().getName() : "first-service", version);
      String response = restTemplate.getForObject(
         "http://caller-service:8080/caller/ping", String.class);
      LOGGER.info("Calling: response={}", response);
      return "I'm first-service " + version + ". Calling... " + response;
   }

}

Here’s the @RestController inside the caller-service app. The endpoint is called by the first-service app through the RestTemplate bean.

@RestController
@RequestMapping("/caller")
public class CallerController {

   private static final Logger LOGGER = LoggerFactory
      .getLogger(CallerController.class);

   @Autowired
   Optional<BuildProperties> buildProperties;
   @Autowired
   RestTemplate restTemplate;
   @Value("${VERSION}")
   private String version;

   @GetMapping("/ping")
   public String ping() {
      LOGGER.info("Ping: name={}, version={}", buildProperties.isPresent()
         ? buildProperties.get().getName() : "caller-service", version);
      String response = restTemplate.getForObject(
         "http://callme-service:8080/callme/ping", String.class);
      LOGGER.info("Calling: response={}", response);
      return "I'm caller-service " + version + ". Calling... " + response;
   }

}

Finally, here’s the @RestController inside the callme-service app. It also exposes a single GET /callme/ping endpoint called by the caller-service app:

@RestController
@RequestMapping("/callme")
public class CallmeController {

   private static final Logger LOGGER = LoggerFactory
      .getLogger(CallmeController.class);
   private static final String INSTANCE_ID = UUID.randomUUID().toString();
   private Random random = new Random();

   @Autowired
   Optional<BuildProperties> buildProperties;
   @Value("${VERSION}")
   private String version;

   @GetMapping("/ping")
   public String ping() {
      LOGGER.info("Ping: name={}, version={}", buildProperties.isPresent() 
         ? buildProperties.get().getName() : "callme-service", version);
      return "I'm callme-service " + version;
   }

}

Build and Deploy Images with Skaffold and Jib

Firstly, let’s take a look at the main Maven pom.xml in the project root directory. We use the latest version of Spring Boot and the latest LTS version of Java for compilation. All three app modules inherit settings from the parent pom.xml. In order to build the image with Maven we are including jib-maven-plugin. Since it is still using Java 17 in the default base image, we need to override this behavior with the <from>.<image> tag. We will declare eclipse-temurin:21-jdk-ubi9-minimal as the base image. Note that jib-maven-plugin is activated only if we enable the jib Maven profile during the build.

<modelVersion>4.0.0</modelVersion>

<parent>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-parent</artifactId>
  <version>3.2.0</version>
  <relativePath />
</parent>

<groupId>pl.piomin.services</groupId>
<artifactId>sample-istio-services</artifactId>
<version>1.1.0</version>
<packaging>pom</packaging>

<properties>
  <java.version>21</java.version>
</properties>

<modules>
  <module>caller-service</module>
  <module>callme-service</module>
  <module>first-service</module>
</modules>

<profiles>
  <profile>
    <id>jib</id>
    <build>
      <plugins>
        <plugin>
          <groupId>com.google.cloud.tools</groupId>
          <artifactId>jib-maven-plugin</artifactId>
          <version>3.4.0</version>
          <configuration>
            <from>
              <image>eclipse-temurin:21-jdk-ubi9-minimal</image>
            </from>
          </configuration>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>

Now, let’s take a look at the main skaffold.yaml file. Skaffold builds the image using Jib support and deploys all three apps on Kubernetes using manifests available in the k8s/deployment.yaml file inside each app module. Skaffold disables JUnit tests for Maven and activates the jib profile. It is also able to deploy Istio objects after activating the istio Skaffold profile. However, we won’t use it today.

apiVersion: skaffold/v4beta5
kind: Config
metadata:
  name: simple-istio-services
build:
  artifacts:
    - image: piomin/first-service
      jib:
        project: first-service
        args:
          - -Pjib
          - -DskipTests
    - image: piomin/caller-service
      jib:
        project: caller-service
        args:
          - -Pjib
          - -DskipTests
    - image: piomin/callme-service
      jib:
        project: callme-service
        args:
          - -Pjib
          - -DskipTests
  tagPolicy:
    gitCommit: {}
manifests:
  rawYaml:
    - '*/k8s/deployment.yaml'
deploy:
  kubectl: {}
profiles:
  - name: istio
    manifests:
      rawYaml:
        - k8s/istio-*.yaml
        - '*/k8s/deployment-versions.yaml'
        - '*/k8s/istio-*.yaml'
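A word on the tagPolicy above: with gitCommit, Skaffold derives the image tag from the repository state (by default via git describe, falling back to the abbreviated commit hash when no tags exist). A rough sketch of what such a tag looks like, using a throwaway repository (the image name matches the config above; the tag itself depends on your repo):

```shell
# Illustrate the kind of tag Skaffold's gitCommit policy derives:
# in a fresh repo with no tags, this is the abbreviated commit hash.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m init
tag=$(git rev-parse --short HEAD)
echo "piomin/first-service:$tag"
```

Because the tag follows the commit, redeploying the same commit reuses the same image tag.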

Here’s the typical Deployment for our apps. The app is running on the 8080 port.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: first-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: first-service
  template:
    metadata:
      labels:
        app: first-service
    spec:
      containers:
        - name: first-service
          image: piomin/first-service
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          env:
            - name: VERSION
              value: "v1"

For testing purposes, we need to expose the first-service outside of the Kind cluster. In order to do that, we will use a Kubernetes Service of type NodePort. Our app will be available on port 30000 (the value must fall within Kubernetes' default NodePort range of 30000-32767).

apiVersion: v1
kind: Service
metadata:
  name: first-service
  labels:
    app: first-service
spec:
  type: NodePort
  ports:
  - port: 8080
    name: http
    nodePort: 30000
  selector:
    app: first-service

Note that all other Kubernetes Services (caller-service and callme-service) are exposed only internally using the default ClusterIP type.

How It Works

In this section, we will discuss how to run the whole process locally. Of course, our goal is to configure it as a CircleCI pipeline. In order to expose the Kubernetes Service outside Kind, we need to define the extraPortMappings section in the cluster configuration manifest. As you probably remember, we are exposing our app on port 30000. The following file is available in the repository under the k8s/kind-cluster-test.yaml path:

apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 30000
        hostPort: 30000
        listenAddress: "0.0.0.0"
        protocol: tcp

Assuming we already installed kind CLI on our machine, we need to execute the following command to create a new cluster:

$ kind create cluster --name c1 --config k8s/kind-cluster-test.yaml

You should see output confirming the cluster was created. We now have a single-node Kind cluster: a single c1-control-plane container running on Docker, which exposes port 30000 outside of the cluster.

The Kubernetes context is automatically switched to kind-c1. So now, we just need to run the following command from the repository root directory to build and deploy the apps:

$ skaffold run

If you see a similar output in the skaffold run logs, it means that everything works fine.

kubernetes-circleci-skaffold

We can verify the list of Kubernetes Services. The first-service is exposed on port 30000 as expected.

$ kubectl get svc
NAME             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
caller-service   ClusterIP   10.96.47.193   <none>        8080/TCP         2m24s
callme-service   ClusterIP   10.96.98.53    <none>        8080/TCP         2m24s
first-service    NodePort    10.96.241.11   <none>        8080:30000/TCP   2m24s
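If you want to pick the NodePort out of that output in a script, a small awk sketch works (the sample table below is pasted inline so the snippet is self-contained; against a live cluster you would pipe `kubectl get svc` instead):

```shell
# Extract the NodePort from `kubectl get svc`-style output.
# PORT(S) for a NodePort Service looks like 8080:30000/TCP.
svc_table='NAME             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
first-service    NodePort    10.96.241.11   <none>        8080:30000/TCP   2m24s'
node_port=$(printf '%s\n' "$svc_table" \
  | awk '$2=="NodePort" {split($5, p, "[:/]"); print p[2]}')
echo "$node_port"   # → 30000
```

Against a live cluster, `kubectl get svc first-service -o jsonpath='{.spec.ports[0].nodePort}'` returns the same value directly.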

Assuming you have already installed the Grafana k6 tool locally, you may run load tests using the following command:

$ k6 run first-service/src/test/resources/k6/load-test.js

That’s all. Now, let’s define the same actions with the CircleCI workflow.

Test Kubernetes Deployment with the CircleCI Workflow

The CircleCI config.yml file should be placed in the .circleci directory. We are doing two things in our pipeline. In the first step, we execute Maven unit tests without the Kubernetes cluster. For that, we need a standard Docker executor with OpenJDK 21 and the Maven orb. In order to run Kind during the CircleCI build, we need access to the Docker daemon. Therefore, we use the latest version of the ubuntu-2204 machine image.

version: 2.1

orbs:
  maven: circleci/maven@1.4.1

executors:
  jdk:
    docker:
      - image: 'cimg/openjdk:21.0'
  machine_executor_amd64:
    machine:
      image: ubuntu-2204:2023.10.1
    environment:
      architecture: "amd64"
      platform: "linux/amd64"

After that, we can proceed to the job declaration. The name of our job is deploy-k8s. It uses the already-defined machine executor. Let’s discuss the required steps after running a standard checkout command:

  1. We need to install the kubectl CLI and copy it to the /usr/local/bin directory. Skaffold uses kubectl to interact with the Kubernetes cluster.
  2. After that, we have to install the skaffold CLI.
  3. Our job also requires the kind CLI to be able to create and delete Kind clusters on Docker…
  4. … and the Grafana k6 CLI to run load tests against the app deployed on the cluster.
  5. There is a good chance that this step won't be required once CircleCI releases a new version of the ubuntu-2204 machine image (probably 2024.1.1 according to the release strategy). For now, ubuntu-2204 provides OpenJDK 17, so we need to install OpenJDK 21 to successfully build the app from the source code. Note that an environment variable exported in one run step does not persist to subsequent steps, since each step runs in a new shell; that is why JAVA_HOME is exported again in the deployment step.
  6. After installing all the required tools, we can create a new Kubernetes cluster with the kind create cluster command.
  7. Once the cluster is ready, we can deploy our apps using the skaffold run command.
  8. Once the apps are running on the cluster, we can proceed to the test phase. We run the test defined inside the first-service/src/test/resources/k6/load-test.js file.
  9. After all the required steps, it is important to remove the Kind cluster.

jobs:
  deploy-k8s:
    executor: machine_executor_amd64
    steps:
      - checkout
      - run: # (1)
          name: Install Kubectl
          command: |
            curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
            chmod +x kubectl
            sudo mv ./kubectl /usr/local/bin/kubectl
      - run: # (2)
          name: Install Skaffold
          command: |
            curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64
            chmod +x skaffold
            sudo mv skaffold /usr/local/bin
      - run: # (3)
          name: Install Kind
          command: |
            [ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64
            chmod +x ./kind
            sudo mv ./kind /usr/local/bin/kind
      - run: # (4)
          name: Install Grafana K6
          command: |
            sudo gpg -k
            sudo gpg --no-default-keyring --keyring /usr/share/keyrings/k6-archive-keyring.gpg --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys C5AD17C747E3415A3642D57D77C6C491D6AC1D69
            echo "deb [signed-by=/usr/share/keyrings/k6-archive-keyring.gpg] https://dl.k6.io/deb stable main" | sudo tee /etc/apt/sources.list.d/k6.list
            sudo apt-get update
            sudo apt-get install k6
      - run: # (5)
          name: Install OpenJDK 21
          command: |
            java -version
            sudo apt-get update && sudo apt-get install openjdk-21-jdk
            sudo update-alternatives --set java /usr/lib/jvm/java-21-openjdk-amd64/bin/java
            sudo update-alternatives --set javac /usr/lib/jvm/java-21-openjdk-amd64/bin/javac
            java -version
            export JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64
      - run: # (6)
          name: Create Kind Cluster
          command: |
            kind create cluster --name c1 --config k8s/kind-cluster-test.yaml
      - run: # (7)
          name: Deploy to K8s
          command: |
            export JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64
            skaffold run
      - run: # (8)
          name: Run K6 Test
          command: |
            kubectl get svc
            k6 run first-service/src/test/resources/k6/load-test.js
      - run: # (9)
          name: Delete Kind Cluster
          command: |
            kind delete cluster --name c1
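One caveat: skaffold run returns once the manifests are applied, but the pods may still be starting, so the k6 step can hit the endpoint before it is ready. A small polling helper like the one below could be added as an extra run step before the test (the helper and the commented URL usage are illustrative, not part of the original pipeline):

```shell
# Retry a command until it succeeds or the retry budget is spent.
wait_for() {
  local retries=$1; shift
  local i=0
  until "$@"; do
    i=$((i + 1))
    if [ "$i" -ge "$retries" ]; then
      return 1
    fi
    sleep 1
  done
}

# e.g. block until the NodePort endpoint responds:
# wait_for 60 curl -sf http://localhost:30000/first/ping > /dev/null
```

With a 60-attempt budget and a one-second pause, this waits up to about a minute before giving up and failing the step.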

Here’s the definition of our load test. It has to be written in JavaScript. It defines some thresholds like a % of maximum failed requests or maximum response time for 95% of requests. As you see, we are testing the http://localhost:30000/first/ping endpoint:

import { sleep } from 'k6';
import http from 'k6/http';

export const options = {
  duration: '60s',
  vus: 10,
  thresholds: {
    http_req_failed: ['rate<0.25'],
    http_req_duration: ['p(95)<1000'],
  },
};

export default function () {
  http.get('http://localhost:30000/first/ping');
  sleep(2);
}
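To make the p(95)&lt;1000 threshold in the script concrete: k6 computes the 95th percentile of request durations and fails the run if it reaches 1000 ms. A nearest-rank approximation of that percentile (the sample durations below are made up; k6's exact estimator may differ):

```shell
# 95th percentile (nearest-rank) of sample request durations in ms.
durations='120 250 90 310 180 220 140 950 400 130'
p95=$(printf '%s\n' $durations | sort -n \
  | awk '{a[NR]=$1} END {r=0.95*NR; idx=int(r); if (r > idx) idx++; print a[idx]}')
echo "p95=${p95}ms"   # → p95=950ms; with only 10 samples, nearest-rank p95 is the top value
```

Here 950 ms still passes the threshold, but a single slower outlier would fail the run, which is exactly the behavior we want from the pipeline gate.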

Finally, here's the last part of the CircleCI config file. It defines the pipeline workflow. In the first step, we run the tests with Maven. After that, we proceed to the deploy-k8s job.

workflows:
  build-and-deploy:
    jobs:
      - maven/test:
          name: test
          executor: jdk
      - deploy-k8s:
          requires:
            - test

Once we push a change to the sample Git repository, we trigger a new CircleCI build. You can verify it yourself on my CircleCI project page.

As you can see, all the pipeline steps finished successfully.

kubernetes-circleci-build

We can display logs for every single step. Here are the logs from the k6 load test step.

There were some errors during the warm-up. However, the test shows that our scenario works on the Kubernetes cluster.

Final Thoughts

CircleCI is one of the most popular CI/CD platforms. Personally, I'm using it for running builds and tests for all my demo repositories on GitHub. For the sample projects dedicated to the Kubernetes cluster, I want to verify such steps as building images with Jib, Kubernetes deployment scripts, or Skaffold configuration. This article shows how to easily perform such tests with CircleCI and a Kubernetes cluster running on Kind. I hope it helps 🙂

Apache Kafka on Kubernetes with Strimzi

Piotr's TechBlog, 6 Nov 2023
In this article, you will learn how to install and manage Apache Kafka on Kubernetes with Strimzi. The Strimzi operator lets us declaratively define and configure Kafka clusters, and several other components like Kafka Connect, Mirror Maker, or Cruise Control. Of course, it’s not the only way to install Kafka on Kubernetes. As an alternative, we can use the Bitnami Helm chart available here. In comparison to that approach, Strimzi simplifies the creation of additional components. We will analyze it on the example of the Cruise Control tool.

You can find many other articles about Apache Kafka on my blog. For example, to read about concurrency with Spring Kafka please refer to the following post. There is also an article about Kafka transactions available here.

Prerequisites

In order to proceed with the exercise, you need to have a Kubernetes cluster. This cluster should have at least three worker nodes, since I'm going to show you the approach with Kafka brokers spread across several nodes. We can easily simulate multiple Kubernetes nodes locally with Kind. You need to install the kind CLI tool and start Docker on your laptop. Here's the Kind configuration manifest containing a definition of a single control plane and four worker nodes:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
- role: worker

Then, we need to create the Kubernetes cluster based on the manifest visible above with the following kind command:

$ kind create cluster --name c1 --config cluster.yaml

The name of our Kind cluster is c1. It corresponds to the kind-c1 Kubernetes context, which is automatically set as the default after creating the cluster. After that, we can display the list of Kubernetes nodes using the following kubectl command:

$ kubectl get node
NAME               STATUS   ROLES           AGE  VERSION
c1-control-plane   Ready    control-plane   1m   v1.27.3
c1-worker          Ready    <none>          1m   v1.27.3
c1-worker2         Ready    <none>          1m   v1.27.3
c1-worker3         Ready    <none>          1m   v1.27.3
c1-worker4         Ready    <none>          1m   v1.27.3

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. After that, go to the kafka directory. There are two Spring Boot apps inside the producer and consumer directories. The required Kubernetes manifests are available inside the k8s directory. You can apply them with kubectl or using the Skaffold CLI tool. The repository is already configured to work with Skaffold and Kind. To proceed with the exercise just follow my instructions in the next sections.

Architecture

Let's analyze our main goals for this exercise. Of course, we want to run a Kafka cluster on Kubernetes as simply as possible. There are several requirements for the cluster:

  1. It should automatically expose broker metrics in the Prometheus format. We will then use Prometheus to scrape the metrics and store them for visualization.
  2. It should consist of at least 3 brokers. Each broker has to run on a different Kubernetes worker node.
  3. Our Kafka needs to work in the Zookeeper-less mode. Therefore, we need to enable the KRaft protocol between the brokers.
  4. Once we scale up the Kafka cluster, we must automatically rebalance it to reassign partition replicas to the new broker. In order to do that, we will use the Cruise Control support in Strimzi.

Here’s the diagram that visualizes the described architecture. We will also run two simple Spring Boot apps on Kubernetes that connect the Kafka cluster and use it to send/receive messages.

kafka-on-kubernetes-arch

1. Install Monitoring Stack on Kubernetes

In the first step, we will install the monitoring stack on our Kubernetes cluster. We are going to use the kube-prometheus-stack Helm chart for that. It provides preconfigured instances of Prometheus and Grafana. It also comes with several CRD objects that allow us to easily customize monitoring mechanisms according to our needs. Let's add the following Helm repository:

$ helm repo add prometheus-community \
    https://prometheus-community.github.io/helm-charts

Then, we can install the chart in the monitoring namespace. We can leave the default configuration.

$ helm install kube-prometheus-stack \
    prometheus-community/kube-prometheus-stack \
    --version 52.1.0 -n monitoring --create-namespace

2. Install Strimzi Operator on Kubernetes

In the next step, we will install the Strimzi operator on Kubernetes using its Helm chart. As before, we need to add the Helm repository:

$ helm repo add strimzi https://strimzi.io/charts

Then, we can proceed to the installation. This time we will override some configuration settings. The Strimzi Helm chart comes with a set of Grafana dashboards to visualize metrics exported by Kafka brokers and some other components managed by Strimzi. We will place those dashboards inside the monitoring namespace. By default, the Strimzi chart doesn't install the dashboards, so we also need to enable that feature in the values YAML file. That's not all: because we want to run Kafka in KRaft mode, we need to enable it using feature gates. Enabling the UseKRaft feature gate requires the KafkaNodePools feature gate to be enabled as well, and when we deploy a Kafka cluster in KRaft mode, we must also use KafkaNodePool resources. Here's the full list of overridden Helm chart values:

dashboards:
  enabled: true
  namespace: monitoring
featureGates: +UseKRaft,+KafkaNodePools,+UnidirectionalTopicOperator

Finally, let’s install the operator in the strimzi namespace using the following command:

$ helm install strimzi-kafka-operator strimzi/strimzi-kafka-operator \
    --version 0.38.0 \
    -n strimzi --create-namespace \
    -f strimzi-values.yaml

3. Run Kafka in the KRaft Mode

In the current version of Strimzi, KRaft mode support is still in the alpha phase. This will probably change soon, but for now, we have to deal with some inconveniences. In the previous section, we enabled the three feature gates required to run Kafka in KRaft mode. Thanks to that, we can finally define our Kafka cluster. In the first step, we need to create a node pool. This new Strimzi object is responsible for configuring brokers and controllers in the cluster. Controllers are responsible for coordinating operations and maintaining the cluster's state. Fortunately, a single node in the pool can act as a controller and a broker at the same time.

Let’s create the KafkaNodePool object for our cluster. As you see it defines two roles: broker and controller (1). We can also configure storage for the cluster members (2). One of our goals is to avoid sharing the same Kubernetes node between Kafka brokers. Therefore, we will define the podAntiAffinity section (3). Setting the topologyKey to kubernetes.io/hostname indicates that the selected pods are not scheduled on nodes with the same hostname (4).

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: dual-role
  namespace: strimzi
  labels:
    strimzi.io/cluster: my-cluster
spec:
  replicas: 3
  roles: # (1)
    - controller
    - broker
  storage: # (2)
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 20Gi
        deleteClaim: false
  template:
    pod:
      affinity:
        podAntiAffinity: # (3)
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: strimzi.io/name
                    operator: In
                    values:
                      - my-cluster-kafka
              topologyKey: "kubernetes.io/hostname" # (4)

Once we create the node pool, we can proceed to the Kafka object. We need to enable KRaft mode and node pools for the particular cluster by annotating it with strimzi.io/kraft and strimzi.io/node-pools (1). Sections like storage (2) or zookeeper (5) are not used in KRaft mode but are still required by the CRD. We also configure the cluster metrics exporter (3) and enable the Cruise Control component (4). Of course, our cluster exposes the API for client connections on port 9092.

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  namespace: strimzi
  annotations: # (1)
    strimzi.io/node-pools: enabled
    strimzi.io/kraft: enabled
spec:
  kafka:
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
      default.replication.factor: 3
      min.insync.replicas: 2
      inter.broker.protocol.version: '3.6'
    storage: # (2)
      type: persistent-claim
      size: 5Gi
      deleteClaim: true
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
    version: 3.6.0
    replicas: 3
    metricsConfig: # (3)
      type: jmxPrometheusExporter
      valueFrom:
        configMapKeyRef:
          name: kafka-metrics
          key: kafka-metrics-config.yml
  entityOperator:
    topicOperator: {}
    userOperator: {}
  cruiseControl: {} # (4)
  # (5)
  zookeeper:
    storage:
      type: persistent-claim
      deleteClaim: true
      size: 2Gi
    replicas: 3

The metricsConfig section in the Kafka object takes a ConfigMap as the configuration source. This ConfigMap contains a single kafka-metrics-config.yml entry with the Prometheus rules definition:

kind: ConfigMap
apiVersion: v1
metadata:
  name: kafka-metrics
  namespace: strimzi
  labels:
    app: strimzi
data:
  kafka-metrics-config.yml: |
    lowercaseOutputName: true
    rules:
    - pattern: kafka.server<type=(.+), name=(.+), clientId=(.+), topic=(.+), partition=(.*)><>Value
      name: kafka_server_$1_$2
      type: GAUGE
      labels:
        clientId: "$3"
        topic: "$4"
        partition: "$5"
    - pattern: kafka.server<type=(.+), name=(.+), clientId=(.+), brokerHost=(.+), brokerPort=(.+)><>Value
      name: kafka_server_$1_$2
      type: GAUGE
      labels:
        clientId: "$3"
        broker: "$4:$5"
    - pattern: kafka.server<type=(.+), cipher=(.+), protocol=(.+), listener=(.+), networkProcessor=(.+)><>connections
      name: kafka_server_$1_connections_tls_info
      type: GAUGE
      labels:
        cipher: "$2"
        protocol: "$3"
        listener: "$4"
        networkProcessor: "$5"
    - pattern: kafka.server<type=(.+), clientSoftwareName=(.+), clientSoftwareVersion=(.+), listener=(.+), networkProcessor=(.+)><>connections
      name: kafka_server_$1_connections_software
      type: GAUGE
      labels:
        clientSoftwareName: "$2"
        clientSoftwareVersion: "$3"
        listener: "$4"
        networkProcessor: "$5"
    - pattern: "kafka.server<type=(.+), listener=(.+), networkProcessor=(.+)><>(.+):"
      name: kafka_server_$1_$4
      type: GAUGE
      labels:
        listener: "$2"
        networkProcessor: "$3"
    - pattern: kafka.server<type=(.+), listener=(.+), networkProcessor=(.+)><>(.+)
      name: kafka_server_$1_$4
      type: GAUGE
      labels:
        listener: "$2"
        networkProcessor: "$3"
    - pattern: kafka.(\w+)<type=(.+), name=(.+)Percent\w*><>MeanRate
      name: kafka_$1_$2_$3_percent
      type: GAUGE
    - pattern: kafka.(\w+)<type=(.+), name=(.+)Percent\w*><>Value
      name: kafka_$1_$2_$3_percent
      type: GAUGE
    - pattern: kafka.(\w+)<type=(.+), name=(.+)Percent\w*, (.+)=(.+)><>Value
      name: kafka_$1_$2_$3_percent
      type: GAUGE
      labels:
        "$4": "$5"
    - pattern: kafka.(\w+)<type=(.+), name=(.+)PerSec\w*, (.+)=(.+), (.+)=(.+)><>Count
      name: kafka_$1_$2_$3_total
      type: COUNTER
      labels:
        "$4": "$5"
        "$6": "$7"
    - pattern: kafka.(\w+)<type=(.+), name=(.+)PerSec\w*, (.+)=(.+)><>Count
      name: kafka_$1_$2_$3_total
      type: COUNTER
      labels:
        "$4": "$5"
    - pattern: kafka.(\w+)<type=(.+), name=(.+)PerSec\w*><>Count
      name: kafka_$1_$2_$3_total
      type: COUNTER
    - pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.+), (.+)=(.+)><>Value
      name: kafka_$1_$2_$3
      type: GAUGE
      labels:
        "$4": "$5"
        "$6": "$7"
    - pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.+)><>Value
      name: kafka_$1_$2_$3
      type: GAUGE
      labels:
        "$4": "$5"
    - pattern: kafka.(\w+)<type=(.+), name=(.+)><>Value
      name: kafka_$1_$2_$3
      type: GAUGE
    - pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.+), (.+)=(.+)><>Count
      name: kafka_$1_$2_$3_count
      type: COUNTER
      labels:
        "$4": "$5"
        "$6": "$7"
    - pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.*), (.+)=(.+)><>(\d+)thPercentile
      name: kafka_$1_$2_$3
      type: GAUGE
      labels:
        "$4": "$5"
        "$6": "$7"
        quantile: "0.$8"
    - pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.+)><>Count
      name: kafka_$1_$2_$3_count
      type: COUNTER
      labels:
        "$4": "$5"
    - pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.*)><>(\d+)thPercentile
      name: kafka_$1_$2_$3
      type: GAUGE
      labels:
        "$4": "$5"
        quantile: "0.$6"
    - pattern: kafka.(\w+)<type=(.+), name=(.+)><>Count
      name: kafka_$1_$2_$3_count
      type: COUNTER
    - pattern: kafka.(\w+)<type=(.+), name=(.+)><>(\d+)thPercentile
      name: kafka_$1_$2_$3
      type: GAUGE
      labels:
        quantile: "0.$4"
    - pattern: "kafka.server<type=raft-metrics><>(.+-total|.+-max):"
      name: kafka_server_raftmetrics_$1
      type: COUNTER
    - pattern: "kafka.server<type=raft-metrics><>(.+):"
      name: kafka_server_raftmetrics_$1
      type: GAUGE
    - pattern: "kafka.server<type=raft-channel-metrics><>(.+-total|.+-max):"
      name: kafka_server_raftchannelmetrics_$1
      type: COUNTER
    - pattern: "kafka.server<type=raft-channel-metrics><>(.+):"
      name: kafka_server_raftchannelmetrics_$1
      type: GAUGE
    - pattern: "kafka.server<type=broker-metadata-metrics><>(.+):"
      name: kafka_server_brokermetadatametrics_$1
      type: GAUGE

4. Interacting with Kafka on Kubernetes

Once we apply the KafkaNodePool and Kafka objects to the Kubernetes cluster, Strimzi starts provisioning. As a result, you should see the broker pods, a single pod running Cruise Control, and a metrics exporter pod. Each Kafka broker pod runs on a different Kubernetes node:

Clients can connect to Kafka using the my-cluster-kafka-bootstrap Service on port 9092:

$ kubectl get svc
NAME                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                        AGE
my-cluster-cruise-control    ClusterIP   10.96.108.204   <none>        9090/TCP                                       4m10s
my-cluster-kafka-bootstrap   ClusterIP   10.96.155.136   <none>        9091/TCP,9092/TCP,9093/TCP                     4m59s
my-cluster-kafka-brokers     ClusterIP   None            <none>        9090/TCP,9091/TCP,8443/TCP,9092/TCP,9093/TCP   4m59s

In the next step, we will deploy our two apps for producing and consuming messages. The producer app sends one message per second to the target topic:

@SpringBootApplication
@EnableScheduling
public class KafkaProducer {

   private static final Logger LOG = LoggerFactory
      .getLogger(KafkaProducer.class);

   public static void main(String[] args) {
      SpringApplication.run(KafkaProducer.class, args);
   }

   AtomicLong id = new AtomicLong();
   @Autowired
   KafkaTemplate<Long, Info> template;

   @Value("${POD:kafka-producer}")
   private String pod;
   @Value("${NAMESPACE:empty}")
   private String namespace;
   @Value("${CLUSTER:localhost}")
   private String cluster;
   @Value("${TOPIC:test}")
   private String topic;

   @Scheduled(fixedRate = 1000)
   public void send() {
      Info info = new Info(id.incrementAndGet(), 
                           pod, namespace, cluster, "HELLO");
      CompletableFuture<SendResult<Long, Info>> result = template
         .send(topic, info.getId(), info);
      result.whenComplete((sr, ex) -> {
         if (ex != null) {
            LOG.error("Error sending message", ex);
         } else {
            LOG.info("Sent({}): {}", sr.getProducerRecord().key(),
               sr.getProducerRecord().value());
         }
      });
   }
}
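The KafkaTemplate above needs matching serializer settings to send a Long key and a JSON-serialized Info value. Here's a minimal sketch of the producer configuration the app relies on; in the real project these values would typically live in application.yml, and the exact serializer classes and the bootstrap address are assumptions.

```java
import java.util.HashMap;
import java.util.Map;

public class ProducerProps {
    // A sketch of the producer settings assumed by the KafkaTemplate<Long, Info> bean.
    static Map<String, Object> producerConfig(String bootstrapServers) {
        Map<String, Object> props = new HashMap<>();
        // e.g. my-cluster-kafka-bootstrap:9092 inside the strimzi namespace
        props.put("bootstrap.servers", bootstrapServers);
        props.put("key.serializer", "org.apache.kafka.common.serialization.LongSerializer");
        props.put("value.serializer", "org.springframework.kafka.support.serializer.JsonSerializer");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(producerConfig("my-cluster-kafka-bootstrap:9092"));
    }
}
```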

Here’s the Deployment manifest for the producer app:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: producer
spec:
  selector:
    matchLabels:
      app: producer
  template:
    metadata:
      labels:
        app: producer
    spec:
      containers:
      - name: producer
        image: piomin/producer
        resources:
          requests:
            memory: 200Mi
            cpu: 100m
        ports:
        - containerPort: 8080
        env:
          - name: KAFKA_URL
            value: my-cluster-kafka-bootstrap
          - name: CLUSTER
            value: c1
          - name: TOPIC
            value: test-1
          - name: POD
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace

Before running the app, we can create the test-1 topic with the Strimzi KafkaTopic CRD:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: test-1
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 12
  replicas: 3
  config:
    retention.ms: 7200000
    segment.bytes: 1000000

The consumer app listens for incoming messages. Here's the bean responsible for receiving and logging them:

@SpringBootApplication
@EnableKafka
public class KafkaConsumer {

   private static final Logger LOG = LoggerFactory
      .getLogger(KafkaConsumer.class);

   public static void main(String[] args) {
      SpringApplication.run(KafkaConsumer.class, args);
   }

   @Value("${app.in.topic}")
   private String topic;

   @KafkaListener(id = "info", topics = "${app.in.topic}")
   public void onMessage(@Payload Info info,
      @Header(name = KafkaHeaders.RECEIVED_KEY, required = false) Long key,
      @Header(KafkaHeaders.RECEIVED_PARTITION) int partition) {
      LOG.info("Received(key={}, partition={}): {}", key, partition, info);
   }
}
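On the consumer side, the listener deserializes the JSON payload back into an Info object. Here's a minimal sketch of the consumer configuration it relies on; again, in the real project these values would typically live in application.yml, and the deserializer classes and trusted-packages value are assumptions.

```java
import java.util.HashMap;
import java.util.Map;

public class ConsumerProps {
    // A sketch of the consumer settings assumed by the @KafkaListener above.
    static Map<String, Object> consumerConfig(String bootstrapServers, String groupId) {
        Map<String, Object> props = new HashMap<>();
        props.put("bootstrap.servers", bootstrapServers);
        props.put("group.id", groupId);
        props.put("key.deserializer", "org.apache.kafka.common.serialization.LongDeserializer");
        props.put("value.deserializer", "org.springframework.kafka.support.serializer.JsonDeserializer");
        // Spring's JsonDeserializer refuses to instantiate classes
        // outside the trusted packages, hence this (permissive) setting.
        props.put("spring.json.trusted.packages", "*");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(consumerConfig("my-cluster-kafka-bootstrap:9092", "info"));
    }
}
```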

Here’s the Deployment manifest for the consumer app:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: consumer
spec:
  selector:
    matchLabels:
      app: consumer
  template:
    metadata:
      labels:
        app: consumer
    spec:
      containers:
      - name: consumer
        image: piomin/consumer
        resources:
          requests:
            memory: 200Mi
            cpu: 100m
        ports:
        - containerPort: 8080
        env:
          - name: TOPIC
            value: test-1
          - name: KAFKA_URL
            value: my-cluster-kafka-bootstrap

We can run both Spring Boot apps using Skaffold. Firstly, we need to go to the kafka directory in our repository. Then let’s run the following command:

$ skaffold run -n strimzi --tail

Finally, we can verify the logs printed by our apps. As you can see, all the messages sent by the producer app are received by the consumer app.

kafka-on-kubernetes-logs

5. Kafka Metrics in Prometheus

Once we install the Strimzi Helm chart with the dashboard.enabled=true and dashboard.namespace=monitoring parameters, several Grafana dashboard manifests are placed in the monitoring namespace. Each dashboard is represented as a ConfigMap. Let's display the list of ConfigMaps installed by the Strimzi Helm chart:

$ kubectl get cm -n monitoring | grep strimzi
strimzi-cruise-control                                    1      2m
strimzi-kafka                                             1      2m
strimzi-kafka-bridge                                      1      2m
strimzi-kafka-connect                                     1      2m
strimzi-kafka-exporter                                    1      2m
strimzi-kafka-mirror-maker-2                              1      2m
strimzi-kafka-oauth                                       1      2m
strimzi-kraft                                             1      2m
strimzi-operators                                         1      2m
strimzi-zookeeper                                         1      2m

Since Grafana is also installed in the monitoring namespace, it automatically imports all the dashboards from ConfigMaps labeled with grafana_dashboard. Consequently, after logging into Grafana (admin / prom-operator), we can easily switch between all the Kafka-related dashboards.

The only problem is that Prometheus doesn't scrape the metrics exposed by the Kafka pods. Since we have already configured metrics exporting in the Strimzi Kafka CRD, the Kafka pods expose the /metrics endpoint for Prometheus on port 9404. Let's take a look at the Kafka broker pod details:

In order to force Prometheus to scrape metrics from the Kafka pods, we need to create a PodMonitor object. We should place it in the monitoring namespace (1) and set the release=kube-prometheus-stack label (2). The PodMonitor object selects all the pods from the strimzi namespace (3) that contain the strimzi.io/kind label with one of the values: Kafka, KafkaConnect, KafkaMirrorMaker, KafkaMirrorMaker2 (4). It also queries the /metrics endpoint on the port named tcp-prometheus (5).

apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: kafka-resources-metrics
  namespace: monitoring
  labels:
    app: strimzi
    release: kube-prometheus-stack
spec:
  selector:
    matchExpressions:
      - key: "strimzi.io/kind"
        operator: In
        values: ["Kafka", "KafkaConnect", "KafkaMirrorMaker", "KafkaMirrorMaker2"]
  namespaceSelector:
    matchNames:
      - strimzi
  podMetricsEndpoints:
  - path: /metrics
    port: tcp-prometheus
    relabelings:
    - separator: ;
      regex: __meta_kubernetes_pod_label_(strimzi_io_.+)
      replacement: $1
      action: labelmap
    - sourceLabels: [__meta_kubernetes_namespace]
      separator: ;
      regex: (.*)
      targetLabel: namespace
      replacement: $1
      action: replace
    - sourceLabels: [__meta_kubernetes_pod_name]
      separator: ;
      regex: (.*)
      targetLabel: kubernetes_pod_name
      replacement: $1
      action: replace
    - sourceLabels: [__meta_kubernetes_pod_node_name]
      separator: ;
      regex: (.*)
      targetLabel: node_name
      replacement: $1
      action: replace
    - sourceLabels: [__meta_kubernetes_pod_host_ip]
      separator: ;
      regex: (.*)
      targetLabel: node_ip
      replacement: $1
      action: replace

Finally, we can display the Grafana dashboard with the Kafka metrics visualization. Let's choose the dashboard named "Strimzi Kafka". Here's the general view:

kafka-on-kubernetes-metrics

There are several other diagrams available. For example, we can take a look at the statistics related to the incoming and outgoing messages.

6. Rebalancing Kafka with Cruise Control

Let’s analyze the typical scenario around Kafka related to increasing the number of brokers in the cluster. Before we do it, we will generate more incoming traffic to the test-1 topic. In order to do it, we can use the Grafana k6 tool. The k6 tool provides several extensions for load testing – including the Kafka plugin. Here’s the Deployment manifest that runs k6 with the Kafka extension on Kubernetes.

kind: ConfigMap
apiVersion: v1
metadata:
  name: load-test-cm
  namespace: strimzi
data:
  load-test.js: |
    import {
      Writer,
      SchemaRegistry,
      SCHEMA_TYPE_JSON,
    } from "k6/x/kafka";
    const writer = new Writer({
      brokers: ["my-cluster-kafka-bootstrap.strimzi:9092"],
      topic: "test-1",
    });
    const schemaRegistry = new SchemaRegistry();
    export default function () {
      writer.produce({
        messages: [
          {
            value: schemaRegistry.serialize({
              data: {
                id: 1,
                source: "test",
                space: "strimzi",
                cluster: "c1",
                message: "HELLO"
              },
              schemaType: SCHEMA_TYPE_JSON,
            }),
          },
        ],
      });
    }
    
    export function teardown(data) {
      writer.close();
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k6-test
  namespace: strimzi
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: k6-test
  template:
    metadata:
      labels:
        app.kubernetes.io/name: k6-test
    spec:
      containers:
        - image: mostafamoradian/xk6-kafka:latest
          name: xk6-kafka
          command:
            - "k6"
            - "run"
            - "--vus"
            - "1"
            - "--duration"
            - "720s"
            - "/tests/load-test.js"
          env:
            - name: KAFKA_URL
              value: my-cluster-kafka-bootstrap
            - name: CLUSTER
              value: c1
            - name: TOPIC
              value: test-1
            - name: POD
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - mountPath: /tests
              name: test
      volumes:
        - name: test
          configMap:
            name: load-test-cm

Let’s apply the manifest to the strimzi namespace with the following command:

$ kubectl apply -f k8s/k6.yaml

After that, we can take a look at the k6 Pod logs. As you can see, it generates and sends a lot of messages to the test-1 topic on our Kafka cluster:

Now, let’s increase the number of Kafka brokers in our cluster. We can do it by changing the value of the replicas field in the KafkaNodePool object:

$ kubectl scale kafkanodepool dual-role --replicas=4 -n strimzi

After a while, Strimzi will start a new pod with another Kafka broker. Although we have a new member of the Kafka cluster, all the partitions are still distributed only across the three previous brokers. The situation would be different for new topics; however, partitions of existing topics won't be automatically migrated to the new broker instance. Let's verify the current partition structure of the test-1 topic with the kcat CLI (I'm exposing the Kafka API locally with kubectl port-forward):

$ kcat -b localhost:9092 -L -t test-1
Metadata for test-1 (from broker -1: localhost:9092/bootstrap):
 4 brokers:
  broker 0 at my-cluster-dual-role-0.my-cluster-kafka-brokers.strimzi.svc:9092
  broker 1 at my-cluster-dual-role-1.my-cluster-kafka-brokers.strimzi.svc:9092
  broker 2 at my-cluster-dual-role-2.my-cluster-kafka-brokers.strimzi.svc:9092
  broker 3 at my-cluster-dual-role-3.my-cluster-kafka-brokers.strimzi.svc:9092 (controller)
 1 topics:
  topic "test-1" with 12 partitions:
    partition 0, leader 0, replicas: 0,1,2, isrs: 1,0,2
    partition 1, leader 1, replicas: 1,2,0, isrs: 1,0,2
    partition 2, leader 2, replicas: 2,0,1, isrs: 1,0,2
    partition 3, leader 0, replicas: 0,1,2, isrs: 1,0,2
    partition 4, leader 1, replicas: 1,2,0, isrs: 1,0,2
    partition 5, leader 2, replicas: 2,0,1, isrs: 1,0,2
    partition 6, leader 0, replicas: 0,1,2, isrs: 1,0,2
    partition 7, leader 1, replicas: 1,2,0, isrs: 1,0,2
    partition 8, leader 2, replicas: 2,0,1, isrs: 1,0,2
    partition 9, leader 0, replicas: 0,2,1, isrs: 1,0,2
    partition 10, leader 2, replicas: 2,1,0, isrs: 1,0,2
    partition 11, leader 1, replicas: 1,0,2, isrs: 1,0,2

Here comes Cruise Control, which makes managing and operating Kafka much easier. For example, it allows us to move partitions across brokers after scaling up the cluster. Let's see how it works. We have already enabled Cruise Control in the Strimzi Kafka CRD. In order to begin the rebalancing procedure, we should create the KafkaRebalance object. This object is responsible for asking Cruise Control to generate an optimization proposal.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaRebalance
metadata:
  name: my-rebalance
  labels:
    strimzi.io/cluster: my-cluster
spec: {}

Once the optimization proposal is ready, you will see the ProposalReady value in the status conditions of the KafkaRebalance object. I won't get into the details of Cruise Control here. In my case, it suggested moving 58 partition replicas between brokers in the cluster.

Let’s accept the proposal by annotating the KafkaRebalance object with strimzi.io/rebalance=approve:

$ kubectl annotate kafkarebalance my-rebalance \
    strimzi.io/rebalance=approve -n strimzi

Finally, we can run the kcat command on the test-1 topic once again. Now, as you can see, the partition replicas are spread across all the brokers.

$ kcat -b localhost:9092 -L -t test-1
Metadata for test-1 (from broker -1: localhost:9092/bootstrap):
 4 brokers:
  broker 0 at my-cluster-dual-role-0.my-cluster-kafka-brokers.strimzi.svc:9092
  broker 1 at my-cluster-dual-role-1.my-cluster-kafka-brokers.strimzi.svc:9092
  broker 2 at my-cluster-dual-role-2.my-cluster-kafka-brokers.strimzi.svc:9092
  broker 3 at my-cluster-dual-role-3.my-cluster-kafka-brokers.strimzi.svc:9092 (controller)
 1 topics:
  topic "test-1" with 12 partitions:
    partition 0, leader 2, replicas: 2,1,3, isrs: 1,2,3
    partition 1, leader 1, replicas: 1,2,0, isrs: 1,0,2
    partition 2, leader 2, replicas: 0,2,1, isrs: 1,0,2
    partition 3, leader 0, replicas: 0,2,3, isrs: 0,2,3
    partition 4, leader 1, replicas: 3,2,1, isrs: 1,2,3
    partition 5, leader 2, replicas: 2,3,0, isrs: 0,2,3
    partition 6, leader 0, replicas: 0,1,2, isrs: 1,0,2
    partition 7, leader 1, replicas: 3,1,0, isrs: 1,0,3
    partition 8, leader 2, replicas: 2,0,1, isrs: 1,0,2
    partition 9, leader 0, replicas: 0,3,1, isrs: 1,0,3
    partition 10, leader 2, replicas: 2,3,0, isrs: 0,2,3
    partition 11, leader 1, replicas: 1,0,3, isrs: 1,0,3

Final Thoughts

Strimzi allows us to install and manage not only Kafka itself, but also the whole ecosystem around it. In this article, I showed how to export metrics to Prometheus and how to use Cruise Control to rebalance a cluster after a scale-up. We also ran Kafka in KRaft mode and connected two simple Java apps to the cluster through a Kubernetes Service.

The post Apache Kafka on Kubernetes with Strimzi appeared first on Piotr's TechBlog.

Kubernetes Multicluster Load Balancing with Skupper https://piotrminkowski.com/2023/08/04/kubernetes-multicluster-load-balancing-with-skupper/ https://piotrminkowski.com/2023/08/04/kubernetes-multicluster-load-balancing-with-skupper/#respond Fri, 04 Aug 2023 00:03:25 +0000 https://piotrminkowski.com/?p=14372 In this article, you will learn how to leverage Skupper for load balancing between app instances running on several Kubernetes clusters. We will create some Kubernetes clusters locally with Kind. Then we will connect them using Skupper. Skupper cluster interconnection works in Layer 7 (application layer). It means there is no need to create any […]

The post Kubernetes Multicluster Load Balancing with Skupper appeared first on Piotr's TechBlog.

In this article, you will learn how to leverage Skupper for load balancing between app instances running on several Kubernetes clusters. We will create some Kubernetes clusters locally with Kind. Then we will connect them using Skupper.

Skupper cluster interconnection works at Layer 7 (the application layer). It means there is no need to create any VPNs or special firewall rules. Skupper follows the Virtual Application Network (VAN) approach. Thanks to that, it can connect different Kubernetes clusters and guarantee communication between services without exposing them to the Internet. You can read more about the concept behind it in the Skupper docs.

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that, you need to clone my GitHub repository. This time we will do almost everything with a command-line tool (the skupper CLI). The repository contains just a sample Spring Boot app with Kubernetes Deployment manifests and a Skaffold config. You will find instructions there on how to deploy the app with Skaffold, but you can use another tool as well. As always, follow my instructions for the details 🙂

Create Kubernetes clusters with Kind

In the first step, we will create three Kubernetes clusters with Kind. We need to give them different names: c1, c2 and c3. Accordingly, they are available under the context names: kind-c1, kind-c2 and kind-c3.

$ kind create cluster --name c1
$ kind create cluster --name c2
$ kind create cluster --name c3

In this exercise, we will switch between the clusters a few times. Personally, I'm using kubectx to switch between different Kubernetes contexts and kubens to switch between namespaces.

By default, Skupper exposes itself as a Kubernetes LoadBalancer Service. Therefore, we need to enable the load balancer on Kind. In order to do that, we can install MetalLB. You can find the full installation instructions in the Kind docs here. Firstly, let’s switch to the c1 cluster:

$ kubectx kind-c1

Then, we have to apply the following YAML manifest:

$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml

You should repeat the same procedure for the other two clusters: c2 and c3. However, that is not all. We also need to set up the address pool used by the load balancers. To do that, let's first check the range of IP addresses on the Docker network used by Kind. For me, it is 172.19.0.0/16 with the gateway 172.19.0.1.

$ docker network inspect -f '{{.IPAM.Config}}' kind

Based on the results, we need to choose the right IP address ranges for all three Kind clusters. Then we have to create the IPAddressPool object, which contains the IP range. Here's the YAML manifest for the c1 cluster:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: example
  namespace: metallb-system
spec:
  addresses:
  - 172.19.255.200-172.19.255.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: empty
  namespace: metallb-system

Here’s the pool configuration for e.g. the c2 cluster. It is important that the address range should not conflict with the ranges in two other Kind clusters.

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: example
  namespace: metallb-system
spec:
  addresses:
  - 172.19.255.150-172.19.255.199
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: empty
  namespace: metallb-system

Finally, the configuration for the c3 cluster:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: example
  namespace: metallb-system
spec:
  addresses:
  - 172.19.255.100-172.19.255.149
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: empty
  namespace: metallb-system
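Since keeping the three pools disjoint is a requirement of this setup (not something MetalLB enforces across clusters for us), a quick way to sanity-check the chosen ranges is to compare the last octets. Here's a minimal sketch of that check; the hard-coded octet values correspond to the pools above:

```java
public class PoolOverlap {
    // Two inclusive integer ranges overlap iff each starts before the other ends.
    static boolean overlaps(int aStart, int aEnd, int bStart, int bEnd) {
        return aStart <= bEnd && bStart <= aEnd;
    }

    public static void main(String[] args) {
        // Last-octet ranges from the manifests: c1 .200-.250, c2 .150-.199, c3 .100-.149
        System.out.println(overlaps(200, 250, 150, 199)); // false (c1 vs c2)
        System.out.println(overlaps(150, 199, 100, 149)); // false (c2 vs c3)
        System.out.println(overlaps(200, 250, 100, 149)); // false (c1 vs c3)
    }
}
```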

After applying the YAML manifests with the kubectl apply -f command we can proceed to the next section.

Install Skupper on Kubernetes

We can install and manage Skupper on Kubernetes in two different ways: with the CLI or through YAML manifests. Most of the examples in the Skupper documentation use the CLI, so I guess it is the preferred approach. Consequently, before we start with Kubernetes, we need to install the CLI. You can find the installation instructions in the Skupper docs here. Once you install it, just verify that it works with the following command:

$ skupper version

After that, we can proceed with the Kubernetes clusters. We will create the same interconnect namespace inside all three clusters. To simplify the upcoming exercise, we can also set a default namespace for each context (alternatively, you can do it with the kubectl config set-context --current --namespace interconnect command).

$ kubectl create ns interconnect
$ kubens interconnect

Then, let’s switch to the kind-c1 cluster. We will stay in this context until the end of our exercise 🙂

$ kubectx kind-c1

Finally, we will install Skupper on our Kubernetes clusters. In order to do that, we have to execute the skupper init command. Fortunately, it allows us to set the target Kubernetes context with the -c parameter. Inside the kind-c1 cluster, we will also enable the Skupper UI dashboard (the --enable-console parameter). With the Skupper console, we can, for example, visualize the traffic volume for all targets in the Skupper network.

$ skupper init --enable-console --enable-flow-collector
$ skupper init -c kind-c2
$ skupper init -c kind-c3

Let’s verify the status of the Skupper installation:

$ skupper status
$ skupper status -c kind-c2
$ skupper status -c kind-c3

Here’s the status for Skupper running in the kind-c1 cluster:

kubernetes-skupper-status

We can also display a list of running Skupper pods in the interconnect namespace:

$ kubectl get po
NAME                                          READY   STATUS    RESTARTS   AGE
skupper-prometheus-867f57b89-dc4lq            1/1     Running   0          3m36s
skupper-router-55bbb99b87-k4qn5               2/2     Running   0          3m40s
skupper-service-controller-6bf57595dd-45hvw   2/2     Running   0          3m37s

Now, our goal is to connect both the c2 and c3 Kind clusters with the c1 cluster. In Skupper nomenclature, we have to create a link between the namespaces in the source and target clusters. Before we create a link, we need to generate a secret token that grants permission to create the link and also carries the link details. We are generating two tokens on the target cluster, each stored as a YAML file. The first of them is for the kind-c2 cluster (skupper-c2-token.yaml), and the second for the kind-c3 cluster (skupper-c3-token.yaml).

$ skupper token create skupper-c2-token.yaml
$ skupper token create skupper-c3-token.yaml

We will consider several scenarios where we create a link using different parameters. Before that, let’s deploy our sample app on the kind-c2 and kind-c3 clusters.

Running the sample app on Kubernetes with Skaffold

After cloning the sample app repository go to the main directory. You can easily build and deploy the app to both kind-c2 and kind-c3 with the following commands:

$ skaffold dev --kube-context=kind-c2
$ skaffold dev --kube-context=kind-c3

After deploying the app, Skaffold automatically prints all the logs as shown below. It will be helpful in the next steps of our exercise.

Our app is deployed under the sample-spring-kotlin-microservice name.

Load balancing with Skupper – scenarios

Scenario 1: the same number of pods and link cost

Let’s start with the simplest scenario. We have a single pod of our app running on the kind-c2 and kind-c3 cluster. In Skupper we can also assign a cost to each link to influence the traffic flow. By default, link cost is set to 1 for a new link. In a service network, the routing algorithm attempts to use the path with the lowest total cost from the client to the target server. For now, we will leave a default value. Here’s a visualization of the first scenario:

Let’s create links to the c1 Kind cluster using the previously generated tokens.

$ skupper link create skupper-c2-token.yaml -c kind-c2
$ skupper link create skupper-c3-token.yaml -c kind-c3

If everything goes fine you should see a similar message:

We can also verify the status of links by executing the following commands:

$ skupper link status -c kind-c2
$ skupper link status -c kind-c3

It means that the c2 and c3 Kind clusters are now part of the same Skupper network as the c1 cluster. The next step is to expose our app running in the c2 and c3 clusters to the c1 cluster. Skupper works at Layer 7 and, by default, it doesn't connect apps unless we enable that feature for a particular app. In order to expose our apps to the c1 cluster, we need to run the following command on both the c2 and c3 clusters.

$ skupper expose deployment/sample-spring-kotlin-microservice \
  --port 8080 -c kind-c2
$ skupper expose deployment/sample-spring-kotlin-microservice \
  --port 8080 -c kind-c3

Let’s take a look at what happened at the target (kind-c1) cluster. As you see Skupper created the sample-spring-kotlin-microservice Kubernetes Service that forwards traffic to the skupper-router pod. The Skupper Router is responsible for load-balancing requests across pods being a part of the Skupper network.

To simplify our exercise, we will enable port-forwarding for the Service visible above.

$ kubectl port-forward svc/sample-spring-kotlin-microservice 8080:8080

Thanks to that we don’t have to configure Kubernetes Ingress to call the service. Now, we can send some test requests over localhost, e.g. with siege.

$ siege -r 200 -c 5 http://localhost:8080/persons/1

We can easily verify that the traffic is coming to pods running on the kind-c2 and kind-c3 by looking at the logs. Alternatively, we can go to the Skupper console and see the traffic visualization:

kubernetes-skupper-diagram-first

Scenario 2: different number of pods and same link cost

In the next scenario, we won't change anything in the Skupper network configuration. We will just run a second pod of the app in the kind-c3 cluster. So now, there is a single pod running in the kind-c2 cluster, and two pods running in the kind-c3 cluster. Here's our architecture.

Once again, we can send some requests to the previously tested Kubernetes Service with the siege command:

$ siege -r 200 -c 5 http://localhost:8080/persons/2

Let’s take a look at traffic visualization in the Skupper dashboard. We can switch between all available pods. Here’s the diagram for the pod running in the kind-c2 cluster.

kubernetes-skupper-diagram

Here’s the same diagram for the pod running in the kind-c3 cluster. As you see it receives only ~50% (or even less depending on which pod we visualize) of traffic received by the pod in the kind-c2 cluster. That’s because Skupper there are two pods running in the kind-c3 cluster, while Skupper still balances requests across clusters equally.

Scenario 3: only one pod and different link costs

In the current scenario, there is a single pod of the app running on the c2 Kind cluster. At the same time, there are no pods on the c3 cluster (the Deployment exists but it has been scaled down to zero instances). Here’s the visualization of our scenario.

kubernetes-skupper-arch2

The important thing here is that the c3 cluster is preferred by Skupper, since the link to it has a lower cost (2) than the link to the c2 cluster (4). So now, we need to remove the previous links, and then create new ones with the following commands:

$ skupper link create skupper-c2-token.yaml --cost 4 -c kind-c2
$ skupper link create skupper-c3-token.yaml --cost 2 -c kind-c3

In order to create a Skupper link once again you first need to delete the previous one with the skupper link delete link1 command. Then you have to generate new tokens with the skupper token create command as we did before.
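The effect of the costs can be sketched as picking, among the sites that expose the service, the one reachable via the lowest total link cost. Real Skupper routing is more involved (it also spreads load and handles failover when the preferred site has no pods, as in this scenario); this only illustrates why the c3 link (cost 2) is preferred over the c2 link (cost 4).

```java
import java.util.Map;

public class CostRouting {
    // Returns the site with the lowest link cost from the current cluster.
    static String preferredSite(Map<String, Integer> linkCosts) {
        return linkCosts.entrySet().stream()
            .min(Map.Entry.comparingByValue())
            .map(Map.Entry::getKey)
            .orElseThrow();
    }

    public static void main(String[] args) {
        System.out.println(preferredSite(Map.of("kind-c2", 4, "kind-c3", 2))); // kind-c3
    }
}
```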

Let’s take a look at the Skupper network status:

kubernetes-skupper-network-status

Let’s send some test requests to the exposed service. It works without any errors. Since there is only a single running pod, the whole traffic goes there:

Scenario 4: more pods in one cluster and different link costs

Finally, the last scenario in our exercise. We will use the same Skupper configuration as in Scenario 3. However, this time we will run two pods in the kind-c3 cluster.

kubernetes-skupper-arch-1

We can switch once again to the Skupper dashboard. Now, as you see, all the pods receive a very similar amount of traffic. Here’s the diagram for the pod running on the kind-c2 cluster.

kubernetes-skupper-equal-traffic

Here’s a similar diagram for the pod running on the kind-c3 cluster. After setting the cost of the link assuming the number of pods running on the cluster I was able to split traffic equally between all the pods across both clusters. It works. However, it is not a perfect way for load-balancing. I would expect at least an option for enabling a round-robin between all the pods working in the same Skupper network. The solution presented in this scenario will work as expected unless we enable auto-scaling for the app.

Final Thoughts

Skupper introduces an interesting approach to Kubernetes multicluster connectivity based fully on Layer 7. You can compare it to other solutions based on different layers, like Submariner or Cilium Cluster Mesh. I described both of them in my previous articles. If you want to read more about Submariner, visit the following post. If you are interested in Cilium, read that article.

The post Kubernetes Multicluster Load Balancing with Skupper appeared first on Piotr's TechBlog.

Development on Kubernetes Multicluster with Devtron https://piotrminkowski.com/2022/11/02/development-on-kubernetes-multicluster-with-devtron/ https://piotrminkowski.com/2022/11/02/development-on-kubernetes-multicluster-with-devtron/#respond Wed, 02 Nov 2022 09:45:47 +0000 https://piotrminkowski.com/?p=13579 In this article, you will learn how to use Devtron for app development on Kubernetes in a multi-cluster environment. Devtron comes with tools for building, deploying, and managing microservices. It simplifies deployment on Kubernetes by providing intuitive UI and Helm charts support. Today, we will run a sample Spring Boot app using our custom Helm […]

The post Development on Kubernetes Multicluster with Devtron appeared first on Piotr's TechBlog.

In this article, you will learn how to use Devtron for app development on Kubernetes in a multi-cluster environment. Devtron comes with tools for building, deploying, and managing microservices. It simplifies deployment on Kubernetes by providing intuitive UI and Helm charts support. Today, we will run a sample Spring Boot app using our custom Helm chart. We will deploy it in different namespaces across multiple Kubernetes clusters. Our sample app connects to the database, which runs on Kubernetes and has been deployed using the Devtron Helm chart support.

It’s not my first article about Devtron. You can read more about the GitOps approach with Devtron in this article. Today, I’m going to focus more on the developer-friendly features around Helm charts support.

Install Devtron on Kubernetes

In the first step, we will install Devtron on Kubernetes. There are two options for the installation: with the CI/CD module or without it. We won't build a CI/CD process today, but this module includes some features important for our scenario. Firstly, let's add the Devtron Helm repository:

$ helm repo add devtron https://helm.devtron.ai

Then, we have to execute the following Helm command:

$ helm install devtron devtron/devtron-operator \
    --create-namespace --namespace devtroncd \
    --set installer.modules={cicd}

For detailed installation instructions please refer to the Devtron documentation available here.

Create Kubernetes Cluster with Kind

In order to prepare a multi-cluster environment on the local machine, we will use Kind. Let’s create the second Kubernetes cluster c1 by executing the following command:

$ kind create cluster --name c1

The second cluster is available as the kind-c1 context, which becomes the default context after you create a Kind cluster.

Now, our goal is to add the newly created Kind cluster as a managed cluster in Devtron. A single instance of Devtron can manage multiple Kubernetes clusters. Of course, by default, it just manages a local cluster. Before we add our Kind cluster to the Devtron dashboard, we should first configure privileges on that cluster. The following script will generate a bearer token for authentication purposes so that Devtron is able to communicate with the target cluster:

$ curl -O https://raw.githubusercontent.com/devtron-labs/utilities/main/kubeconfig-exporter/kubernetes_export_sa.sh && bash kubernetes_export_sa.sh cd-user devtroncd https://raw.githubusercontent.com/devtron-labs/utilities/main/kubeconfig-exporter/clusterrole.yaml

The bearer token is printed in the output of that command. Just copy it.

We will also have to provide a URL of the master API of the target cluster. Since I'm running Kubernetes on Kind, I need to get the internal address of the Docker container that runs the Kind node. In order to obtain it, we need to run the following command:

$ docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' c1-control-plane

Here’s the IP address of my Kind cluster:

Now, we have all the required data to add a new managed cluster in the Devtron dashboard. In order to do that let’s navigate to the “Global Configuration” section. Then we need to choose the “Clusters and Environments” item and click the “Add cluster” button. We need to put the Kind cluster URL and previously generated bearer token.

If everything works fine, you should see the second cluster on the managed clusters list. Now, you also need to install the Devtron agent on Kind according to the message visible below:

devtron-development-agent

Create Environments

In the next step, we will define three environments. In Devtron, an environment is assigned to a cluster. We will create a single environment on the local cluster (local), and another two on the Kind cluster (remote-dev, remote-devqa). Each environment has a target namespace. To keep things simple, the name of each namespace is the same as the name of the environment. Of course, you may set any names you want.

devtron-development-clusters

Now, let’s switch to the “Clusters” view.

As you see there are two clusters connected to Devtron:

devtron-development-cluster-list

We can take a look at the details of each cluster. Here you can see a detailed view for the kind-c1 cluster:

Add Custom Helm Repository

One of the most important Devtron features is support for Helm charts. We can deploy charts individually or by creating a group of charts. By default, there are several Helm repositories available in Devtron including bitnami or elastic. It is also possible to add a custom repository. That’s something that we are going to do. We have our own custom Helm repository with a chart for deploying the Spring Boot app. I have already published it on GitHub under the address https://piomin.github.io/helm-charts/. The name of our chart is spring-boot-api-app, and the latest version is 0.3.2.
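For illustration only, a values override for such a chart could look like the fragment below. The actual keys are dictated by the chart's values.schema.json, so treat the field names here as assumptions rather than the chart's real schema:

```yaml
# Hypothetical values.yaml override for the spring-boot-api-app chart —
# field names are illustrative, verify them against the chart's schema
image:
  repository: piomin/sample-spring-kotlin-microservice
  tag: "1.1"
replicaCount: 1
```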

In order to add the custom repository in Devtron, we need to go to the “Global Configurations” section once again. Then go to the “Chart repositories” menu item, and click the “Add repository” button. As you see below, I added a new repository under the name piomin.

devtron-development-helm

Once you have created the repository, you can go to the “Chart Store” section to verify that the new chart is available.

devtron-development-helm-chart

Deploy the Spring Boot App with Devtron

Now, we can proceed to the most important part of our exercise – application deployment. Our sample Spring Boot app is available in the following repository on GitHub. It is a simple REST app written in Kotlin. It exposes some HTTP endpoints for adding and returning persons and uses an in-memory store. Here’s our Spring @RestController:

@RestController
@RequestMapping("/persons")
class PersonController(val repository: PersonRepository) {

   val log: Logger = LoggerFactory.getLogger(PersonController::class.java)

   @GetMapping("/{id}")
   fun findById(@PathVariable id: Int): Person? {
      log.info("findById({})", id)
      return repository.findById(id)
   }

   @GetMapping("/age/{age}")
   fun findByAge(@PathVariable age: Int): List<Person> {
      log.info("findByAge({})", age)
      return repository.findByAge(age)
   }

   @GetMapping
   fun findAll(): List<Person> = repository.findAll()

   @PostMapping
   fun add(@RequestBody person: Person): Person = repository.save(person)

   @PutMapping
   fun update(@RequestBody person: Person): Person = repository.update(person)

   @DeleteMapping("/{id}")
   fun remove(@PathVariable id: Int): Boolean = repository.removeById(id)

}

Let’s imagine we are just working on the latest version of that, and we want to deploy it on Kubernetes to perform some development tests. In the first step, we will build the app locally and push the image to the container registry using Jib Maven Plugin. Here’s the required configuration:

<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <version>3.3.0</version>
  <configuration>
    <to>
      <image>piomin/sample-spring-kotlin-microservice</image>
      <tags>
        <tag>1.1</tag>
      </tags>
    </to>
    <container>
      <user>999</user>
    </container>
  </configuration>
</plugin>

Let’s build and push the image to the container registry using the following command:

$ mvn clean compile jib:build -Pjib,tomcat

Besides YAML templates, our Helm repository also contains a JSON schema for values.yaml validation. Thanks to that schema, we are able to take advantage of the Devtron GUI for creating apps from the chart. Let's see how it works. Once you click on our custom chart, you will be redirected to the page with the details. The latest version of the chart is 0.3.2. Just click the Deploy button.

On the next page, we need to provide the configuration of our app. The target environment is local, which exists on the main cluster. Thanks to Devtron's support for Helm values.schema.json, we can define all values using the GUI form. For example, we can change the value of the image tag to the latest one – 1.1.

devtron-development-deploy-app

Once we deploy the app we may verify its status:

devtron-development-app-status

Let’s make some test calls. Our sample Spring Boot exposes Swagger UI, so we can easily send HTTP requests. To interact with the app running on Kubernetes we should enable port-forwarding for our service kubectl port-forward svc/sample-spring-boot-api 8080:8080. After executing that command you can access the Swagger UI under the address http://localhost:8080/swagger-ui.html.

Devtron allows us to view pod logs. We can “grep” them with our criteria. Let’s display the logs related to our test calls.

Deploy App to the Remote Cluster

Now, we will deploy our sample Spring Boot app to the remote cluster. In order to do that go to the same page as before, but instead of the local environment choose remote-dev. It is related to the kind-c1 cluster.

devtron-development-remote

Now, the same application is running on two different clusters. We can do the same things for the app running on the Kind cluster as for the local cluster, e.g. verify its status or check the logs.

Deploy Group of Apps

Let’s assume we would like to deploy the app that connects to the database. We can do it in a single step using the Devtron feature called “Chart Group”. With that feature, we can place our Helm chart for Spring Boot and the chart for e.g. Postgres inside the same logical group. Then, we can just deploy the whole group into the target environment. In order to create a chart group go to the Chart Store menu and then click the “Create Group” button. You should set the name of the group and choose the charts that will be included. For me, these are bitnami/postgresql and my custom Helm chart.

devtron-development-chart-group

After creating a group you will see it on the main “Chart Store” page. Now, just click on it to deploy the apps.

After you click the tile with the chart group, you will be redirected to the deploy page.

After you click the “Deploy to…” button, Devtron will redirect you to the next page. There you can set a target project and environment for all member charts of the group. We will deploy them to the remote-devqa environment from the kind-c1 cluster. We can use the image from my Docker account: piomin/person:1.1. By default, the app tries to connect to the database postgres on the postgres host. The only thing we need to inject into the app container is the postgres user password. It is available inside the postgresql Secret generated by the Bitnami Helm chart. To inject envs defined in that Secret, we use the extraEnvVarsSecret parameter in our custom Spring Boot chart. Finally, let's deploy both Spring Boot and Postgres in the remote-devqa namespace by clicking the “Deploy” button.

Here’s the final list of apps we have already deployed during this exercise:

Final Thoughts

With Devtron you can easily deploy applications across multiple Kubernetes clusters using Helm chart support. Devtron simplifies development on Kubernetes. You can deploy all required applications just with a “single click” with the chart group feature. Then you can manage and monitor them using a GUI dashboard. In general, you can do everything in the dashboard without passing any YAML manifests by yourself or executing kubectl commands.

ActiveMQ Artemis with Spring Boot on Kubernetes https://piotrminkowski.com/2022/07/26/activemq-artemis-with-spring-boot-on-kubernetes/ https://piotrminkowski.com/2022/07/26/activemq-artemis-with-spring-boot-on-kubernetes/#comments Tue, 26 Jul 2022 09:58:34 +0000 https://piotrminkowski.com/?p=12504

The post ActiveMQ Artemis with Spring Boot on Kubernetes appeared first on Piotr's TechBlog.

This article will teach you how to run ActiveMQ on Kubernetes and integrate it with your app through Spring Boot. We will deploy a clustered ActiveMQ broker using a dedicated operator. Then we are going to build and run two Spring Boot apps. The first of them is running in multiple instances and receiving messages from the queue, while the second is sending messages to that queue. In order to test the ActiveMQ cluster, we will use Kind. The consumer app connects to the cluster using several different modes. We will discuss those modes in detail.

You can find a lot of articles about other message brokers like RabbitMQ or Kafka on my blog. If you would like to read about RabbitMQ on Kubernetes, please refer to that article. In order to find out more about Kafka and Spring Boot integration, you can read the article about Kafka Streams and Spring Cloud Stream available here. Previously I didn't write much about ActiveMQ, but it is also a very popular message broker. For example, it supports the latest version of the AMQP protocol, while RabbitMQ is based on its own extension of AMQP 0.9.

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. Then go to the messaging directory. You will find there three Spring Boot apps: simple-producer, simple-consumer and simple-counter. After that, you should just follow my instructions. Let’s begin.

Integrate Spring Boot with ActiveMQ

Let’s begin with integration between our Spring Boot apps and the ActiveMQ Artemis broker. In fact, ActiveMQ Artemis is the base of the commercial product provided by Red Hat called AMQ Broker. Red Hat actively develops a Spring Boot starter for ActiveMQ and an operator for running it on Kubernetes. In order to access Spring Boot, you need to include the Red Hat Maven repository in your pom.xml file:

<repository>
  <id>red-hat-ga</id>
  <url>https://maven.repository.redhat.com/ga</url>
</repository>

After that, you can include a starter in your Maven pom.xml:

<dependency>
  <groupId>org.amqphub.spring</groupId>
  <artifactId>amqp-10-jms-spring-boot-starter</artifactId>
  <version>2.5.6</version>
  <exclusions>
    <exclusion>
      <groupId>org.slf4j</groupId>
      <artifactId>log4j-over-slf4j</artifactId>
    </exclusion>
  </exclusions>
</dependency>

Then, we just need to enable JMS for our app with the @EnableJms annotation:

@SpringBootApplication
@EnableJms
public class SimpleConsumer {

   public static void main(String[] args) {
      SpringApplication.run(SimpleConsumer.class, args);
   }

}

Our application is very simple. It just receives and prints an incoming message. The method for receiving messages should be annotated with @JmsListener. The destination field contains the name of a target queue.

@Service
public class Listener {

   private static final Logger LOG = LoggerFactory
      .getLogger(Listener.class);

   @JmsListener(destination = "test-1")
   public void processMsg(SimpleMessage message) {
      LOG.info("============= Received: " + message);
   }

}

Here’s the class that represents our message:

public class SimpleMessage implements Serializable {

   private Long id;
   private String source;
   private String content;

   public SimpleMessage() {
   }

   public SimpleMessage(Long id, String source, String content) {
      this.id = id;
      this.source = source;
      this.content = content;
   }

   // ... GETTERS AND SETTERS

   @Override
   public String toString() {
      return "SimpleMessage{" +
              "id=" + id +
              ", source='" + source + '\'' +
              ", content='" + content + '\'' +
              '}';
   }
}
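The message class implements Serializable because Spring's default SimpleMessageConverter, used by JmsTemplate and @JmsListener unless a custom converter is configured, carries such payloads as JMS ObjectMessage bodies, i.e. via plain Java serialization. The standalone sketch below illustrates that round trip; DemoMessage and SerializationDemo are illustrative names, not classes from the project:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Illustrative stand-in for SimpleMessage (not the project's class)
class DemoMessage implements Serializable {
    final Long id;
    final String source;
    final String content;

    DemoMessage(Long id, String source, String content) {
        this.id = id;
        this.source = source;
        this.content = content;
    }
}

class SerializationDemo {

    // Round-trips an object through Java serialization — roughly what
    // happens to an ObjectMessage payload between producer and consumer
    static DemoMessage roundTrip(DemoMessage in) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(in);
            }
            try (ObjectInputStream ois = new ObjectInputStream(
                    new ByteArrayInputStream(bos.toByteArray()))) {
                return (DemoMessage) ois.readObject();
            }
        } catch (IOException | ClassNotFoundException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

If a class is not Serializable, the converter rejects it at send time, which is why even simple DTOs like this one need the marker interface.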

Finally, we need to set the connection configuration. With the AMQP Spring Boot starter it is very simple: we just need to set the amqphub.amqp10jms.remoteUrl property. For now, we rely on an environment variable set at the level of the Kubernetes Deployment.

amqphub.amqp10jms.remoteUrl = ${ARTEMIS_URL}

The producer application is pretty similar. Instead of the annotation for receiving messages, we use Spring JmsTemplate for producing and sending messages to the target queue. The method for sending messages is exposed as an HTTP POST /producer/send endpoint.

@RestController
@RequestMapping("/producer")
public class ProducerController {

   private static long id = 1;
   private final JmsTemplate jmsTemplate;
   @Value("${DESTINATION}")
   private String destination;

   public ProducerController(JmsTemplate jmsTemplate) {
      this.jmsTemplate = jmsTemplate;
   }

   @PostMapping("/send")
   public SimpleMessage send(@RequestBody SimpleMessage message) {
      if (message.getId() == null) {
          message.setId(id++);
      }
      jmsTemplate.convertAndSend(destination, message);
      return message;
   }
}
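One detail worth noting: the static id field above is incremented without any synchronization, so concurrent POST requests could in principle draw the same id twice. If that matters, an AtomicLong is a drop-in fix — a sketch, where MessageIdGenerator is a hypothetical helper and not part of the repository:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical helper (not part of the repository): a thread-safe
// replacement for the static "id" field incremented in the controller
class MessageIdGenerator {

    private final AtomicLong counter = new AtomicLong(1);

    // getAndIncrement() is atomic, so concurrent callers
    // never observe the same value twice
    long nextId() {
        return counter.getAndIncrement();
    }
}
```

For a demo producer the race is harmless, which is presumably why the simpler static field is used here.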

Create a Kind cluster with Nginx Ingress

Our example apps are ready. Before deploying them, we need to prepare the local Kubernetes cluster. We will deploy there an ActiveMQ cluster consisting of three brokers. Therefore, our Kubernetes cluster will also consist of three worker nodes. There are three instances of the consumer app running on Kubernetes. They connect to the ActiveMQ brokers over the AMQP protocol. There is also a single instance of the producer app that sends messages on demand. Here's the diagram of our architecture.

activemq-spring-boot-kubernetes-arch

In order to run a multi-node Kubernetes cluster locally, we will use Kind. We will test not only communication over the AMQP protocol but also expose the ActiveMQ management console over HTTP. Because ActiveMQ uses headless Services for exposing the web console, we have to create and configure an Ingress on Kind to access it. Let's begin.

In the first step, we are going to create a Kind cluster. It consists of a control plane and three workers. The configuration has to be prepared correctly to run the Nginx Ingress Controller. We should add the ingress-ready label to a single worker node and expose ports 80 and 443. Here’s the final version of a Kind config file:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
    kubeadmConfigPatches:
    - |
      kind: JoinConfiguration
      nodeRegistration:
        kubeletExtraArgs:
          node-labels: "ingress-ready=true"
    extraPortMappings:
    - containerPort: 80
      hostPort: 80
      protocol: TCP
    - containerPort: 443
      hostPort: 443
      protocol: TCP  
  - role: worker
  - role: worker

Now, let’s create a Kind cluster by executing the following command:

$ kind create cluster --config kind-config.yaml

If your cluster has been successfully created you should see similar information:

After that, let’s install the Nginx Ingress Controller. It is just a single command:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml

Let’s verify the installation:

$ kubectl get pod -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS  AGE
ingress-nginx-admission-create-wbbzh        0/1     Completed   0         1m
ingress-nginx-admission-patch-ws2mv         0/1     Completed   0         1m
ingress-nginx-controller-86b6d5756c-rkbmz   1/1     Running     0         1m

Install ActiveMQ Artemis on Kubernetes

Finally, we may proceed to the ActiveMQ Artemis installation. Firstly, let’s install the required CRDs. You may find all the YAML manifests inside the operator repository on GitHub.

$ git clone https://github.com/artemiscloud/activemq-artemis-operator.git
$ cd activemq-artemis-operator

The manifests with CRDs are located in the deploy/crds directory:

$ kubectl create -f ./deploy/crds

After that, we can install the operator:

$ kubectl create -f ./deploy/service_account.yaml
$ kubectl create -f ./deploy/role.yaml
$ kubectl create -f ./deploy/role_binding.yaml
$ kubectl create -f ./deploy/election_role.yaml
$ kubectl create -f ./deploy/election_role_binding.yaml
$ kubectl create -f ./deploy/operator_config.yaml
$ kubectl create -f ./deploy/operator.yaml

In order to create a cluster, we have to create the ActiveMQArtemis object. It contains the number of brokers being a part of the cluster (1). We should also define an acceptor to expose the AMQP port outside of every single broker pod (2). Of course, we will also expose the management console (3).

apiVersion: broker.amq.io/v1beta1
kind: ActiveMQArtemis
metadata:
  name: ex-aao
spec:
  deploymentPlan:
    size: 3 # (1)
    image: placeholder
    messageMigration: true
    resources:
      limits:
        cpu: "500m"
        memory: "1024Mi"
      requests:
        cpu: "250m"
        memory: "512Mi"
  acceptors: # (2)
    - name: amqp
      protocols: amqp
      port: 5672
      connectionsAllowed: 5
  console: # (3)
    expose: true

Once the ActiveMQArtemis object is created, the operator starts the deployment process. It creates the StatefulSet object:

$ kubectl get statefulset
NAME        READY   AGE
ex-aao-ss   3/3     1m

It starts all three pods with brokers sequentially:

$ kubectl get pod -l application=ex-aao-app
NAME          READY   STATUS    RESTARTS    AGE
ex-aao-ss-0   1/1     Running   0           5m
ex-aao-ss-1   1/1     Running   0           3m
ex-aao-ss-2   1/1     Running   0           1m

Let’s display a list of Services created by the operator. There is a single Service per broker for exposing the AMQP port (ex-aao-amqp-*) and web console (ex-aao-wsconsj-*):

activemq-spring-boot-kubernetes-services

The operator automatically creates an Ingress object for each web console Service. We will modify them by adding different hosts: let's say the one.activemq.com domain for the first broker, two.activemq.com for the second broker, etc.

$ kubectl get ing    
NAME                      CLASS    HOSTS                ADDRESS     PORTS   AGE
ex-aao-wconsj-0-svc-ing   <none>   one.activemq.com     localhost   80      1h
ex-aao-wconsj-1-svc-ing   <none>   two.activemq.com     localhost   80      1h
ex-aao-wconsj-2-svc-ing   <none>   three.activemq.com   localhost   80      1h

After creating the Ingresses, we have to add the following line to /etc/hosts:

127.0.0.1    one.activemq.com two.activemq.com three.activemq.com

Now, we can access the management console of, for example, the third broker under the URL http://three.activemq.com/console.

activemq-spring-boot-kubernetes-console

Once the broker is ready, we may define a test queue. The name of that queue is test-1.

apiVersion: broker.amq.io/v1beta1
kind: ActiveMQArtemisAddress
metadata:
  name: address-1
spec:
  addressName: address-1
  queueName: test-1
  routingType: anycast

Run the Spring Boot app on Kubernetes and connect to ActiveMQ

Now, let’s deploy the consumer app. In the Deployment manifest, we have to set the ActiveMQ cluster connection URL. But wait… how to connect it? There are three brokers exposed using three separate Kubernetes Services. Fortunately, the AMQP Spring Boot starter supports it. We may set the addresses of three brokers inside the failover section. Let’s try it to see what will happen.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-consumer
spec:
  replicas: 3
  selector:
    matchLabels:
      app: simple-consumer
  template:
    metadata:
      labels:
        app: simple-consumer
    spec:
      containers:
      - name: simple-consumer
        image: piomin/simple-consumer
        env:
          - name: ARTEMIS_URL
            value: failover:(amqp://ex-aao-amqp-0-svc:5672,amqp://ex-aao-amqp-1-svc:5672,amqp://ex-aao-amqp-2-svc:5672)
        resources:
          limits:
            memory: 256Mi
            cpu: 500m
          requests:
            memory: 128Mi
            cpu: 250m

The application is prepared to be deployed with Skaffold. If you run the skaffold dev command, you will deploy all three instances of the consumer app and see their logs. What's the result? All the instances connect to the first URL from the list, as shown below.

Fortunately, there is a failover option that helps distribute client connections more evenly across multiple remote peers. With the failover.randomize option, the URIs are randomly shuffled before the client attempts to connect to one of them. Let's replace the ARTEMIS_URL env in the Deployment manifest with the following line:

failover:(amqp://ex-aao-amqp-0-svc:5672,amqp://ex-aao-amqp-1-svc:5672,amqp://ex-aao-amqp-2-svc:5672)?failover.randomize=true

The distribution between broker instances looks slightly better. Of course, the result is random, so you may get different results.

A better way to distribute the connections is through a dedicated Kubernetes Service. We don't have to rely on the Services created automatically by the operator. We can create our own Service that load balances between all available broker pods.

kind: Service
apiVersion: v1
metadata:
  name: ex-aao-amqp-lb
spec:
  ports:
    - name: amqp
      protocol: TCP
      port: 5672
  type: ClusterIP
  selector:
    application: ex-aao-app

Now, we can resign from the failover section on the client side and fully rely on Kubernetes mechanisms.

spec:
  containers:
  - name: simple-consumer
    image: piomin/simple-consumer
    env:
      - name: ARTEMIS_URL
        value: amqp://ex-aao-amqp-lb:5672

This time we won’t see anything in the application logs, because all the instances connect to the same URL. We can verify a distribution between all the broker instances using e.g. the management web console. Here’s a list of consumers on the first instance of ActiveMQ:

Below, you will see exactly the same results for the second instance. The consumer app instances have been distributed equally between all available brokers inside the cluster.

Now, we are going to deploy the producer app. We use the same Kubernetes Service for connecting to the ActiveMQ cluster.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-producer
spec:
  replicas: 3
  selector:
    matchLabels:
      app: simple-producer
  template:
    metadata:
      labels:
        app: simple-producer
    spec:
      containers:
        - name: simple-producer
          image: piomin/simple-producer
          env:
            - name: ARTEMIS_URL
              value: amqp://ex-aao-amqp-lb:5672
            - name: DESTINATION
              value: test-1
          ports:
            - containerPort: 8080

Because we have to call the HTTP endpoint let’s create the Service for the producer app:

apiVersion: v1
kind: Service
metadata:
  name: simple-producer
spec:
  type: ClusterIP
  selector:
    app: simple-producer
  ports:
  - port: 8080

Let’s deploy the producer app using Skaffold with port-forwarding enabled:

$ skaffold dev --port-forward

Here’s a list of our Deployments:

In order to send a test message just execute the following command:

$ curl http://localhost:8080/producer/send \
  -d "{\"source\":\"test\",\"content\":\"Hello\"}" \
  -H "Content-Type:application/json"

Advanced configuration

If you need more advanced traffic distribution between brokers inside the cluster, you can achieve it in several ways. For example, we can dynamically override the configuration property at runtime. Here's a very simple example. After starting, the application connects to an external service over HTTP that returns the next instance number.

@Configuration
public class AmqpConfig {

    @PostConstruct
    public void init() {
        RestTemplate t = new RestTemplateBuilder().build();
        int x = t.getForObject("http://simple-counter:8080/counter", Integer.class);
        System.setProperty("amqphub.amqp10jms.remoteUrl",
                "amqp://ex-aao-amqp-" + x + "-svc:5672");
    }

}

Here’s the implementation of the counter app. It just increments the number and divides it by the number of the broker instances. Of course, we may create a more advanced implementation, and provide e.g. connection to the instance of a broker running on the same Kubernetes node as the app pod.

@SpringBootApplication
@RestController
@RequestMapping("/counter")
public class CounterApp {

   private static int c = 0;

   public static void main(String[] args) {
      SpringApplication.run(CounterApp.class, args);
   }

   @Value("${DIVIDER:0}")
   int divider;

   @GetMapping
   public Integer count() {
      if (divider > 0)
         return c++ % divider;
      else
         return c++;
   }
}
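To see the distribution logic in isolation: with DIVIDER=3, consecutive calls return 0, 1, 2, 0, 1, 2, … — a simple round-robin over the broker indices used to build the broker service URL. Here's the same counting logic extracted into a plain class, where RoundRobinCounter is an illustrative name rather than code from the repository:

```java
// Same counting logic as the /counter endpoint above,
// extracted into a plain class for illustration
class RoundRobinCounter {

    private int c = 0;
    private final int divider;

    RoundRobinCounter(int divider) {
        this.divider = divider;
    }

    // With a positive divider the result cycles through 0..divider-1;
    // otherwise the raw counter value is returned
    int next() {
        return divider > 0 ? c++ % divider : c++;
    }
}
```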

Final Thoughts

ActiveMQ is an interesting alternative to RabbitMQ as a message broker. In this article, you learned how to run, manage and integrate ActiveMQ with Spring Boot on Kubernetes. It can be declaratively managed on Kubernetes thanks to ActiveMQ Artemis Operator. You can also easily integrate it with Spring Boot using a dedicated starter. It provides various configuration options and is actively developed by Red Hat and the community.

Create and Manage Kubernetes Clusters with Cluster API and ArgoCD https://piotrminkowski.com/2021/12/03/create-kubernetes-clusters-with-cluster-api-and-argocd/ https://piotrminkowski.com/2021/12/03/create-kubernetes-clusters-with-cluster-api-and-argocd/#comments Fri, 03 Dec 2021 15:10:50 +0000 https://piotrminkowski.com/?p=10285

The post Create and Manage Kubernetes Clusters with Cluster API and ArgoCD appeared first on Piotr's TechBlog.

In this article, you will learn how to create and manage multiple Kubernetes clusters using the Kubernetes Cluster API and ArgoCD. We will create a single, local cluster with Kind. On that cluster, we will handle the process of creating other Kubernetes clusters. In order to perform that process automatically, we will use ArgoCD. Thanks to it, we can manage the whole process from a single Git repository. Before we start, let's do a brief theoretical introduction.

If you are interested in topics related to the Kubernetes multi-clustering you may also read some other articles about it:

  1. Kubernetes Multicluster with Kind and Cilium
  2. Multicluster Traffic Mirroring with Istio and Kind
  3. Kubernetes Multicluster with Kind and Submariner

Introduction

Did you hear about the project called Kubernetes Cluster API? It provides declarative APIs and tools to simplify provisioning, upgrading, and managing multiple Kubernetes clusters. In fact, it is a very interesting concept. We create a single Kubernetes cluster that manages the lifecycle of other clusters. On this cluster, we install Cluster API. Then we define new workload clusters simply by creating Cluster API objects. Looks simple? And that's what it is.

Cluster API provides a set of CRDs extending the Kubernetes API. Each of them represents a customization of a Kubernetes cluster installation. I will not get into the details, but if you are interested you may read more about it here. What is important for us is that it provides a CLI that handles the lifecycle of a Cluster API management cluster. It also allows creating clusters on multiple infrastructures including AWS, GCP, or Azure. However, today we are going to run the whole infrastructure locally with Docker and Kind. That is also possible with Kubernetes Cluster API, since it supports Docker.
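To give a flavor of those CRDs without going deep: a workload cluster is declared by pairing a Cluster object with a provider-specific infrastructure object. A rough sketch for the Docker provider is shown below — the exact apiVersions and fields depend on the installed Cluster API release, so treat this as illustrative:

```yaml
# Illustrative only — a Cluster API workload cluster definition for the
# Docker provider; exact apiVersions depend on the installed release
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: c1
spec:
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: DockerCluster
    name: c1
```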

We will use the Cluster API CLI just to initialize the management cluster and generate YAML templates. The whole process will be managed by the ArgoCD instance installed on the management cluster. Argo CD perfectly fits our scenario, since it supports multiple clusters: the instance installed on a single cluster can manage many other clusters that it is able to connect with.

Finally, the last tool used today is Kind. Thanks to it, we can run multiple Kubernetes clusters on the same machine using Docker container nodes. Let's take a look at the architecture of the solution described in this article.

Architecture with Kubernetes Cluster API and ArgoCD

Here's the picture with our architecture. The whole infrastructure is running locally on Docker. We install Kubernetes Cluster API and ArgoCD on the management cluster. Then, using both those tools, we create new clusters with Kind. After that, we apply some Kubernetes objects to the workload clusters (c1, c2), like Namespace, ResourceQuota or LimitRange. Of course, the whole process is managed by the Argo CD instance, and the configuration is stored in the Git repository.

(Figure: architecture with Kubernetes Cluster API and ArgoCD)

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. After that, you should just follow my instructions. Let’s begin.

Create Management Cluster with Kind and Cluster API

In the first step, we are going to create a management cluster on Kind. To do this exercise by yourself, you need to have Docker, kubectl and kind installed on your machine. Because we use the Docker infrastructure to run Kubernetes workload clusters, Kind must have access to the Docker host. Here's the definition of the Kind cluster. Let's say the name of the file is mgmt-cluster-config.yaml:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
    - hostPath: /var/run/docker.sock
      containerPath: /var/run/docker.sock

Now, let’s just apply the configuration visible above when creating a new cluster with Kind:

$ kind create cluster --config mgmt-cluster-config.yaml --name mgmt

If everything goes fine you should see a similar output. After that, your Kubernetes context is automatically set to kind-mgmt.

Then, we need to initialize the management cluster. In other words, we have to install Cluster API on our Kind cluster. To do that, we first need to install the Cluster API CLI on the local machine. On macOS, I can use the brew install clusterctl command. Once clusterctl has been successfully installed, I can run the following command:

$ clusterctl init --infrastructure docker

The result should be similar to the following (maybe without this timeout 🙂). I'm not sure why it happens, but it doesn't have any negative impact on the next steps.

(Figure: output of clusterctl init on the management cluster)

Once we have successfully initialized the management cluster, we may verify it. Let's display, for example, the list of namespaces. There are five new namespaces created by the Cluster API installation.

$ kubectl get ns
NAME                                STATUS   AGE
capd-system                         Active   3m37s
capi-kubeadm-bootstrap-system       Active   3m42s
capi-kubeadm-control-plane-system   Active   3m40s
capi-system                         Active   3m44s
cert-manager                        Active   4m8s
default                             Active   12m
kube-node-lease                     Active   12m
kube-public                         Active   12m
kube-system                         Active   12m
local-path-storage                  Active   12m

Also, let’s display a list of all pods. All the pods created inside new namespaces should be in a running state.

$ kubectl get pod -A

We can also display the list of installed CRDs. Anyway, Kubernetes Cluster API is now running on the management cluster and we can proceed to the next steps.

Install Argo CD on the management Kubernetes cluster

I will install Argo CD in the default namespace, but you can also create an argocd namespace and install it there (following the Argo CD documentation).

$ kubectl apply -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

Then, let’s just verify the installation:

$ kubectl get pod
NAME                                  READY   STATUS    RESTARTS   AGE
argocd-application-controller-0       1/1     Running   0          63s
argocd-dex-server-6dcf645b6b-6dlk9    1/1     Running   0          63s
argocd-redis-5b6967fdfc-vg5k6         1/1     Running   0          63s
argocd-repo-server-7598bf5999-96mh5   1/1     Running   0          63s
argocd-server-79f9bc9b44-d6c8q        1/1     Running   0          63s

As you probably know, Argo CD provides a web UI for management. To access it on the local port (8080), I will run the kubectl port-forward command:

$ kubectl port-forward svc/argocd-server 8080:80

Now, the UI is available at http://localhost:8080. To log in there, you need to find the Kubernetes Secret argocd-initial-admin-secret and decode the password. The username is admin. You can easily decode secrets using, for example, Lens, an advanced Kubernetes IDE. For now, just log in. We will come back to the Argo CD UI later.
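The decoding step itself is plain base64. Here's a sketch of it; the encoded value below is a fabricated sample, and in practice you would pipe the real value from kubectl get secret argocd-initial-admin-secret -o jsonpath='{.data.password}' into the same decode:

```shell
# Kubernetes stores Secret values base64-encoded; decode to get the password.
# 'c2VjcmV0UGFzcw==' is a fabricated sample value for illustration only.
encoded='c2VjcmV0UGFzcw=='
password=$(printf '%s' "$encoded" | base64 -d)
echo "$password"
```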

Create Kubernetes cluster with Cluster API and ArgoCD

We will use the clusterctl CLI to generate YAML manifests with the declaration of a new Kubernetes cluster. To do that, we need to run the following command. It generates the manifest and saves it into the c1-clusterapi.yaml file.

$ clusterctl generate cluster c1 --flavor development \
  --infrastructure docker \
  --kubernetes-version v1.21.1 \
  --control-plane-machine-count=3 \
  --worker-machine-count=3 \
  > c1-clusterapi.yaml

Our c1 cluster consists of three master and three worker nodes. Following the Cluster API documentation, we would have to apply the generated manifests to the management cluster. However, we are going to use ArgoCD to automatically apply the Cluster API manifests stored in the Git repository to Kubernetes. So, let's create a manifest with Cluster API objects in the Git repository. To simplify the process, I will use Helm templates, because there are two clusters to create and we need two Argo CD applications that use the same template with different parameters. Ok, so here's the Helm template based on the manifest generated in the previous step. You can find it in our sample Git repository under the path /mgmt/templates/cluster-api-template.yaml.

apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: {{ .Values.cluster.name }}
  namespace: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - 192.168.0.0/16
    serviceDomain: cluster.local
    services:
      cidrBlocks:
        - 10.128.0.0/12
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: {{ .Values.cluster.name }}-control-plane
    namespace: default
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: DockerCluster
    name: {{ .Values.cluster.name }}
    namespace: default
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: DockerCluster
metadata:
  name: {{ .Values.cluster.name }}
  namespace: default
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: DockerMachineTemplate
metadata:
  name: {{ .Values.cluster.name }}-control-plane
  namespace: default
spec:
  template:
    spec:
      extraMounts:
        - containerPath: /var/run/docker.sock
          hostPath: /var/run/docker.sock
---
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: {{ .Values.cluster.name }}-control-plane
  namespace: default
spec:
  kubeadmConfigSpec:
    clusterConfiguration:
      apiServer:
        certSANs:
          - localhost
          - 127.0.0.1
      controllerManager:
        extraArgs:
          enable-hostpath-provisioner: "true"
    initConfiguration:
      nodeRegistration:
        criSocket: /var/run/containerd/containerd.sock
        kubeletExtraArgs:
          cgroup-driver: cgroupfs
          eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
    joinConfiguration:
      nodeRegistration:
        criSocket: /var/run/containerd/containerd.sock
        kubeletExtraArgs:
          cgroup-driver: cgroupfs
          eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
  machineTemplate:
    infrastructureRef:
      apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
      kind: DockerMachineTemplate
      name: {{ .Values.cluster.name }}-control-plane
      namespace: default
  replicas: {{ .Values.cluster.masterNodes }}
  version: {{ .Values.cluster.version }}
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: DockerMachineTemplate
metadata:
  name: {{ .Values.cluster.name }}-md-0
  namespace: default
spec:
  template:
    spec: {}
---
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
  name: {{ .Values.cluster.name }}-md-0
  namespace: default
spec:
  template:
    spec:
      joinConfiguration:
        nodeRegistration:
          kubeletExtraArgs:
            cgroup-driver: cgroupfs
            eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: {{ .Values.cluster.name }}-md-0
  namespace: default
spec:
  clusterName: {{ .Values.cluster.name }}
  replicas: {{ .Values.cluster.workerNodes }}
  selector:
    matchLabels: null
  template:
    spec:
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: KubeadmConfigTemplate
          name: {{ .Values.cluster.name }}-md-0
          namespace: default
      clusterName: {{ .Values.cluster.name }}
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: DockerMachineTemplate
        name: {{ .Values.cluster.name }}-md-0
        namespace: default
      version: {{ .Values.cluster.version }}

We can parameterize four properties related to cluster creation: the name of the cluster, the numbers of master and worker nodes, and the version of Kubernetes. Since we use Helm, we just need to create a values.yaml file containing the values of those parameters in YAML format. Here's the values.yaml file for the first cluster. You can find it in the sample Git repository under the path /mgmt/values-c1.yaml.

cluster:
  name: c1
  masterNodes: 3
  workerNodes: 3
  version: v1.21.1

Here’s the same configuration for the second cluster. As you see, there is a single master node and a single worker node. You can find it in the sample Git repository under the path /mgmt/values-c2.yaml.

cluster:
  name: c2
  masterNodes: 1
  workerNodes: 1
  version: v1.21.1

Create Argo CD applications

Since Argo CD supports Helm, we just need to point each ArgoCD application at the right values.yaml file. Besides that, we also need to set the address of our Git configuration repository and the directory with manifests inside the repository. All the configuration for the management cluster is stored inside the mgmt directory.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: c1-cluster-create
spec:
  destination:
    name: ''
    namespace: ''
    server: 'https://kubernetes.default.svc'
  source:
    path: mgmt
    repoURL: 'https://github.com/piomin/sample-kubernetes-cluster-api-argocd.git'
    targetRevision: HEAD
    helm:
      valueFiles:
        - values-c1.yaml
  project: default

Here’s a similar declaration for the second cluster:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: c2-cluster-create
spec:
  destination:
    name: ''
    namespace: ''
    server: 'https://kubernetes.default.svc'
  source:
    path: mgmt
    repoURL: 'https://github.com/piomin/sample-kubernetes-cluster-api-argocd.git'
    targetRevision: HEAD
    helm:
      valueFiles:
        - values-c2.yaml
  project: default

Argo CD requires privileges to manage Cluster API objects. Just to simplify, let’s add the cluster-admin role to the argocd-application-controller ServiceAccount used by Argo CD.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-admin-argocd-controller
subjects:
  - kind: ServiceAccount
    name: argocd-application-controller
    namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin

After creating the applications in Argo CD, you may synchronize them manually (or enable the auto-sync option). This begins the process of creating the workload clusters by the Cluster API tool.
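If you prefer the auto-sync option over manual synchronization, it can be declared directly in the Application spec. Here's a minimal sketch (the syncPolicy block would be added to the c1-cluster-create and c2-cluster-create manifests shown above; the prune and selfHeal flags are optional):

```yaml
spec:
  syncPolicy:
    automated:
      prune: true     # remove resources that were deleted from Git
      selfHeal: true  # revert manual changes made directly on the cluster
```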

(Figure: Argo CD UI with the cluster-create applications)

Verify Kubernetes clusters using Cluster API CLI

After performing synchronization with Argo CD we can verify a list of available Kubernetes clusters. To do that just use the following kind command:

$ kind get clusters
c1
c2
mgmt

As you see, there are three running clusters! Kubernetes Cluster API installed on the management cluster has created two other clusters based on the configuration applied by Argo CD. To check if everything went fine, we may use the clusterctl describe command. After executing this command, you should get a result similar to the one visible below.

The control plane is not ready, which is expected and described in the Cluster API documentation: we need to install a CNI provider on our workload clusters first. The Cluster API documentation suggests installing Calico as the CNI plugin. We will do it, but first we need to switch to the kind-c1 and kind-c2 contexts. Of course, they were not created on our local machine by the Cluster API, so we first need to export them to our Kubeconfig file. Let's do that for both workload clusters.

$ kind export kubeconfig --name c1
$ kind export kubeconfig --name c2

I'm not sure why, but it exports contexts with 0.0.0.0 as the address of the clusters. So in the next step, I also had to edit my Kubeconfig file and change this address to 127.0.0.1 as shown below. Now, I can connect to both clusters using kubectl from my local machine.
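That manual edit can also be scripted. Here's a sketch demonstrated on a sample kubeconfig line (the port 52882 is made up); against the real file you would run something like sed -i.bak 's|https://0.0.0.0:|https://127.0.0.1:|g' ~/.kube/config:

```shell
# Rewrite the 0.0.0.0 server address exported by kind to 127.0.0.1.
# Demonstrated on a sample line; the port (52882) is a made-up example.
sample='    server: https://0.0.0.0:52882'
echo "$sample" | sed 's|https://0.0.0.0:|https://127.0.0.1:|'
```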

And install Calico CNI on both clusters.

$ kubectl apply -f https://docs.projectcalico.org/v3.20/manifests/calico.yaml --context kind-c1
$ kubectl apply -f https://docs.projectcalico.org/v3.20/manifests/calico.yaml --context kind-c2

I could also automate that step with Argo CD, but for now I just want to finish the installation. In the next section, I'm going to describe how to manage both these clusters using the Argo CD instance running on the management cluster. Now, if you verify the status of both clusters using the clusterctl describe command, it looks perfectly fine.

(Figure: clusterctl describe output for the workload clusters)

Managing workload clusters with ArgoCD

In the previous section, we successfully created two Kubernetes clusters using the Cluster API tool and ArgoCD. To clarify, all the Kubernetes objects required to perform that operation were created on the management cluster. Now, we would like to apply the simple configuration visible below to both our workload clusters. Of course, we will use the same instance of Argo CD for it.

apiVersion: v1
kind: Namespace
metadata:
  name: demo
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: demo-quota
  namespace: demo
spec:
  hard:
    pods: '10'
    requests.cpu: '1'
    requests.memory: 1Gi
    limits.cpu: '2'
    limits.memory: 4Gi
---
apiVersion: v1
kind: LimitRange
metadata:
  name: demo-limitrange
  namespace: demo
spec:
  limits:
    - default:
        memory: 512Mi
        cpu: 500m
      defaultRequest:
        cpu: 100m
        memory: 128Mi
      type: Container

Unfortunately, there is no built-in integration between ArgoCD and the Kubernetes Cluster API tool. Although Cluster API creates a secret containing the Kubeconfig file for each created cluster, Argo CD is not able to recognize it and automatically add such a cluster to its managed clusters. If you are interested in more details, there is an interesting discussion about it here. Anyway, the goal for now is to add both our workload clusters to the list of clusters managed by the global instance of Argo CD running on the management cluster. To do that, we first need to log in to Argo CD, using the same credentials and URL used to interact with the web UI.

$ argocd login localhost:8080

Now, we just need to run the following commands, assuming we have already exported both Kubernetes contexts to our local Kubeconfig file:

$ argocd cluster add kind-c1
$ argocd cluster add kind-c2

If you run Docker on macOS or Windows, it is not such a simple thing to do: you need to use the internal Docker address of your cluster. Cluster API creates secrets containing the Kubeconfig file for all created clusters, and we can use them to find the internal address of the Kubernetes API. Here's the list of secrets for our workload clusters:

$ kubectl get secrets | grep kubeconfig
c1-kubeconfig                               cluster.x-k8s.io/secret               1      85m
c2-kubeconfig                               cluster.x-k8s.io/secret               1      57m

We can obtain the internal address after decoding a particular secret. For example, the internal address of my c1 cluster is 172.20.0.3.
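That decoding step can be sketched as follows. The c1-kubeconfig Secret holds a full kubeconfig, typically base64-encoded under .data.value, so in practice you would pipe kubectl get secret c1-kubeconfig -o jsonpath='{.data.value}' | base64 -d into the extraction below; here a shortened, made-up kubeconfig fragment stands in for the decoded value:

```shell
# Shortened, fabricated kubeconfig fragment standing in for the decoded Secret.
kubeconfig='clusters:
- cluster:
    server: https://172.20.0.3:6443
  name: c1'
# Pull out the internal API server address used later in the Argo CD config.
echo "$kubeconfig" | grep 'server:' | awk '{print $2}'
```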

Under the hood, Argo CD creates a secret for each of the managed clusters. It is recognized based on the label name and value: argocd.argoproj.io/secret-type: cluster.

apiVersion: v1
kind: Secret
metadata:
  name: c1-cluster-secret
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
data:
  name: c1
  server: https://172.20.0.3:6443
  config: |
    {
      "tlsClientConfig": {
        "insecure": false,
        "caData": "<base64 encoded certificate>",
        "certData": "<base64 encoded certificate>",
        "keyData": "<base64 encoded key>"
      }
    }

If you added all your clusters successfully, you should see the following list in the Clusters section on your Argo CD instance.

Create Argo CD application for workload clusters

Finally, let’s create Argo CD applications for managing configuration on both workload clusters.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: c1-cluster-config
spec:
  project: default
  source:
    repoURL: 'https://github.com/piomin/sample-kubernetes-cluster-api-argocd.git'
    path: workload
    targetRevision: HEAD
  destination:
    server: 'https://172.20.0.3:6443'

And similarly to apply the configuration on the second cluster:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: c2-cluster-config
spec:
  project: default
  source:
    repoURL: 'https://github.com/piomin/sample-kubernetes-cluster-api-argocd.git'
    path: workload
    targetRevision: HEAD
  destination:
    server: 'https://172.20.0.10:6443'

Once you have created both applications in Argo CD, you can synchronize them.

And finally, let’s verify that configuration has been successfully applied to the target clusters.

Kubernetes Multicluster with Kind and Cilium https://piotrminkowski.com/2021/10/25/kubernetes-multicluster-with-kind-and-cilium/ https://piotrminkowski.com/2021/10/25/kubernetes-multicluster-with-kind-and-cilium/#comments Mon, 25 Oct 2021 11:13:17 +0000 https://piotrminkowski.com/?p=10151 In this article, you will learn how to configure Kubernetes multicluster locally with Kind and Cilium. If you are looking for some other articles about local Kubernetes multicluster you should also read Kubernetes Multicluster with Kind and Submariner and Multicluster Traffic Mirroring with Istio and Kind. Cilium can act as a CNI plugin on your […]

The post Kubernetes Multicluster with Kind and Cilium appeared first on Piotr's TechBlog.

In this article, you will learn how to configure Kubernetes multicluster locally with Kind and Cilium. If you are looking for some other articles about local Kubernetes multicluster you should also read Kubernetes Multicluster with Kind and Submariner and Multicluster Traffic Mirroring with Istio and Kind.

Cilium can act as a CNI plugin on your Kubernetes cluster. It uses a Linux kernel technology called eBPF, which enables the dynamic insertion of security, visibility, and control logic within the Linux kernel. It provides distributed load balancing for pod-to-pod traffic and an identity-based implementation of the NetworkPolicy resource. However, in this article we will focus on its Cluster Mesh feature, which allows setting up direct networking across multiple Kubernetes clusters.

Prerequisites

Before starting this exercise, you need to install some tools on your local machine. Of course, you need kubectl to interact with your Kubernetes clusters. Besides that, you will also have to install:

  1. Kind CLI – in order to run multiple Kubernetes clusters locally. For more details refer here
  2. Cilium CLI – in order to manage and inspect the state of a Cilium installation. For more details you may refer here
  3. Skaffold CLI (optionally) – if you would like to build and run the applications directly from the code. Otherwise, you may just use my images published on Docker Hub. In case you decide to build directly from the source code, you also need JDK and Maven
  4. Helm CLI – we will use Helm to install Cilium on Kubernetes. Alternatively, we could use the Cilium CLI for that, but with the Helm chart we can easily enable some additional required Cilium features

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. Then you should just follow my instructions.

You can also use the application images instead of building them directly from the code. The image of the callme-service application is available here: https://hub.docker.com/r/piomin/callme-service, while the image of the caller-service application is available here: https://hub.docker.com/r/piomin/caller-service.

Run Kubernetes clusters locally with Kind

When running a new Kind cluster, we first need to disable the default CNI plugin based on kindnetd, since we will use Cilium instead. Moreover, the pod CIDR ranges in both our clusters must be non-conflicting, with unique IP addresses. That's why we also provide podSubnet and serviceSubnet in the Kind cluster configuration manifest. Here's the configuration file for our first cluster:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
networking:
  disableDefaultCNI: true
  podSubnet: "10.0.0.0/16"
  serviceSubnet: "10.1.0.0/16"

The name of that cluster is c1, so the name of the Kubernetes context is kind-c1. In order to create a new Kind cluster using the YAML manifest visible above, we should run the following command:

$ kind create cluster --name c1 --config kind-c1-config.yaml 

Here’s a configuration file for our second cluster. It has different pod and service CIDRs than the first cluster:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
networking:
  disableDefaultCNI: true
  podSubnet: "10.2.0.0/16"
  serviceSubnet: "10.3.0.0/16"

The same as before we need to run the kind create command with a different name c2 and a YAML manifest for the second cluster:

$ kind create cluster --name c2 --config kind-c2-config.yaml

Install Cilium CNI on Kubernetes

Once we have successfully created two local Kubernetes clusters with Kind we may proceed to the Cilium installation. Firstly, let’s switch to the context of the kind-c1 cluster:

$ kubectl config use-context kind-c1

We will install Cilium using the Helm chart. To do that, we should add a new Helm repository.

$ helm repo add cilium https://helm.cilium.io/

For the Cluster Mesh option, we need to enable some Cilium features that are disabled by default. Also, it is important to set the cluster.name and cluster.id parameters.

$ helm install cilium cilium/cilium --version 1.10.5 \
   --namespace kube-system \
   --set nodeinit.enabled=true \
   --set kubeProxyReplacement=partial \
   --set hostServices.enabled=false \
   --set externalIPs.enabled=true \
   --set nodePort.enabled=true \
   --set hostPort.enabled=true \
   --set cluster.name=c1 \
   --set cluster.id=1

Let’s switch to the context of the kind-c2 cluster:

$ kubectl config use-context kind-c2

We need to set different values for cluster.name and cluster.id parameters in the Helm installation command.

$ helm install cilium cilium/cilium --version 1.10.5 \
   --namespace kube-system \
   --set nodeinit.enabled=true \
   --set kubeProxyReplacement=partial \
   --set hostServices.enabled=false \
   --set externalIPs.enabled=true \
   --set nodePort.enabled=true \
   --set hostPort.enabled=true \
   --set cluster.name=c2 \
   --set cluster.id=2

After installing Cilium you can easily verify the status by running the cilium status command on both clusters. Just to clarify, the Cilium Cluster Mesh is not enabled yet.

(Figure: cilium status output on both clusters)

Install MetalLB on Kind

When deploying Cluster Mesh, Cilium attempts to auto-detect the best service type for the LoadBalancer exposing the Cluster Mesh control plane to other clusters. The default and recommended option is LoadBalancer IP (NodePort and ClusterIP are also available). That's why we need to enable external IP support on Kind, which is not provided by default (on macOS and Windows). Fortunately, we may install MetalLB on our clusters. MetalLB is a load-balancer implementation for bare metal Kubernetes clusters, using standard routing protocols. First, let's create a namespace for the MetalLB components:

$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/master/manifests/namespace.yaml

Then we have to create a secret.

$ kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"

Let’s install MetalLB components in the metallb-system namespace.

$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/master/manifests/metallb.yaml

If everything works fine you should have the following pods in the metallb-system namespace.

$ kubectl get pod -n metallb-system
NAME                          READY   STATUS    RESTARTS   AGE
controller-6cc57c4567-dk8fw   1/1     Running   1          12h
speaker-26g67                 1/1     Running   2          12h
speaker-2dhzf                 1/1     Running   1          12h
speaker-4fn7t                 1/1     Running   2          12h
speaker-djbtq                 1/1     Running   1          12h

Now, we need to set up the address pools for the LoadBalancer. We should check the range of addresses of the kind network in Docker:

$ docker network inspect -f '{{.IPAM.Config}}' kind

For me, the address pool for kind is 172.20.0.0/16. Let's say I'll configure 50 IPs starting from 172.20.255.200. Then we should create a manifest with the external IP pool:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.20.255.200-172.20.255.250

The pool for the second cluster should be different to avoid conflicts in addresses. I’ll also configure 50 IPs, this time starting from 172.20.255.150:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.20.255.150-172.20.255.199
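As an optional sanity check, you can verify in plain shell arithmetic that both pools fall inside the kind Docker subnet reported by docker network inspect (172.20.0.0/16 in my case). A sketch:

```shell
# Convert a dotted-quad IPv4 address to an integer for range comparison.
ip2int() {
  set -- $(echo "$1" | tr '.' ' ')
  echo $(( $1 * 16777216 + $2 * 65536 + $3 * 256 + $4 ))
}
net=$(ip2int 172.20.0.0)   # first address of the /16 kind subnet
last=$(( net + 65535 ))    # last address of the /16
for ip in 172.20.255.200 172.20.255.250 172.20.255.150 172.20.255.199; do
  v=$(ip2int "$ip")
  if [ "$v" -ge "$net" ] && [ "$v" -le "$last" ]; then
    echo "$ip is inside 172.20.0.0/16"
  else
    echo "$ip is OUTSIDE 172.20.0.0/16"
  fi
done
```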

After that, we are ready to enable Cilium Cluster Mesh.

Enable Cilium Multicluster on Kubernetes

In the command visible below, we enable the Cilium multicluster mesh for the first Kubernetes cluster. As you see, we choose the LoadBalancer type to connect both clusters. Compared with the Cilium documentation, I also had to set the create-ca option, since no CA is generated when installing Cilium with Helm:

$ cilium clustermesh enable --context kind-c1 \
   --service-type LoadBalancer \
   --create-ca

Then, we may verify it works successfully:

$ cilium clustermesh status --context kind-c1 --wait

Now, let’s do the same thing for the second cluster:

$ cilium clustermesh enable --context kind-c2 \
   --service-type LoadBalancer \
   --create-ca

Then, we can verify the status of a cluster mesh on the second cluster:

$ cilium clustermesh status --context kind-c2 --wait

Finally, we can connect both clusters together. This step only needs to be done in one direction. Following Cilium documentation, the connection will automatically be established in both directions:

$ cilium clustermesh connect --context kind-c1 \
   --destination-context kind-c2

If everything goes fine you should see a similar result as shown below.

After that, let’s verify the status of the Cilium cluster mesh once again:

$ cilium clustermesh status --context kind-c1 --wait

If everything goes fine you should see a similar result as shown below.

You can also verify the Kubernetes Service with Cilium Mesh Control Plane.

$ kubectl get svc -A | grep clustermesh
kube-system   clustermesh-apiserver   LoadBalancer   10.1.150.156   172.20.255.200   2379:32323/TCP           13h

You can validate the connectivity by running the connectivity test in multi-cluster mode:

$ cilium connectivity test --context kind-c1 --multi-cluster kind-c2

To be honest, all these tests failed for my kind clusters 🙂 and I was quite concerned, as I'm not very sure how those tests work. However, it didn't stop me from deploying my applications on both Kubernetes clusters to test the Cilium multicluster.

Testing Cilium Kubernetes Multicluster with Java apps

Load-balancing between clusters is achieved by defining a Kubernetes Service with an identical name and namespace in each cluster, and adding the annotation io.cilium/global-service: "true" to declare it global. Cilium will then automatically load-balance to pods in both clusters. Here's the Kubernetes Service definition for the callme-service application. As you see, it is not exposed outside the cluster, since it has the ClusterIP type.

apiVersion: v1
kind: Service
metadata:
  name: callme-service
  labels:
    app: callme-service
  annotations:
    io.cilium/global-service: "true"
spec:
  type: ClusterIP
  ports:
    - port: 8080
      name: http
  selector:
    app: callme-service

Here’s the deployment manifest of the callme-service application:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: callme-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: callme-service
      version: v1
  template:
    metadata:
      labels:
        app: callme-service
        version: v1
    spec:
      containers:
        - name: callme-service
          image: piomin/callme-service
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          env:
            - name: VERSION
              value: "v1"

Our test case is very simple. I'm going to deploy the caller-service application on the c1 cluster. The caller-service calls the REST endpoint exposed by the callme-service, and the callme-service application is running on the c2 cluster. I'm not changing anything in the implementation compared to the versions running on a single cluster. It means that the caller-service calls the callme-service endpoint using the Kubernetes Service name and HTTP port (http://callme-service:8080).

(Figure: test case architecture, with caller-service on cluster c1 calling callme-service on cluster c2)

Ok, so now let's deploy the callme-service on the c2 cluster using Skaffold. Before running the command, go to the callme-service directory. Of course, you can also deploy a ready image with kubectl; the deployment manifest is available in the callme-service/k8s directory.

$ skaffold run --kube-context kind-c2
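
Alternatively, if you prefer plain kubectl over Skaffold, you can apply the manifests directly. Here’s a sketch, assuming the YAML files in the callme-service/k8s directory and an application image already available to the cluster:

$ kubectl apply -f k8s/ --context kind-c2
$ kubectl get pod --context kind-c2 -l app=callme-service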

The caller-service application exposes a single HTTP endpoint that prints information about its version. It also calls a similar endpoint exposed by the callme-service. As I mentioned before, it uses the Kubernetes Service name in communication.

@RestController
@RequestMapping("/caller")
public class CallerController {

    private static final Logger LOGGER = LoggerFactory.getLogger(CallerController.class);

    @Autowired
    RestTemplate restTemplate;
    @Value("${VERSION}")
    private String version;

    @GetMapping("/ping")
    public String ping() {
        String response = restTemplate.getForObject("http://callme-service:8080/callme/ping", String.class);
        LOGGER.info("Calling: response={}", response);
        return "I'm caller-service " + version + ". Calling... " + response;
    }
}
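
The RestTemplate injected above is not auto-configured as a bean by Spring Boot, so it has to be declared somewhere in the application. A minimal sketch of such a configuration is shown below; the actual class name in the repository may differ:

@Configuration
public class CallerConfig {

    // Declare the RestTemplate used by CallerController;
    // Spring Boot does not register one automatically
    @Bean
    RestTemplate restTemplate() {
        return new RestTemplate();
    }
}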

Here’s the Deployment manifest for the caller-service.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: caller-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: caller-service
  template:
    metadata:
      name: caller-service
      labels:
        app: caller-service
        version: v1
    spec:
      containers:
      - name: caller-service
        image: piomin/caller-service
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
        env:
          - name: VERSION
            value: "v1"

Also, we need to create Kubernetes Service.

apiVersion: v1
kind: Service
metadata:
  name: caller-service
  labels:
    app: caller-service
spec:
  type: ClusterIP
  ports:
    - port: 8080
      name: http
  selector:
    app: caller-service

Don’t forget to also apply the callme-service Service on the c1 cluster, although there are no running instances of the app on that cluster. I can simply deploy the caller-service using Skaffold. All required manifests are applied automatically. Before running the following command, go to the caller-service directory.

$ skaffold dev --port-forward --kube-context kind-c1

Thanks to the port-forward option I can simply test the caller-service on localhost. It is worth mentioning that Skaffold supports Kind, so you don’t have to perform any additional steps to deploy applications there. Finally, let’s test the communication by calling the HTTP endpoint exposed by the caller-service.

$ curl http://localhost:8080/caller/ping
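
Based on the controller code shown earlier, the response should look more or less like the line below (the exact content depends on the VERSION variables set in your deployments):

I'm caller-service v1. Calling... I'm callme-service v1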

Final Thoughts

Configuring a Kubernetes multicluster with Cilium and Kind was not really hard. The only problem I had was with the Cilium connectivity test tool, which doesn’t work for the cluster mesh. However, my simple test with two applications running on different Kubernetes clusters and communicating via HTTP was successful.

Multicluster Traffic Mirroring with Istio and Kind https://piotrminkowski.com/2021/07/12/multicluster-traffic-mirroring-with-istio-and-kind/ https://piotrminkowski.com/2021/07/12/multicluster-traffic-mirroring-with-istio-and-kind/#comments Mon, 12 Jul 2021 08:27:13 +0000 https://piotrminkowski.com/?p=9902 In this article, you will learn how to create an Istio mesh with mirroring between multiple Kubernetes clusters running on Kind. We will deploy the same application in two Kubernetes clusters, and then we will mirror the traffic between those clusters. When such a scenario might be useful? Let’s assume we have two Kubernetes clusters. […]

In this article, you will learn how to create an Istio mesh with traffic mirroring between multiple Kubernetes clusters running on Kind. We will deploy the same application in two Kubernetes clusters, and then we will mirror the traffic between those clusters. When might such a scenario be useful?

Let’s assume we have two Kubernetes clusters. The first of them is a production cluster, while the second is a test cluster. While there is huge incoming traffic to the production cluster, there is no traffic to the test cluster. What can we do in such a situation? We can simply send a portion of the production traffic to the test cluster. With Istio, you can also mirror internal traffic, e.g. between microservices.

To simulate the scenario described above, we will create two Kubernetes clusters locally with Kind. Then, we will install the Istio mesh in multi-primary mode on different networks. The Kubernetes API server and the Istio gateway need to be accessible by pods running on the other cluster. We have two applications. The caller-service application is running on the c1 cluster. It calls callme-service. The v1 version of the callme-service application is deployed on the c1 cluster, while the v2 version is deployed on the c2 cluster. We will mirror 50% of the traffic coming to the v1 version of our application to the v2 version running on the other cluster. The following picture illustrates our architecture.

istio-mirroring-arch

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. Then you should just follow my instructions.

Both applications are configured to be deployed with Skaffold. In that case, you just need to download Skaffold CLI following the instructions available here. Of course, you also need to have Java and Maven available on your PC.

Create Kubernetes clusters with Kind

Firstly, let’s create two Kubernetes clusters using Kind. We don’t have to override any default settings, so we can just use the following command to create clusters.

$ kind create cluster --name c1
$ kind create cluster --name c2

Kind automatically creates a Kubernetes context and adds it to the config file. Just to verify, let’s display a list of running clusters.

$ kind get clusters
c1
c2

Also, we can display a list of contexts created by Kind.

$ kubectx | grep kind
kind-c1
kind-c2

Install MetalLB on Kubernetes clusters

To establish a connection between multiple clusters locally, we need to expose some services as LoadBalancer. That’s why we need to install MetalLB, a load-balancer implementation for bare-metal Kubernetes clusters. Firstly, we have to create the metallb-system namespace. The following operations should be performed on both our clusters.

$ kubectl apply -f \
  https://raw.githubusercontent.com/metallb/metallb/master/manifests/namespace.yaml

Then, we are going to create the memberlist secret required by MetalLB.

$ kubectl create secret generic -n metallb-system memberlist \
  --from-literal=secretkey="$(openssl rand -base64 128)" 

Finally, let’s install MetalLB.

$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/master/manifests/metallb.yaml
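
Since the three commands above have to be executed on both clusters, it may be convenient to pass the context explicitly instead of switching it. Here’s a sketch for the second cluster, using the same manifest URLs:

$ kubectl apply -f \
  https://raw.githubusercontent.com/metallb/metallb/master/manifests/namespace.yaml \
  --context kind-c2
$ kubectl create secret generic -n metallb-system memberlist \
  --from-literal=secretkey="$(openssl rand -base64 128)" \
  --context kind-c2
$ kubectl apply -f \
  https://raw.githubusercontent.com/metallb/metallb/master/manifests/metallb.yaml \
  --context kind-c2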

In order to complete the configuration, we need to provide a range of IP addresses that MetalLB controls. We want this range to be on the Docker kind network.

$ docker network inspect -f '{{.IPAM.Config}}' kind

For me it is the CIDR 172.20.0.0/16. Based on it, we can configure the MetalLB IP pool for each cluster. For the first cluster, c1, I’m setting addresses starting from 172.20.255.200 and ending with 172.20.255.250.

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.20.255.200-172.20.255.250

We need to apply the configuration to the first cluster.

$ kubectl apply -f k8s/metallb-c1.yaml --context kind-c1

For the second cluster c2 I’m setting addresses starting from 172.20.255.150 and ending with 172.20.255.199.

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.20.255.150-172.20.255.199

Finally, we can apply the configuration to the second cluster.

$ kubectl apply -f k8s/metallb-c2.yaml --context kind-c2

Install Istio on Kubernetes in multicluster mode

A multicluster service mesh deployment requires establishing trust between all clusters in the mesh. In order to do that, we should configure the Istio certificate authority (CA) with a root certificate, signing certificate, and key. We can easily do it using the Istio tools. First, we go to the Istio installation directory on our PC. After that, we may use the Makefile.selfsigned.mk script available inside the tools/certs directory.

$ cd $ISTIO_HOME/tools/certs/

The following command generates the root certificate and key.

$ make -f Makefile.selfsigned.mk root-ca

The following command generates an intermediate certificate and key for the Istio CA for each cluster. It generates the required files in a directory named after the cluster.

$ make -f Makefile.selfsigned.mk kind-c1-cacerts
$ make -f Makefile.selfsigned.mk kind-c2-cacerts

Then we may create a Kubernetes Secret based on the generated certificates. The same operation should be performed for the second cluster, kind-c2.

$ kubectl create namespace istio-system
$ kubectl create secret generic cacerts -n istio-system \
      --from-file=kind-c1/ca-cert.pem \
      --from-file=kind-c1/ca-key.pem \
      --from-file=kind-c1/root-cert.pem \
      --from-file=kind-c1/cert-chain.pem
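
The commands for the second cluster are analogous. Here’s a sketch, assuming the kind-c2 directory generated by the previous make command:

$ kubectl create namespace istio-system --context kind-c2
$ kubectl create secret generic cacerts -n istio-system \
      --context kind-c2 \
      --from-file=kind-c2/ca-cert.pem \
      --from-file=kind-c2/ca-key.pem \
      --from-file=kind-c2/root-cert.pem \
      --from-file=kind-c2/cert-chain.pem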

We are going to install Istio using the operator. It is important to set the same meshID for both clusters and different networks. We also need to create an Istio Gateway for communication between the two clusters inside a single mesh. It should be labeled with topology.istio.io/network=network1. The Gateway definition also contains two environment variables: ISTIO_META_ROUTER_MODE and ISTIO_META_REQUESTED_NETWORK_VIEW. The first variable is responsible for setting the sni-dnat value that adds the clusters required for AUTO_PASSTHROUGH mode.

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: kind-c1
      network: network1
  components:
    ingressGateways:
      - name: istio-eastwestgateway
        label:
          istio: eastwestgateway
          app: istio-eastwestgateway
          topology.istio.io/network: network1
        enabled: true
        k8s:
          env:
            - name: ISTIO_META_ROUTER_MODE
              value: "sni-dnat"
            - name: ISTIO_META_REQUESTED_NETWORK_VIEW
              value: network1
          service:
            ports:
              - name: status-port
                port: 15021
                targetPort: 15021
              - name: tls
                port: 15443
                targetPort: 15443
              - name: tls-istiod
                port: 15012
                targetPort: 15012
              - name: tls-webhook
                port: 15017
                targetPort: 15017

Before installing Istio we should label the istio-system namespace with topology.istio.io/network=network1. The Istio installation manifest is available in the repository as the k8s/istio-c1.yaml file.

$ kubectl --context kind-c1 label namespace istio-system \
      topology.istio.io/network=network1
$ istioctl install -f k8s/istio-c1.yaml \
      --context kind-c1
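
To verify that the control plane and the east-west gateway came up correctly, you can list the pods in the istio-system namespace. This is just a quick sanity check; the pod names will differ in your cluster:

$ kubectl get pod -n istio-system --context kind-c1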

There is a similar IstioOperator definition for the second cluster. The only differences are the name of the network, which is now network2, and the name of the cluster.

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: kind-c2
      network: network2
  components:
    ingressGateways:
      - name: istio-eastwestgateway
        label:
          istio: eastwestgateway
          app: istio-eastwestgateway
          topology.istio.io/network: network2
        enabled: true
        k8s:
          env:
            - name: ISTIO_META_ROUTER_MODE
              value: "sni-dnat"
            - name: ISTIO_META_REQUESTED_NETWORK_VIEW
              value: network2
          service:
            ports:
              - name: status-port
                port: 15021
                targetPort: 15021
              - name: tls
                port: 15443
                targetPort: 15443
              - name: tls-istiod
                port: 15012
                targetPort: 15012
              - name: tls-webhook
                port: 15017
                targetPort: 15017

The same as for the first cluster, let’s label the istio-system namespace with topology.istio.io/network=network2 and install Istio using the operator manifest.

$ kubectl --context kind-c2 label namespace istio-system \
      topology.istio.io/network=network2
$ istioctl install -f k8s/istio-c2.yaml \
      --context kind-c2

Configure multicluster connectivity

Since the clusters are on separate networks, we need to expose all local services on the gateway in both clusters. Services behind that gateway can be accessed only by services with a trusted TLS certificate and workload ID. The definition of the cross-network gateway is exactly the same for both clusters. You can find that manifest in the repository as k8s/istio-cross-gateway.yaml.

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: cross-network-gateway
spec:
  selector:
    istio: eastwestgateway
  servers:
    - port:
        number: 15443
        name: tls
        protocol: TLS
      tls:
        mode: AUTO_PASSTHROUGH
      hosts:
        - "*.local"

Let’s apply the Gateway object to both clusters.

$ kubectl apply -f k8s/istio-cross-gateway.yaml \
      --context kind-c1
$ kubectl apply -f k8s/istio-cross-gateway.yaml \
      --context kind-c2

In the last step of this scenario, we enable endpoint discovery between the Kubernetes clusters. To do that, we have to install a remote secret in the kind-c2 cluster that provides access to the kind-c1 API server, and vice versa. Fortunately, Istio provides an experimental feature for generating remote secrets.

$ istioctl x create-remote-secret --context=kind-c1 --name=kind-c1 
$ istioctl x create-remote-secret --context=kind-c2 --name=kind-c2

Before applying the generated secrets, we need to change the address of the cluster. Instead of localhost and a dynamically generated port, we have to use c1-control-plane:6443 for the first cluster, and respectively c2-control-plane:6443 for the second cluster. The remote secrets generated for my clusters are committed in the project repository as k8s/secret1.yaml and k8s/secret2.yaml. You can compare them with the secrets generated for your clusters. Replace them with your secrets, but remember to change the address of your clusters.
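
After the change, the kubeconfig embedded in the secret for the first cluster should contain a cluster entry roughly like the one below. This is only a sketch; the remaining fields of the generated kubeconfig are omitted here:

clusters:
- cluster:
    certificate-authority-data: <generated by istioctl>
    server: https://c1-control-plane:6443
  name: kind-c1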

$ kubectl apply -f k8s/secret1.yaml --context kind-c2
$ kubectl apply -f k8s/secret2.yaml --context kind-c1

Configure Mirroring with Istio

We are going to deploy our sample applications in the default namespace. Therefore, automatic sidecar injection should be enabled for that namespace.

$ kubectl label --context kind-c1 namespace default \
    istio-injection=enabled
$ kubectl label --context kind-c2 namespace default \
    istio-injection=enabled

Before configuring the Istio rules, let’s deploy the v1 version of the callme-service application on the kind-c1 cluster.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: callme-service-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: callme-service
      version: v1
  template:
    metadata:
      labels:
        app: callme-service
        version: v1
    spec:
      containers:
        - name: callme-service
          image: piomin/callme-service
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          env:
            - name: VERSION
              value: "v1"

Then, we will deploy the v2 version of the callme-service application on the kind-c2 cluster.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: callme-service-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: callme-service
      version: v2
  template:
    metadata:
      labels:
        app: callme-service
        version: v2
    spec:
      containers:
        - name: callme-service
          image: piomin/callme-service
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          env:
            - name: VERSION
              value: "v2"

Of course, we should also create a Kubernetes Service on both clusters.

apiVersion: v1
kind: Service
metadata:
  name: callme-service
  labels:
    app: callme-service
spec:
  type: ClusterIP
  ports:
  - port: 8080
    name: http
  selector:
    app: callme-service

The Istio DestinationRule defines two subsets for callme-service based on the version label.

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: callme-service-destination
spec:
  host: callme-service
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2

Finally, we may configure traffic mirroring with Istio. 50% of the traffic coming to the callme-service instance deployed on the kind-c1 cluster is mirrored to the callme-service instance deployed on the kind-c2 cluster.

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: callme-service-route
spec:
  hosts:
    - callme-service
  http:
    - route:
      - destination:
          host: callme-service
          subset: v1
        weight: 100
      mirror:
        host: callme-service
        subset: v2
      mirrorPercentage:
        value: 50.0

We will also deploy the caller-service application on the kind-c1 cluster. It calls the GET /callme/ping endpoint exposed by the callme-service application. Let’s display the list of pods running in the default namespace in the kind-c1 cluster.

$ kubectl get pod --context kind-c1
NAME                                 READY   STATUS    RESTARTS   AGE
caller-service-b9dbbd6c8-q6dpg       2/2     Running   0          1h
callme-service-v1-7b65795f48-w7zlq   2/2     Running   0          1h

Let’s verify the list of running pods in the default namespace in the kind-c2 cluster.

$ kubectl get pod --context kind-c2
NAME                                 READY   STATUS    RESTARTS   AGE
callme-service-v2-665b876579-rsfks   2/2     Running   0          1h

In order to test Istio mirroring across multiple Kubernetes clusters, we call the GET /caller/ping endpoint exposed by caller-service. As I mentioned before, it calls a similar endpoint exposed by the callme-service application with an HTTP client. The simplest way to test it is to enable port forwarding. Thanks to that, the caller-service Service is available on the local port 8080. Let’s call that endpoint 20 times with siege.
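
If the caller-service has not been started with Skaffold’s --port-forward option, you can forward the port manually with kubectl. Here’s a sketch, assuming the Service runs in the default namespace:

$ kubectl port-forward svc/caller-service 8080:8080 --context kind-c1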

$ siege -r 20 -c 1 http://localhost:8080/caller/ping

After that, you can verify the logs for callme-service-v1 and callme-service-v2 deployments.

$ kubectl logs pod/callme-service-v1-7b65795f48-w7zlq --context kind-c1
$ kubectl logs pod/callme-service-v2-665b876579-rsfks --context kind-c2

You should see the following log 20 times for the kind-c1 cluster.

I'm callme-service v1

Respectively, you should see the following log 10 times for the kind-c2 cluster, because we mirror 50% of the traffic from v1 to v2.

I'm callme-service v2

Final Thoughts

This article shows how to create an Istio multicluster mesh with traffic mirroring between different networks. If you would like to simulate a similar scenario in the same network, you may use a tool called Submariner. You may find more details about running Submariner on Kubernetes in the article Kubernetes Multicluster with Kind and Submariner.

Kubernetes Multicluster with Kind and Submariner https://piotrminkowski.com/2021/07/08/kubernetes-multicluster-with-kind-and-submariner/ https://piotrminkowski.com/2021/07/08/kubernetes-multicluster-with-kind-and-submariner/#comments Thu, 08 Jul 2021 12:41:46 +0000 https://piotrminkowski.com/?p=9881 In this article, you will learn how to create multiple Kubernetes clusters locally and establish direct communication between them with Kind and Submariner. Kind (Kubernetes in Docker) is a tool for running local Kubernetes clusters using Docker containers. Each Kubernetes node is a separated Docker container. All these containers are running in the same Docker network […]

In this article, you will learn how to create multiple Kubernetes clusters locally and establish direct communication between them with Kind and Submariner. Kind (Kubernetes in Docker) is a tool for running local Kubernetes clusters using Docker containers. Each Kubernetes node is a separate Docker container. All these containers run in the same Docker network, kind.

Our goal in this article is to establish direct communication between pods running in two different Kubernetes clusters created with Kind. Of course, this is not possible by default. We should treat such clusters as two Kubernetes clusters running in different networks. Here comes Submariner, a tool originally created by Rancher. It enables direct networking between pods and services in different Kubernetes clusters, either on-premises or in the cloud.

Let’s take a quick look at our architecture. We have two applications: caller-service and callme-service. Also, there are two Kubernetes clusters, c1 and c2, created using Kind. The caller-service application is running on the c1 cluster, while the callme-service application is running on the c2 cluster. The caller-service application communicates with the callme-service application directly, without using a Kubernetes Ingress.

kubernetes-submariner-arch2

Architecture – Submariner on Kubernetes

Let me say a few words about Submariner. Since it is a relatively new tool, you may not have come across it yet. It runs a single, central broker and then joins several members to this broker. Basically, a member is a Kubernetes cluster that is part of the Submariner cluster. All the members may communicate directly with each other. The Broker component facilitates the exchange of metadata information between Submariner gateways deployed in the participating Kubernetes clusters.

The architecture of our example system is visible below. We run the Submariner Broker on the c1 cluster. Then we run Submariner “agents” on both clusters. Service discovery is based on the Lighthouse project. It provides DNS discovery for Kubernetes clusters connected by Submariner. You may read more details about it here.

kubernetes-submariner-arch1

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. Then you should just follow my instructions.

Both applications are configured to be deployed with Skaffold. In that case, you just need to download Skaffold CLI following the instructions available here. Of course, you also need to have Java and Maven available on your PC.

If you are interested in more about using Skaffold to build and deploy Java applications you can read my article Local Java Development on Kubernetes.

Create Kubernetes clusters with Kind

Firstly, let’s create two Kubernetes clusters using Kind. Each cluster consists of a control plane and a worker node. Since we are going to install Calico as the networking plugin on Kubernetes, we will disable the default CNI plugin in Kind. Finally, we need to configure CIDRs for pods and services. The IP pools should be unique across both clusters. Here’s the Kind configuration manifest for the first cluster. It is available in the project repository under the path k8s/kind-cluster-c1.yaml.

kind: Cluster
name: c1
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
networking:
  podSubnet: 10.240.0.0/16
  serviceSubnet: 10.110.0.0/16
  disableDefaultCNI: true

Then, let’s create the first cluster using the configuration manifest visible above.

$ kind create cluster --config k8s/kind-cluster-c1.yaml

We have a similar configuration manifest for the second cluster. The only differences are the name of the cluster and the CIDRs for Kubernetes pods and services. It is available in the project repository under the path k8s/kind-cluster-c2.yaml.

kind: Cluster
name: c2
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
networking:
  podSubnet: 10.241.0.0/16
  serviceSubnet: 10.111.0.0/16
  disableDefaultCNI: true

After that, let’s create the second cluster using the configuration manifest visible above.

$ kind create cluster --config k8s/kind-cluster-c2.yaml

Once the clusters have been successfully created we can verify them using the following command.

$ kind get clusters
c1
c2

Kind automatically creates two Kubernetes contexts for those clusters. We can switch between the kind-c1 and kind-c2 context.

Install Calico on Kubernetes

We will use the Tigera operator to install Calico as the default CNI on Kubernetes. It is possible to use different installation methods, but the one with the operator is the simplest. Firstly, let’s switch to the kind-c1 context.

$ kubectx kind-c1

I’m using the kubectx tool for switching between Kubernetes contexts and namespaces. You can download the latest version of this tool from the following site: https://github.com/ahmetb/kubectx/releases.

In the first step, we install the Tigera operator on the cluster.

$ kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml
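
The operator has to be present on both clusters, so remember to repeat the installation for the second one. You can do it right away by passing the context explicitly, using the same manifest URL:

$ kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml \
    --context kind-c2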

After that, we need to create the Installation CRD object responsible for installing Calico on Kubernetes. We can configure all the required parameters inside a single file. It is important to set the same CIDRs as the pod CIDRs inside the Kind configuration file. Here’s the manifest for the first cluster.

apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
      - blockSize: 26
        cidr: 10.240.0.0/16
        encapsulation: VXLANCrossSubnet
        natOutgoing: Enabled
        nodeSelector: all()

The manifest is available in the repository as the k8s/tigera-c1.yaml file. Let’s apply it.

$ kubectl apply -f k8s/tigera-c1.yaml

Then, we may switch to the kind-c2 context and create a similar manifest with the Calico installation.

apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
      - blockSize: 26
        cidr: 10.241.0.0/16
        encapsulation: VXLANCrossSubnet
        natOutgoing: Enabled
        nodeSelector: all()

Finally, let’s apply it to the second cluster using the k8s/tigera-c2.yaml file.

$ kubectl apply -f k8s/tigera-c2.yaml

We may verify the installation of Calico by listing running pods in the calico-system namespace. Here’s the result on my local Kubernetes cluster.

$ kubectl get pod -n calico-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-696ffc7f48-86rfz   1/1     Running   0          75s
calico-node-nhkn5                          1/1     Running   0          76s
calico-node-qkkqk                          1/1     Running   0          76s
calico-typha-6d6c85c77b-ffmt5              1/1     Running   0          70s
calico-typha-6d6c85c77b-w8x6t              1/1     Running   0          76s

By default, Kind uses a simple networking implementation, Kindnetd. However, this CNI plugin is not tested with Submariner. Therefore, we need to replace it with one of the supported ones, like Calico.

Install Submariner on Kubernetes

In order to install Submariner on our Kind clusters, we first need to download the CLI.

$ curl -Ls https://get.submariner.io | bash
$ export PATH=$PATH:~/.local/bin

The Submariner subctl CLI requires xz-utils. So, first, you need to install this package by executing the following command: apt update -y && apt install xz-utils -y.

After that, we can use the subctl binary to deploy the Submariner Broker. If you use Docker on Mac or Windows (like me), you need to perform these operations inside the container with the Kind control plane. So first, let’s get inside the control plane container. Kind automatically sets the name of that container as a conjunction of the cluster name and the -control-plane suffix.

$ docker exec -it c1-control-plane /bin/bash

That container already has kubectl installed. The only thing we need to do is add the context of the second Kubernetes cluster, kind-c2. I just copied it from my local Kube config file, which contains the right data. It had been added by Kind during Kubernetes cluster creation. You can check out the location of the Kubernetes config inside the c1-control-plane container by displaying the KUBECONFIG environment variable.

$ echo $KUBECONFIG
/etc/kubernetes/admin.conf

If you are copying data from your local Kube config file, you just need to change the address of your Kubernetes cluster. Instead of the external IP and the dynamically generated port, you have to set the internal Docker container IP and port, since that is the address used for internal communication between both clusters.

Now, we can deploy the Submariner Broker on the c1 cluster. After running the following command Submariner installs an operator on Kubernetes and generates the broker-info.subm file. That file is then used to join members to the Submariner cluster.

$ subctl deploy-broker

Enable direct communication between Kubernetes clusters with Submariner

Let’s clarify some things before proceeding. We have already created a Submariner Broker on the c1 cluster. To simplify the process, I’m using the same Kubernetes cluster as both a Submariner Broker and a Member. We also use the subctl CLI to add members to a Submariner cluster. One of the essential components that has to be installed is the Submariner Gateway Engine. It is deployed as a DaemonSet configured to run on nodes labelled with submariner.io/gateway=true. So, in the first step, we will set this label on the worker nodes of both the c1 and c2 clusters.

$ kubectl label node c1-worker submariner.io/gateway=true
$ kubectl label node c2-worker submariner.io/gateway=true --context kind-c2

Just to remind you, we are still inside the c1-control-plane container. Now we can add the first member to our Submariner cluster. To do that, we use the subctl CLI as shown below. With the join command, we need to pass the broker-info.subm file already generated while running the deploy-broker command. We will also disable NAT traversal for IPsec.

$ subctl join broker-info.subm --natt=false --clusterid kind-c1

After that, we may add a second member to our cluster.

$ subctl join broker-info.subm --natt=false --clusterid kind-c2 --kubecontext kind-c2

The Submariner operator creates several deployments in the submariner-operator namespace. Let’s display a list of pods running there.

$ kubectl get pod -n submariner-operator
NAME                                             READY   STATUS    RESTARTS   AGE
submariner-gateway-kd6zs                         1/1     Running   0          5m50s
submariner-lighthouse-agent-b798b8987-f6zvl      1/1     Running   0          5m48s
submariner-lighthouse-coredns-845c9cdf6f-8qhrj   1/1     Running   0          5m46s
submariner-lighthouse-coredns-845c9cdf6f-xmd6q   1/1     Running   0          5m46s
submariner-operator-586cb56578-qgwh6             1/1     Running   1          6m17s
submariner-routeagent-fcptn                      1/1     Running   0          5m49s
submariner-routeagent-pn54f                      1/1     Running   0          5m49s

We can also use some subctl commands. Let’s display a list of Submariner gateways.

$ subctl show gateways 

Showing information for cluster "kind-c2":
NODE                            HA STATUS       SUMMARY                         
c2-worker                       active          All connections (1) are established

Showing information for cluster "c1":
NODE                            HA STATUS       SUMMARY                         
c1-worker                       active          All connections (1) are established

Or a list of Submariner connections.

$ subctl show connections

Showing information for cluster "c1":
GATEWAY    CLUSTER  REMOTE IP   NAT  CABLE DRIVER  SUBNETS                       STATUS     RTT avg.    
c2-worker  kind-c2  172.20.0.5  no   libreswan     10.111.0.0/16, 10.241.0.0/16  connected  384.957µs   

Showing information for cluster "kind-c2":
GATEWAY    CLUSTER  REMOTE IP   NAT  CABLE DRIVER  SUBNETS                       STATUS     RTT avg.    
c1-worker  kind-c1  172.20.0.2  no   libreswan     10.110.0.0/16, 10.240.0.0/16  connected  592.728µs

Deploy applications on Kubernetes and expose them with Submariner

Since we have already installed Submariner on both clusters, we can deploy our sample applications. Let’s begin with caller-service. Make sure you are in the kind-c1 context. Then go to the caller-service directory and deploy the application using Skaffold as shown below.

$ cd caller-service
$ skaffold dev --port-forward
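For reference, the Skaffold configuration driving this command could look roughly like the sketch below. The image name, manifest path, and API version are assumptions for illustration, not taken from the article's repository.

```yaml
# skaffold.yaml -- a hypothetical sketch; image name and manifest paths are assumed
apiVersion: skaffold/v2beta13
kind: Config
metadata:
  name: caller-service
build:
  artifacts:
    - image: piomin/caller-service
      jib: {}          # build the container image with the Jib Maven plugin
deploy:
  kubectl:
    manifests:
      - k8s/*.yaml     # plain Kubernetes YAML manifests
```

With such a config, skaffold dev rebuilds the image and redeploys the manifests on every source change, while skaffold run performs a one-off build and deploy.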

Then, you should switch to the kind-c2 context. Now, deploy the callme-service application.

$ cd callme-service
$ skaffold run

In the next step, we need to expose our service to Submariner. To do that you have to execute the following command with subctl.

$ subctl export service --namespace default callme-service

Submariner exposes services on the clusterset.local domain. So, our service is now available at callme-service.default.svc.clusterset.local. Here’s the part of the caller-service code responsible for communication with callme-service through the Submariner DNS.

@GetMapping("/ping")
public String ping() {
   LOGGER.info("Ping: name={}, version={}", buildProperties.getName(), version);
   String response = restTemplate
         .getForObject("http://callme-service.default.svc.clusterset.local:8080/callme/ping", String.class);
   LOGGER.info("Calling: response={}", response);
   return "I'm caller-service " + version + ". Calling... " + response;
}

In order to analyze what happened, let’s display some CRD objects created by Submariner. Firstly, it created a ServiceExport object on the cluster with the exposed service. In our case, it is the kind-c2 cluster.

$ kubectl get ServiceExport        
NAME             AGE
callme-service   15s
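For illustration, the ServiceExport object follows the Kubernetes Multi-Cluster Services (MCS) API and may look roughly like the sketch below; the exact apiVersion can differ between Submariner releases, so treat it as an assumption.

```yaml
# Sketch of the ServiceExport created by `subctl export service`;
# the apiVersion may vary depending on the Submariner release
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: callme-service
  namespace: default
```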

Once we export the service, it is automatically imported on the second cluster. We need to switch to the kind-c1 context and then display the ServiceImport object.

$ kubectl get ServiceImport -n submariner-operator
NAME                             TYPE           IP                  AGE
callme-service-default-kind-c2   ClusterSetIP   ["10.111.176.50"]   4m55s

The ServiceImport object stores the ClusterIP of the Kubernetes Service callme-service from the kind-c2 cluster.

$ kubectl get svc --context kind-c2
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
callme-service   ClusterIP   10.111.176.50   <none>        8080/TCP   31m
kubernetes       ClusterIP   10.111.0.1      <none>        443/TCP    74m

Finally, we may test the connection between clusters by calling the following endpoint. The caller-service calls the GET /callme/ping endpoint exposed by callme-service. Thanks to the port-forward option enabled on the Skaffold command, we may access the service locally on port 8080.

$ curl http://localhost:8080/caller/ping
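Based on the ping() method shown earlier, the response should follow the pattern below; the version string and the callme-service output are placeholders, not actual values.

```shell
$ curl http://localhost:8080/caller/ping
# Response shape per the ping() method above (placeholders, not real output):
# I'm caller-service <version>. Calling... <response from callme-service>
```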

The post Kubernetes Multicluster with Kind and Submariner appeared first on Piotr's TechBlog.

Development on Kubernetes: Choose a platform
https://piotrminkowski.com/2020/08/05/development-on-kubernetes-choose-a-platform/
Wed, 05 Aug 2020
An important step before you begin the implementation of microservices is to choose the Kubernetes cluster for development. In this article, I’m going to describe several available solutions.

You can find a video version of every part of this tutorial on my YouTube channel. The second part is available here: Microservices on Kubernetes: Part 2 – Cluster setup

The first important question is whether you should prefer a local single-node instance or deploy your applications directly on a remote cluster. Sometimes the installation of a local Kubernetes cluster for development may be troublesome, especially if you use Windows. You also need sufficient RAM and CPU resources on your machine. On the other hand, communication with a remote platform can take more time, and such a managed Kubernetes cluster may not be free.
This article is the second part of my guide, where I’ll be showing you tools, frameworks, and platforms that speed up the development of JVM microservices on Kubernetes. We are going to implement sample microservices-based architectures using Kotlin and then deploy and run them on different Kubernetes clusters.

The previous part of my guide is available here: Development on Kubernetes: IDE & Tools

Minikube

Minikube runs a single-node Kubernetes cluster for development inside a VM on your local machine. It supports VM drivers like VirtualBox, Hyper-V, and KVM2. Since Minikube is a relatively mature solution in the Kubernetes world, the list of supported features is pretty impressive. These features include LoadBalancer, multi-cluster support, NodePorts, persistent volumes, Ingress, the dashboard, and pluggable container runtimes.
All you need is a Docker-compatible container runtime or a virtual machine environment, and Kubernetes may be started with a single command: minikube start. The minimal requirements are 2 CPUs, 2GB of free memory, and 20GB of free disk space.
Something especially useful during development is the ability to install addons. For example, we may easily enable the whole EFK stack with a predefined configuration using a single command.

$ minikube addons enable efk
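A typical local workflow might look like the sketch below; the driver name and resource sizes are examples, not requirements.

```shell
# Start a local cluster with an explicit driver and resources (values are examples)
$ minikube start --driver=virtualbox --cpus=2 --memory=2048
# Verify that the cluster is up
$ minikube status
$ kubectl get nodes
# Open the built-in dashboard addon
$ minikube dashboard
```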

Kubernetes on Docker Desktop

Kubernetes on Docker Desktop is an interesting alternative to Minikube for running a cluster on your local machine. Docker Desktop includes a standalone Kubernetes server and client, as well as Docker CLI integration. The Kubernetes server runs locally within your Docker instance, is not configurable, and is a single-node cluster.
Unfortunately, it is not available for all Windows users. You need Windows 10 64-bit: Pro, Enterprise, or Education. For Windows 10 Home, you first need to enable the WSL 2 feature. We also need to have 4GB of RAM and the Hyper-V and Containers Windows features enabled. In return, you have both Docker and Kubernetes in a single tool with a UI dashboard, where you may change the configuration or perform some basic troubleshooting.
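After enabling Kubernetes in the Docker Desktop settings, you can point kubectl at it and verify the single-node cluster:

```shell
# Docker Desktop registers its own kubectl context named docker-desktop
$ kubectl config use-context docker-desktop
$ kubectl get nodes
```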

[image: development-on-kubernetes-docker-desktop]

Kubernetes in Docker (kind)

kind is a tool for running local Kubernetes clusters using Docker containers as “nodes”. It supports multi-node clusters, including HA setups, and may be installed on Linux, macOS, and Windows. Creating a cluster is very similar to Minikube’s approach – we need to execute the command kind create cluster. Since it does not use a VM but runs the cluster inside Docker containers, it starts significantly faster than Minikube or Kubernetes on Docker Desktop. That’s why it may be an interesting option during local development.
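A multi-node cluster is defined with a small configuration file passed to the create command. The sketch below shows the standard kind cluster config format with one control plane and two workers:

```yaml
# kind-config.yaml -- one control-plane node and two workers
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```

Then create the cluster with kind create cluster --config kind-config.yaml.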

Civo

Civo seems to be an interesting alternative to other hosted Kubernetes platforms. Since it is based on k3s, a lightweight Kubernetes distribution, a new cluster can be created much faster than on other platforms. For me, it took less than 2 minutes for a 3-node managed cluster. The other good news is that you may become a beta tester of the product and receive a free $70 credit monthly. Of course, Civo is a relatively new solution, not free from errors and shortcomings.
We can download the Civo CLI to interact with our cluster. We can easily install popular software there, like PostgreSQL, MongoDB, Redis, or the cloud-native edge router Traefik.

[image: development-on-kubernetes-civo]

To interact with Civo using their CLI, we first need to copy the API key, which is available in the “Security” section. Then you should execute the following CLI commands. After that, you can use the Civo cluster with kubectl. Since creating a new cluster takes only 2 minutes, you can remove it after your development is finished and create a new one on demand.

$ civo apikey add piomin-civo-newkey 
$ civo apikey current piomin-civo-newkey
$ civo k3s config piomin-civo --merge
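After merging the kubeconfig, you can check that kubectl points at the Civo cluster; the exact context name is derived from the cluster name, so treat it as an assumption:

```shell
# Check which context the merged kubeconfig activated
$ kubectl config current-context
# List the k3s nodes of the managed cluster
$ kubectl get nodes
```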

Google Kubernetes Engine

Google Kubernetes Engine is probably your first choice for a remote cluster. Not only is Kubernetes strongly associated with Google, but GKE also offers the best free trial plan, including a $300 credit for 12 months. You can choose between many products that can be installed on your cluster in one click. If you run a Kubernetes cluster for development with default settings (3 nodes with a total capacity of 6 vCPU and 12GB of RAM) on demand, the free credit would be enough for the whole year.
With Google Cloud Console you can manage your cluster easily.

[image: development-on-kubernetes-gke]

There are also some disadvantages. It takes relatively long to create a new cluster. But the good news is that we can scale the existing cluster’s node pool down to 0 and scale it up again if needed. Such an operation is much faster.
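Scaling a node pool down to zero can be done from the console or with the gcloud CLI; the cluster name, pool name, and zone below are placeholders:

```shell
# Scale the default node pool down to zero nodes (names and zone are placeholders)
$ gcloud container clusters resize my-dev-cluster \
    --node-pool default-pool --num-nodes 0 --zone europe-west1-b
# Scale it back up before the next development session
$ gcloud container clusters resize my-dev-cluster \
    --node-pool default-pool --num-nodes 3 --zone europe-west1-b
```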

[image: development-on-kubernetes-gkepool]

Another disadvantage is the lack of the latest software versions. For example, when using the predefined templates, it is possible to install version 1.16 of Kubernetes or Istio 1.4 (of course, you can install the latest Istio 1.6 manually). If you are looking for a guide to deploying a JVM-based application on GKE, you may refer to my article Running Kotlin Microservice on Google Kubernetes Engine.

Digital Ocean

Digital Ocean is advertised as being designed for developers. It allows you to spin up a managed Kubernetes cluster for development in just a few clicks. For me, it took around 7 minutes to create a 3-node cluster there. The estimated cost of such a plan is $60 per month. At the beginning, you get $100 of free credit for two months.
You can scale it down to a single node or destroy the whole cluster and create a new one on demand. It is also possible to use predefined templates to install additional products like Linkerd, NGINX Ingress Controller, Jaeger, or even the Okteto platform in one click. By default, the total cluster capacity is 6 vCPU, 12GB of RAM, and 240GB of disk space.
The pricing plan on Digital Ocean is pretty clear. You pay just for running worker nodes. For a standard node (2 vCPU, 4GB RAM) it is $0.03/hour. So if you use such a cluster for development needs and destroy it after every usage, the total monthly cost shouldn’t be large. It comes with a preinstalled Kubernetes Dashboard as shown below.
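The $60 estimate follows from the capped monthly node price. Assuming a 2 vCPU/4GB node is billed at a $20/month cap (roughly corresponding to $0.03/hour), a quick check:

```shell
# 3 standard nodes at an assumed $20/month cap each
echo $((3 * 20))   # prints 60
```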

[image: development-on-kubernetes-digitalocean]

Something that can make it stand out is the possibility to install version 1.18 of Kubernetes. For example, on Google Cloud or Amazon Web Services we may currently install version 1.16. However, compared with GKE, it offers a much shorter trial period and a smaller free credit.

Okteto

I have already written about Okteto in one of my previous articles, Development on Kubernetes with Okteto and Spring Boot. I described there the process of local development and running a Spring Boot application on a remote cluster. The main idea behind Okteto is: “Code locally with the tools you know and love. Run and debug directly in Okteto Cloud.” With this development platform, you do not get a whole Kubernetes cluster, but only a single namespace where you can deploy your applications.
Their current offer for developers is pretty attractive. In the free plan, you get a single namespace, 4 vCPU, 8GB of memory, and 5GB of disk space. All applications are shut down after 24 hours of inactivity. You can also buy the Developer Pro Plan, which offers 2 namespaces and never sleeps, for $20/month.
With Okteto you can easily deploy popular databases and message brokers like MongoDB, PostgreSQL, Redis, or RabbitMQ in one click. You may also integrate your application with such software by defining an Okteto manifest in the root directory of your project.
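For illustration, such a manifest (okteto.yml) might look roughly like the sketch below for a Maven-based Spring Boot service; the image, sync path, and port are assumptions, and field names can vary between Okteto CLI versions.

```yaml
# okteto.yml -- hypothetical sketch; image, paths, and ports are assumed
name: caller-service
image: okteto/maven:3
command: ["bash"]
sync:
  - .:/usr/src/app     # keep local sources in sync with the dev container
forward:
  - 8080:8080          # forward the Spring Boot port to localhost
```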

[image: okteto-webui]

Conclusion

I’m using most of these solutions. Which one I choose depends on the use case. For example, if I need to set up a predefined EFK stack quickly, I can do it easily on Minikube. If my application connects to third-party software like RabbitMQ or databases (MongoDB, PostgreSQL), I can easily deploy such an environment on Okteto or Civo. In a standard situation, I’m using Kubernetes on Docker Desktop, which automatically starts as a service on Windows.

The post Development on Kubernetes: Choose a platform appeared first on Piotr's TechBlog.
