Kubernetes Multicluster Load Balancing with Skupper

In this article, you will learn how to leverage Skupper for load balancing between app instances running on several Kubernetes clusters. We will create some Kubernetes clusters locally with Kind. Then we will connect them using Skupper.

Skupper cluster interconnection works in Layer 7 (the application layer). It means there is no need to create any VPNs or special firewall rules. Skupper works according to the Virtual Application Network (VAN) approach. Thanks to that, it can connect different Kubernetes clusters and guarantee communication between services without exposing them to the Internet. You can read more about the concept behind it in the Skupper docs.

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that, you need to clone my GitHub repository. This time we will do almost everything using a command-line tool (the skupper CLI). The repository contains just a sample Spring Boot app with Kubernetes Deployment manifests and a Skaffold config. You will find instructions there on how to deploy the app with Skaffold, but you can also use another tool. As always, follow my instructions for the details 🙂

Create Kubernetes clusters with Kind

In the first step, we will create three Kubernetes clusters with Kind. We need to give them different names: c1, c2 and c3. Accordingly, they are available under the context names: kind-c1, kind-c2 and kind-c3.

$ kind create cluster --name c1
$ kind create cluster --name c2
$ kind create cluster --name c3

In this exercise, we will switch between the clusters a few times. Personally, I'm using kubectx to switch between different Kubernetes contexts and kubens to switch between namespaces.

By default, Skupper exposes itself as a Kubernetes LoadBalancer Service. Therefore, we need to enable the load balancer on Kind. In order to do that, we can install MetalLB. You can find the full installation instructions in the Kind docs here. Firstly, let’s switch to the c1 cluster:

$ kubectx kind-c1

Then, we have to apply the following YAML manifest:

$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml
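Before configuring the address pools, it is worth waiting until the MetalLB pods are ready. The Kind docs suggest a command similar to this one:

$ kubectl wait --namespace metallb-system \
    --for=condition=ready pod \
    --selector=app=metallb \
    --timeout=90s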

You should repeat the same procedure for the other two clusters: c2 and c3. However, that is not all. We also need to set up the address pool used by the load balancers. To do that, let's first check the range of IP addresses on the Docker network used by Kind. For me, it is 172.19.0.0/16 (with the gateway 172.19.0.1).

$ docker network inspect -f '{{.IPAM.Config}}' kind
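The output may look more or less like this, depending on your Docker setup (the kind network often contains an additional IPv6 entry):

[{172.19.0.0/16  172.19.0.1 map[]}]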

According to the results, we need to choose the right IP address ranges for all three Kind clusters. Then we have to create the IPAddressPool object, which contains the IP address range. Here's the YAML manifest for the c1 cluster:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: example
  namespace: metallb-system
spec:
  addresses:
  - 172.19.255.200-172.19.255.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: empty
  namespace: metallb-system

Here’s the pool configuration for e.g. the c2 cluster. It is important that the address range should not conflict with the ranges in two other Kind clusters.

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: example
  namespace: metallb-system
spec:
  addresses:
  - 172.19.255.150-172.19.255.199
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: empty
  namespace: metallb-system

Finally, the configuration for the c3 cluster:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: example
  namespace: metallb-system
spec:
  addresses:
  - 172.19.255.100-172.19.255.149
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: empty
  namespace: metallb-system

After applying the YAML manifests with the kubectl apply -f command on each cluster, we can proceed to the next section.
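For example, assuming you saved the three pool definitions as metallb-pool-c1.yaml, metallb-pool-c2.yaml and metallb-pool-c3.yaml (hypothetical file names), you can apply each of them to the matching cluster without switching contexts:

$ kubectl apply -f metallb-pool-c1.yaml --context kind-c1
$ kubectl apply -f metallb-pool-c2.yaml --context kind-c2
$ kubectl apply -f metallb-pool-c3.yaml --context kind-c3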

Install Skupper on Kubernetes

We can install and manage Skupper on Kubernetes in two different ways: with the CLI or through YAML manifests. Most of the examples in the Skupper documentation use the CLI, so I guess it is the preferred approach. Consequently, before we start with Kubernetes, we need to install the CLI. You can find installation instructions in the Skupper docs here. Once you install it, just verify that it works with the following command:

$ skupper version

After that, we can proceed with the Kubernetes clusters. We will create the same namespace, interconnect, inside all three clusters. To simplify our upcoming exercise, we can also set a default namespace for each context (alternatively, you can do it with the kubectl config set-context --current --namespace interconnect command).

$ kubectl create ns interconnect
$ kubens interconnect
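The two commands above affect only the currently selected context. Here's a sketch of doing it explicitly for all three clusters, using the context names created by Kind:

$ kubectl create ns interconnect --context kind-c1
$ kubectl config set-context kind-c1 --namespace interconnect
$ kubectl create ns interconnect --context kind-c2
$ kubectl config set-context kind-c2 --namespace interconnect
$ kubectl create ns interconnect --context kind-c3
$ kubectl config set-context kind-c3 --namespace interconnect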

Then, let’s switch to the kind-c1 cluster. We will stay in this context until the end of our exercise 🙂

$ kubectx kind-c1

Finally, we will install Skupper on our Kubernetes clusters. In order to do that, we have to execute the skupper init command. Fortunately, it allows us to set the target Kubernetes context with the -c parameter. Inside the kind-c1 cluster, we will also enable the Skupper UI dashboard (--enable-console parameter). With the Skupper console, we may e.g. visualize a traffic volume for all targets in the Skupper network.

$ skupper init --enable-console --enable-flow-collector
$ skupper init -c kind-c2
$ skupper init -c kind-c3

Let’s verify the status of the Skupper installation:

$ skupper status
$ skupper status -c kind-c2
$ skupper status -c kind-c3

Here’s the status for Skupper running in the kind-c1 cluster:

kubernetes-skupper-status

We can also display a list of running Skupper pods in the interconnect namespace:

$ kubectl get po
NAME                                          READY   STATUS    RESTARTS   AGE
skupper-prometheus-867f57b89-dc4lq            1/1     Running   0          3m36s
skupper-router-55bbb99b87-k4qn5               2/2     Running   0          3m40s
skupper-service-controller-6bf57595dd-45hvw   2/2     Running   0          3m37s

Now, our goal is to connect both the c2 and c3 Kind clusters with the c1 cluster. In Skupper nomenclature, we have to create a link between the namespaces in the source and target clusters. Before we create a link, we need to generate a secret token that signifies permission to create a link. The token also carries the link details. We are generating two tokens on the target cluster. Each token is stored as a YAML file. The first of them is for the kind-c2 cluster (skupper-c2-token.yaml), and the second for the kind-c3 cluster (skupper-c3-token.yaml).

$ skupper token create skupper-c2-token.yaml
$ skupper token create skupper-c3-token.yaml

We will consider several scenarios where we create a link using different parameters. Before that, let’s deploy our sample app on the kind-c2 and kind-c3 clusters.

Running the sample app on Kubernetes with Skaffold

After cloning the sample app repository go to the main directory. You can easily build and deploy the app to both kind-c2 and kind-c3 with the following commands:

$ skaffold dev --kube-context=kind-c2
$ skaffold dev --kube-context=kind-c3

After deploying the app, Skaffold automatically prints all the logs as shown below. It will be helpful for the next steps in our exercise.

Our app is deployed under the sample-spring-kotlin-microservice name.

Load balancing with Skupper – scenarios

Scenario 1: the same number of pods and link cost

Let’s start with the simplest scenario. We have a single pod of our app running on the kind-c2 and kind-c3 cluster. In Skupper we can also assign a cost to each link to influence the traffic flow. By default, link cost is set to 1 for a new link. In a service network, the routing algorithm attempts to use the path with the lowest total cost from the client to the target server. For now, we will leave a default value. Here’s a visualization of the first scenario:

Let’s create links to the c1 Kind cluster using the previously generated tokens.

$ skupper link create skupper-c2-token.yaml -c kind-c2
$ skupper link create skupper-c3-token.yaml -c kind-c3

If everything goes fine you should see a similar message:

We can also verify the status of links by executing the following commands:

$ skupper link status -c kind-c2
$ skupper link status -c kind-c3

It means that the c2 and c3 Kind clusters are now "working" in the same Skupper network as the c1 cluster. The next step is to expose our app running in both the c2 and c3 clusters to the c1 cluster. Skupper works at Layer 7, and by default it doesn't connect apps unless we enable that feature for a particular app. In order to expose our apps to the c1 cluster, we need to run the following commands for both the c2 and c3 clusters.

$ skupper expose deployment/sample-spring-kotlin-microservice \
  --port 8080 -c kind-c2
$ skupper expose deployment/sample-spring-kotlin-microservice \
  --port 8080 -c kind-c3

Let’s take a look at what happened at the target (kind-c1) cluster. As you see Skupper created the sample-spring-kotlin-microservice Kubernetes Service that forwards traffic to the skupper-router pod. The Skupper Router is responsible for load-balancing requests across pods being a part of the Skupper network.

To simplify our exercise, we will enable port-forwarding for the Service visible above.

$ kubectl port-forward svc/sample-spring-kotlin-microservice 8080:8080

Thanks to that we don’t have to configure Kubernetes Ingress to call the service. Now, we can send some test requests over localhost, e.g. with siege.

$ siege -r 200 -c 5 http://localhost:8080/persons/1

We can easily verify that the traffic is reaching the pods running on the kind-c2 and kind-c3 clusters by looking at the logs. Alternatively, we can go to the Skupper console and see the traffic visualization:

kubernetes-skupper-diagram-first

Scenario 2: different number of pods and same link cost

In the next scenario, we won't change anything in the Skupper network configuration. We will just run a second pod of the app in the kind-c3 cluster. So now, there is a single pod running in the kind-c2 cluster, and two pods running in the kind-c3 cluster. Here's our architecture.
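For example, you can scale out the Deployment in the kind-c3 cluster with the command below (a sketch using the Deployment name and namespace from the earlier steps; if Skaffold reverts the change, adjust the replicas field in the Deployment manifest instead):

$ kubectl scale deployment/sample-spring-kotlin-microservice \
    --replicas=2 --context kind-c3 -n interconnect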

Once again, we can send some requests to the previously tested Kubernetes Service with the siege command:

$ siege -r 200 -c 5 http://localhost:8080/persons/2

Let’s take a look at traffic visualization in the Skupper dashboard. We can switch between all available pods. Here’s the diagram for the pod running in the kind-c2 cluster.

kubernetes-skupper-diagram

Here’s the same diagram for the pod running in the kind-c3 cluster. As you see it receives only ~50% (or even less depending on which pod we visualize) of traffic received by the pod in the kind-c2 cluster. That’s because Skupper there are two pods running in the kind-c3 cluster, while Skupper still balances requests across clusters equally.

Scenario 3: only one pod and different link costs

In the current scenario, there is a single pod of the app running on the c2 Kind cluster. At the same time, there are no pods on the c3 cluster (the Deployment exists but it has been scaled down to zero instances). Here’s the visualization of our scenario.

kubernetes-skupper-arch2
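To reproduce this state, you can scale the Deployment in the kind-c3 cluster down to zero, for example (a sketch using the Deployment name and namespace from the earlier steps):

$ kubectl scale deployment/sample-spring-kotlin-microservice \
    --replicas=0 --context kind-c3 -n interconnect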

The important thing here is that the c3 cluster is preferred by Skupper, since the link to it has a lower cost (2) than the link to the c2 cluster (4). So now, we need to remove the previous links and then create new ones with the following commands:

$ skupper link create skupper-c2-token.yaml --cost 4 -c kind-c2
$ skupper link create skupper-c3-token.yaml --cost 2 -c kind-c3

In order to create a Skupper link once again, you first need to delete the previous one with the skupper link delete link1 command. Then you have to generate new tokens with the skupper token create command, as we did before.
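Here's a sketch of the full sequence, assuming the default link name link1 on both clusters and kind-c1 as the current context:

$ skupper link delete link1 -c kind-c2
$ skupper link delete link1 -c kind-c3
$ skupper token create skupper-c2-token.yaml
$ skupper token create skupper-c3-token.yaml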

Let’s take a look at the Skupper network status:

kubernetes-skupper-network-status

Let’s send some test requests to the exposed service. It works without any errors. Since there is only a single running pod, the whole traffic goes there:

Scenario 4: more pods in one cluster and different link costs

Finally, the last scenario in our exercise. We will use the same Skupper configuration as in Scenario 3. However, this time we will run two pods in the kind-c3 cluster.

kubernetes-skupper-arch-1

We can switch once again to the Skupper dashboard. Now, as you see, all the pods receive a very similar amount of traffic. Here’s the diagram for the pod running on the kind-c2 cluster.

kubernetes-skupper-equal-traffic

Here’s a similar diagram for the pod running on the kind-c3 cluster. After setting the cost of the link assuming the number of pods running on the cluster I was able to split traffic equally between all the pods across both clusters. It works. However, it is not a perfect way for load-balancing. I would expect at least an option for enabling a round-robin between all the pods working in the same Skupper network. The solution presented in this scenario will work as expected unless we enable auto-scaling for the app.

Final Thoughts

Skupper introduces an interesting approach to Kubernetes multicluster connectivity based fully on Layer 7. You can compare it to other solutions based on different layers, like Submariner or the Cilium Cluster Mesh. I described both of them in my previous articles. If you want to read more about Submariner, visit the following post. If you are interested in Cilium, read that article.

Kubernetes Multicluster with Kind and Cilium

In this article, you will learn how to configure Kubernetes multicluster locally with Kind and Cilium. If you are looking for some other articles about local Kubernetes multicluster you should also read Kubernetes Multicluster with Kind and Submariner and Multicluster Traffic Mirroring with Istio and Kind.

Cilium can act as a CNI plugin on your Kubernetes cluster. It uses a Linux kernel technology called BPF, which enables the dynamic insertion of security visibility and control logic within the Linux kernel. It provides distributed load balancing for pod-to-pod traffic and an identity-based implementation of the NetworkPolicy resource. However, in this article, we will focus on its feature called Cluster Mesh, which allows setting up direct networking across multiple Kubernetes clusters.

Prerequisites

Before starting with this exercise, you need to install some tools on your local machine. Of course, you need to have kubectl to interact with your Kubernetes clusters. Besides that, you will also have to install:

  1. Kind CLI – in order to run multiple Kubernetes clusters locally. For more details refer here
  2. Cilium CLI – in order to manage and inspect the state of a Cilium installation. For more details you may refer here
  3. Skaffold CLI (optionally) – if you would like to build and run the applications directly from the code. Otherwise, you may just use my images published on Docker Hub. In case you decide to build directly from the source code, you also need to have JDK and Maven
  4. Helm CLI – we will use Helm to install Cilium on Kubernetes. Alternatively, we could use the Cilium CLI for that, but with the Helm chart we can easily enable some additional required Cilium features

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. Then you should just follow my instructions.

You can also use the application images instead of building them directly from the code. The image of the callme-service application is available here: https://hub.docker.com/r/piomin/callme-service, while the image of the caller-service application is available here: https://hub.docker.com/r/piomin/caller-service.

Run Kubernetes clusters locally with Kind

When running a new Kind cluster, we first need to disable the default CNI plugin based on kindnetd. Of course, we will use Cilium instead. Moreover, the pod CIDR ranges in both of our clusters must be non-conflicting and have unique IP addresses. That's why we also provide podSubnet and serviceSubnet in the Kind cluster configuration manifests. Here's the configuration file for our first cluster:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
networking:
  disableDefaultCNI: true
  podSubnet: "10.0.0.0/16"
  serviceSubnet: "10.1.0.0/16"

The name of that cluster is c1, so the name of the Kubernetes context is kind-c1. In order to create a new Kind cluster using the YAML manifest visible above, we should run the following command:

$ kind create cluster --name c1 --config kind-c1-config.yaml 

Here’s a configuration file for our second cluster. It has different pod and service CIDRs than the first cluster:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
networking:
  disableDefaultCNI: true
  podSubnet: "10.2.0.0/16"
  serviceSubnet: "10.3.0.0/16"

The same as before, we need to run the kind create command, this time with the name c2 and the YAML manifest for the second cluster:

$ kind create cluster --name c2 --config kind-c2-config.yaml

Install Cilium CNI on Kubernetes

Once we have successfully created two local Kubernetes clusters with Kind we may proceed to the Cilium installation. Firstly, let’s switch to the context of the kind-c1 cluster:

$ kubectl config use-context kind-c1

We will install Cilium using the Helm chart. To do that, we should add a new Helm repository.

$ helm repo add cilium https://helm.cilium.io/

For the Cluster Mesh option, we need to enable some Cilium features that are disabled by default. Also, the important thing is to set the cluster.name and cluster.id parameters.

$ helm install cilium cilium/cilium --version 1.10.5 \
   --namespace kube-system \
   --set nodeinit.enabled=true \
   --set kubeProxyReplacement=partial \
   --set hostServices.enabled=false \
   --set externalIPs.enabled=true \
   --set nodePort.enabled=true \
   --set hostPort.enabled=true \
   --set cluster.name=c1 \
   --set cluster.id=1

Let’s switch to the context of the kind-c2 cluster:

$ kubectl config use-context kind-c2

We need to set different values for cluster.name and cluster.id parameters in the Helm installation command.

$ helm install cilium cilium/cilium --version 1.10.5 \
   --namespace kube-system \
   --set nodeinit.enabled=true \
   --set kubeProxyReplacement=partial \
   --set hostServices.enabled=false \
   --set externalIPs.enabled=true \
   --set nodePort.enabled=true \
   --set hostPort.enabled=true \
   --set cluster.name=c2 \
   --set cluster.id=2

After installing Cilium, you can easily verify the status by running the cilium status command on both clusters. Just to clarify, Cilium Cluster Mesh is not enabled yet.
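For example (a sketch; if your version of the cilium CLI doesn't support the --context flag, switch contexts with kubectl config use-context first):

$ cilium status --context kind-c1
$ cilium status --context kind-c2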

kubernetes-multicluster-cilium-status

Install MetalLB on Kind

When deploying Cluster Mesh, Cilium attempts to auto-detect the best Service type for the LoadBalancer used to expose the Cluster Mesh control plane to other clusters. The default and recommended option is LoadBalancer IP (NodePort and ClusterIP are also available). That's why we need to enable external IP support on Kind, which is not provided by default (on macOS and Windows). Fortunately, we may install MetalLB on our clusters. MetalLB is a load-balancer implementation for bare-metal Kubernetes clusters, using standard routing protocols. Firstly, let's create a namespace for the MetalLB components:

$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/master/manifests/namespace.yaml

Then we have to create a secret.

$ kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"

Let’s install MetalLB components in the metallb-system namespace.

$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/master/manifests/metallb.yaml

If everything works fine you should have the following pods in the metallb-system namespace.

$ kubectl get pod -n metallb-system
NAME                          READY   STATUS    RESTARTS   AGE
controller-6cc57c4567-dk8fw   1/1     Running   1          12h
speaker-26g67                 1/1     Running   2          12h
speaker-2dhzf                 1/1     Running   1          12h
speaker-4fn7t                 1/1     Running   2          12h
speaker-djbtq                 1/1     Running   1          12h

Now, we need to set up the address pools for the LoadBalancer. We should check the range of addresses for the kind network on Docker:

$ docker network inspect -f '{{.IPAM.Config}}' kind

For me, the address range for the kind network is 172.20.0.0/16. Let's say I'll configure 50 IPs starting from 172.20.255.200. Then we should create a manifest with the external IP pool:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.20.255.200-172.20.255.250

The pool for the second cluster should be different to avoid address conflicts. I'll also configure 50 IPs, this time starting from 172.20.255.150:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.20.255.150-172.20.255.199
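Assuming you saved the two ConfigMaps, e.g. as metallb-config-c1.yaml and metallb-config-c2.yaml (hypothetical file names), apply each one to the matching cluster:

$ kubectl apply -f metallb-config-c1.yaml --context kind-c1
$ kubectl apply -f metallb-config-c2.yaml --context kind-c2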

After that, we are ready to enable Cilium Cluster Mesh.

Enable Cilium Multicluster on Kubernetes

In the command visible below, we enable Cilium Cluster Mesh for the first Kubernetes cluster. As you see, we are going to choose the LoadBalancer type to connect both clusters. Compared with the Cilium documentation, I also had to set the --create-ca option, since no CA is generated when installing Cilium with Helm:

$ cilium clustermesh enable --context kind-c1 \
   --service-type LoadBalancer \
   --create-ca

Then, we may verify it works successfully:

$ cilium clustermesh status --context kind-c1 --wait

Now, let’s do the same thing for the second cluster:

$ cilium clustermesh enable --context kind-c2 \
   --service-type LoadBalancer \
   --create-ca

Then, we can verify the status of a cluster mesh on the second cluster:

$ cilium clustermesh status --context kind-c2 --wait

Finally, we can connect both clusters together. This step only needs to be done in one direction. Following Cilium documentation, the connection will automatically be established in both directions:

$ cilium clustermesh connect --context kind-c1 \
   --destination-context kind-c2

If everything goes fine you should see a similar result as shown below.

After that, let’s verify the status of the Cilium cluster mesh once again:

$ cilium clustermesh status --context kind-c1 --wait

If everything goes fine you should see a similar result as shown below.

You can also verify the Kubernetes Service that exposes the Cilium Cluster Mesh control plane.

$ kubectl get svc -A | grep clustermesh
kube-system   clustermesh-apiserver   LoadBalancer   10.1.150.156   172.20.255.200   2379:32323/TCP           13h

You can validate the connectivity by running the connectivity test in multi-cluster mode:

$ cilium connectivity test --context kind-c1 --multi-cluster kind-c2

To be honest, all these tests failed for my Kind clusters 🙂 I was quite concerned, and I'm not sure exactly how those tests work. However, it didn't make me give up on deploying my applications on both Kubernetes clusters to test the Cilium multicluster setup.

Testing Cilium Kubernetes Multicluster with Java apps

Establishing load balancing between clusters is achieved by defining a Kubernetes Service with an identical name and namespace in each cluster and adding the annotation io.cilium/global-service: "true" to declare it global. Cilium will automatically perform load balancing to pods in both clusters. Here's the Kubernetes Service object definition for the callme-service application. As you see, it is not exposed outside the cluster since it has the ClusterIP type.

apiVersion: v1
kind: Service
metadata:
  name: callme-service
  labels:
    app: callme-service
  annotations:
    io.cilium/global-service: "true"
spec:
  type: ClusterIP
  ports:
    - port: 8080
      name: http
  selector:
    app: callme-service

Here’s the deployment manifest of the callme-service application:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: callme-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: callme-service
      version: v1
  template:
    metadata:
      labels:
        app: callme-service
        version: v1
    spec:
      containers:
        - name: callme-service
          image: piomin/callme-service
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          env:
            - name: VERSION
              value: "v1"

Our test case is very simple. I'm going to deploy the caller-service application on the c1 cluster. The caller-service calls the REST endpoint exposed by the callme-service. The callme-service application is running on the c2 cluster. I'm not changing anything in the implementation in comparison to the versions running on a single cluster. It means that the caller-service calls the callme-service endpoint using the Kubernetes Service name and HTTP port (http://callme-service:8080).

kubernetes-multicluster-cilium-test

Ok, so now let’s just deploy the callme-service on the c2 cluster using skaffold. Before running the command go to the callme-service directory. Of course, you can also deploy a ready image with kubectl. The deployment manifest is available in the callme-service/k8s directory.

$ skaffold run --kube-context kind-c2

The caller-service application exposes a single HTTP endpoint that prints information about its version. It also calls a similar endpoint exposed by the callme-service. As I mentioned before, it uses the name of the Kubernetes Service in communication.

@RestController
@RequestMapping("/caller")
public class CallerController {

    private static final Logger LOGGER = LoggerFactory.getLogger(CallerController.class);

    @Autowired
    RestTemplate restTemplate;
    @Value("${VERSION}")
    private String version;

    @GetMapping("/ping")
    public String ping() {
        String response = restTemplate.getForObject("http://callme-service:8080/callme/ping", String.class);
        LOGGER.info("Calling: response={}", response);
        return "I'm caller-service " + version + ". Calling... " + response;
    }
}

Here’s the Deployment manifest for the caller-service.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: caller-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: caller-service
  template:
    metadata:
      name: caller-service
      labels:
        app: caller-service
        version: v1
    spec:
      containers:
      - name: caller-service
        image: piomin/caller-service
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
        env:
          - name: VERSION
            value: "v1"

Also, we need to create Kubernetes Service.

apiVersion: v1
kind: Service
metadata:
  name: caller-service
  labels:
    app: caller-service
spec:
  type: ClusterIP
  ports:
    - port: 8080
      name: http
  selector:
    app: caller-service

Don’t forget to apply callme-service Service also on the c1 cluster, although there are no running instances on that cluster. I can simply deploy it using Skaffold. All required manifests are applied automatically. Before running the following command go to the caller-service directory.

$ skaffold dev --port-forward --kube-context kind-c1

Thanks to the port-forward option, I can simply test the caller-service on localhost. What is worth mentioning, Skaffold supports Kind, so you don't have to do any additional steps to deploy applications there. Finally, let's test the communication by calling the HTTP endpoint exposed by the caller-service.

$ curl http://localhost:8080/caller/ping
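If everything works fine, the response should look roughly like this (a sketch; the exact wording of the callme-service part depends on its implementation):

I'm caller-service v1. Calling... I'm callme-service v1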

Final Thoughts

It was not really hard to configure a Kubernetes multicluster with Cilium and Kind. I just had a problem with the connectivity test tool, which didn't work for my cluster mesh. However, my simple test with two applications running on different Kubernetes clusters and communicating via HTTP was successful.

Multicluster Traffic Mirroring with Istio and Kind

In this article, you will learn how to create an Istio mesh with traffic mirroring between multiple Kubernetes clusters running on Kind. We will deploy the same application in two Kubernetes clusters, and then we will mirror the traffic between those clusters. When might such a scenario be useful?

Let’s assume we have two Kubernetes clusters. The first of them is a production cluster, while the second is a test cluster. While there is huge incoming traffic to the production cluster, there is no traffic to the test cluster. What can we do in such a situation? We can just send a portion of production traffic to the test cluster. With Istio, you can also mirror internal traffic between e.g. microservices.

To simulate the scenario described above, we will create two Kubernetes clusters locally with Kind. Then, we will install the Istio mesh in multi-primary mode on different networks. The Kubernetes API server and the Istio gateway need to be accessible by pods running on the other cluster. We have two applications. The caller-service application is running on the c1 cluster. It calls the callme-service. The v1 version of the callme-service application is deployed on the c1 cluster, while the v2 version is deployed on the c2 cluster. We will mirror 50% of the traffic coming to the v1 version of our application to the v2 version running on the other cluster. The following picture illustrates our architecture.

istio-mirroring-arch

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. Then you should just follow my instructions.

Both applications are configured to be deployed with Skaffold. In that case, you just need to download the Skaffold CLI following the instructions available here. Of course, you also need to have Java and Maven available on your PC.

Create Kubernetes clusters with Kind

Firstly, let’s create two Kubernetes clusters using Kind. We don’t have to override any default settings, so we can just use the following command to create clusters.

$ kind create cluster --name c1
$ kind create cluster --name c2

Kind automatically creates a Kubernetes context and adds it to the config file. Just to verify, let’s display a list of running clusters.

$ kind get clusters
c1
c2

Also, we can display a list of contexts created by Kind.

$ kubectx | grep kind
kind-c1
kind-c2

Install MetalLB on Kubernetes clusters

To establish a connection between multiple clusters locally, we need to expose some services as LoadBalancer. That's why we need to install MetalLB, a load-balancer implementation for bare-metal Kubernetes clusters. Firstly, we have to create the metallb-system namespace. These operations should be performed on both of our clusters.

$ kubectl apply -f \
  https://raw.githubusercontent.com/metallb/metallb/master/manifests/namespace.yaml

Then, we are going to create the memberlist secret required by MetalLB.

$ kubectl create secret generic -n metallb-system memberlist \
  --from-literal=secretkey="$(openssl rand -base64 128)" 

Finally, let’s install MetalLB.

$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/master/manifests/metallb.yaml

In order to complete the configuration, we need to provide a range of IP addresses that MetalLB controls. We want this range to be on the Docker kind network.

$ docker network inspect -f '{{.IPAM.Config}}' kind

For me, it is the CIDR 172.20.0.0/16. Based on it, we can configure the MetalLB IP pool for each cluster. For the first cluster, c1, I'm setting addresses starting from 172.20.255.200 and ending with 172.20.255.250.

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.20.255.200-172.20.255.250

We need to apply the configuration to the first cluster.

$ kubectl apply -f k8s/metallb-c1.yaml --context kind-c1

For the second cluster, c2, I'm setting addresses starting from 172.20.255.150 and ending with 172.20.255.199.

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.20.255.150-172.20.255.199

Finally, we can apply the configuration to the second cluster.

$ kubectl apply -f k8s/metallb-c2.yaml --context kind-c2

Install Istio on Kubernetes in multicluster mode

A multicluster service mesh deployment requires establishing trust between all clusters in the mesh. In order to do that, we should configure the Istio certificate authority (CA) with a root certificate, signing certificate, and key. We can easily do it using the Istio tools. First, we go to the Istio installation directory on your PC. After that, we may use the Makefile.selfsigned.mk script available inside the tools/certs directory.

$ cd $ISTIO_HOME/tools/certs/

The following command generates the root certificate and key.

$ make -f Makefile.selfsigned.mk root-ca

The following commands generate an intermediate certificate and key for the Istio CA for each cluster. They create the required files in directories named after the clusters.

$ make -f Makefile.selfsigned.mk kind-c1-cacerts
$ make -f Makefile.selfsigned.mk kind-c2-cacerts

Then we may create a Kubernetes Secret based on the generated certificates. The same operation should be performed for the second cluster, kind-c2.

$ kubectl create namespace istio-system
$ kubectl create secret generic cacerts -n istio-system \
      --from-file=kind-c1/ca-cert.pem \
      --from-file=kind-c1/ca-key.pem \
      --from-file=kind-c1/root-cert.pem \
      --from-file=kind-c1/cert-chain.pem
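Here's a sketch of the analogous commands for the kind-c2 cluster, using the certificates generated in the kind-c2 directory:

$ kubectl --context kind-c2 create namespace istio-system
$ kubectl --context kind-c2 create secret generic cacerts -n istio-system \
      --from-file=kind-c2/ca-cert.pem \
      --from-file=kind-c2/ca-key.pem \
      --from-file=kind-c2/root-cert.pem \
      --from-file=kind-c2/cert-chain.pem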

We are going to install Istio using the operator. It is important to set the same meshID for both clusters and different network names. We also need to create an Istio gateway for communication between the two clusters inside a single mesh. It should be labeled with topology.istio.io/network=network1. The gateway definition also contains two environment variables: ISTIO_META_ROUTER_MODE and ISTIO_META_REQUESTED_NETWORK_VIEW. The first variable is set to sni-dnat, which adds the clusters required for AUTO_PASSTHROUGH mode.

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: kind-c1
      network: network1
  components:
    ingressGateways:
      - name: istio-eastwestgateway
        label:
          istio: eastwestgateway
          app: istio-eastwestgateway
          topology.istio.io/network: network1
        enabled: true
        k8s:
          env:
            - name: ISTIO_META_ROUTER_MODE
              value: "sni-dnat"
            - name: ISTIO_META_REQUESTED_NETWORK_VIEW
              value: network1
          service:
            ports:
              - name: status-port
                port: 15021
                targetPort: 15021
              - name: tls
                port: 15443
                targetPort: 15443
              - name: tls-istiod
                port: 15012
                targetPort: 15012
              - name: tls-webhook
                port: 15017
                targetPort: 15017

Before installing Istio we should label the istio-system namespace with topology.istio.io/network=network1. The Istio installation manifest is available in the repository as the k8s/istio-c1.yaml file.

$ kubectl --context kind-c1 label namespace istio-system \
      topology.istio.io/network=network1
$ istioctl install --config k8s/istio-c1.yaml \
      --context kind-c1

There is a similar IstioOperator definition for the second cluster. The only differences are the name of the network, which is now network2, and the name of the cluster.

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: kind-c2
      network: network2
  components:
    ingressGateways:
      - name: istio-eastwestgateway
        label:
          istio: eastwestgateway
          app: istio-eastwestgateway
          topology.istio.io/network: network2
        enabled: true
        k8s:
          env:
            - name: ISTIO_META_ROUTER_MODE
              value: "sni-dnat"
            - name: ISTIO_META_REQUESTED_NETWORK_VIEW
              value: network2
          service:
            ports:
              - name: status-port
                port: 15021
                targetPort: 15021
              - name: tls
                port: 15443
                targetPort: 15443
              - name: tls-istiod
                port: 15012
                targetPort: 15012
              - name: tls-webhook
                port: 15017
                targetPort: 15017

The same as for the first cluster, let’s label the istio-system namespace with topology.istio.io/network and install Istio using operator manifest.

$ kubectl --context kind-c2 label namespace istio-system \
      topology.istio.io/network=network2
$ istioctl install --config k8s/istio-c2.yaml \
      --context kind-c2

Configure multicluster connectivity

Since the clusters are on separate networks, we need to expose all local services on the gateway in both clusters. Services behind that gateway can be accessed only by services with a trusted TLS certificate and workload ID. The definition of the cross-network gateway is exactly the same for both clusters. You can find that manifest in the repository as k8s/istio-cross-gateway.yaml.

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: cross-network-gateway
spec:
  selector:
    istio: eastwestgateway
  servers:
    - port:
        number: 15443
        name: tls
        protocol: TLS
      tls:
        mode: AUTO_PASSTHROUGH
      hosts:
        - "*.local"

Let’s apply the Gateway object to both clusters.

$ kubectl apply -f k8s/istio-cross-gateway.yaml \
      --context kind-c1
$ kubectl apply -f k8s/istio-cross-gateway.yaml \
      --context kind-c2

In the last step of this scenario, we enable endpoint discovery between the Kubernetes clusters. To do that, we have to install a remote secret in the kind-c2 cluster that provides access to the kind-c1 API server, and vice versa. Fortunately, Istio provides an experimental feature for generating remote secrets.

$ istioctl x create-remote-secret --context=kind-c1 --name=kind-c1 
$ istioctl x create-remote-secret --context=kind-c2 --name=kind-c2

Before applying the generated secrets, we need to change the address of the cluster in each of them. Instead of localhost and a dynamically generated port, we have to use c1-control-plane:6443 for the first cluster and, respectively, c2-control-plane:6443 for the second cluster. The remote secrets generated for my clusters are committed in the project repository as k8s/secret1.yaml and k8s/secret2.yaml. You can compare them with the secrets generated for your clusters. Replace them with your secrets, but remember to change the addresses of your clusters.
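For example, you can redirect the output of the commands above to the files used in the repository and then edit the server address inside each generated kubeconfig (a sketch):

$ istioctl x create-remote-secret --context=kind-c1 --name=kind-c1 > k8s/secret1.yaml
$ istioctl x create-remote-secret --context=kind-c2 --name=kind-c2 > k8s/secret2.yaml

In k8s/secret1.yaml the server field should point to https://c1-control-plane:6443, and in k8s/secret2.yaml to https://c2-control-plane:6443.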

$ kubectl apply -f k8s/secret1.yaml --context kind-c2
$ kubectl apply -f k8s/secret2.yaml --context kind-c1

Configure Mirroring with Istio

We are going to deploy our sample applications in the default namespace. Therefore, automatic sidecar injection should be enabled for that namespace.

$ kubectl label --context kind-c1 namespace default \
    istio-injection=enabled
$ kubectl label --context kind-c2 namespace default \
    istio-injection=enabled

Before configuring the Istio rules, let's deploy the v1 version of the callme-service application on the kind-c1 cluster.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: callme-service-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: callme-service
      version: v1
  template:
    metadata:
      labels:
        app: callme-service
        version: v1
    spec:
      containers:
        - name: callme-service
          image: piomin/callme-service
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          env:
            - name: VERSION
              value: "v1"

Then, we will deploy the v2 version of the callme-service application on the kind-c2 cluster.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: callme-service-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: callme-service
      version: v2
  template:
    metadata:
      labels:
        app: callme-service
        version: v2
    spec:
      containers:
        - name: callme-service
          image: piomin/callme-service
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          env:
            - name: VERSION
              value: "v2"

Of course, we should also create Kubernetes Service on both clusters.

apiVersion: v1
kind: Service
metadata:
  name: callme-service
  labels:
    app: callme-service
spec:
  type: ClusterIP
  ports:
  - port: 8080
    name: http
  selector:
    app: callme-service

The Istio DestinationRule defines two subsets for callme-service based on the version label.

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: callme-service-destination
spec:
  host: callme-service
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2

Finally, we may configure traffic mirroring with Istio. 50% of the traffic coming to the callme-service deployed on the kind-c1 cluster is mirrored to the callme-service deployed on the kind-c2 cluster.

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: callme-service-route
spec:
  hosts:
    - callme-service
  http:
    - route:
      - destination:
          host: callme-service
          subset: v1
        weight: 100
      mirror:
        host: callme-service
        subset: v2
      mirrorPercentage:
        value: 50.

We will also deploy the caller-service application on the kind-c1 cluster. It calls the GET /callme/ping endpoint exposed by the callme-service application. Here's the list of running pods in the default namespace in the kind-c1 cluster.

$ kubectl get pod --context kind-c1
NAME                                 READY   STATUS    RESTARTS   AGE
caller-service-b9dbbd6c8-q6dpg       2/2     Running   0          1h
callme-service-v1-7b65795f48-w7zlq   2/2     Running   0          1h

Let’s verify the list of running pods in the default namespace in the kind-c2 cluster.

$ kubectl get pod --context kind-c2
NAME                                 READY   STATUS    RESTARTS   AGE
callme-service-v2-665b876579-rsfks   2/2     Running   0          1h

In order to test Istio mirroring across multiple Kubernetes clusters, we call the GET /caller/ping endpoint exposed by the caller-service. As I mentioned before, it calls a similar endpoint exposed by the callme-service application with an HTTP client. The simplest way to test it is by enabling port-forwarding, as shown below. Thanks to that, the caller-service Service is available on the local port 8080. Let's call that endpoint 20 times with siege.
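Here's a minimal port-forwarding sketch, assuming a caller-service Service listening on port 8080 in the default namespace of the kind-c1 cluster:

$ kubectl port-forward svc/caller-service 8080:8080 --context kind-c1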

$ siege -r 20 -c 1 http://localhost:8080/caller/ping

After that, you can verify the logs for callme-service-v1 and callme-service-v2 deployments.

$ kubectl logs pod/callme-service-v1-7b65795f48-w7zlq --context kind-c1
$ kubectl logs pod/callme-service-v2-665b876579-rsfks --context kind-c2

You should see the following log 20 times for the kind-c1 cluster.

I'm callme-service v1

And, respectively, you should see the following log 10 times for the kind-c2 cluster, because we mirror 50% of the traffic from v1 to v2.

I'm callme-service v2

Final Thoughts

This article shows how to create an Istio multicluster mesh with traffic mirroring between different networks. If you would like to simulate a similar scenario within the same network, you may use a tool called Submariner. You may find more details about running Submariner on Kubernetes in the article Kubernetes Multicluster with Kind and Submariner.
