multicluster Archives - Piotr's TechBlog
https://piotrminkowski.com/tag/multicluster/
Java, Spring, Kotlin, microservices, Kubernetes, containers

Create and Manage Kubernetes Clusters with Cluster API and ArgoCD
https://piotrminkowski.com/2021/12/03/create-kubernetes-clusters-with-cluster-api-and-argocd/
Fri, 03 Dec 2021 15:10:50 +0000

In this article, you will learn how to create and manage multiple Kubernetes clusters using the Kubernetes Cluster API and ArgoCD. We will create a single, local management cluster with Kind. From that cluster, we will drive the process of creating the other Kubernetes clusters. To perform that process automatically, we will use ArgoCD, which lets us handle the whole workflow from a single Git repository. Before we start, let's go through a short theoretical brief.

If you are interested in topics related to Kubernetes multi-clustering, you may also read some of my other articles about it:

  1. Kubernetes Multicluster with Kind and Cilium
  2. Multicluster Traffic Mirroring with Istio and Kind
  3. Kubernetes Multicluster with Kind and Submariner

Introduction

Have you heard about the project called Kubernetes Cluster API? It provides declarative APIs and tooling to simplify provisioning, upgrading, and managing multiple Kubernetes clusters. In fact, it is a very interesting concept: we create a single Kubernetes cluster that manages the lifecycle of other clusters. On this cluster, we install the Cluster API. Then we define new workload clusters simply by creating Cluster API objects. Looks simple? And that's exactly what it is.

Cluster API provides a set of CRDs extending the Kubernetes API. Each of them represents a customization of a Kubernetes cluster installation. I will not get into the details here, but if you are interested, you may read more about it here. What is important for us is that it provides a CLI that handles the lifecycle of a Cluster API management cluster. It also allows creating clusters on multiple infrastructures, including AWS, GCP, or Azure. However, today we are going to run the whole infrastructure locally on Docker and Kind, which is also possible since Kubernetes Cluster API supports Docker as an infrastructure provider.

We will use the Cluster API CLI just to initialize the management cluster and generate YAML templates. The whole process will be managed by ArgoCD, installed on the management cluster. Argo CD perfectly fits our scenario, since it supports multiple clusters: an instance installed on a single cluster can manage any other cluster it is able to connect to.

Finally, the last tool used today – Kind. Thanks to it, we can run multiple Kubernetes clusters on the same machine using Docker container nodes. Let’s take a look at the architecture of our solution described in this article.

Architecture with Kubernetes Cluster API and ArgoCD

Here's the picture with our architecture. The whole infrastructure runs locally on Docker. We install Kubernetes Cluster API and ArgoCD on the management cluster. Then, using both of those tools, we create new clusters with Kind. After that, we apply some Kubernetes objects to the workload clusters (c1, c2), like Namespace, ResourceQuota, or RoleBinding. Of course, the whole process is managed by the Argo CD instance, and the configuration is stored in a Git repository.

(Figure: architecture with Kubernetes Cluster API and ArgoCD)

Source Code

If you would like to try it yourself, you may always take a look at my source code. To do that, you need to clone my GitHub repository. After that, just follow my instructions. Let's begin.

Create Management Cluster with Kind and Cluster API

In the first step, we are going to create a management cluster with Kind. To follow this exercise, you need to have Docker, kubectl, and kind installed on your machine. Because we use the Docker infrastructure to run the Kubernetes workload clusters, Kind must have access to the Docker host. Here's the definition of the Kind cluster. Let's say the name of the file is mgmt-cluster-config.yaml:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
    - hostPath: /var/run/docker.sock
      containerPath: /var/run/docker.sock

Now, let’s just apply the configuration visible above when creating a new cluster with Kind:

$ kind create cluster --config mgmt-cluster-config.yaml --name mgmt

If everything goes fine you should see a similar output. After that, your Kubernetes context is automatically set to kind-mgmt.

Then, we need to initialize the management cluster. In other words, we have to install Cluster API on our Kind cluster. To do that, we first need to install the Cluster API CLI on the local machine. On macOS, I can use the brew install clusterctl command. Once clusterctl has been successfully installed, I can run the following command:

$ clusterctl init --infrastructure docker

The result should be similar to the following (maybe without the timeout 🙂). I'm not sure why it happens, but it doesn't have any negative impact on the next steps.

(Figure: clusterctl init output on the management cluster)

Once we have successfully initialized the management cluster, we may verify it. Let's display, for example, the list of namespaces. There are five new namespaces created by the Cluster API installation.

$ kubectl get ns
NAME                                STATUS   AGE
capd-system                         Active   3m37s
capi-kubeadm-bootstrap-system       Active   3m42s
capi-kubeadm-control-plane-system   Active   3m40s
capi-system                         Active   3m44s
cert-manager                        Active   4m8s
default                             Active   12m
kube-node-lease                     Active   12m
kube-public                         Active   12m
kube-system                         Active   12m
local-path-storage                  Active   12m

Also, let's display the list of all pods. All the pods created inside the new namespaces should be in the Running state.

$ kubectl get pod -A

We can also display the list of installed CRDs with the kubectl get crds command. Anyway, the Kubernetes Cluster API is now running on the management cluster, and we can proceed to the further steps.

Install Argo CD on the management Kubernetes cluster

I will install Argo CD in the default namespace. But you can just as well create an argocd namespace and install it there (following the Argo CD documentation).

$ kubectl apply -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

Then, let’s just verify the installation:

$ kubectl get pod
NAME                                  READY   STATUS    RESTARTS   AGE
argocd-application-controller-0       1/1     Running   0          63s
argocd-dex-server-6dcf645b6b-6dlk9    1/1     Running   0          63s
argocd-redis-5b6967fdfc-vg5k6         1/1     Running   0          63s
argocd-repo-server-7598bf5999-96mh5   1/1     Running   0          63s
argocd-server-79f9bc9b44-d6c8q        1/1     Running   0          63s

As you probably know, Argo CD provides a web UI for management. To access it on the local port (8080), I will run the kubectl port-forward command:

$ kubectl port-forward svc/argocd-server 8080:80

Now, the UI is available at http://localhost:8080. To log in there, you need to find the Kubernetes Secret argocd-initial-admin-secret and decode the password. The username is admin. You can easily decode secrets using, for example, Lens, an advanced Kubernetes IDE. For now, just log in there. We will come back to the Argo CD UI later.
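
If you prefer the command line over Lens, the password can be decoded with kubectl and base64. The kubectl invocation below is the standard one for this Secret; the literal value in the demonstration is of course made up for illustration:

```shell
# With a live cluster (Argo CD installed in the default namespace, as above):
#   kubectl get secret argocd-initial-admin-secret \
#     -o jsonpath='{.data.password}' | base64 -d
# The decoding step itself works like this:
encoded="czNjcjN0UGFzcw=="                      # base64 for the made-up password "s3cr3tPass"
password=$(printf '%s' "$encoded" | base64 -d)  # decode the secret value
echo "$password"
```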

Create Kubernetes cluster with Cluster API and ArgoCD

We will use the clusterctl CLI to generate a YAML manifest with the declaration of a new Kubernetes cluster. To do that, we need to run the following command. It will generate the manifest and save it in the c1-clusterapi.yaml file.

$ clusterctl generate cluster c1 --flavor development \
  --infrastructure docker \
  --kubernetes-version v1.21.1 \
  --control-plane-machine-count=3 \
  --worker-machine-count=3 \
  > c1-clusterapi.yaml

Our c1 cluster consists of three master and three worker nodes. Following the Cluster API documentation, we would now have to apply the generated manifest to the management cluster. However, we are going to use ArgoCD to automatically apply the Cluster API manifests stored in the Git repository to Kubernetes. So, let's create a manifest with Cluster API objects in the Git repository. To simplify the process, I will use Helm templates. Because there are two clusters to create, we have two Argo CD applications that use the same template with different parameters. Ok, so here's the Helm template based on the manifest generated in the previous step. You can find it in our sample Git repository under the path /mgmt/templates/cluster-api-template.yaml.

apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: {{ .Values.cluster.name }}
  namespace: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - 192.168.0.0/16
    serviceDomain: cluster.local
    services:
      cidrBlocks:
        - 10.128.0.0/12
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: {{ .Values.cluster.name }}-control-plane
    namespace: default
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: DockerCluster
    name: {{ .Values.cluster.name }}
    namespace: default
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: DockerCluster
metadata:
  name: {{ .Values.cluster.name }}
  namespace: default
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: DockerMachineTemplate
metadata:
  name: {{ .Values.cluster.name }}-control-plane
  namespace: default
spec:
  template:
    spec:
      extraMounts:
        - containerPath: /var/run/docker.sock
          hostPath: /var/run/docker.sock
---
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: {{ .Values.cluster.name }}-control-plane
  namespace: default
spec:
  kubeadmConfigSpec:
    clusterConfiguration:
      apiServer:
        certSANs:
          - localhost
          - 127.0.0.1
      controllerManager:
        extraArgs:
          enable-hostpath-provisioner: "true"
    initConfiguration:
      nodeRegistration:
        criSocket: /var/run/containerd/containerd.sock
        kubeletExtraArgs:
          cgroup-driver: cgroupfs
          eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
    joinConfiguration:
      nodeRegistration:
        criSocket: /var/run/containerd/containerd.sock
        kubeletExtraArgs:
          cgroup-driver: cgroupfs
          eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
  machineTemplate:
    infrastructureRef:
      apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
      kind: DockerMachineTemplate
      name: {{ .Values.cluster.name }}-control-plane
      namespace: default
  replicas: {{ .Values.cluster.masterNodes }}
  version: {{ .Values.cluster.version }}
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: DockerMachineTemplate
metadata:
  name: {{ .Values.cluster.name }}-md-0
  namespace: default
spec:
  template:
    spec: {}
---
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
  name: {{ .Values.cluster.name }}-md-0
  namespace: default
spec:
  template:
    spec:
      joinConfiguration:
        nodeRegistration:
          kubeletExtraArgs:
            cgroup-driver: cgroupfs
            eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: {{ .Values.cluster.name }}-md-0
  namespace: default
spec:
  clusterName: {{ .Values.cluster.name }}
  replicas: {{ .Values.cluster.workerNodes }}
  selector:
    matchLabels: null
  template:
    spec:
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: KubeadmConfigTemplate
          name: {{ .Values.cluster.name }}-md-0
          namespace: default
      clusterName: {{ .Values.cluster.name }}
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: DockerMachineTemplate
        name: {{ .Values.cluster.name }}-md-0
        namespace: default
      version: {{ .Values.cluster.version }}

We can parameterize four properties related to cluster creation: the name of the cluster, the numbers of master and worker nodes, and the version of Kubernetes. Since we use Helm for that, we just need to create a values.yaml file containing the values of those parameters in YAML format. Here's the values.yaml file for the first cluster. You can find it in the sample Git repository under the path /mgmt/values-c1.yaml.

cluster:
  name: c1
  masterNodes: 3
  workerNodes: 3
  version: v1.21.1

Here's the same configuration for the second cluster. As you can see, there is a single master node and a single worker node. You can find it in the sample Git repository under the path /mgmt/values-c2.yaml.

cluster:
  name: c2
  masterNodes: 1
  workerNodes: 1
  version: v1.21.1

Create Argo CD applications

Since Argo CD supports Helm, we just need to set the right values.yaml file in the configuration of the ArgoCD application. Besides that, we also need to set the address of our Git configuration repository and the directory with the manifests inside the repository. All the configuration for the management cluster is stored inside the mgmt directory.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: c1-cluster-create
spec:
  destination:
    name: ''
    namespace: ''
    server: 'https://kubernetes.default.svc'
  source:
    path: mgmt
    repoURL: 'https://github.com/piomin/sample-kubernetes-cluster-api-argocd.git'
    targetRevision: HEAD
    helm:
      valueFiles:
        - values-c1.yaml
  project: default

Here’s a similar declaration for the second cluster:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: c2-cluster-create
spec:
  destination:
    name: ''
    namespace: ''
    server: 'https://kubernetes.default.svc'
  source:
    path: mgmt
    repoURL: 'https://github.com/piomin/sample-kubernetes-cluster-api-argocd.git'
    targetRevision: HEAD
    helm:
      valueFiles:
        - values-c2.yaml
  project: default

Argo CD requires privileges to manage Cluster API objects. To keep things simple, let's bind the cluster-admin role to the argocd-application-controller ServiceAccount used by Argo CD.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-admin-argocd-contoller
subjects:
  - kind: ServiceAccount
    name: argocd-application-controller
    namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin

After creating the applications in Argo CD, you may synchronize them manually (or enable the auto-sync option). That begins the process of creating the workload clusters by the Cluster API tooling.
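
The auto-sync option mentioned above can also be enabled declaratively in the Application manifest. Here's a hedged sketch based on the c1-cluster-create application from the previous section; the syncPolicy block is the only addition:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: c1-cluster-create
spec:
  destination:
    server: 'https://kubernetes.default.svc'
  source:
    path: mgmt
    repoURL: 'https://github.com/piomin/sample-kubernetes-cluster-api-argocd.git'
    targetRevision: HEAD
    helm:
      valueFiles:
        - values-c1.yaml
  project: default
  syncPolicy:
    automated:
      prune: false    # do not delete cluster objects when they disappear from Git
      selfHeal: true  # revert manual changes made outside of Git
```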

(Figure: Argo CD UI with both cluster-creation applications)

Verify Kubernetes clusters using Cluster API CLI

After performing the synchronization in Argo CD, we can verify the list of available Kubernetes clusters. To do that, just use the following kind command:

$ kind get clusters
c1
c2
mgmt

As you can see, there are three running clusters! The Kubernetes Cluster API installed on the management cluster has created two other clusters based on the configuration applied by Argo CD. To check whether everything went fine, we may use the clusterctl describe command, e.g. clusterctl describe cluster c1. After executing this command, you will probably see a result similar to the one visible below.

The control plane is not ready yet. This is expected and described in the Cluster API documentation: we need to install a CNI provider on our workload clusters in the next step. The Cluster API documentation suggests installing Calico as the CNI plugin. We will do that, but first we need to switch to the kind-c1 and kind-c2 contexts. Of course, they were not created on our local machine by the Cluster API, so we first need to export them to our kubeconfig file. Let's do that for both workload clusters.

$ kind export kubeconfig --name c1
$ kind export kubeconfig --name c2

I'm not sure why, but it exports the contexts with 0.0.0.0 as the address of the clusters. So in the next step, I also had to edit my kubeconfig file and change this address to 127.0.0.1. Now, I can connect to both clusters using kubectl from my local machine.
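
The fix can also be scripted with sed. The in-place command below is a sketch of what I ran against my own kubeconfig (adjust the path if you keep it elsewhere); the second part demonstrates the substitution on a sample line:

```shell
# On a real setup (a backup is written to ~/.kube/config.bak):
#   sed -i.bak 's|https://0.0.0.0:|https://127.0.0.1:|g' ~/.kube/config
# Demonstration of the substitution on a sample kubeconfig line:
line='    server: https://0.0.0.0:52771'
fixed=$(printf '%s' "$line" | sed 's|https://0.0.0.0:|https://127.0.0.1:|')
echo "$fixed"
```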

Then, let's install the Calico CNI on both clusters:

$ kubectl apply -f https://docs.projectcalico.org/v3.20/manifests/calico.yaml --context kind-c1
$ kubectl apply -f https://docs.projectcalico.org/v3.20/manifests/calico.yaml --context kind-c2

I could also automate that step with Argo CD, but for now I just want to finish the installation. In the next section, I'm going to describe how to manage both these clusters using the Argo CD instance running on the management cluster. Now, if you verify the status of both clusters using the clusterctl describe command, it looks perfectly fine.

(Figure: clusterctl describe output for both workload clusters)

Managing workload clusters with ArgoCD

In the previous section, we successfully created two Kubernetes clusters using the Cluster API tool and ArgoCD. To clarify, all the Kubernetes objects required to perform that operation were created on the management cluster. Now, we would like to apply the simple configuration visible below to both of our workload clusters. Of course, we will use the same instance of Argo CD for it.

apiVersion: v1
kind: Namespace
metadata:
  name: demo
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: demo-quota
  namespace: demo
spec:
  hard:
    pods: '10'
    requests.cpu: '1'
    requests.memory: 1Gi
    limits.cpu: '2'
    limits.memory: 4Gi
---
apiVersion: v1
kind: LimitRange
metadata:
  name: demo-limitrange
  namespace: demo
spec:
  limits:
    - default:
        memory: 512Mi
        cpu: 500m
      defaultRequest:
        cpu: 100m
        memory: 128Mi
      type: Container

Unfortunately, there is no built-in integration between ArgoCD and the Kubernetes Cluster API tool. Although Cluster API creates a Secret containing the kubeconfig file for each created cluster, Argo CD is not able to recognize it and automatically add such a cluster to its managed clusters. If you are interested in more details, there is an interesting discussion about it here. Anyway, the goal for now is to add both our workload clusters to the list of clusters managed by the global instance of Argo CD running on the management cluster. To do that, we first need to log in to Argo CD with the CLI, using the same credentials and URL we used for the web UI.

$ argocd login localhost:8080

Now, we just need to run the following commands, assuming we have already exported both Kubernetes contexts to our local Kubeconfig file:

$ argocd cluster add kind-c1
$ argocd cluster add kind-c2

If you run Docker on macOS or Windows, this is not such a simple thing to do. You need to use the internal Docker address of each cluster. Cluster API creates Secrets containing the kubeconfig file for all created clusters. We can use them to verify the internal address of the Kubernetes API. Here's the list of Secrets for our workload clusters:

$ kubectl get secrets | grep kubeconfig
c1-kubeconfig                               cluster.x-k8s.io/secret               1      85m
c2-kubeconfig                               cluster.x-k8s.io/secret               1      57m

We can obtain the internal address after decoding a particular secret. For example, the internal address of my c1 cluster is 172.20.0.3.
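
A hedged sketch of how to get that address: Cluster API stores the kubeconfig under the value key of the <cluster-name>-kubeconfig Secret, so decoding it and looking for the server: entry reveals the API endpoint. The kubectl line is commented out because it needs the live management cluster; the extraction itself is shown on a sample kubeconfig snippet:

```shell
# With the management cluster context active:
#   kubectl get secret c1-kubeconfig -o jsonpath='{.data.value}' | base64 -d | grep server
# Extracting the API server address from a decoded kubeconfig:
kubeconfig='apiVersion: v1
clusters:
- cluster:
    server: https://172.20.0.3:6443
  name: c1'
server=$(printf '%s\n' "$kubeconfig" | awk '/server:/ {print $2}')
echo "$server"
```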

Under the hood, Argo CD creates a Secret for each of the managed clusters. It is recognized based on the label name and value: argocd.argoproj.io/secret-type: cluster.

apiVersion: v1
kind: Secret
metadata:
  name: c1-cluster-secret
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: c1
  server: https://172.20.0.3:6443
  config: |
    {
      "tlsClientConfig": {
        "insecure": false,
        "caData": "<base64 encoded certificate>",
        "certData": "<base64 encoded certificate>",
        "keyData": "<base64 encoded key>"
      }
    }

If you have added all your clusters successfully, you should see them in the Clusters section of your Argo CD instance.

Create Argo CD application for workload clusters

Finally, let’s create Argo CD applications for managing configuration on both workload clusters.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: c1-cluster-config
spec:
  project: default
  source:
    repoURL: 'https://github.com/piomin/sample-kubernetes-cluster-api-argocd.git'
    path: workload
    targetRevision: HEAD
  destination:
    server: 'https://172.20.0.3:6443'

And here's a similar one, applying the configuration to the second cluster:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: c2-cluster-config
spec:
  project: default
  source:
    repoURL: 'https://github.com/piomin/sample-kubernetes-cluster-api-argocd.git'
    path: workload
    targetRevision: HEAD
  destination:
    server: 'https://172.20.0.10:6443'

Once you have created both applications in Argo CD, synchronize them.

And finally, let's verify that the configuration has been successfully applied to the target clusters.

Kubernetes Multicluster with Kind and Cilium
https://piotrminkowski.com/2021/10/25/kubernetes-multicluster-with-kind-and-cilium/
Mon, 25 Oct 2021 11:13:17 +0000

In this article, you will learn how to configure a Kubernetes multicluster locally with Kind and Cilium. If you are looking for other articles about local Kubernetes multiclusters, you should also read Kubernetes Multicluster with Kind and Submariner and Multicluster Traffic Mirroring with Istio and Kind.

Cilium can act as a CNI plugin on your Kubernetes cluster. It uses a Linux kernel technology called eBPF, which enables the dynamic insertion of security, visibility, and control logic within the Linux kernel. It provides distributed load balancing for pod-to-pod traffic and an identity-based implementation of the NetworkPolicy resource. However, in this article, we will focus on its Cluster Mesh feature, which allows setting up direct networking across multiple Kubernetes clusters.

Prerequisites

Before starting this exercise, you need to install some tools on your local machine. Of course, you need kubectl to interact with your Kubernetes clusters. Besides that, you will also have to install:

  1. Kind CLI – to run multiple Kubernetes clusters locally. For more details refer here
  2. Cilium CLI – to manage and inspect the state of a Cilium installation. For more details you may refer here
  3. Skaffold CLI (optional) – if you would like to build and run the applications directly from the code. Otherwise, you may just use my images published on Docker Hub. If you decide to build directly from the source code, you also need to have JDK and Maven installed
  4. Helm CLI – we will use Helm to install Cilium on Kubernetes. Alternatively, we could use the Cilium CLI for that, but with the Helm chart we can easily enable some additional required Cilium features

Source Code

If you would like to try it yourself, you may always take a look at my source code. To do that, you need to clone my GitHub repository. Then just follow my instructions.

You can also use the application images instead of building them directly from the code. The image of the callme-service application is available at https://hub.docker.com/r/piomin/callme-service, while the image of the caller-service application is available at https://hub.docker.com/r/piomin/caller-service.

Run Kubernetes clusters locally with Kind

When creating a new Kind cluster, we first need to disable the default CNI plugin based on kindnetd, since we will use Cilium instead. Moreover, the pod CIDR ranges in both of our clusters must be non-conflicting, with unique IP addresses. That's why we also provide podSubnet and serviceSubnet in the Kind cluster configuration manifest. Here's the configuration file for our first cluster:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
networking:
  disableDefaultCNI: true
  podSubnet: "10.0.0.0/16"
  serviceSubnet: "10.1.0.0/16"

The name of that cluster is c1, so the name of the Kubernetes context is kind-c1. To create a new Kind cluster using the YAML manifest visible above, we should run the following command:

$ kind create cluster --name c1 --config kind-c1-config.yaml 

Here's the configuration file for our second cluster. It has different pod and service CIDRs than the first one:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
networking:
  disableDefaultCNI: true
  podSubnet: "10.2.0.0/16"
  serviceSubnet: "10.3.0.0/16"
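
The non-overlap requirement is easy to eyeball here, but for arbitrary /16 pools a quick sanity check is to compare the first two octets. A small pure-shell sketch; the four subnets are the ones from the two manifests above:

```shell
# The four /16 subnets used by the two clusters; /16 blocks overlap
# only if their first two octets are equal.
subnets="10.0.0.0/16 10.1.0.0/16 10.2.0.0/16 10.3.0.0/16"
prefixes=$(for s in $subnets; do echo "${s%.*.*}"; done)  # 10.0, 10.1, 10.2, 10.3
unique=$(printf '%s\n' "$prefixes" | sort -u | wc -l)
echo "$unique"
```

All four prefixes being distinct means none of the pod or service CIDRs conflict.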

As before, we need to run the kind create command, this time with a different name (c2) and the YAML manifest for the second cluster:

$ kind create cluster --name c2 --config kind-c2-config.yaml

Install Cilium CNI on Kubernetes

Once we have successfully created the two local Kubernetes clusters with Kind, we may proceed to the Cilium installation. First, let's switch to the context of the kind-c1 cluster:

$ kubectl config use-context kind-c1

We will install Cilium using the Helm chart. To do that, we should first add a new Helm repository:

$ helm repo add cilium https://helm.cilium.io/

For the Cluster Mesh option, we need to enable some Cilium features that are disabled by default. It is also important to set the cluster.name and cluster.id parameters.

$ helm install cilium cilium/cilium --version 1.10.5 \
   --namespace kube-system \
   --set nodeinit.enabled=true \
   --set kubeProxyReplacement=partial \
   --set hostServices.enabled=false \
   --set externalIPs.enabled=true \
   --set nodePort.enabled=true \
   --set hostPort.enabled=true \
   --set cluster.name=c1 \
   --set cluster.id=1

Let’s switch to the context of the kind-c2 cluster:

$ kubectl config use-context kind-c2

We need to set different values for cluster.name and cluster.id parameters in the Helm installation command.

$ helm install cilium cilium/cilium --version 1.10.5 \
   --namespace kube-system \
   --set nodeinit.enabled=true \
   --set kubeProxyReplacement=partial \
   --set hostServices.enabled=false \
   --set externalIPs.enabled=true \
   --set nodePort.enabled=true \
   --set hostPort.enabled=true \
   --set cluster.name=c2 \
   --set cluster.id=2

After installing Cilium, you can easily verify the status by running the cilium status command on both clusters. Just to clarify, Cilium Cluster Mesh is not enabled yet.

(Figure: cilium status output on both clusters)

Install MetalLB on Kind

When deploying Cluster Mesh, Cilium attempts to auto-detect the best service type for the LoadBalancer that exposes the Cluster Mesh control plane to other clusters. The default and recommended option is a LoadBalancer IP (NodePort and ClusterIP are also available). That's why we need external IP support on Kind, which is not provided by default (on macOS and Windows). Fortunately, we may install MetalLB on our clusters. MetalLB is a load-balancer implementation for bare-metal Kubernetes clusters, using standard routing protocols. First, let's create a namespace for the MetalLB components:

$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/master/manifests/namespace.yaml

Then we have to create a secret.

$ kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"

Let’s install MetalLB components in the metallb-system namespace.

$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/master/manifests/metallb.yaml

If everything works fine you should have the following pods in the metallb-system namespace.

$ kubectl get pod -n metallb-system
NAME                          READY   STATUS    RESTARTS   AGE
controller-6cc57c4567-dk8fw   1/1     Running   1          12h
speaker-26g67                 1/1     Running   2          12h
speaker-2dhzf                 1/1     Running   1          12h
speaker-4fn7t                 1/1     Running   2          12h
speaker-djbtq                 1/1     Running   1          12h

Now, we need to set up the address pools for the LoadBalancer. We should check the range of addresses of the kind network in Docker:

$ docker network inspect -f '{{.IPAM.Config}}' kind

For me, the address pool of the kind network is 172.20.0.0/16. Let's say I'll configure 50 IPs starting from 172.20.255.200. Then we should create a manifest with the external IP pool:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.20.255.200-172.20.255.250

The pool for the second cluster must be different to avoid address conflicts. I'll also configure 50 IPs, this time starting from 172.20.255.150:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.20.255.150-172.20.255.199
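
A quick sanity check that the two pools do not overlap. Since both pools live inside 172.20.255.0/24, comparing the last octets is enough; a small pure-shell sketch using the bounds from the two ConfigMaps above:

```shell
# Last-octet bounds of the two MetalLB pools defined above.
c1_start=200; c1_end=250
c2_start=150; c2_end=199
# Ranges [a1,a2] and [b1,b2] overlap iff a1 <= b2 && b1 <= a2.
if [ "$c1_start" -le "$c2_end" ] && [ "$c2_start" -le "$c1_end" ]; then
  overlap=yes
else
  overlap=no
fi
echo "$overlap"
```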

After that, we are ready to enable Cilium Cluster Mesh.

Enable Cilium Multicluster on Kubernetes

With the command visible below, we enable the Cilium multicluster mesh on the first Kubernetes cluster. As you can see, we choose the LoadBalancer service type to connect both clusters. Unlike in the Cilium documentation, I also had to set the create-ca option, since no CA is generated when installing Cilium with Helm:

$ cilium clustermesh enable --context kind-c1 \
   --service-type LoadBalancer \
   --create-ca

Then, we may verify it works successfully:

$ cilium clustermesh status --context kind-c1 --wait

Now, let’s do the same thing for the second cluster:

$ cilium clustermesh enable --context kind-c2 \
   --service-type LoadBalancer \
   --create-ca

Then, we can verify the status of a cluster mesh on the second cluster:

$ cilium clustermesh status --context kind-c2 --wait

Finally, we can connect both clusters together. This step needs to be done in one direction only. According to the Cilium documentation, the connection will automatically be established in both directions:

$ cilium clustermesh connect --context kind-c1 \
   --destination-context kind-c2

If everything went fine, the command should report that the connection was established.

After that, let’s verify the status of the Cilium cluster mesh once again:

$ cilium clustermesh status --context kind-c1 --wait

If everything went fine, the status should now show both clusters as connected.

You can also verify the Kubernetes Service created for the Cilium cluster mesh control plane.

$ kubectl get svc -A | grep clustermesh
kube-system   clustermesh-apiserver   LoadBalancer   10.1.150.156   172.20.255.200   2379:32323/TCP           13h

You can validate the connectivity by running the connectivity test in multi-cluster mode:

$ cilium connectivity test --context kind-c1 --multi-cluster kind-c2

To be honest, all these tests failed for my kind clusters 🙂 I was quite concerned, and I’m not sure exactly how those tests work. However, it didn’t stop me from deploying my applications on both Kubernetes clusters to test the Cilium multicluster setup.

Testing Cilium Kubernetes Multicluster with Java apps

Load-balancing between clusters is achieved by defining a Kubernetes Service with an identical name and namespace in each cluster and adding the annotation io.cilium/global-service: "true" to declare it global. Cilium then automatically load-balances across the pods in both clusters. Here’s the Kubernetes Service definition for the callme-service application. As you can see, it is not exposed outside the cluster, since it is of the ClusterIP type.

apiVersion: v1
kind: Service
metadata:
  name: callme-service
  labels:
    app: callme-service
  annotations:
    io.cilium/global-service: "true"
spec:
  type: ClusterIP
  ports:
    - port: 8080
      name: http
  selector:
    app: callme-service

Here’s the deployment manifest of the callme-service application:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: callme-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: callme-service
      version: v1
  template:
    metadata:
      labels:
        app: callme-service
        version: v1
    spec:
      containers:
        - name: callme-service
          image: piomin/callme-service
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          env:
            - name: VERSION
              value: "v1"

Our test case is very simple. I’m going to deploy the caller-service application on the c1 cluster. The caller-service calls a REST endpoint exposed by the callme-service, which is running on the c2 cluster. I’m not changing anything in the implementation in comparison to a single-cluster deployment. It means that the caller-service calls the callme-service endpoint using the Kubernetes Service name and HTTP port (http://callme-service:8080).


OK, so now let’s deploy the callme-service on the c2 cluster using Skaffold. Before running the command, go to the callme-service directory. Of course, you can also deploy a ready image with kubectl. The deployment manifest is available in the callme-service/k8s directory.

$ skaffold run --kube-context kind-c2

The caller-service application exposes a single HTTP endpoint that prints information about its version. It also calls a similar endpoint exposed by the callme-service. As I mentioned before, it uses the Kubernetes Service name in communication.

@RestController
@RequestMapping("/caller")
public class CallerController {

    private static final Logger LOGGER = LoggerFactory.getLogger(CallerController.class);

    @Autowired
    RestTemplate restTemplate;
    @Value("${VERSION}")
    private String version;

    @GetMapping("/ping")
    public String ping() {
        String response = restTemplate.getForObject("http://callme-service:8080/callme/ping", String.class);
        LOGGER.info("Calling: response={}", response);
        return "I'm caller-service " + version + ". Calling... " + response;
    }
}

Here’s the Deployment manifest for the caller-service.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: caller-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: caller-service
  template:
    metadata:
      name: caller-service
      labels:
        app: caller-service
        version: v1
    spec:
      containers:
      - name: caller-service
        image: piomin/caller-service
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
        env:
          - name: VERSION
            value: "v1"

Also, we need to create Kubernetes Service.

apiVersion: v1
kind: Service
metadata:
  name: caller-service
  labels:
    app: caller-service
spec:
  type: ClusterIP
  ports:
    - port: 8080
      name: http
  selector:
    app: caller-service

Don’t forget to also apply the callme-service Service on the c1 cluster, even though there are no running instances of the application there. I can simply deploy everything using Skaffold, which applies all required manifests automatically. Before running the following command, go to the caller-service directory.

$ skaffold dev --port-forward --kube-context kind-c1

Thanks to the port-forward option, I can simply test the caller-service on localhost. It is worth mentioning that Skaffold supports Kind out of the box, so you don’t have to do any additional steps to deploy applications there. Finally, let’s test the communication by calling the HTTP endpoint exposed by the caller-service.

$ curl http://localhost:8080/caller/ping

Final Thoughts

It was not really hard to configure a Kubernetes multicluster with Cilium and Kind. The only problem I had was with the connectivity test tool, which didn’t work for my cluster mesh. However, my simple test with two applications running on different Kubernetes clusters and communicating via HTTP was successful.

The post Kubernetes Multicluster with Kind and Cilium appeared first on Piotr's TechBlog.

Kubernetes Multicluster with Kind and Submariner
Published Thu, 08 Jul 2021 at https://piotrminkowski.com/2021/07/08/kubernetes-multicluster-with-kind-and-submariner/

In this article, you will learn how to create multiple Kubernetes clusters locally and establish direct communication between them with Kind and Submariner. Kind (Kubernetes in Docker) is a tool for running local Kubernetes clusters using Docker containers. Each Kubernetes node is a separate Docker container, and all these containers run in the same Docker network, kind.

Our goal in this article is to establish direct communication between pods running in two different Kubernetes clusters created with Kind. Of course, this is not possible by default. We should treat such clusters as two Kubernetes clusters running in different networks. This is where Submariner comes in. It is a tool originally created by Rancher that enables direct networking between pods and services in different Kubernetes clusters, either on-premises or in the cloud.

Let’s take a quick look at our architecture. We have two applications: caller-service and callme-service. There are also two Kubernetes clusters, c1 and c2, created using Kind. The caller-service application is running on the c1 cluster, while the callme-service application is running on the c2 cluster. The caller-service application communicates with the callme-service application directly, without using Kubernetes Ingress.

Architecture – Submariner on Kubernetes

Let me say a few words about Submariner. Since it is a relatively new tool, you may not have come across it yet. It runs a single, central broker and then joins several members to this broker. A member is simply a Kubernetes cluster that is part of the Submariner cluster, and all members may communicate directly with each other. The Broker component facilitates the exchange of metadata between the Submariner gateways deployed in the participating Kubernetes clusters.

The architecture of our example system is visible below. We run the Submariner Broker on the c1 cluster, and then run Submariner “agents” on both clusters. Service discovery is based on the Lighthouse project, which provides DNS discovery for Kubernetes clusters connected by Submariner. You may read more details about it here.


Source Code

If you would like to try it out yourself, you can always take a look at my source code. In order to do that, you need to clone my GitHub repository and then just follow my instructions.

Both applications are configured to be deployed with Skaffold. You just need to download the Skaffold CLI following the instructions available here. Of course, you also need to have Java and Maven available on your machine.

If you are interested in more about using Skaffold to build and deploy Java applications you can read my article Local Java Development on Kubernetes.

Create Kubernetes clusters with Kind

Firstly, let’s create two Kubernetes clusters using Kind. Each cluster consists of a control plane and a worker node. Since we are going to install Calico as the networking plugin, we will disable the default CNI plugin in Kind. Finally, we need to configure the CIDRs for pods and services. The IP pools must be unique across both clusters. Here’s the Kind configuration manifest for the first cluster. It is available in the project repository under the path k8s/kind-cluster-c1.yaml.

kind: Cluster
name: c1
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
networking:
  podSubnet: 10.240.0.0/16
  serviceSubnet: 10.110.0.0/16
  disableDefaultCNI: true

Then, let’s create the first cluster using the configuration manifest visible above.

$ kind create cluster --config k8s/kind-cluster-c1.yaml

We have a similar configuration manifest for the second cluster. The only differences are the cluster name and the CIDRs for Kubernetes pods and services. It is available in the project repository under the path k8s/kind-cluster-c2.yaml.

kind: Cluster
name: c2
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
networking:
  podSubnet: 10.241.0.0/16
  serviceSubnet: 10.111.0.0/16
  disableDefaultCNI: true

After that, let’s create the second cluster using the configuration manifest visible above.

$ kind create cluster --config k8s/kind-cluster-c2.yaml
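Submariner requires the pod and service CIDRs to be unique across the connected clusters. A quick Python check over the four subnets defined in the two manifests above:

```python
import ipaddress

# CIDRs taken from the two Kind manifests above
subnets = {
    "c1-pods": ipaddress.ip_network("10.240.0.0/16"),
    "c1-services": ipaddress.ip_network("10.110.0.0/16"),
    "c2-pods": ipaddress.ip_network("10.241.0.0/16"),
    "c2-services": ipaddress.ip_network("10.111.0.0/16"),
}

# every pair of subnets must be disjoint, otherwise cross-cluster
# routes set up by Submariner would be ambiguous
names = list(subnets)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        assert not subnets[a].overlaps(subnets[b]), f"{a} overlaps {b}"

print("all CIDRs disjoint")
```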

Once the clusters have been successfully created we can verify them using the following command.

$ kind get clusters
c1
c2

Kind automatically creates two Kubernetes contexts for those clusters. We can switch between the kind-c1 and kind-c2 context.

Install Calico on Kubernetes

We will use the Tigera operator to install Calico as the default CNI on Kubernetes. Other installation methods are possible, but the operator-based one is the simplest. Firstly, let’s switch to the kind-c1 context.

$ kubectx kind-c1

I’m using the kubectx tool for switching between Kubernetes contexts and namespaces. You can download the latest version of this tool from the following site: https://github.com/ahmetb/kubectx/releases.

In the first step, we install the Tigera operator on the cluster.

$ kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml

After that, we need to create the Installation CRD object responsible for installing Calico on Kubernetes. We can configure all the required parameters inside a single file. It is important to set the same CIDR as the pod CIDR inside the Kind configuration file. Here’s the manifest for the first cluster.

apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
      - blockSize: 26
        cidr: 10.240.0.0/16
        encapsulation: VXLANCrossSubnet
        natOutgoing: Enabled
        nodeSelector: all()

The manifest is available in the repository as the k8s/tigera-c1.yaml file. Let’s apply it.

$ kubectl apply -f k8s/tigera-c1.yaml

Then, we may switch to the kind-c2 context and create a similar manifest with the Calico installation.

apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
      - blockSize: 26
        cidr: 10.241.0.0/16
        encapsulation: VXLANCrossSubnet
        natOutgoing: Enabled
        nodeSelector: all()

Finally, let’s apply it to the second cluster using the k8s/tigera-c2.yaml file.

$ kubectl apply -f k8s/tigera-c2.yaml
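A side note on the blockSize: 26 setting in the manifests above: it defines the size of the per-node IPAM blocks Calico allocates from the pod CIDR. The arithmetic below is purely illustrative, showing how many blocks a /16 pod CIDR yields:

```python
import ipaddress

pod_cidr = ipaddress.ip_network("10.240.0.0/16")  # pod CIDR of cluster c1

# carve the pod CIDR into /26 IPAM blocks, as configured via blockSize: 26
blocks = list(pod_cidr.subnets(new_prefix=26))

print(len(blocks))              # prints: 1024
print(blocks[0].num_addresses)  # prints: 64 (addresses per block)
```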

We may verify the installation of Calico by listing running pods in the calico-system namespace. Here’s the result on my local Kubernetes cluster.

$ kubectl get pod -n calico-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-696ffc7f48-86rfz   1/1     Running   0          75s
calico-node-nhkn5                          1/1     Running   0          76s
calico-node-qkkqk                          1/1     Running   0          76s
calico-typha-6d6c85c77b-ffmt5              1/1     Running   0          70s
calico-typha-6d6c85c77b-w8x6t              1/1     Running   0          76s

By default, Kind uses a simple networking implementation, Kindnetd. However, this CNI plugin has not been tested with Submariner. Therefore, we need to replace it with one of the supported ones, like Calico.

Install Submariner on Kubernetes

In order to install Submariner on our Kind cluster, we first need to download CLI.

$ curl -Ls https://get.submariner.io | bash
$ export PATH=$PATH:~/.local/bin

The Submariner subctl CLI requires xz-utils. So, first, you need to install this package by executing the following command: apt update -y && apt install xz-utils -y.

After that, we can use the subctl binary to deploy the Submariner Broker. If you use Docker on Mac or Windows (like me), you need to perform these operations inside the container with the Kind control plane. So first, let’s get inside the control plane container. Kind names that container by appending the -control-plane suffix to the cluster name.

$ docker exec -it c1-control-plane /bin/bash

That container already has kubectl installed. The only thing we need to do is to add the context of the second Kubernetes cluster, kind-c2. I just copied it from my local Kube config file, which contains the right data added by Kind during cluster creation. You can check the location of the Kubernetes config inside the c1-control-plane container by displaying the KUBECONFIG environment variable.

$ echo $KUBECONFIG
/etc/kubernetes/admin.conf

If you are copying data from your local Kube config file, you just need to change the address of the Kubernetes cluster. Instead of the external IP and port, you have to set the internal IP of the c2-control-plane Docker container, which is used for communication between both clusters inside the kind network.

Now, we can deploy the Submariner Broker on the c1 cluster. After running the following command Submariner installs an operator on Kubernetes and generates the broker-info.subm file. That file is then used to join members to the Submariner cluster.

$ subctl deploy-broker

Enable direct communication between Kubernetes clusters with Submariner

Let’s clarify some things before proceeding. We have already created a Submariner Broker on the c1 cluster. To simplify the process, I’m using the same Kubernetes cluster as both the Submariner Broker and a member. We also use the subctl CLI to add members to the Submariner cluster. One of the essential components to install is the Submariner Gateway Engine. It is deployed as a DaemonSet configured to run on nodes labelled with submariner.io/gateway=true. So, in the first step, we will set this label on the worker nodes of both the c1 and c2 clusters.

$ kubectl label node c1-worker submariner.io/gateway=true
$ kubectl label node c2-worker submariner.io/gateway=true --context kind-c2

Just to remind you: we are still inside the c1-control-plane container. Now we can add the first member to our Submariner cluster. To do that, we again use the subctl CLI as shown below. With the join command, we need to pass the broker-info.subm file generated earlier by the deploy-broker command. We will also disable NAT traversal for IPsec.

$ subctl join broker-info.subm --natt=false --clusterid kind-c1

After that, we may add a second member to our cluster.

$ subctl join broker-info.subm --natt=false --clusterid kind-c2 --kubecontext kind-c2

The Submariner operator creates several deployments in the submariner-operator namespace. Let’s display a list of pods running there.

$ kubectl get pod -n submariner-operator
NAME                                             READY   STATUS    RESTARTS   AGE
submariner-gateway-kd6zs                         1/1     Running   0          5m50s
submariner-lighthouse-agent-b798b8987-f6zvl      1/1     Running   0          5m48s
submariner-lighthouse-coredns-845c9cdf6f-8qhrj   1/1     Running   0          5m46s
submariner-lighthouse-coredns-845c9cdf6f-xmd6q   1/1     Running   0          5m46s
submariner-operator-586cb56578-qgwh6             1/1     Running   1          6m17s
submariner-routeagent-fcptn                      1/1     Running   0          5m49s
submariner-routeagent-pn54f                      1/1     Running   0          5m49s

We can also use some subctl commands. Let’s display a list of Submariner gateways.

$ subctl show gateways 

Showing information for cluster "kind-c2":
NODE                            HA STATUS       SUMMARY                         
c2-worker                       active          All connections (1) are established

Showing information for cluster "c1":
NODE                            HA STATUS       SUMMARY                         
c1-worker                       active          All connections (1) are established

Or a list of Submariner connections.

$ subctl show connections

Showing information for cluster "c1":
GATEWAY    CLUSTER  REMOTE IP   NAT  CABLE DRIVER  SUBNETS                       STATUS     RTT avg.    
c2-worker  kind-c2  172.20.0.5  no   libreswan     10.111.0.0/16, 10.241.0.0/16  connected  384.957µs   

Showing information for cluster "kind-c2":
GATEWAY    CLUSTER  REMOTE IP   NAT  CABLE DRIVER  SUBNETS                       STATUS     RTT avg.    
c1-worker  kind-c1  172.20.0.2  no   libreswan     10.110.0.0/16, 10.240.0.0/16  connected  592.728µs

Deploy applications on Kubernetes and expose them with Submariner

Since we have already installed Submariner on both clusters, we can deploy our sample applications. Let’s begin with the caller-service. Make sure you are in the kind-c1 context, then go to the caller-service directory and deploy the application using Skaffold as shown below.

$ cd caller-service
$ skaffold dev --port-forward

Then, you should switch to the kind-c2 context. Now, deploy the callme-service application.

$ cd callme-service
$ skaffold run

In the next step, we need to export our service to Submariner. To do that, execute the following subctl command.

$ subctl export service --namespace default callme-service

Submariner exposes services under the clusterset.local domain, so our service is now available at callme-service.default.svc.clusterset.local. Here’s the part of the caller-service code responsible for communicating with the callme-service through the Submariner DNS.

@GetMapping("/ping")
public String ping() {
   LOGGER.info("Ping: name={}, version={}", buildProperties.getName(), version);
   String response = restTemplate
         .getForObject("http://callme-service.default.svc.clusterset.local:8080/callme/ping", String.class);
   LOGGER.info("Calling: response={}", response);
   return "I'm caller-service " + version + ". Calling... " + response;
}
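The multi-cluster DNS name above follows a fixed pattern: <service>.<namespace>.svc.clusterset.local. Here is a tiny, hypothetical Python helper that composes such a URL (just string formatting, shown only to make the naming scheme explicit):

```python
def clusterset_url(service: str, namespace: str = "default",
                   port: int = 8080, path: str = "/") -> str:
    """Compose the cross-cluster DNS URL served by Submariner's Lighthouse:
    <service>.<namespace>.svc.clusterset.local"""
    return f"http://{service}.{namespace}.svc.clusterset.local:{port}{path}"

# the same URL the caller-service code above uses
print(clusterset_url("callme-service", path="/callme/ping"))
```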

In order to analyze what happened, let’s display some CRD objects created by Submariner. Firstly, it created a ServiceExport object on the cluster with the exported service. In our case, it is the kind-c2 cluster.

$ kubectl get ServiceExport        
NAME             AGE
callme-service   15s

Once the service has been exported, it is automatically imported on the other cluster. We need to switch to the kind-c1 cluster and then display the ServiceImport object.

$ kubectl get ServiceImport -n submariner-operator
NAME                             TYPE           IP                  AGE
callme-service-default-kind-c2   ClusterSetIP   ["10.111.176.50"]   4m55s

The ServiceImport object stores the IP address of the callme-service Kubernetes Service.

$ kubectl get svc --context kind-c2
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
callme-service   ClusterIP   10.111.176.50   <none>        8080/TCP   31m
kubernetes       ClusterIP   10.111.0.1      <none>        443/TCP    74m

Finally, we can test the connection between the clusters by calling the following endpoint. The caller-service calls the GET /callme/ping endpoint exposed by the callme-service. Thanks to the port-forward option of the Skaffold command, we can access the service locally on port 8080.

$ curl http://localhost:8080/caller/ping

The post Kubernetes Multicluster with Kind and Submariner appeared first on Piotr's TechBlog.
