Piotr's TechBlog: Java, Spring, Kotlin, microservices, Kubernetes, containers (https://piotrminkowski.com/tag/submariner/)

OpenShift Multicluster with Advanced Cluster Management for Kubernetes and Submariner
Mon, 15 Jan 2024
https://piotrminkowski.com/2024/01/15/openshift-multicluster-with-advanced-cluster-management-for-kubernetes-and-submariner/
This article will teach you how to connect multiple OpenShift clusters with Submariner and Advanced Cluster Management for Kubernetes. Submariner allows us to configure direct networking between pods and services in different Kubernetes clusters, either on-premises or in the cloud. It operates at Layer 3, establishes a secure tunnel between two clusters, and provides service discovery. I have already described how to install and manage it on Kubernetes, mostly with the subctl CLI, in the following article.

Today we will focus on the integration between Submariner and OpenShift through Advanced Cluster Management for Kubernetes (ACM). ACM is a tool dedicated to OpenShift that lets you control clusters and applications from a single console, with built-in security policies. You can find several articles about it on my blog. For example, the following one describes how to use ACM together with Argo CD in the GitOps approach.

Source Code

This time we won't work much with source code. However, if you would like to try it yourself, you can always take a look at my source code. In order to do that, you need to clone my GitHub repository. After that, you should follow my further instructions.

Architecture

Our architecture consists of three OpenShift clusters: a single hub cluster and two managed clusters. The hub cluster is responsible for creating the managed clusters and establishing a secure connection between them using Submariner. So, in the initial state, there is just a hub cluster with the Advanced Cluster Management for Kubernetes (ACM) operator installed on it. With ACM we will create two new OpenShift clusters on the target infrastructure (AWS) and install Submariner on them. Finally, we are going to deploy two sample Spring Boot apps. The callme-service app exposes a single GET /callme/ping endpoint and runs on ocp2. We will expose it through Submariner to the ocp1 cluster. On the ocp1 cluster, there is the second app, caller-service, which invokes the endpoint exposed by the callme-service app. Here's the diagram of our architecture.

openshift-submariner-arch

Install Advanced Cluster Management on OpenShift

In the first step, we must install Advanced Cluster Management for Kubernetes (ACM) on OpenShift using its operator. The default installation namespace is open-cluster-management. We won't change it.
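Installing the operator from the console boils down to creating an OperatorGroup and a Subscription. Here is a rough sketch; the channel and catalog source are assumptions, so check the versions available in your OperatorHub:

```yaml
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: acm-operator-group
  namespace: open-cluster-management
spec:
  targetNamespaces:
    - open-cluster-management
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: advanced-cluster-management
  namespace: open-cluster-management
spec:
  # the channel is an assumption - pick the release offered by your OperatorHub
  channel: release-2.9
  name: advanced-cluster-management
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```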

Once we install the operator, we need to initialize ACM by creating the MultiClusterHub object. Once again, we will use the open-cluster-management namespace for that. Here's the object declaration. We don't need to specify any more advanced settings.

apiVersion: operator.open-cluster-management.io/v1
kind: MultiClusterHub
metadata:
  name: multiclusterhub
  namespace: open-cluster-management
spec: {}

We can do the same thing graphically in the OpenShift Dashboard: just click the “Create MultiClusterHub” button and then confirm the action on the next page. The installation will probably take some time to complete, since there are several pods to start.

openshift-submariner-acm

Once the installation is complete, you will see a new menu item at the top of the dashboard that allows you to switch to the “All Clusters” view. Let's do it. After that, we can proceed to the next step.

Create OpenShift Clusters with ACM

Advanced Cluster Management for Kubernetes allows us to import existing clusters or create new ones on the target infrastructure. In this exercise, you will see how to leverage a cloud provider account for that. Let's just click the “Connect your cloud provider” tile on the welcome screen.

Provide Cloud Credentials

I'm using my already existing AWS account. ACM will ask us to provide the appropriate credentials for it. In the first form, we should provide the name and namespace of our secret with credentials, as well as a default base DNS domain.

openshift-submariner-cluster-create

Then, the ACM wizard will take us through the next steps. We have to provide the AWS access key ID and secret, the OpenShift pull secret, and also the SSH private/public keys. Of course, we can create the required Kubernetes Secret without the wizard, just by applying a similar YAML manifest:

apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: aws
  namespace: open-cluster-management
  labels:
    cluster.open-cluster-management.io/type: aws
    cluster.open-cluster-management.io/credentials: ""
stringData:
  aws_access_key_id: AKIAXBLSZLXZJWT3KFPM
  aws_secret_access_key: "********************"
  baseDomain: sandbox2746.opentlc.com
  pullSecret: "********************"
  ssh-privatekey: "********************"
  ssh-publickey: "********************"
  httpProxy: ""
  httpsProxy: ""
  noProxy: ""
  additionalTrustBundle: ""

Provision the Cluster

After that, we can prepare the ACM cluster set. The cluster set feature allows us to group OpenShift clusters and is a required prerequisite for the Submariner installation. Here's the ManagedClusterSet object. The name is arbitrary; we can set it, for example, to submariner.

apiVersion: cluster.open-cluster-management.io/v1beta2
kind: ManagedClusterSet
metadata:
  name: submariner
spec: {}

Finally, we can create two OpenShift clusters on AWS from the ACM dashboard. Go to the Infrastructure -> Clusters -> Cluster list and click the “Create cluster” button. Then, let’s choose the “Amazon Web Services” tile with already created credentials.

In the “Cluster Details” form, we should set the name (ocp1, and then ocp2 for the second cluster) and the version of the OpenShift cluster (the “Release image” field). We should also assign it to the submariner cluster set.
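Assigning a cluster to the set in the wizard comes down to a label on the ManagedCluster object. A sketch for ocp1, with field values assumed from this setup:

```yaml
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  name: ocp1
  labels:
    # this label places the cluster in the "submariner" ManagedClusterSet
    cluster.open-cluster-management.io/clusterset: submariner
spec:
  hubAcceptsClient: true
```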

Let's take a look at the “Networking” form. We intentionally won't change anything here: we will keep the same IP address ranges for both the ocp1 and ocp2 clusters. By default, Submariner requires non-overlapping Pod and Service CIDRs between the interconnected clusters, which prevents routing conflicts. We are going to break that rule, resulting in conflicting internal IP addresses between the ocp1 and ocp2 clusters. We will see how Submariner helps to resolve such an issue.

It will take around 30-40 minutes to create both clusters. ACM will connect directly to our AWS account and create all the required resources there. As a result, our environment is ready. Let's take a look at how it appears from the ACM dashboard perspective:

openshift-submariner-clusters

There is a single management (hub) cluster and two managed clusters. Both managed clusters are assigned to the submariner cluster set. If you have the same result as me, you can proceed to the next step.

Enable Submariner for OpenShift clusters with ACM

Install in the Target Managed Cluster Set

Submariner is available on OpenShift in the form of an add-on to ACM. As I mentioned before, it requires ACM ManagedClusterSet objects for grouping the clusters that should be connected. In order to enable Submariner for a specific cluster set, we need to open its details and switch to the “Submariner add-ons” tab. Then, we click the “Install Submariner add-ons” button. In the installation form, we have to choose the target clusters and enable the “Globalnet” feature, which resolves the issue of overlapping Pod and Service CIDRs. The default “Globalnet” CIDR is 242.0.0.0/8. If that suits us, we can leave the text field empty and proceed to the next step.

openshift-submariner-install

In the next form, we configure the Submariner installation in each OpenShift cluster. We don't have to change any values there. ACM will create an additional node in each OpenShift cluster using the c5d.large VM type and use that node for installing Multus CNI. Multus is a CNI plugin for Kubernetes that enables attaching multiple network interfaces to pods. It takes part in enabling the Submariner “Globalnet” feature, which gives each cluster a subnet from a virtual global private network, configured as a new cluster parameter, GlobalCIDR. We will run a single instance of the Submariner gateway and keep the default libreswan cable driver.

Of course, we can also provide that configuration as YAML manifests. With that approach, we need to create the ManagedClusterAddOn and SubmarinerConfig objects for both the ocp1 and ocp2 clusters through the ACM engine. The Submariner Broker object has to be created on the hub cluster.

apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
  name: submariner
  namespace: ocp2
spec:
  installNamespace: submariner-operator
---
apiVersion: submarineraddon.open-cluster-management.io/v1alpha1
kind: SubmarinerConfig
metadata:
  name: submariner
  namespace: ocp2
spec:
  gatewayConfig:
    gateways: 1
    aws:
      instanceType: c5d.large
  IPSecNATTPort: 4500
  airGappedDeployment: false
  NATTEnable: true
  cableDriver: libreswan
  globalCIDR: ""
  credentialsSecret:
    name: ocp2-aws-creds
---
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
  name: submariner
  namespace: ocp1
spec:
  installNamespace: submariner-operator
---
apiVersion: submarineraddon.open-cluster-management.io/v1alpha1
kind: SubmarinerConfig
metadata:
  name: submariner
  namespace: ocp1
spec:
  gatewayConfig:
    gateways: 1
    aws:
      instanceType: c5d.large
  IPSecNATTPort: 4500
  airGappedDeployment: false
  NATTEnable: true
  cableDriver: libreswan
  globalCIDR: ""
  credentialsSecret:
    name: ocp1-aws-creds
---
apiVersion: submariner.io/v1alpha1
kind: Broker
metadata:
  name: submariner-broker
  namespace: submariner-broker
  labels:
    cluster.open-cluster-management.io/backup: submariner
spec:
  globalnetEnabled: true
  globalnetCIDRRange: 242.0.0.0/8

Verify the Status of Submariner Network

After installing the Submariner Add-on in the target cluster set, you should see the same statuses for both ocp1 and ocp2 clusters.

openshift-submariner-status

Assuming that you are logged in to all the clusters with the oc CLI, we can check the detailed status of the Submariner network with the subctl CLI. In order to do that, we should execute the following command:

$ subctl show all

It examines all the clusters one after the other and prints all key Submariner components installed there. Let’s begin with the command output for the hub cluster. As you see, it runs the Submariner Broker component in the submariner-broker namespace:

Here’s the output for the ocp1 managed cluster. The global CIDR for that cluster is 242.1.0.0/16. This IP range will be used for exposing services to other clusters inside the same Submariner network.

On the other hand, here's the output for the ocp2 managed cluster. The global CIDR for that cluster is 242.0.0.0/16. The connection between the ocp1 and ocp2 clusters is established, so we can proceed to the last step of our exercise. Let's run the sample apps on our OpenShift clusters!

Export App to the Remote Cluster

Since we have already installed Submariner on both OpenShift clusters, we can deploy our sample applications. Let's begin with caller-service, which we will run in the demo-apps namespace. Make sure you are in the ocp1 Kube context. Here's the YAML manifest with the Deployment and Service definitions for our app:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: caller-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: caller-service
  template:
    metadata:
      name: caller-service
      labels:
        app: caller-service
    spec:
      containers:
      - name: caller-service
        image: piomin/caller-service
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
        env:
          - name: VERSION
            value: "v1"
---
apiVersion: v1
kind: Service
metadata:
  name: caller-service
  labels:
    app: caller-service
spec:
  type: ClusterIP
  ports:
    - port: 8080
      name: http
  selector:
    app: caller-service

Then go to the caller-service directory and deploy the application using Skaffold as shown below. We can also expose the service outside the cluster using the OpenShift Route object:

$ cd caller-service
$ oc project demo-apps
$ skaffold run
$ oc expose svc/caller-service

Let’s switch to the callme-service app. Make sure you are in the ocp2 Kube context. Here’s the YAML manifest with the Deployment and Service definitions for our second app:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: callme-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: callme-service
  template:
    metadata:
      labels:
        app: callme-service
    spec:
      containers:
        - name: callme-service
          image: piomin/callme-service
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          env:
            - name: VERSION
              value: "v1"
---
apiVersion: v1
kind: Service
metadata:
  name: callme-service
  labels:
    app: callme-service
spec:
  type: ClusterIP
  ports:
  - port: 8080
    name: http
  selector:
    app: callme-service

Once again, we can deploy the app on OpenShift using Skaffold.

$ cd callme-service
$ oc project demo-apps
$ skaffold run

This time, instead of exposing the service outside of the cluster, we will export it to the Submariner network. Thanks to that, the caller-service app will be able to call it directly through the IPsec tunnel established between the clusters. We can do it using the subctl CLI:

$ subctl export service callme-service

That command creates the ServiceExport CRD object provided by the Submariner operator. We can apply the following YAML definition as well:

apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: callme-service
  namespace: demo-apps

We can verify that everything turned out okay by checking the ServiceExport object status:
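A healthy export should report status conditions roughly like the ones below; the exact condition types and messages may differ between Submariner versions:

```yaml
status:
  conditions:
    - type: Valid
      status: "True"
      message: Export is valid
    - type: Synced
      status: "True"
      message: Service was successfully synced to the broker
```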

Submariner creates an additional Kubernetes Service with an IP address from the “Globalnet” CIDR pool to avoid overlapping service IPs.
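With Globalnet enabled, that allocation is tracked by a GlobalIngressIP object in the service's namespace. Here's a sketch of what it may look like; the allocated address is only an example from the 242.0.0.0/8 pool:

```yaml
apiVersion: submariner.io/v1
kind: GlobalIngressIP
metadata:
  name: callme-service
  namespace: demo-apps
spec:
  target: ClusterIPService
  serviceRef:
    name: callme-service
status:
  # example address from the Globalnet CIDR assigned to ocp2
  allocatedIP: 242.0.255.251
```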

Then, let's switch to the ocp1 cluster. After exporting the Service from the ocp2 cluster, Submariner automatically creates the ServiceImport object on the connected clusters.

apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceImport
metadata:
  name: callme-service
  namespace: demo-apps
spec:
  ports:
    - name: http
      port: 8080
      protocol: TCP
  type: ClusterSetIP
status:
  clusters:
    - cluster: ocp2

Submariner exposes services on the clusterset.local domain. So, our service is now available under the URL callme-service.demo-apps.svc.clusterset.local. We can verify it by executing a curl command inside the caller-service container. As you can see, it uses the external IP address allocated by Submariner within the “Globalnet” subnet.

Here's the implementation of the @RestController responsible for handling requests coming to the caller-service app. As you can see, it uses the Spring RestTemplate client to call the remote service via the callme-service.demo-apps.svc.clusterset.local URL provided by Submariner.

@RestController
@RequestMapping("/caller")
public class CallerController {

   private static final Logger LOGGER = LoggerFactory
      .getLogger(CallerController.class);

   @Autowired
   Optional<BuildProperties> buildProperties;
   @Autowired
   RestTemplate restTemplate;
   @Value("${VERSION}")
   private String version;

   @GetMapping("/ping")
   public String ping() {
      LOGGER.info("Ping: name={}, version={}", buildProperties.map(BuildProperties::getName).orElse("unknown"), version);
      String response = restTemplate
         .getForObject("http://callme-service.demo-apps.svc.clusterset.local:8080/callme/ping", String.class);
      LOGGER.info("Calling: response={}", response);
      return "I'm caller-service " + version + ". Calling... " + response;
   }
}

Let's make a final test using the OpenShift caller-service Route and the GET /caller/ping endpoint. As you can see, it successfully calls the callme-service app through the Submariner tunnel.

openshift-submariner-test

Final Thoughts

In this article, we analyzed a scenario where we interconnect two OpenShift clusters with overlapping CIDRs. I also showed you how to leverage the ACM dashboard to simplify the installation and configuration of Submariner on the managed clusters. It is worth mentioning that there are other ways to interconnect multiple OpenShift clusters. For example, we can use Red Hat Service Interconnect, based on the open-source project Skupper. To read more about it, you can refer to the following article on my blog.

Kubernetes Multicluster with Kind and Submariner
Thu, 08 Jul 2021
https://piotrminkowski.com/2021/07/08/kubernetes-multicluster-with-kind-and-submariner/
In this article, you will learn how to create multiple Kubernetes clusters locally and establish direct communication between them with Kind and Submariner. Kind (Kubernetes in Docker) is a tool for running local Kubernetes clusters using Docker containers. Each Kubernetes node is a separate Docker container, and all these containers run in the same Docker network, kind.

Our goal in this article is to establish direct communication between pods running in two different Kubernetes clusters created with Kind. Of course, this is not possible by default; we should treat such clusters as two Kubernetes clusters running in different networks. This is where Submariner comes in. It is a tool originally created by Rancher that enables direct networking between pods and services in different Kubernetes clusters, either on-premises or in the cloud.

Let's go over our architecture briefly. We have two applications, caller-service and callme-service, and two Kubernetes clusters, c1 and c2, created using Kind. The caller-service application runs on the c1 cluster, while the callme-service application runs on the c2 cluster. The caller-service application communicates with the callme-service application directly, without using Kubernetes Ingress.

kubernetes-submariner-arch2

Architecture – Submariner on Kubernetes

Let me say a few words about Submariner. Since it is a relatively new tool, you may not have come across it yet. It runs a single, central broker, and then several members join this broker. Basically, a member is a Kubernetes cluster that is part of the Submariner deployment. All the members may communicate directly with each other. The Broker component facilitates the exchange of metadata between the Submariner gateways deployed in the participating Kubernetes clusters.

The architecture of our example system is shown below. We run the Submariner Broker on the c1 cluster. Then we run Submariner “agents” on both clusters. Service discovery is based on the Lighthouse project, which provides DNS discovery for Kubernetes clusters connected by Submariner. You may read more details about it here.

kubernetes-submariner-arch1

Source Code

If you would like to try it yourself, you can always take a look at my source code. In order to do that, you need to clone my GitHub repository. Then you should just follow my instructions.

Both applications are configured to be deployed with Skaffold, so you just need to download the Skaffold CLI following the instructions available here. Of course, you also need Java and Maven available on your machine.

If you are interested in more about using Skaffold to build and deploy Java applications you can read my article Local Java Development on Kubernetes.

Create Kubernetes clusters with Kind

Firstly, let's create two Kubernetes clusters using Kind. Each cluster consists of a control plane and a worker node. Since we are going to install Calico as the networking plugin on Kubernetes, we will disable the default CNI plugin in Kind. Finally, we need to configure the CIDRs for pods and services. The IP pools should be unique across both clusters. Here's the Kind configuration manifest for the first cluster. It is available in the project repository under the path k8s/kind-cluster-c1.yaml.

kind: Cluster
name: c1
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
networking:
  podSubnet: 10.240.0.0/16
  serviceSubnet: 10.110.0.0/16
  disableDefaultCNI: true

Then, let’s create the first cluster using the configuration manifest visible above.

$ kind create cluster --config k8s/kind-cluster-c1.yaml

We have a similar configuration manifest for the second cluster. The only differences are the name of the cluster and the CIDRs for Kubernetes pods and services. It is available in the project repository under the path k8s/kind-cluster-c2.yaml.

kind: Cluster
name: c2
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
networking:
  podSubnet: 10.241.0.0/16
  serviceSubnet: 10.111.0.0/16
  disableDefaultCNI: true

After that, let’s create the second cluster using the configuration manifest visible above.

$ kind create cluster --config k8s/kind-cluster-c2.yaml

Once the clusters have been successfully created we can verify them using the following command.

$ kind get clusters
c1
c2

Kind automatically creates two Kubernetes contexts for those clusters, so we can switch between the kind-c1 and kind-c2 contexts.

Install Calico on Kubernetes

We will use the Tigera operator to install Calico as the default CNI on Kubernetes. It is possible to use other installation methods, but the operator-based one is the simplest. Firstly, let's switch to the kind-c1 context.

$ kubectx kind-c1

I’m using the kubectx tool for switching between Kubernetes contexts and namespaces. You can download the latest version of this tool from the following site: https://github.com/ahmetb/kubectx/releases.

In the first step, we install the Tigera operator on the cluster.

$ kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml

After that, we need to create the Installation CRD object responsible for installing Calico on Kubernetes. We can configure all the required parameters inside a single file. It is important to set the same CIDR as the pod CIDR inside the Kind configuration file. Here's the manifest for the first cluster.

apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
      - blockSize: 26
        cidr: 10.240.0.0/16
        encapsulation: VXLANCrossSubnet
        natOutgoing: Enabled
        nodeSelector: all()

The manifest is available in the repository as the k8s/tigera-c1.yaml file. Let’s apply it.

$ kubectl apply -f k8s/tigera-c1.yaml

Then, we may switch to the kind-c2 context and create a similar manifest with the Calico installation.

apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
      - blockSize: 26
        cidr: 10.241.0.0/16
        encapsulation: VXLANCrossSubnet
        natOutgoing: Enabled
        nodeSelector: all()

Finally, let’s apply it to the second cluster using the k8s/tigera-c2.yaml file.

$ kubectl apply -f k8s/tigera-c2.yaml

We may verify the installation of Calico by listing running pods in the calico-system namespace. Here’s the result on my local Kubernetes cluster.

$ kubectl get pod -n calico-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-696ffc7f48-86rfz   1/1     Running   0          75s
calico-node-nhkn5                          1/1     Running   0          76s
calico-node-qkkqk                          1/1     Running   0          76s
calico-typha-6d6c85c77b-ffmt5              1/1     Running   0          70s
calico-typha-6d6c85c77b-w8x6t              1/1     Running   0          76s

By default, Kind uses a simple networking implementation, Kindnetd. However, this CNI plugin is not tested with Submariner. Therefore, we need to replace it with one of the supported plugins, like Calico.

Install Submariner on Kubernetes

In order to install Submariner on our Kind clusters, we first need to download the CLI.

$ curl -Ls https://get.submariner.io | bash
$ export PATH=$PATH:~/.local/bin

The Submariner subctl CLI requires xz-utils, so you may first need to install this package by executing the following command: apt update -y && apt install xz-utils -y.

After that, we can use the subctl binary to deploy the Submariner Broker. If you use Docker on Mac or Windows (like me), you need to perform these operations inside the container with the Kind control plane. So first, let's get inside the control plane container. Kind automatically sets the name of that container as a concatenation of the cluster name and the -control-plane suffix.

$ docker exec -it c1-control-plane /bin/bash

That container already has kubectl installed. The only thing we need to do is add the context of the second Kubernetes cluster, kind-c2. I just copied it from my local kubeconfig file, which contains the right data; it was added by Kind during cluster creation. You can check the location of the Kubernetes config inside the c1-control-plane container by displaying the KUBECONFIG environment variable.

$ echo $KUBECONFIG
/etc/kubernetes/admin.conf

If you are copying data from your local kubeconfig file, you just need to change the address of the Kubernetes cluster: instead of the external IP and port, set the internal IP and port of the Docker container. That internal address is what we will use for communication between both clusters.
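For example, the cluster entry copied into the container's kubeconfig could look roughly like this; the container IP and the certificate data are placeholders, so check yours with docker inspect:

```yaml
clusters:
  - name: kind-c2
    cluster:
      # internal address of the c2-control-plane container in the "kind" Docker network
      server: https://172.20.0.4:6443
      certificate-authority-data: <copy-from-your-local-kubeconfig>
```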

Now, we can deploy the Submariner Broker on the c1 cluster. After running the following command, Submariner installs an operator on Kubernetes and generates the broker-info.subm file. That file is then used to join members to the Submariner deployment.

$ subctl deploy-broker

Enable direct communication between Kubernetes clusters with Submariner

Let's clarify some things before proceeding. We have already created a Submariner Broker on the c1 cluster. To simplify the process, I'm using the same Kubernetes cluster as both the Submariner Broker and a member. We also use the subctl CLI to add members to the Submariner deployment. One of the essential components that has to be installed is the Submariner Gateway Engine. It is deployed as a DaemonSet configured to run on nodes labelled with submariner.io/gateway=true. So, in the first step, we will set this label on the worker nodes of both the c1 and c2 clusters.

$ kubectl label node c1-worker submariner.io/gateway=true
$ kubectl label node c2-worker submariner.io/gateway=true --context kind-c2
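The label matters because the gateway DaemonSet schedules its pods with a nodeSelector along these lines; this is a simplified sketch, not the full manifest the operator deploys:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: submariner-gateway
  namespace: submariner-operator
spec:
  selector:
    matchLabels:
      app: submariner-gateway
  template:
    metadata:
      labels:
        app: submariner-gateway
    spec:
      # only nodes labelled as gateways run the Gateway Engine
      nodeSelector:
        submariner.io/gateway: "true"
```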

Just to remind you, we are still inside the c1-control-plane container. Now we can add the first member to our Submariner deployment. To do that, we use the subctl CLI as shown below. With the join command, we need to pass the broker-info.subm file generated earlier by the deploy-broker command. We will also disable NAT traversal for IPsec.

$ subctl join broker-info.subm --natt=false --clusterid kind-c1

After that, we may add a second member to our cluster.

$ subctl join broker-info.subm --natt=false --clusterid kind-c2 --kubecontext kind-c2

The Submariner operator creates several deployments in the submariner-operator namespace. Let’s display a list of pods running there.

$ kubectl get pod -n submariner-operator
NAME                                             READY   STATUS    RESTARTS   AGE
submariner-gateway-kd6zs                         1/1     Running   0          5m50s
submariner-lighthouse-agent-b798b8987-f6zvl      1/1     Running   0          5m48s
submariner-lighthouse-coredns-845c9cdf6f-8qhrj   1/1     Running   0          5m46s
submariner-lighthouse-coredns-845c9cdf6f-xmd6q   1/1     Running   0          5m46s
submariner-operator-586cb56578-qgwh6             1/1     Running   1          6m17s
submariner-routeagent-fcptn                      1/1     Running   0          5m49s
submariner-routeagent-pn54f                      1/1     Running   0          5m49s

We can also use some subctl commands. Let’s display a list of Submariner gateways.

$ subctl show gateways 

Showing information for cluster "kind-c2":
NODE                            HA STATUS       SUMMARY                         
c2-worker                       active          All connections (1) are established

Showing information for cluster "c1":
NODE                            HA STATUS       SUMMARY                         
c1-worker                       active          All connections (1) are established

Or a list of Submariner connections.

$ subctl show connections

Showing information for cluster "c1":
GATEWAY    CLUSTER  REMOTE IP   NAT  CABLE DRIVER  SUBNETS                       STATUS     RTT avg.    
c2-worker  kind-c2  172.20.0.5  no   libreswan     10.111.0.0/16, 10.241.0.0/16  connected  384.957µs   

Showing information for cluster "kind-c2":
GATEWAY    CLUSTER  REMOTE IP   NAT  CABLE DRIVER  SUBNETS                       STATUS     RTT avg.    
c1-worker  kind-c1  172.20.0.2  no   libreswan     10.110.0.0/16, 10.240.0.0/16  connected  592.728µs

Deploy applications on Kubernetes and expose them with Submariner

Since we have already installed Submariner on both clusters, we can deploy our sample applications. Let's begin with caller-service. Make sure you are in the kind-c1 context. Then go to the caller-service directory and deploy the application using Skaffold as shown below.

$ cd caller-service
$ skaffold dev --port-forward

Then, you should switch to the kind-c2 context. Now, deploy the callme-service application.

$ cd callme-service
$ skaffold run

In the next step, we need to export our service to Submariner. To do that, execute the following subctl command.

$ subctl export service --namespace default callme-service

Submariner exposes services on the clusterset.local domain. So, our service is now available under the URL callme-service.default.svc.clusterset.local. Here's the part of the caller-service code responsible for communication with callme-service through the Submariner DNS.

@GetMapping("/ping")
public String ping() {
   LOGGER.info("Ping: name={}, version={}", buildProperties.getName(), version);
   String response = restTemplate
         .getForObject("http://callme-service.default.svc.clusterset.local:8080/callme/ping", String.class);
   LOGGER.info("Calling: response={}", response);
   return "I'm caller-service " + version + ". Calling... " + response;
}

In order to analyze what happened, let's display some CRD objects created by Submariner. Firstly, it created a ServiceExport on the cluster with the exported service. In our case, it is the kind-c2 cluster.

$ kubectl get ServiceExport        
NAME             AGE
callme-service   15s

Once we export the service, it is automatically imported on the second cluster. We need to switch to the kind-c1 cluster and then display the ServiceImport object.

$ kubectl get ServiceImport -n submariner-operator
NAME                             TYPE           IP                  AGE
callme-service-default-kind-c2   ClusterSetIP   ["10.111.176.50"]   4m55s

The ServiceImport object stores the IP address of the callme-service Kubernetes Service.

$ kubectl get svc --context kind-c2
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
callme-service   ClusterIP   10.111.176.50   <none>        8080/TCP   31m
kubernetes       ClusterIP   10.111.0.1      <none>        443/TCP    74m

Finally, we may test the connection between the clusters by calling the following endpoint. The caller-service calls the GET /callme/ping endpoint exposed by callme-service. Thanks to enabling the port-forward option on the Skaffold command, we may access the service locally on port 8080.

$ curl http://localhost:8080/caller/ping
