kubernetes development Archives - Piotr's TechBlog
https://piotrminkowski.com/tag/kubernetes-development/
Java, Spring, Kotlin, microservices, Kubernetes, containers

Local Application Development on Kubernetes with Gefyra
https://piotrminkowski.com/2023/09/01/local-application-development-on-kubernetes-with-gefyra/
Fri, 01 Sep 2023

In this article, you will learn how to simplify and speed up your local application development on Kubernetes with Gefyra. Gefyra provides several useful features for developers. First of all, it allows you to run containers that interact with internal services on an external Kubernetes cluster. Moreover, we can overlay Kubernetes cluster-internal services with a container running on the local Docker daemon. Thanks to that, a single development cluster can be shared by multiple developers at the same time.

If you are looking for similar articles in the area of Kubernetes app development, you can read my post about Telepresence and Skaffold. Gefyra is an alternative to Telepresence. However, there are some significant differences between those two tools. Gefyra comes with Docker as a required dependency, while with Telepresence, Docker is optional. On the other hand, Telepresence uses a sidecar pattern and injects a proxy container to intercept the traffic, whereas Gefyra just replaces the image with its "carrier" image. You can find more details in the docs. Enough with the theory, let's get to practice.

Prerequisites

In order to start the exercise, we need to have a running Kubernetes cluster. It can be a local instance or a remote cluster managed by a cloud provider. In this exercise, I'm using Kubernetes on Docker Desktop.

$ kubectx -c
docker-desktop

Source Code

If you would like to try it yourself, you can always take a look at my source code. In order to do that, you need to clone my GitHub repository. The code used in this article is available in the dev branch. Then you should just follow my instructions 🙂

Install Gefyra

In the first step, we need to install the gefyra CLI. There are installation instructions for different environments in the docs. Once you install the CLI, you can verify it with the following command:

$ gefyra version
[INFO] Gefyra client version: 1.1.2

After that, we can install Gefyra on our Kubernetes cluster. Here’s the command for installing on Docker Desktop Kubernetes:

$ gefyra up --host=kubernetes.docker.internal

It will install Gefyra using the operator. Let's verify the list of running pods in the gefyra namespace:

$  kubectl get po -n gefyra
NAME                               READY   STATUS    RESTARTS   AGE
gefyra-operator-7ff447866b-7gzkd   1/1     Running   0          1h
gefyra-stowaway-bb96bccfd-xg7ds    1/1     Running   0          1h

If you see the running pods, it means that the tool has been successfully installed. Now, we can use Gefyra in our app development on Kubernetes.

Use Case on Kubernetes for Gefyra

We will use exactly the same set of apps and the same use case as in the article about Telepresence and Skaffold. Firstly, let's analyze that case. There are three microservices: first-service, caller-service, and callme-service. All of them expose a single REST endpoint GET /ping, which returns basic information about each microservice. In order to create the applications, I'm using the Spring Boot framework. Our architecture is visible in the picture below. The first-service calls the endpoint exposed by the caller-service. Then the caller-service calls the endpoint exposed by the callme-service. Of course, we are going to deploy all the microservices on the Kubernetes cluster.

Now, let’s assume we are implementing a new version of the caller-service. We want to easily test with two other apps running on the cluster. Therefore, our goal is to forward the traffic that is sent to the caller-service on the Kubernetes cluster to our local instance running on our Docker. On the other hand, the local instance of the caller-service should call the endpoint exposed by the instance of the callme-service running on the Kubernetes cluster.

kubernetes-gefyra-arch

Build and Deploy Apps with Skaffold and Jib

Before we start development of the new version of the caller-service, we will deploy all three sample apps. To simplify the process, we will use Skaffold and the Jib Maven Plugin. Thanks to that, you can deploy all the apps using a single command. Here's the Skaffold configuration in the repository root directory:

apiVersion: skaffold/v4beta5
kind: Config
metadata:
  name: simple-istio-services
build:
  artifacts:
    - image: piomin/first-service
      jib:
        project: first-service
        args:
          - -Pjib
          - -DskipTests
    - image: piomin/caller-service
      jib:
        project: caller-service
        args:
          - -Pjib
          - -DskipTests
    - image: piomin/callme-service
      jib:
        project: callme-service
        args:
          - -Pjib
          - -DskipTests
  tagPolicy:
    gitCommit: {}
manifests:
  rawYaml:
    - '*/k8s/deployment.yaml'
deploy:
  kubectl: {}

For more details about the deployment process, you may refer once again to my previous article. We will deploy apps in the demo-1 namespace. Here’s the skaffold command used for that:

$ skaffold run --tail -n demo-1

Once you run the command you will deploy all apps and see their logs in the console. These are very simple Spring Boot apps, which just expose a single REST endpoint and print a log message after receiving the request. Here’s the @RestController of callme-service:

@RestController
@RequestMapping("/callme")
public class CallmeController {

   private static final Logger LOGGER = 
       LoggerFactory.getLogger(CallmeController.class);

   @Autowired
   BuildProperties buildProperties;
   @Value("${VERSION}")
   private String version;

   @GetMapping("/ping")
   public String ping() {
      LOGGER.info("Ping: name={}, version={}", 
         buildProperties.getName(), version);
      return "I'm callme-service " + version;
   }
}

And here’s the controller of caller-service. We will modify it during our development. It calls the endpoint exposed by the callme-service using its internal Kubernetes address http://callme-service:8080.

@RestController
@RequestMapping("/caller")
public class CallerController {

   private static final Logger LOGGER = 
      LoggerFactory.getLogger(CallerController.class);

   @Autowired
   BuildProperties buildProperties;
   @Autowired
   RestTemplate restTemplate;
   @Value("${VERSION}")
   private String version;

   @GetMapping("/ping")
   public String ping() {
      LOGGER.info("Ping: name={}, version={}", 
         buildProperties.getName(), version);
      String response = restTemplate
         .getForObject("http://callme-service:8080/callme/ping", String.class);
      LOGGER.info("Calling: response={}", response);
      return "I'm caller-service " + version + ". Calling... " + response;
   }
}
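The http://callme-service:8080 address resolves because a Kubernetes Service named callme-service fronts the app's pods. The real manifest lives in the repository's k8s directories; a minimal sketch (the pod label here is an assumption) could look like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: callme-service    # the Service name becomes the cluster-internal DNS name
spec:
  selector:
    app: callme-service   # assumed pod label — check callme-service/k8s/deployment.yaml
  ports:
    - port: 8080          # the port used in http://callme-service:8080
      targetPort: 8080
```

Any pod in the same namespace can then reach the app simply by the Service name, which is exactly what the caller-service's RestTemplate call relies on.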

Here’s a list of deployed apps in the demo-1 namespace:

$ kubectl get deploy -n demo-1              
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
caller-service   1/1     1            1           68m
callme-service   1/1     1            1           68m
first-service    1/1     1            1           68m

Development on Kubernetes with Gefyra

Connect to services running on Kubernetes

Now, I will change the code in the CallerController class. Here’s the latest development version:

kubernetes-gefyra-dev-code

Let’s build the app on the local Docker daemon. We will leverage the Jib Maven plugin once again. We need to go to the caller-service directory and build the image using the jib goal.

$ cd caller-service
$ mvn clean package -DskipTests -Pjib jib:dockerBuild

Here’s the result. The image is available on the local Docker daemon as caller-service:1.1.0.

After that, we can run the container with the app locally using the gefyra command. We use several parameters in the command visible below. Firstly, we need to set the Docker image name using the -i parameter. We simulate running the app in the demo-1 Kubernetes namespace with the -n option. Then, we set a new value (v2) for the VERSION environment variable used by the app and expose the app's container port outside as 8090.

$ gefyra run --rm -i caller-service:1.1.0 \
    -n demo-1 \
    -N caller-service \
    --env VERSION=v2 \
    --expose 8090:8080

Gefyra starts our dev container on the local Docker daemon:

$ docker ps -l
CONTAINER ID   IMAGE                  COMMAND                  CREATED              STATUS              PORTS                    NAMES
7fec52bed474   caller-service:1.1.0   "java -cp @/app/jib-…"   About a minute ago   Up About a minute   0.0.0.0:8090->8080/tcp   caller-service

Now, let’s try to call the endpoint exposed under the local port 8090:

$ curl http://localhost:8090/caller/ping
I'm a local caller-service v2. Calling on k8s... I'm callme-service v1

Here are the logs from our local container. As you see, it successfully connected to the callme-service app running on the Kubernetes cluster:

kubernetes-gefyra-docker-logs

Let’s switch to the window with the skaffold run --tail command. It displays the logs for our three apps running on Kubernetes. As expected, there are no logs for the caller-service pod since traffic was forwarded to the local container.

kubernetes-gefyra-skaffold-logs

Intercept the traffic sent to Kubernetes

Now, let’s do another try. This time, we will call the first-service running on Kubernetes. In order to do that, we will enable port-forward for the default port.

$ kubectl port-forward svc/first-service -n demo-1 8091:8080

We can now call the first-service running on Kubernetes using the local port 8091. As you see, all the calls are propagated inside the Kubernetes cluster, since the caller-service responds with version v1.

$ curl http://localhost:8091/first/ping 
I'm first-service v1. Calling... I'm caller-service v1. Calling... I'm callme-service v1
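As a plain-Java sketch (purely illustrative, not part of the real services) of how these replies nest, each service simply prepends its own identity to the downstream response:

```java
// Illustrative sketch of the response chain — the real services are three
// separate Spring Boot apps; here the "calls" are plain method invocations.
public class ChainDemo {

    static String callme(String version) {
        return "I'm callme-service " + version;
    }

    static String caller(String version, String callmeReply) {
        return "I'm caller-service " + version + ". Calling... " + callmeReply;
    }

    static String first(String version, String callerReply) {
        return "I'm first-service " + version + ". Calling... " + callerReply;
    }

    public static void main(String[] args) {
        // All traffic stays inside the cluster, so every hop answers as v1
        System.out.println(first("v1", caller("v1", callme("v1"))));
        // prints: I'm first-service v1. Calling... I'm caller-service v1. Calling... I'm callme-service v1
    }
}
```

After a bridge is established, only the middle reply changes, because the local v2 container answers instead of the pod running on the cluster.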

Just to be sure, let's switch to the logs printed by Skaffold.

In order to intercept the traffic to a container running on Kubernetes and send it to the development container, we need to run the gefyra bridge command. In that command, we have to set the name of the container running in Gefyra using the -N parameter (it was previously set in the gefyra run command). The command will intercept the traffic sent to the caller-service pod (--target parameter) from the demo-1 namespace (-n parameter).

$ gefyra bridge -N caller-service \
    -n demo-1 \
    --port 8080:8080 \
    --target deploy/caller-service/caller-service

You should see similar output if the bridge has been established successfully.

Let’s call the first-service via the forwarded port once again. Pay attention to the number of the caller-service version.

$ curl http://localhost:8091/first/ping
I'm first-service v1. Calling... I'm a local caller-service v2. Calling on k8s... I'm callme-service v1

Let’s do a double in the logs. Here are the logs from Kubernetes. As you see, there no caller-service logs, but just for the first-service and callme-service.

Of course, the request is forwarded to caller-service running on the local Docker, and then caller-service invokes endpoint exposed by the callme-service running on the cluster.

Once we finish development, we can remove all the bridges:

$ gefyra unbridge -A

Development with OpenShift Dev Spaces
https://piotrminkowski.com/2022/11/17/development-with-openshift-dev-spaces/
Thu, 17 Nov 2022

In this article, you will learn how to use OpenShift Dev Spaces to simplify the development of containerized apps. OpenShift Dev Spaces is a Red Hat product based on the open-source Eclipse Che project optimized for running on OpenShift. Eclipse Che allows you to use your favorite IDE directly on Kubernetes. However, it is not just a web-based IDE running in containers. It is also a concept that helps to organize software-defined developer environments inside your Kubernetes cluster.

If you are interested in similar articles about the differences between OpenShift and vanilla Kubernetes, you can read my previous post about GitOps and multi-cluster environments. In the current article, we will also discuss the odo tool. If you need more information about it, go to the following post.

Introduction

Eclipse Che is a Kubernetes-native IDE and developer collaboration platform. OpenShift Dev Spaces is built on top of Eclipse Che and allows you to run it easily on OpenShift. We can install Dev Spaces using an operator. After that, you will get a ready platform that automatically integrates with the OpenShift authorization mechanism.

In this article, I'll show you step-by-step how to install Dev Spaces on the OpenShift platform. However, you can try a hosted option as well. By default, OpenShift Dev Spaces runs as part of the Developer Sandbox. The Developer Sandbox gives you immediate access to a cloud-managed OpenShift cluster for 30 days. You don't have to install or configure anything there. Since everything is ready for use, you just access your Dev Spaces dashboard to start development in one of the available IDEs, including Theia, IntelliJ, and Visual Studio Code.

The picture visible below shows the architecture of our solution. Let's imagine there are many developers working with our instance of OpenShift. Firstly, they need to log in to the OpenShift cluster. Once they do, they can access the Dev Spaces dashboard. Dev Spaces automatically creates a namespace for each developer based on the username. It also automatically starts a pod containing our IDE after we choose a Git repository with the app source code. Then we can use OpenShift developer tools to easily build the app from the source code and deploy it in the current namespace.

openshift-dev-spaces-arch

Prerequisites

In order to do the whole exercise with me, you need to have a running instance of OpenShift. Of course, you can use a Developer Sandbox, but you may have only a single user there. There are various methods of running an OpenShift instance, including a local instance or cloud-managed instances on AWS or Azure. You can find detailed information about all available installation methods here. To run it on your local computer, use OpenShift Local.

Install and Configure Dev Spaces on OpenShift

Once we have a running instance of OpenShift, we can proceed to the Dev Spaces installation. You need to go to the "Operator Hub" in the OpenShift Console and choose the Red Hat OpenShift Dev Spaces operator. You can install it using the default settings. That operator will also automatically install another operator – DevWorkspace. After you install Dev Spaces, you have to create an instance of CheCluster. You can find a link in the "Provided APIs" section.

openshift-dev-spaces-install

Then, you need to click the "Create CheCluster" button. You will be redirected to the creation form, where you can leave the defaults. I created my instance of CheCluster in the spaces namespace.

After creating the CheCluster, we will switch to the spaces namespace for a moment just to verify that everything works fine.

$ oc project spaces

You should see a similar list of pods:

$ oc get pod
NAME                                   READY   STATUS    RESTARTS   AGE
che-gateway-548fdd95b5-zhczp           4/4     Running   0          1m
devfile-registry-6cbbc6c87b-hzdcb      1/1     Running   0          1m
devspaces-86cfb5b664-bqs7l             1/1     Running   0          1m
devspaces-dashboard-56b68b4649-xlrgc   1/1     Running   0          1m
plugin-registry-89f7d7684-pw9wg        1/1     Running   0          1m
postgres-6cb6cb646f-6dvbq              1/1     Running   0          1m

You can easily access the Dev Spaces dashboard through the DNS address exposed by the OpenShift Route object.

Use Dev Spaces on OpenShift

I have already created three users on OpenShift: user1, user2, and user3. We can use simple htpasswd authentication for that. At the beginning, those users do not have access to any project (or namespace) on OpenShift. I also have the admin user for managing the installation and viewing the status across all the namespaces.

Now, we will access the Dev Spaces dashboard as each user, one by one. Let's see how it looks for the first user – user1. You can just put the address of a Git repository with the app source code and create a workspace. There are also some example repositories available, but you can use my repository containing some simple Quarkus apps.

openshift-dev-spaces-empty-workspace

By default, OpenShift Dev Spaces runs Theia as the developer IDE. Since we would like to use IntelliJ, we need to customize the workspace creation. We could pass the name of our IDE using the devfile.yaml file in the root directory of our repository, but we can also pass it in the URL. The picture visible below illustrates the algorithm used for customizing workspace creation via URL. We just need to pass the repository URL and set the che-editor parameter to the che-incubator/che-idea/latest value.
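For example (a sketch — the host and repository below are placeholders, not actual values from this setup), such a workspace-creation URL could look like this:

```
https://<devspaces-route-host>/#<git-repository-url>?che-editor=che-incubator/che-idea/latest
```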

Finally, we will start our new workspace. We need to wait a moment until the pod with our IDE starts.

Once it is ready for use, you will be redirected to the URL with your IDE. Now, you can start development 🙂

If you go back for a moment to the Dev Spaces dashboard, you will see the new workspace in the list of available workspaces for user1. You can perform some actions on workspaces, like restarting or deleting them.

Now, we will repeat exactly the same steps for user2 and user3. Each of these users will have their own instance of Dev Spaces in a separate namespace named <username>-devspaces. Let's display a list of the DevWorkspace objects across all the namespaces. They represent all the existing workspaces inside the whole OpenShift cluster. We can verify the status of each workspace (Running, Starting, Stopped) and its URL.

$ oc get devworkspaces -A
NAMESPACE            NAME                          DEVWORKSPACE ID             PHASE      INFO
user1-devspaces      sample-quarkus-applications   workspace4b02dc6434b54a0e   Running    https://devspaces.apps.cluster-rh520e.gcp.redhatworkshops.io/workspace4b02dc6434b54a0e/idea-rhel8/8887/?backgroundColor=434343&wss
user2-devspaces      sample-quarkus-applications   workspaceadfbc8426d774988   Running    https://devspaces.apps.cluster-rh520e.gcp.redhatworkshops.io/workspaceadfbc8426d774988/idea-rhel8/8887/?backgroundColor=434343&wss
user3-devspaces      sample-quarkus-applications   workspace810c8d6cdb1a4c7d   Starting   Waiting for workspace deployment

Use IntelliJ on OpenShift

After running the IntelliJ instance on OpenShift, we can verify some settings. Of course, you can do everything the same as in standard IntelliJ on your computer. But what is important here is that our development environment is preconfigured and ready for work. OpenJDK, Maven, and the oc client are installed and configured. Moreover, there is also the odo client, which is used to build and deploy apps directly from the local version of the source code. The user is currently logged in to the OpenShift cluster. Since we are inside the cluster, we can interact with it using internal networking. If we still need to install some additional components, we can prepare our own version of the devfile.yaml and put it e.g. in the Git repository root directory.

openshift-dev-spaces-oc

One of the most important things here is that you are interacting with the OpenShift cluster internally. That has a huge impact on deployment time when using inner-development-loop tools like odo, because you don't have to upload the source code over the network. Let's just try it. As I mentioned before, odo is installed in your workspace by default. So now, the only thing you need to do is choose one app from our sample Quarkus Git repository. For me, it is person-service.

$ cd person-service

In the first step, we need to create an app with odo. There are several components available, depending on the language or even the framework. You can list all of them by running the following command: odo catalog list components. Since our code is written in Quarkus, we will choose the java-quarkus component.

$ odo create java-quarkus person

In order to build and deploy the app on OpenShift, just run the following command:

$ odo push

Let's analyze what happened. Here is the output of the odo push command. It automatically creates a Route to expose the app outside the cluster. After performing the Maven build, it finally pushes the app with the name person to OpenShift.

To view the status of the cluster we can install the OpenShift Toolkit plugin in IntelliJ.

Let's display a list of deployments in the user1-devspaces namespace. As you see, our Quarkus app is deployed under the person-app name. I also had to deploy PostgreSQL (person-db) on OpenShift, since our app connects to the database.

openshift-dev-spaces-status

Finally, if you want to have an inner development loop with odo and Dev Spaces, just run the odo watch command in the IntelliJ terminal.

Customize OpenShift Dev Spaces

We can customize the behaviour of Dev Spaces by modifying the CheCluster object. Here are the default settings. We can override, for example:

  • The namespace name template (1)
  • The duration after which a workspace is idled if there is no activity (2)
  • The maximum duration a workspace runs (3)
  • The storage option, from per-user to a mode where each workspace has its own individual PVC (4)

apiVersion: org.eclipse.che/v2
kind: CheCluster
metadata:
  name: devspaces
  namespace: spaces
spec:
  components:
    cheServer:
      debug: false
      logLevel: INFO
    database:
      credentialsSecretName: postgres-credentials
      externalDb: false
      postgresDb: dbche
      postgresHostName: postgres
      postgresPort: '5432'
      pvc:
        claimSize: 1Gi
    imagePuller:
      enable: false
    metrics:
      enable: true
  devEnvironments:
    defaultNamespace:
      template: <username>-devspaces # (1)
    secondsOfInactivityBeforeIdling: 1800 # (2)
    secondsOfRunBeforeIdling: -1 # (3)
    storage:
      pvcStrategy: per-user # (4)
  networking:
    auth:
      gateway:
        configLabels:
          app: che
          component: che-gateway-config

So, if there is no activity, Dev Spaces automatically destroys the pod with your IDE after 30 minutes. Of course, you can change the value of that timeout. You can also shut down a workspace manually by modifying the YAML of its DevWorkspace object. You just need to set the spec.started parameter to false.
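As a sketch of the relevant manifest fragment (assuming the workspace.devfile.io/v1alpha2 API served by the DevWorkspace operator):

```yaml
apiVersion: workspace.devfile.io/v1alpha2
kind: DevWorkspace
metadata:
  name: sample-quarkus-applications
  namespace: user1-devspaces
spec:
  # false stops the workspace pod; set it back to true to start the workspace again
  started: false
```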

Let’s verify the status of DevWorkspace objects after disabling all of them manually.

$ oc get devworkspace -A
NAMESPACE            NAME                          DEVWORKSPACE ID             PHASE     INFO
user1-devspaces      sample-quarkus-applications   workspace4b02dc6434b54a0e   Stopped   Stopped
user2-devspaces      sample-quarkus-applications   workspaceadfbc8426d774988   Stopped   Stopped
user3-devspaces      sample-quarkus-applications   workspace810c8d6cdb1a4c7d   Stopped   Stopped

Final Thoughts

OpenShift Dev Spaces helps you standardize the development process across the whole organization on OpenShift. Thanks to that tool, you can accelerate project and developer onboarding. As a zero-install development environment that runs in your browser, it makes it easy for anyone to join your team and contribute to a project. It may be especially useful for enabling a fast inner development loop with remote Kubernetes or OpenShift clusters.

Development on Kubernetes Multicluster with Devtron
https://piotrminkowski.com/2022/11/02/development-on-kubernetes-multicluster-with-devtron/
Wed, 02 Nov 2022

In this article, you will learn how to use Devtron for app development on Kubernetes in a multi-cluster environment. Devtron comes with tools for building, deploying, and managing microservices. It simplifies deployment on Kubernetes by providing an intuitive UI and Helm chart support. Today, we will run a sample Spring Boot app using our custom Helm chart. We will deploy it in different namespaces across multiple Kubernetes clusters. Our sample app connects to a database, which runs on Kubernetes and has been deployed using Devtron's Helm chart support.

It’s not my first article about Devtron. You can read more about the GitOps approach with Devtron in this article. Today, I’m going to focus more on the developer-friendly features around Helm charts support.

Install Devtron on Kubernetes

In the first step, we will install Devtron on Kubernetes. There are two options for installation: with CI/CD module or without it. We won’t build a CI/CD process today, but there are some important features for our scenario included in this module. Firstly, let’s add the Devtron Helm repository:

$ helm repo add devtron https://helm.devtron.ai

Then, we have to execute the following Helm command:

$ helm install devtron devtron/devtron-operator \
    --create-namespace --namespace devtroncd \
    --set installer.modules={cicd}

For detailed installation instructions please refer to the Devtron documentation available here.

Create Kubernetes Cluster with Kind

In order to prepare a multi-cluster environment on the local machine, we will use Kind. Let's create a second Kubernetes cluster, c1, by executing the following command:

$ kind create cluster --name c1

The second cluster is available as the kind-c1 context, which becomes the default context after you create a Kind cluster.

Now, our goal is to add the newly created Kind cluster as a managed cluster in Devtron. A single instance of Devtron can manage multiple Kubernetes clusters. Of course, by default, it just manages a local cluster. Before we add our Kind cluster to the Devtron dashboard, we should first configure privileges on that cluster. The following script will generate a bearer token for authentication purposes so that Devtron is able to communicate with the target cluster:

$ curl -O https://raw.githubusercontent.com/devtron-labs/utilities/main/kubeconfig-exporter/kubernetes_export_sa.sh && bash kubernetes_export_sa.sh cd-user devtroncd https://raw.githubusercontent.com/devtron-labs/utilities/main/kubeconfig-exporter/clusterrole.yaml

The bearer token is printed in the output of that command. Just copy it.

We will also have to provide the URL of the API server of the target cluster. Since I'm running Kubernetes on Kind, I need to get the internal address of the Docker container that runs the Kind node. In order to obtain it, we need to run the following command:

$ docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' c1-control-plane

The command prints the IP address of my Kind cluster.

Now, we have all the required data to add a new managed cluster in the Devtron dashboard. In order to do that, let's navigate to the "Global Configurations" section. Then we need to choose the "Clusters and Environments" item and click the "Add cluster" button. We need to provide the Kind cluster URL and the previously generated bearer token.

If everything works fine, you should see the second cluster on the managed clusters list. Now, you also need to install the Devtron agent on Kind according to the message visible below:

devtron-development-agent

Create Environments

In the next step, we will define three environments. In Devtron, an environment is assigned to a cluster. We will create a single environment on the local cluster (local), and another two on the Kind cluster (remote-dev, remote-devqa). Each environment has a target namespace. To simplify things, the name of the namespace is the same as the name of the environment. Of course, you may set any names you want.

devtron-development-clusters

Now, let’s switch to the “Clusters” view.

As you see there are two clusters connected to Devtron:

devtron-development-cluster-list

We can take a look at the details of each cluster. Here you can see a detailed view for the kind-c1 cluster:

Add Custom Helm Repository

One of the most important Devtron features is support for Helm charts. We can deploy charts individually or by creating a group of charts. By default, there are several Helm repositories available in Devtron, including bitnami or elastic. It is also possible to add a custom repository, and that's exactly what we are going to do. We have our own custom Helm repository with a chart for deploying the Spring Boot app. I have already published it on GitHub under the address https://piomin.github.io/helm-charts/. The name of our chart is spring-boot-api-app, and the latest version is 0.3.2.

In order to add the custom repository in Devtron, we need to go to the “Global Configurations” section once again. Then go to the “Chart repositories” menu item, and click the “Add repository” button. As you see below, I added a new repository under the name piomin.

devtron-development-helm

Once you have created the repository, you can go to the "Chart Store" section to verify that the new chart is available.

devtron-development-helm-chart

Deploy the Spring Boot App with Devtron

Now, we can proceed to the most important part of our exercise – application deployment. Our sample Spring Boot app is available in the following repository on GitHub. It is a simple REST app written in Kotlin. It exposes some HTTP endpoints for adding and returning persons and uses an in-memory store. Here’s our Spring @RestController:

@RestController
@RequestMapping("/persons")
class PersonController(val repository: PersonRepository) {

   val log: Logger = LoggerFactory.getLogger(PersonController::class.java)

   @GetMapping("/{id}")
   fun findById(@PathVariable id: Int): Person? {
      log.info("findById({})", id)
      return repository.findById(id)
   }

   @GetMapping("/age/{age}")
   fun findByAge(@PathVariable age: Int): List<Person> {
      log.info("findByAge({})", age)
      return repository.findByAge(age)
   }

   @GetMapping
   fun findAll(): List<Person> = repository.findAll()

   @PostMapping
   fun add(@RequestBody person: Person): Person = repository.save(person)

   @PutMapping
   fun update(@RequestBody person: Person): Person = repository.update(person)

   @DeleteMapping("/{id}")
   fun remove(@PathVariable id: Int): Boolean = repository.removeById(id)

}
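The controller above delegates to a PersonRepository backed by an in-memory store. The actual implementation lives in the GitHub repository; the sketch below is a hypothetical version of it, and the Person fields (id, name, age) are assumptions for illustration:

```kotlin
import java.util.concurrent.ConcurrentHashMap
import java.util.concurrent.atomic.AtomicInteger

// Assumed shape of the Person domain class
data class Person(var id: Int? = null, val name: String, val age: Int)

// Minimal in-memory repository sketch matching the controller's calls
class PersonRepository {
    private val store = ConcurrentHashMap<Int, Person>()
    private val sequence = AtomicInteger()

    fun save(person: Person): Person {
        val id = sequence.incrementAndGet()
        val saved = person.copy(id = id)
        store[id] = saved
        return saved
    }

    fun findById(id: Int): Person? = store[id]
    fun findByAge(age: Int): List<Person> = store.values.filter { it.age == age }
    fun findAll(): List<Person> = store.values.toList()

    fun update(person: Person): Person {
        val id = requireNotNull(person.id) { "id is required for update" }
        store[id] = person
        return person
    }

    fun removeById(id: Int): Boolean = store.remove(id) != null
}
```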

Let's imagine we are working on the latest version of that app and want to deploy it on Kubernetes to perform some development tests. In the first step, we will build the app locally and push the image to the container registry using the Jib Maven Plugin. Here's the required configuration:

<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <version>3.3.0</version>
  <configuration>
    <to>
      <image>piomin/sample-spring-kotlin-microservice</image>
      <tags>
        <tag>1.1</tag>
      </tags>
    </to>
    <container>
      <user>999</user>
    </container>
  </configuration>
</plugin>

Let’s build and push the image to the container registry using the following command:

$ mvn clean compile jib:build -Pjib,tomcat

Besides YAML templates, our Helm repository also contains a JSON schema for values.yaml validation. Thanks to that schema, we are able to take advantage of the Devtron GUI for creating apps from the chart. Let's see how it works. Once you click on our custom chart, you will be redirected to the page with its details. The latest version of the chart is 0.3.2. Just click the Deploy button.

On the next page, we need to provide the configuration of our app. The target environment is local, which exists on the main cluster. Thanks to Devtron's support for Helm values.schema.json, we can define all values using the GUI form. For example, we can change the image tag to the latest version – 1.1.
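The values entered in the GUI form could equally be expressed as a plain values.yaml fragment. The sketch below is hypothetical: only the image coordinates come from the Jib configuration above, while the remaining key names are assumptions about the chart's schema:

```yaml
# Hypothetical values.yaml fragment for the spring-boot-api-app chart
image:
  repository: piomin/sample-spring-kotlin-microservice
  tag: "1.1"          # the tag built and pushed with Jib above
replicaCount: 1
service:
  type: ClusterIP
  port: 8080
```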

devtron-development-deploy-app

Once we deploy the app we may verify its status:

devtron-development-app-status

Let's make some test calls. Our sample Spring Boot app exposes Swagger UI, so we can easily send HTTP requests. To interact with the app running on Kubernetes, we should enable port forwarding for our service: kubectl port-forward svc/sample-spring-boot-api 8080:8080. After executing that command, you can access the Swagger UI at http://localhost:8080/swagger-ui.html.

Devtron allows us to view pod logs. We can “grep” them with our criteria. Let’s display the logs related to our test calls.

Deploy App to the Remote Cluster

Now, we will deploy our sample Spring Boot app to the remote cluster. In order to do that, go to the same page as before, but instead of the local environment choose remote-dev. It is assigned to the kind-c1 cluster.

devtron-development-remote

Now, the same application is running on two different clusters. We can do the same things for the app running on the Kind cluster as for the local cluster, e.g. verify its status or check the logs.

Deploy Group of Apps

Let's assume we would like to deploy an app that connects to a database. We can do it in a single step using the Devtron feature called "Chart Group". With that feature, we can place our Helm chart for Spring Boot and a chart for e.g. Postgres inside the same logical group. Then, we can deploy the whole group into the target environment. In order to create a chart group, go to the Chart Store menu and click the "Create Group" button. You should set the name of the group and choose the charts it will include. For me, these are bitnami/postgresql and my custom Helm chart.

devtron-development-chart-group

After creating a group you will see it on the main “Chart Store” page. Now, just click on it to deploy the apps.

After you click the tile with the chart group, you will be redirected to the deploy page.

After you click the "Deploy to…" button, Devtron redirects you to the next page. There you can set a target project and environment for all member charts of the group. We will deploy them to the remote-devqa environment on the kind-c1 cluster. We can use the image from my Docker account: piomin/person:1.1. By default, the app tries to connect to the database postgres on the postgres host. The only thing we need to inject into the app container is the postgres user password. It is available inside the postgresql Secret generated by the Bitnami Helm chart. To inject the environment variables defined in that Secret, we use the extraEnvVarsSecret parameter of our custom Spring Boot chart. Finally, let's deploy both Spring Boot and Postgres in the remote-devqa namespace by clicking the "Deploy" button.
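For the Spring Boot member of the group, the relevant values could look like the fragment below. The extraEnvVarsSecret parameter and the image coordinates are taken from the article; the exact structure of the other keys is an assumption about the chart's schema:

```yaml
# Hypothetical values fragment for the Spring Boot chart in the chart group
image:
  repository: piomin/person
  tag: "1.1"
# Inject all envs (including the postgres password) from the Secret
# generated by the Bitnami postgresql chart
extraEnvVarsSecret: postgresql
```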

Here’s the final list of apps we have already deployed during this exercise:

Final Thoughts

With Devtron you can easily deploy applications across multiple Kubernetes clusters thanks to its Helm chart support. Devtron simplifies development on Kubernetes. With the chart group feature, you can deploy all required applications with a single click. Then you can manage and monitor them using the GUI dashboard. In general, you can do everything from the dashboard without writing any YAML manifests yourself or executing kubectl commands.

The post Development on Kubernetes Multicluster with Devtron appeared first on Piotr's TechBlog.
