Jib Archives - Piotr's TechBlog (https://piotrminkowski.com/tag/jib/) — Java, Spring, Kotlin, microservices, Kubernetes, containers

Kubernetes Testing with CircleCI, Kind, and Skaffold
https://piotrminkowski.com/2023/11/28/kubernetes-testing-with-circleci-kind-and-skaffold/
Tue, 28 Nov 2023

In this article, you will learn how to use tools like Kind or Skaffold to build integration tests on CircleCI for apps running on Kubernetes. Our main goal in this exercise is to build the app image and verify the Deployment on Kubernetes in the CircleCI pipeline. Skaffold and Jib Maven plugin build the image from the source and deploy it on Kind using YAML manifests. Finally, we will run some load tests on the deployed app using the Grafana k6 tool and its integration with CircleCI.

If you want to build and run tests against Kubernetes, you can read my article about integration tests with JUnit. On the other hand, if you are looking for other tools for testing in a Kubernetes-native environment, you can refer to my article about Testkube.

Introduction

Before we start, let’s do a brief introduction. There are three simple Spring Boot apps that communicate with each other. The first-service app calls the endpoint exposed by the caller-service app, and then the caller-service app calls the endpoint exposed by the callme-service app. The diagram visible below illustrates that architecture.

kubernetes-circleci-arch

So in short, our goal is to deploy all the sample apps on Kind during the CircleCI build and then test the communication by calling the endpoint exposed by the first-service through the Kubernetes Service.

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. It contains three apps: first-service, caller-service, and callme-service. The main Skaffold config manifest is available in the project root directory. Required Kubernetes YAML manifests are always placed inside the k8s directory. Once you take a look at the source code, you should just follow my instructions. Let’s begin.

Our sample Spring Boot apps are very simple. Each of them exposes a single "ping" endpoint over HTTP and calls the "ping" endpoint exposed by the next app in the chain. Here's the @RestController in the first-service app:

@RestController
@RequestMapping("/first")
public class FirstController {

   private static final Logger LOGGER = LoggerFactory
      .getLogger(FirstController.class);

   @Autowired
   Optional<BuildProperties> buildProperties;
   @Autowired
   RestTemplate restTemplate;
   @Value("${VERSION}")
   private String version;

   @GetMapping("/ping")
   public String ping() {
      LOGGER.info("Ping: name={}, version={}", buildProperties.isPresent() 
         ? buildProperties.get().getName() : "first-service", version);
      String response = restTemplate.getForObject(
         "http://caller-service:8080/caller/ping", String.class);
      LOGGER.info("Calling: response={}", response);
      return "I'm first-service " + version + ". Calling... " + response;
   }

}
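Note that the RestTemplate injected above is not auto-configured by Spring Boot, so the repository must declare it as a bean somewhere. Here's a minimal sketch of such a configuration class (the class name and location are assumptions, not the exact repository code):

```java
import org.springframework.boot.web.client.RestTemplateBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class HttpClientConfig {

    // Hypothetical bean definition; a plain RestTemplate with no customization
    @Bean
    RestTemplate restTemplate(RestTemplateBuilder builder) {
        return builder.build();
    }
}
```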

Here’s the @RestController inside the caller-service app. The endpoint is called by the first-service app through the RestTemplate bean.

@RestController
@RequestMapping("/caller")
public class CallerController {

   private static final Logger LOGGER = LoggerFactory
      .getLogger(CallerController.class);

   @Autowired
   Optional<BuildProperties> buildProperties;
   @Autowired
   RestTemplate restTemplate;
   @Value("${VERSION}")
   private String version;

   @GetMapping("/ping")
   public String ping() {
   LOGGER.info("Ping: name={}, version={}", buildProperties.isPresent() 
      ? buildProperties.get().getName() : "caller-service", version);
      String response = restTemplate.getForObject(
         "http://callme-service:8080/callme/ping", String.class);
      LOGGER.info("Calling: response={}", response);
      return "I'm caller-service " + version + ". Calling... " + response;
   }

}

Finally, here’s the @RestController inside the callme-service app. It also exposes a single GET /callme/ping endpoint called by the caller-service app:

@RestController
@RequestMapping("/callme")
public class CallmeController {

   private static final Logger LOGGER = LoggerFactory
      .getLogger(CallmeController.class);
   private static final String INSTANCE_ID = UUID.randomUUID().toString();
   private Random random = new Random();

   @Autowired
   Optional<BuildProperties> buildProperties;
   @Value("${VERSION}")
   private String version;

   @GetMapping("/ping")
   public String ping() {
      LOGGER.info("Ping: name={}, version={}", buildProperties.isPresent() 
         ? buildProperties.get().getName() : "callme-service", version);
      return "I'm callme-service " + version;
   }

}

Build and Deploy Images with Skaffold and Jib

Firstly, let’s take a look at the main Maven pom.xml in the project root directory. We use the latest version of Spring Boot and the latest LTS version of Java for compilation. All three app modules inherit settings from the parent pom.xml. In order to build the image with Maven we are including jib-maven-plugin. Since it is still using Java 17 in the default base image, we need to override this behavior with the <from>.<image> tag. We will declare eclipse-temurin:21-jdk-ubi9-minimal as the base image. Note that jib-maven-plugin is activated only if we enable the jib Maven profile during the build.

<modelVersion>4.0.0</modelVersion>

<parent>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-parent</artifactId>
  <version>3.2.0</version>
  <relativePath />
</parent>

<groupId>pl.piomin.services</groupId>
<artifactId>sample-istio-services</artifactId>
<version>1.1.0</version>
<packaging>pom</packaging>

<properties>
  <java.version>21</java.version>
</properties>

<modules>
  <module>caller-service</module>
  <module>callme-service</module>
  <module>first-service</module>
</modules>

<profiles>
  <profile>
    <id>jib</id>
    <build>
      <plugins>
        <plugin>
          <groupId>com.google.cloud.tools</groupId>
          <artifactId>jib-maven-plugin</artifactId>
          <version>3.4.0</version>
          <configuration>
            <from>
              <image>eclipse-temurin:21-jdk-ubi9-minimal</image>
            </from>
          </configuration>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>

Now, let’s take a look at the main skaffold.yaml file. Skaffold builds the image using Jib support and deploys all three apps on Kubernetes using manifests available in the k8s/deployment.yaml file inside each app module. Skaffold disables JUnit tests for Maven and activates the jib profile. It is also able to deploy Istio objects after activating the istio Skaffold profile. However, we won’t use it today.

apiVersion: skaffold/v4beta5
kind: Config
metadata:
  name: simple-istio-services
build:
  artifacts:
    - image: piomin/first-service
      jib:
        project: first-service
        args:
          - -Pjib
          - -DskipTests
    - image: piomin/caller-service
      jib:
        project: caller-service
        args:
          - -Pjib
          - -DskipTests
    - image: piomin/callme-service
      jib:
        project: callme-service
        args:
          - -Pjib
          - -DskipTests
  tagPolicy:
    gitCommit: {}
manifests:
  rawYaml:
    - '*/k8s/deployment.yaml'
deploy:
  kubectl: {}
profiles:
  - name: istio
    manifests:
      rawYaml:
        - k8s/istio-*.yaml
        - '*/k8s/deployment-versions.yaml'
        - '*/k8s/istio-*.yaml'

Here’s the typical Deployment for our apps. The app listens on port 8080.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: first-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: first-service
  template:
    metadata:
      labels:
        app: first-service
    spec:
      containers:
        - name: first-service
          image: piomin/first-service
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          env:
            - name: VERSION
              value: "v1"

For testing purposes, we need to expose the first-service outside of the Kind cluster. In order to do that, we will use a Kubernetes NodePort Service. Our app will be available on port 30000.

apiVersion: v1
kind: Service
metadata:
  name: first-service
  labels:
    app: first-service
spec:
  type: NodePort
  ports:
  - port: 8080
    name: http
    nodePort: 30000
  selector:
    app: first-service

Note that all other Kubernetes services (“caller-service” and “callme-service”) are exposed only internally using a default ClusterIP type.

How It Works

In this section, we will discuss how we would run the whole process locally. Of course, our goal is to configure it as the CircleCI pipeline. In order to expose the Kubernetes Service outside Kind, we need to define the extraPortMappings section in the cluster configuration manifest. As you probably remember, we are exposing our app on port 30000. The following file is available in the repository under the k8s/kind-cluster-test.yaml path:

apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 30000
        hostPort: 30000
        listenAddress: "0.0.0.0"
        protocol: tcp

Assuming we already installed kind CLI on our machine, we need to execute the following command to create a new cluster:

$ kind create cluster --name c1 --config k8s/kind-cluster-test.yaml

You should have the same result as visible on my screen:

We have a single-node Kind cluster ready. There is a single c1-control-plane container running on Docker. As you can see, it exposes port 30000 outside of the cluster:

The Kubernetes context is automatically switched to kind-c1. So now, we just need to run the following command from the repository root directory to build and deploy the apps:

$ skaffold run

If you see a similar output in the skaffold run logs, it means that everything works fine.

kubernetes-circleci-skaffold

We can verify the list of Kubernetes services. The first-service is exposed on port 30000 as expected.

$ kubectl get svc
NAME             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
caller-service   ClusterIP   10.96.47.193   <none>        8080/TCP         2m24s
callme-service   ClusterIP   10.96.98.53    <none>        8080/TCP         2m24s
first-service    NodePort    10.96.241.11   <none>        8080:30000/TCP   2m24s

Assuming you have already installed the Grafana k6 tool locally, you may run load tests using the following command:

$ k6 run first-service/src/test/resources/k6/load-test.js

That’s all. Now, let’s define the same actions with the CircleCI workflow.

Test Kubernetes Deployment with the CircleCI Workflow

The CircleCI config.yml file should be placed in the .circleci directory. We are doing two things in our pipeline. In the first step, we are executing Maven unit tests without the Kubernetes cluster. That’s why we need a standard Docker executor with OpenJDK 21 and the Maven orb. In order to run Kind during the CircleCI build, we need to have access to the Docker daemon. Therefore, we use the latest version of the ubuntu-2204 machine.

version: 2.1

orbs:
  maven: circleci/maven@1.4.1

executors:
  jdk:
    docker:
      - image: 'cimg/openjdk:21.0'
  machine_executor_amd64:
    machine:
      image: ubuntu-2204:2023.10.1
    environment:
      architecture: "amd64"
      platform: "linux/amd64"

After that, we can proceed to the job declaration. The name of our job is deploy-k8s. It uses the already-defined machine executor. Let’s discuss the required steps after running a standard checkout command:

  1. We need to install the kubectl CLI and copy it to the /usr/local/bin directory. Skaffold uses kubectl to interact with the Kubernetes cluster.
  2. After that, we have to install the skaffold CLI
  3. Our job also requires the kind CLI to be able to create or delete Kind clusters on Docker…
  4. … and the Grafana k6 CLI to run load tests against the app deployed on the cluster
  5. There is a good chance that this step won’t be required once CircleCI releases a new version of the ubuntu-2204 machine (probably 2024.1.1 according to the release strategy). For now, ubuntu-2204 provides OpenJDK 17, so we need to install OpenJDK 21 to successfully build the app from the source code
  6. After installing all the required tools, we can create a new Kubernetes cluster with the kind create cluster command.
  7. Once a cluster is ready, we can deploy our apps using the skaffold run command.
  8. Once the apps are running on the cluster, we can proceed to the test phase. We run the test defined inside the first-service/src/test/resources/k6/load-test.js file.
  9. After doing all the required steps, it is important to remove the Kind cluster

jobs:
  deploy-k8s:
    executor: machine_executor_amd64
    steps:
      - checkout
      - run: # (1)
          name: Install Kubectl
          command: |
            curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
            chmod +x kubectl
            sudo mv ./kubectl /usr/local/bin/kubectl
      - run: # (2)
          name: Install Skaffold
          command: |
            curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64
            chmod +x skaffold
            sudo mv skaffold /usr/local/bin
      - run: # (3)
          name: Install Kind
          command: |
            [ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64
            chmod +x ./kind
            sudo mv ./kind /usr/local/bin/kind
      - run: # (4)
          name: Install Grafana K6
          command: |
            sudo gpg -k
            sudo gpg --no-default-keyring --keyring /usr/share/keyrings/k6-archive-keyring.gpg --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys C5AD17C747E3415A3642D57D77C6C491D6AC1D69
            echo "deb [signed-by=/usr/share/keyrings/k6-archive-keyring.gpg] https://dl.k6.io/deb stable main" | sudo tee /etc/apt/sources.list.d/k6.list
            sudo apt-get update
            sudo apt-get install k6
      - run: # (5)
          name: Install OpenJDK 21
          command: |
            java -version
            sudo apt-get update && sudo apt-get install openjdk-21-jdk
            sudo update-alternatives --set java /usr/lib/jvm/java-21-openjdk-amd64/bin/java
            sudo update-alternatives --set javac /usr/lib/jvm/java-21-openjdk-amd64/bin/javac
            java -version
            export JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64
      - run: # (6)
          name: Create Kind Cluster
          command: |
            kind create cluster --name c1 --config k8s/kind-cluster-test.yaml
      - run: # (7)
          name: Deploy to K8s
          command: |
            export JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64
            skaffold run
      - run: # (8)
          name: Run K6 Test
          command: |
            kubectl get svc
            k6 run first-service/src/test/resources/k6/load-test.js
      - run: # (9)
          name: Delete Kind Cluster
          command: |
            kind delete cluster --name c1

Here’s the definition of our load test. It has to be written in JavaScript. It defines thresholds such as the maximum rate of failed requests and the maximum response time for 95% of requests. As you can see, we are testing the http://localhost:30000/first/ping endpoint:

import { sleep } from 'k6';
import http from 'k6/http';

export const options = {
  duration: '60s',
  vus: 10,
  thresholds: {
    http_req_failed: ['rate<0.25'],
    http_req_duration: ['p(95)<1000'],
  },
};

export default function () {
  http.get('http://localhost:30000/first/ping');
  sleep(2);
}
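Besides the k6 load test, it can be handy to have a quick, single-shot check of the exposed endpoint. Here's a small plain-Java sketch (a hypothetical helper, not part of the repository) that calls the endpoint with java.net.http.HttpClient and fails on a non-200 response:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PingCheck {

    // Returns the response body if the endpoint answers with HTTP 200, otherwise throws
    static String ping(String url) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() != 200) {
            throw new IllegalStateException("Unexpected status: " + response.statusCode());
        }
        return response.body();
    }

    public static void main(String[] args) {
        try {
            // Assumes the NodePort from the article; fails if the Kind cluster is not running
            System.out.println(ping("http://localhost:30000/first/ping"));
        } catch (Exception e) {
            System.out.println("Service not reachable: " + e.getMessage());
        }
    }
}
```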

Finally, here’s the last part of the CircleCI config file. It defines the pipeline workflow. In the first step, we run tests with Maven. After that, we proceed to the deploy-k8s job.

workflows:
  build-and-deploy:
    jobs:
      - maven/test:
          name: test
          executor: jdk
      - deploy-k8s:
          requires:
            - test

Once we push a change to the sample Git repository, we trigger a new CircleCI build. You can verify it yourself on my CircleCI project page.

As you can see, all the pipeline steps finished successfully.

kubernetes-circleci-build

We can display logs for every single step. Here are the logs from the k6 load test step.

There were some errors during the warm-up. However, the test shows that our scenario works on the Kubernetes cluster.

Final Thoughts

CircleCI is one of the most popular CI/CD platforms. Personally, I use it for running builds and tests for all my demo repositories on GitHub. For the sample projects dedicated to Kubernetes, I want to verify steps such as building images with Jib, Kubernetes deployment scripts, or Skaffold configuration. This article shows how to easily perform such tests with CircleCI and a Kubernetes cluster running on Kind. Hope it helps 🙂

Local Application Development on Kubernetes with Gefyra
https://piotrminkowski.com/2023/09/01/local-application-development-on-kubernetes-with-gefyra/
Fri, 01 Sep 2023

In this article, you will learn how to simplify and speed up your local application development on Kubernetes with Gefyra. Gefyra provides several useful features for developers. First of all, it allows you to run containers locally and interact with internal services on an external Kubernetes cluster. Moreover, we can overlay Kubernetes cluster-internal services with a container running on the local Docker. Thanks to that, a single development cluster can be shared by multiple developers at the same time.

If you are looking for similar articles in the area of Kubernetes app development, you can read my post about Telepresence and Skaffold. Gefyra is an alternative to Telepresence. However, there are some significant differences between those two tools. Gefyra comes with Docker as a required dependency, while with Telepresence, Docker is optional. On the other hand, Telepresence uses a sidecar pattern to inject a proxy container that intercepts the traffic, while Gefyra just replaces the image with the “carrier” image. You can find more details in the docs. Enough with the theory, let’s get to practice.

Prerequisites

In order to start the exercise, we need to have a running Kubernetes cluster. It can be a local instance or a remote cluster managed by the cloud provider. In this exercise, I’m using Kubernetes on the Docker Desktop.

$ kubectx -c
docker-desktop

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. The code used in this article is available in the dev branch. Then you should just follow my instructions 🙂

Install Gefyra

In the first step, we need to install the gefyra CLI. Installation instructions for different environments are available in the docs. Once you install the CLI, you can verify it with the following command:

$ gefyra version
[INFO] Gefyra client version: 1.1.2

After that, we can install Gefyra on our Kubernetes cluster. Here’s the command for installing on Docker Desktop Kubernetes:

$ gefyra up --host=kubernetes.docker.internal

It will install Gefyra using the operator. Let’s verify a list of running pods in the gefyra namespace:

$ kubectl get po -n gefyra
NAME                               READY   STATUS    RESTARTS   AGE
gefyra-operator-7ff447866b-7gzkd   1/1     Running   0          1h
gefyra-stowaway-bb96bccfd-xg7ds    1/1     Running   0          1h

If you see the running pods, it means that the tool has been successfully installed. Now, we can use Gefyra in our app development on Kubernetes.

Use Case on Kubernetes for Gefyra

We will use exactly the same set of apps and the same use case as in the article about Telepresence and Skaffold. Firstly, let’s analyze that case. There are three microservices: first-service, caller-service, and callme-service. All of them expose a single REST endpoint GET /ping, which returns basic information about each microservice. In order to create the applications, I’m using the Spring Boot framework. Our architecture is visible in the picture below. The first-service is calling the endpoint exposed by the caller-service. Then the caller-service is calling the endpoint exposed by the callme-service. Of course, we are going to deploy all the microservices on the Kubernetes cluster.

Now, let’s assume we are implementing a new version of the caller-service. We want to easily test with two other apps running on the cluster. Therefore, our goal is to forward the traffic that is sent to the caller-service on the Kubernetes cluster to our local instance running on our Docker. On the other hand, the local instance of the caller-service should call the endpoint exposed by the instance of the callme-service running on the Kubernetes cluster.

kubernetes-gefyra-arch

Build and Deploy Apps with Skaffold and Jib

Before we start development of the new version of the caller-service, we will deploy all three sample apps. To simplify the process, we will use Skaffold and the Jib Maven plugin. Thanks to that, you can deploy all the apps using a single command. Here’s the Skaffold configuration in the repository root directory:

apiVersion: skaffold/v4beta5
kind: Config
metadata:
  name: simple-istio-services
build:
  artifacts:
    - image: piomin/first-service
      jib:
        project: first-service
        args:
          - -Pjib
          - -DskipTests
    - image: piomin/caller-service
      jib:
        project: caller-service
        args:
          - -Pjib
          - -DskipTests
    - image: piomin/callme-service
      jib:
        project: callme-service
        args:
          - -Pjib
          - -DskipTests
  tagPolicy:
    gitCommit: {}
manifests:
  rawYaml:
    - '*/k8s/deployment.yaml'
deploy:
  kubectl: {}

For more details about the deployment process, you may refer once again to my previous article. We will deploy apps in the demo-1 namespace. Here’s the skaffold command used for that:

$ skaffold run --tail -n demo-1

Once you run the command you will deploy all apps and see their logs in the console. These are very simple Spring Boot apps, which just expose a single REST endpoint and print a log message after receiving the request. Here’s the @RestController of callme-service:

@RestController
@RequestMapping("/callme")
public class CallmeController {

   private static final Logger LOGGER = 
       LoggerFactory.getLogger(CallmeController.class);

   @Autowired
   BuildProperties buildProperties;
   @Value("${VERSION}")
   private String version;

   @GetMapping("/ping")
   public String ping() {
      LOGGER.info("Ping: name={}, version={}", 
         buildProperties.getName(), version);
      return "I'm callme-service " + version;
   }
}

And here’s the controller of caller-service. We will modify it during our development. It calls the endpoint exposed by the callme-service using its internal Kubernetes address http://callme-service:8080.

@RestController
@RequestMapping("/caller")
public class CallerController {

   private static final Logger LOGGER = 
      LoggerFactory.getLogger(CallerController.class);

   @Autowired
   BuildProperties buildProperties;
   @Autowired
   RestTemplate restTemplate;
   @Value("${VERSION}")
   private String version;

   @GetMapping("/ping")
   public String ping() {
      LOGGER.info("Ping: name={}, version={}", 
         buildProperties.getName(), version);
      String response = restTemplate
         .getForObject("http://callme-service:8080/callme/ping", String.class);
      LOGGER.info("Calling: response={}", response);
      return "I'm caller-service " + version + ". Calling... " + response;
   }
}

Here’s a list of deployed apps in the demo-1 namespace:

$ kubectl get deploy -n demo-1              
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
caller-service   1/1     1            1           68m
callme-service   1/1     1            1           68m
first-service    1/1     1            1           68m

Development on Kubernetes with Gefyra

Connect to services running on Kubernetes

Now, I will change the code in the CallerController class. Here’s the latest development version:

kubernetes-gefyra-dev-code
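The screenshot above is not reproduced here. Judging by the curl responses returned later in this article, the change boils down to a new response prefix that lets us distinguish the local v2 instance. Here's a tiny standalone sketch of the new response format (a hypothetical reconstruction; in the repository, this logic lives inside CallerController.ping()):

```java
public class CallerResponseV2 {

    // Builds the response in the format returned by the modified caller-service
    static String buildResponse(String version, String downstreamResponse) {
        return "I'm a local caller-service " + version
            + ". Calling on k8s... " + downstreamResponse;
    }

    public static void main(String[] args) {
        System.out.println(buildResponse("v2", "I'm callme-service v1"));
    }
}
```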

Let’s build the app on the local Docker daemon. We will leverage the Jib Maven plugin once again. We need to go to the caller-service directory and build the image using the jib goal.

$ cd caller-service
$ mvn clean package -DskipTests -Pjib jib:dockerBuild

Here’s the result. The image is available on the local Docker daemon as caller-service:1.1.0.

After that, we may run the container with the app locally using the gefyra command. We use several parameters in the command visible below. Firstly, we need to set the Docker image name using the -i parameter. We simulate running the app in the demo-1 Kubernetes namespace with the -n option. Then, we set the VERSION environment variable used by the app to v2 and expose the app container port externally as 8090.

$ gefyra run --rm -i caller-service:1.1.0 \
    -n demo-1 \
    -N caller-service \
    --env VERSION=v2 \
    --expose 8090:8080

Gefyra starts our dev container on the local Docker:

$ docker ps -l
CONTAINER ID   IMAGE                  COMMAND                  CREATED              STATUS              PORTS                    NAMES
7fec52bed474   caller-service:1.1.0   "java -cp @/app/jib-…"   About a minute ago   Up About a minute   0.0.0.0:8090->8080/tcp   caller-service

Now, let’s try to call the endpoint exposed under the local port 8090:

$ curl http://localhost:8090/caller/ping
I'm a local caller-service v2. Calling on k8s... I'm callme-service v1

Here are the logs from our local container. As you see, it successfully connected to the callme-service app running on the Kubernetes cluster:

kubernetes-gefyra-docker-logs

Let’s switch to the window with the skaffold run --tail command. It displays the logs for our three apps running on Kubernetes. As expected, there are no logs for the caller-service pod since traffic was forwarded to the local container.

kubernetes-gefyra-skaffold-logs

Intercept the traffic sent to Kubernetes

Now, let’s try another scenario. This time, we will call the first-service running on Kubernetes. In order to do that, we will enable port-forwarding for its default port.

$ kubectl port-forward svc/first-service -n demo-1 8091:8080

We can call the first-service running on Kubernetes using the local port 8091. As you can see, all the calls are handled inside the Kubernetes cluster, since the caller-service version is v1.

$ curl http://localhost:8091/first/ping 
I'm first-service v1. Calling... I'm caller-service v1. Calling... I'm callme-service v1

Just to be sure, let’s switch to the logs printed by skaffold:

In order to intercept the traffic to a container running on Kubernetes and send it to the development container, we need to run the gefyra bridge command. In that command, we have to set the name of the container running in Gefyra using the -N parameter (it was previously set in the gefyra run command). The command will intercept the traffic sent to the caller-service pod (--target parameter) from the demo-1 namespace (-n parameter).

$ gefyra bridge -N caller-service \
    -n demo-1 \
    --port 8080:8080 \
    --target deploy/caller-service/caller-service

You should have a similar output if the bridge has been successfully established:

Let’s call the first-service via the forwarded port once again. Pay attention to the number of the caller-service version.

$ curl http://localhost:8091/first/ping
I'm first-service v1. Calling... I'm a local caller-service v2. Calling on k8s... I'm callme-service v1

Let’s double-check the logs. Here are the logs from Kubernetes. As you can see, there are no caller-service logs, only logs from the first-service and callme-service.

Of course, the request is forwarded to the caller-service running on the local Docker, and then the caller-service invokes the endpoint exposed by the callme-service running on the cluster.

Once we finish the development we may remove all the bridges:

$ gefyra unbridge -A

Best Practices for Java Apps on Kubernetes
https://piotrminkowski.com/2023/02/13/best-practices-for-java-apps-on-kubernetes/
Mon, 13 Feb 2023

In this article, you will read about the best practices for running Java apps on Kubernetes. Most of these recommendations will also be valid for other languages. However, I’m considering all the rules in the scope of Java characteristics and also showing solutions and tools available for JVM-based apps. Some of these Kubernetes recommendations are forced by design when using the most popular Java frameworks like Spring Boot or Quarkus. I’ll show you how to effectively leverage them to simplify developer life.

I’m writing a lot about topics related to both Kubernetes and Java. You can find many practical examples on my blog. Some time ago I published a similar article to that one – but mostly focused on best practices for microservices-based apps. You can find it here.

Don’t Set Limits Too Low

Should we set limits for Java apps on Kubernetes or not? The answer seems to be obvious. There are many tools that validate your Kubernetes YAML manifests, and for sure they will print a warning if you don’t set CPU or memory limits. However, there are some “hot discussions” in the community about that. Here’s an interesting article that does not recommend setting any CPU limits. Here’s another article written as a counterpoint to the previous one. They consider CPU limits, but we could as well begin a similar discussion for memory limits. Especially in the context of Java apps 🙂

However, for memory management the recommendation is quite different. Let’s read another article – this time about memory limits and requests. In short, it recommends always setting the memory limit and, moreover, setting it to the same value as the request. In the context of Java apps, it is also important that we can cap memory usage with JVM parameters like -Xmx, -XX:MaxMetaspaceSize or -XX:ReservedCodeCacheSize. Anyway, from the Kubernetes scheduling perspective, the pod is guaranteed the resources it requests – the limit plays no part in that.

It all leads me to the first recommendation today – don’t set your limits too low. Even if you set a CPU limit, it shouldn’t impact your app. For example, as you probably know, even if your Java app doesn’t consume much CPU during normal operation, it requires a lot of CPU to start fast. For my simple Spring Boot app that connects to MongoDB on Kubernetes, the difference between no limit and even a 0.5 core limit is significant. Normally it starts in under 10 seconds:

kubernetes-java-startup

With the CPU limit set to 500 millicores, it takes ~30 seconds to start:

Of course, we could find more such examples. We will also discuss them in the next sections.
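The resource settings discussed above can be sketched as a Deployment fragment. This is only an illustrative example with assumed values – adjust the numbers to your own measurements:

```yaml
# Illustrative container resources: a generous CPU limit avoids throttling
# the JVM during startup, and the memory request equals the limit.
resources:
  requests:
    cpu: 500m
    memory: 512Mi
  limits:
    cpu: "2"
    memory: 512Mi
```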

Beginning with Kubernetes 1.27 you can take advantage of a feature called “In-Place Vertical Pod Scaling”. It allows you to resize the CPU/memory resources allocated to pods without restarting the containers. Such an approach may help us speed up Java startup on Kubernetes while keeping adequate resource limits (especially CPU limits) for the app at the same time. You can read more about it in the following article: https://piotrminkowski.com/2023/08/22/resize-cpu-limit-to-speed-up-java-startup-on-kubernetes/.
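In-place resizing is driven by the resizePolicy field in the container spec. Below is a hedged sketch assuming Kubernetes 1.27+ with the InPlacePodVerticalScaling feature gate enabled; the image name is only an example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sample-app
spec:
  containers:
    - name: sample-app
      image: piomin/sample-spring-boot-on-kubernetes
      # CPU can be changed in place, without restarting the container
      resizePolicy:
        - resourceName: cpu
          restartPolicy: NotRequired
      resources:
        requests:
          cpu: "2"   # high CPU for a fast start; lowered in place afterwards
        limits:
          cpu: "2"
```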

Consider Memory Usage First

Let’s focus just on the memory limit. If you run a Java app on Kubernetes, you have two levels of limiting maximum memory usage: the container and the JVM. However, there are also some defaults if you don’t specify any JVM settings. The JVM sets its maximum heap size to approximately 25% of the available RAM if you don’t set the -Xmx parameter. This value is calculated based on the memory visible inside the container. If you don’t set a limit at the container level, the JVM will see the whole memory of the node.

Before running the app on Kubernetes, you should at least measure how much memory it consumes under the expected load. Fortunately, there are tools that can optimize the memory configuration for Java apps running in containers. For example, Paketo Buildpacks comes with a built-in memory calculator. It calculates the -Xmx JVM flag using the formula Heap = Total Container Memory - Non-Heap - Headroom. The non-heap value, in turn, is calculated with the formula Non-Heap = Direct Memory + Metaspace + Reserved Code Cache + (Thread Stack * Thread Count).
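The two formulas above can be combined into a few lines of code. The sketch below is only an illustration of the arithmetic – the component sizes (direct memory, metaspace, code cache, thread count) are assumed values, not the exact defaults used by the Paketo calculator:

```java
// Illustrative version of the memory calculator formulas:
// Heap = Total - Non-Heap - Headroom
// Non-Heap = Direct + Metaspace + Code Cache + (Thread Stack * Thread Count)
public class MemoryCalculator {

    static long heapMb(long totalMb, long directMb, long metaspaceMb,
                       long codeCacheMb, long threadStackKb, int threadCount,
                       long headroomMb) {
        long nonHeapMb = directMb + metaspaceMb + codeCacheMb
                + (threadStackKb * threadCount) / 1024;
        return totalMb - nonHeapMb - headroomMb;
    }

    public static void main(String[] args) {
        // 512 MB container limit with assumed non-heap components
        System.out.println("-Xmx" + heapMb(512, 10, 90, 240, 1024, 40, 0) + "M");
    }
}
```

With these assumed inputs, the remaining heap lands close to the ~130M value mentioned below for a 512M limit.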

Paketo Buildpacks is currently the default option for building Spring Boot apps (with the mvn spring-boot:build-image command). Let’s try it for our sample app. Assuming we set the memory limit to 512M, it will calculate -Xmx at a level of around 130M.

kubernetes-java-memory

Is it fine for my app? I should at least perform some load tests to verify how the app behaves under heavy traffic. But once again – don’t set the limits too low. For example, with a 1024M limit, the -Xmx equals around 650M.

As you can see, we control memory usage with JVM parameters. This protects us from the OOM kills described in the article mentioned in the first section. Therefore, setting the memory request at the same level as the limit does not make much sense here. I would recommend setting the request a little higher than normal usage – let’s say 20% more.

Proper Liveness and Readiness Probes

Introduction

It is essential to understand the difference between liveness and readiness probes in Kubernetes. If these probes are not implemented carefully, they can degrade the overall operation of a service, for example by causing unnecessary restarts. The third type of probe, the startup probe, is a relatively new feature in Kubernetes. It allows us to avoid setting initialDelaySeconds on liveness or readiness probes and is therefore especially useful if your app takes a long time to start. For more details about Kubernetes probes in general and related best practices, I can recommend this very interesting article.

A liveness probe is used to decide whether to restart the container or not. If an application is unavailable for any reason, restarting the container sometimes can make sense. On the other hand, a readiness probe is used to decide if a container can handle incoming traffic. If a pod has been recognized as not ready, it is removed from load balancing. Failure of the readiness probe does not result in pod restart. The most typical liveness or readiness probe for web applications is realized via an HTTP endpoint.

Since subsequent failures of the liveness probe result in pod restart, it should not check the availability of your app integrations. Such things should be verified by the readiness probe.

Configuration Details

The good news is that the most popular Java frameworks like Spring Boot or Quarkus provide an auto-configured implementation of both Kubernetes probes. They follow best practices, so we usually don’t have to care about the basics. However, in Spring Boot, besides including the Actuator module, you need to enable the probes with the following property:

management:
  endpoint: 
    health:
      probes:
        enabled: true

Since Spring Boot Actuator provides several endpoints (e.g. metrics, traces), it is a good idea to expose it on a different port than the default one (usually 8080). Of course, the same rule applies to other popular Java frameworks. On the other hand, a good practice is to check your main app port – especially in the readiness probe. Since it defines whether our app is ready to process incoming requests, the probe should also listen on the main port. It is just the opposite with the liveness probe. If, let’s say, the whole worker thread pool is busy, I don’t want to restart my app. I just don’t want to receive incoming traffic for some time.
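Combining both rules, the Deployment could point the liveness probe at the Actuator management port and the readiness probe at the main server port. This is a hypothetical fragment – the exact paths and ports depend on your own Actuator configuration:

```yaml
livenessProbe:
  httpGet:
    # Actuator exposed on a separate management port
    path: /actuator/health/liveness
    port: 8081
  periodSeconds: 10
readinessProbe:
  httpGet:
    # readiness exposed on the main server port
    path: /readiness
    port: 8080
  periodSeconds: 10
```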

We can also customize other aspects of Kubernetes probes. Let’s say that our app connects to an external system, but we don’t want to verify that integration in our readiness probe. It is not critical and doesn’t have a direct impact on our operational status. Here’s a configuration that includes only a selected set of integrations in the probe (1) and also exposes readiness on the main server port (2).

spring:
  application:
    name: sample-spring-boot-on-kubernetes
  data:
    mongodb:
      host: ${MONGO_URL}
      port: 27017
      username: ${MONGO_USERNAME}
      password: ${MONGO_PASSWORD}
      database: ${MONGO_DATABASE}
      authentication-database: admin

management:
  endpoint.health:
    show-details: always
    group:
      readiness:
        include: mongo # (1)
        additional-path: server:/readiness # (2)
    probes:
      enabled: true
  server:
    port: 8081

Our application can hardly ever exist without external systems like databases, message brokers, or simply other applications. When configuring the readiness probe, we should carefully consider the connection settings to those systems. First of all, consider the situation when an external service is not available. How will you handle it? I suggest decreasing the connection timeouts to lower values, as shown below.

spring:
  application:
    name: sample-spring-kotlin-microservice
  datasource:
    url: jdbc:postgresql://postgres:5432/postgres
    username: postgres
    password: postgres123
    hikari:
      connection-timeout: 2000
      initialization-fail-timeout: 0
  jpa:
    database-platform: org.hibernate.dialect.PostgreSQLDialect
  rabbitmq:
    host: rabbitmq
    port: 5672
    connection-timeout: 2000

Choose The Right JDK

If you have already built images with a Dockerfile, you have probably used the official OpenJDK base image from Docker Hub. However, the announcement on the image site currently says that it is officially deprecated and all users should find suitable replacements. I guess it may be quite confusing, so you will find a detailed explanation of the reasons here.

All right, so let’s consider which alternative we should choose. Several vendors provide replacements. If you are looking for a detailed comparison between them, you should go to the following site. It recommends using Eclipse Temurin 21.

On the other hand, the most popular image build tools like Jib or Cloud Native Buildpacks automatically choose a vendor for you. By default, Jib uses Eclipse Temurin, while Paketo Buildpacks uses the BellSoft Liberica implementation. Of course, you can easily override these settings. I think it might make sense if, for example, you run your app in an environment matched to the JDK provider, like AWS and Amazon Corretto.
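For example, with the Jib Maven plugin the base image can be overridden via the from.image parameter. The fragment below is just a sketch – the Amazon Corretto image tag and target image name are assumptions to be adjusted for your JDK version and registry:

```xml
<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <version>3.4.0</version>
  <configuration>
    <!-- override the default Eclipse Temurin base image -->
    <from>
      <image>amazoncorretto:21</image>
    </from>
    <to>
      <image>piomin/sample-spring-boot-on-kubernetes</image>
    </to>
  </configuration>
</plugin>
```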

Let’s say we use Paketo Buildpacks and Skaffold for deploying Java apps on Kubernetes. In order to replace the default BellSoft Liberica buildpack with another one, we just need to set it explicitly in the buildpacks section. Here’s an example that leverages the Amazon Corretto buildpack.

apiVersion: skaffold/v2beta22
kind: Config
metadata:
  name: sample-spring-boot-on-kubernetes
build:
  artifacts:
    - image: piomin/sample-spring-boot-on-kubernetes
      buildpacks:
        builder: paketobuildpacks/builder:base
        buildpacks:
          - paketo-buildpacks/amazon-corretto
          - paketo-buildpacks/java
        env:
          - BP_JVM_VERSION=21

We can also easily test the performance of our apps with different JDK vendors. If you are looking for an example of such a comparison you can read my article describing such tests and results. I measured the different JDK performance for the Spring Boot 3 app that interacts with the Mongo database using several available Paketo Java Buildpacks.

Consider Migration To Native Compilation 

Native compilation is a real “game changer” in the Java world. But I can bet that not many of you use it – especially in production. Of course, there were (and still are) numerous challenges in migrating existing apps to native compilation. The static code analysis performed by GraalVM at build time can result in errors like ClassNotFound or MethodNotFound. To overcome these challenges, we need to provide several hints to let GraalVM know about the dynamic elements of the code. The number of those hints usually depends on the number of libraries and the overall number of language features used in the app.

Java frameworks like Quarkus or Micronaut try to address the challenges related to native compilation by design. For example, they avoid using reflection wherever possible. Spring Boot has also improved its native compilation support a lot through the Spring Native project. So, my advice in this area is: if you are creating a new application, prepare it in a way that makes it ready for native compilation. For example, with Quarkus you can simply generate a Maven configuration that contains a dedicated profile for building a native executable.
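Such hints can be provided, among other ways, through a reflect-config.json file passed to GraalVM (the frameworks generate most of it for you). The class name below is purely hypothetical:

```json
[
  {
    "name": "com.example.Employee",
    "allDeclaredConstructors": true,
    "allDeclaredMethods": true,
    "allDeclaredFields": true
  }
]
```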

<profiles>
  <profile>
    <id>native</id>
    <activation>
      <property>
        <name>native</name>
      </property>
    </activation>
    <properties>
      <skipITs>false</skipITs>
      <quarkus.package.type>native</quarkus.package.type>
    </properties>
  </profile>
</profiles>

Once you add it, you can run a native build with the following command:

$ mvn clean package -Pnative

Then you can analyze whether there are any issues during the build. Even if you do not run native apps in production now (for example, your organization doesn’t approve it), you should include GraalVM compilation as a step in your acceptance pipeline. You can easily build a native image for your app with the most popular frameworks. For example, with Spring Boot you just need to provide the following configuration in your Maven pom.xml:

<plugin>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-maven-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>build-info</goal>
        <goal>build-image</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <image>
      <builder>paketobuildpacks/builder:tiny</builder>
      <env>
        <BP_NATIVE_IMAGE>true</BP_NATIVE_IMAGE>
        <BP_NATIVE_IMAGE_BUILD_ARGUMENTS>
          --allow-incomplete-classpath
        </BP_NATIVE_IMAGE_BUILD_ARGUMENTS>
      </env>
    </image>
  </configuration>
</plugin>

Configure Logging Properly

Logging is probably not the first thing you think about when writing your Java apps. However, at the cluster scope it becomes very important, since we need to be able to collect and store log data, and then quickly search for and display a particular entry. The best practice is to write your application logs to the standard output (stdout) and standard error (stderr) streams.

Fluentd is a popular open-source log aggregator that allows you to collect logs from the Kubernetes cluster, process them, and then ship them to a data storage backend of your choice. It integrates seamlessly with Kubernetes deployments. Fluentd tries to structure data as JSON to unify logging across different sources and destinations. Given that, the best approach is probably to produce logs in this format already at the application level. With JSON we can also easily include additional fields for tagging logs and then search them by various criteria in a visualization tool.

In order to format our logs to JSON readable by Fluentd we may include the Logstash Logback Encoder library in Maven dependencies.

<dependency>
   <groupId>net.logstash.logback</groupId>
   <artifactId>logstash-logback-encoder</artifactId>
   <version>7.2</version>
</dependency>

Then we just need to set a default console log appender for our Spring Boot application in the file logback-spring.xml.

<configuration>
    <appender name="consoleAppender" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
    </appender>
    <logger name="jsonLogger" additivity="false" level="DEBUG">
        <appender-ref ref="consoleAppender"/>
    </logger>
    <root level="INFO">
        <appender-ref ref="consoleAppender"/>
    </root>
</configuration>

Should we avoid additional log appenders and print logs only to the standard output? From my experience, the answer is – no. You can still use alternative mechanisms for shipping the logs. Especially if you use more than one log collection stack in your organization – for example, an internal stack on Kubernetes and a global stack outside of it. Personally, I’m using a message broker as a proxy, which helps me avoid performance problems. In Spring Boot we can easily use RabbitMQ for that. Just include the following starter:

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-amqp</artifactId>
</dependency>

Then you need to provide a similar appender configuration in the logback-spring.xml:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>

  <springProperty name="destination" source="app.amqp.url" />

  <appender name="AMQP"
		class="org.springframework.amqp.rabbit.logback.AmqpAppender">
    <layout>
      <pattern>
{
  "time": "%date{ISO8601}",
  "thread": "%thread",
  "level": "%level",
  "class": "%logger{36}",
  "message": "%message"
}
      </pattern>
    </layout>

    <addresses>${destination}</addresses>	
    <applicationId>api-service</applicationId>
    <routingKeyPattern>logs</routingKeyPattern>
    <declareExchange>true</declareExchange>
    <exchangeName>ex_logstash</exchangeName>

  </appender>

  <root level="INFO">
    <appender-ref ref="AMQP" />
  </root>

</configuration>

Create Integration Tests

Ok, I know – it’s not directly related to Kubernetes. However, since we use Kubernetes to manage and orchestrate containers, we should also run integration tests against the containers. Fortunately, Java frameworks simplify that process a lot. For example, Quarkus allows us to annotate a test with @QuarkusIntegrationTest. It is a really powerful solution in conjunction with the Quarkus container build feature. We can run the tests against an already-built image containing the app. First, let’s include the Quarkus Jib module:

<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-container-image-jib</artifactId>
</dependency>

Then we have to enable the container build by setting the quarkus.container-image.build property to true in the application.properties file. In the test class, we can use the @TestHTTPResource and @TestHTTPEndpoint annotations to inject the test server URL. Then we create a client with RestClientBuilder and call the service running in the container. The name of the test class is not accidental: in order to be automatically detected as an integration test, it has the IT suffix.
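The container build mentioned above is enabled in application.properties. Here's an illustrative fragment – the group and name values are just examples:

```properties
# build the container image during the Maven build
quarkus.container-image.build=true
quarkus.container-image.group=piomin
quarkus.container-image.name=sample-app
```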

@QuarkusIntegrationTest
public class EmployeeControllerIT {

    @TestHTTPEndpoint(EmployeeController.class)
    @TestHTTPResource
    URL url;

    @Test
    void add() {
        EmployeeService service = RestClientBuilder.newBuilder()
                .baseUrl(url)
                .build(EmployeeService.class);
        Employee employee = new Employee(1L, 1L, "Josh Stevens", 
                                         23, "Developer");
        employee = service.add(employee);
        assertNotNull(employee.getId());
    }

    @Test
    public void findAll() {
        EmployeeService service = RestClientBuilder.newBuilder()
                .baseUrl(url)
                .build(EmployeeService.class);
        Set<Employee> employees = service.findAll();
        assertTrue(employees.size() >= 3);
    }

    @Test
    public void findById() {
        EmployeeService service = RestClientBuilder.newBuilder()
                .baseUrl(url)
                .build(EmployeeService.class);
        Employee employee = service.findById(1L);
        assertNotNull(employee.getId());
    }
}

You can find more details about that process in my previous article about advanced testing with Quarkus. The final effect is visible in the picture below. When we run the tests during the build with the mvn clean verify command, our test is executed after building the container image.

kubernetes-java-integration-tests

That Quarkus feature is based on the Testcontainers framework. We can also use Testcontainers with Spring Boot. Here’s the sample test of the Spring REST app and its integration with the PostgreSQL database.

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@Testcontainers
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
public class PersonControllerTests {

    @Autowired
    TestRestTemplate restTemplate;

    @Container
    static PostgreSQLContainer<?> postgres = 
       new PostgreSQLContainer<>("postgres:15.1")
            .withExposedPorts(5432);

    @DynamicPropertySource
    static void registerPostgresProperties(DynamicPropertyRegistry registry) {
        registry.add("spring.datasource.url", postgres::getJdbcUrl);
        registry.add("spring.datasource.username", postgres::getUsername);
        registry.add("spring.datasource.password", postgres::getPassword);
    }

    @Test
    @Order(1)
    void add() {
        Person person = Instancio.of(Person.class)
                .ignore(Select.field("id"))
                .create();
        person = restTemplate.postForObject("/persons", person, Person.class);
        Assertions.assertNotNull(person);
        Assertions.assertNotNull(person.getId());
    }

    @Test
    @Order(2)
    void updateAndGet() {
        final Integer id = 1;
        Person person = Instancio.of(Person.class)
                .set(Select.field("id"), id)
                .create();
        restTemplate.put("/persons", person);
        Person updated = restTemplate.getForObject("/persons/{id}", Person.class, id);
        Assertions.assertNotNull(updated);
        Assertions.assertNotNull(updated.getId());
        Assertions.assertEquals(id, updated.getId());
    }

}

Final Thoughts

I hope that this article will help you avoid some common pitfalls when running Java apps on Kubernetes. Treat it as a summary of recommendations I found in similar articles combined with my own experience in that area. Maybe you will find some of these rules quite controversial. Feel free to share your opinions and feedback in the comments – it will also be valuable for me. If you like this article, once again, I recommend reading another one from my blog, more focused on running microservices-based apps on Kubernetes – Best Practices For Microservices on Kubernetes. It also contains several useful (I hope) recommendations.

The post Best Practices for Java Apps on Kubernetes appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2023/02/13/best-practices-for-java-apps-on-kubernetes/feed/ 10 13990
Vault on Kubernetes with Spring Cloud https://piotrminkowski.com/2021/12/30/vault-on-kubernetes-with-spring-cloud/ https://piotrminkowski.com/2021/12/30/vault-on-kubernetes-with-spring-cloud/#comments Thu, 30 Dec 2021 13:47:51 +0000 https://piotrminkowski.com/?p=10399 In this article, you will learn how to run Vault on Kubernetes and integrate it with your Spring Boot application. We will use the Spring Cloud Vault project in order to generate database credentials dynamically and inject them into the application. Also, we are going to use a mechanism that allows authenticating against Vault using […]

The post Vault on Kubernetes with Spring Cloud appeared first on Piotr's TechBlog.

]]>
In this article, you will learn how to run Vault on Kubernetes and integrate it with your Spring Boot application. We will use the Spring Cloud Vault project in order to generate database credentials dynamically and inject them into the application. Also, we are going to use a mechanism that allows authenticating against Vault with a Kubernetes service account token. If this topic seems interesting to you, it is worth reading one of my previous articles about how to run Vault on a platform quite similar to Kubernetes – Nomad. You may find it here.

Why Spring Cloud Vault on Kubernetes?

First of all, let me explain why I decided to use Spring Cloud instead of HashiCorp’s Vault Agent. It is important to know that the Vault Agent is always injected as a sidecar container into the application pod. So even if we have a single secret in Vault and we inject it once on startup, there is always one additional container running. I’m not saying it’s wrong, since it is a standard approach on Kubernetes. However, I’m not very happy with it. I also had some problems troubleshooting the Vault Agent. To be honest, it wasn’t easy to find my configuration mistake based just on its logs. Anyway, Spring Cloud is an interesting alternative to the solution provided by HashiCorp. It allows you to easily integrate Spring Boot configuration properties with the Vault database engine. In fact, you just need to include a single dependency to use it.

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. To see the sample application go to the kubernetes/sample-db-vault directory. Then you should just follow my instructions 🙂

Prerequisites

Before we start, there are some required tools. Of course, we need to have a Kubernetes cluster locally or remotely. Personally, I use Docker Desktop, but you may use any other option you prefer. In order to run Vault on Kubernetes, we need to install Helm.

If you would like to build the application from the source code you need to have Skaffold, Java 17, and Maven. Alternatively, you may use a ready image from my Docker Hub account piomin/sample-app.

Install Vault on Kubernetes with Helm

The recommended way to run Vault on Kubernetes is via the Helm chart. Helm installs and configures all the necessary components to run Vault in several different modes. Firstly, let’s add the HashiCorp Helm repository.

$ helm repo add hashicorp https://helm.releases.hashicorp.com

Before proceeding it is worth updating all the repositories to ensure helm uses the latest versions of the components.

$ helm repo update

Since I will run Vault in the dedicated namespace, we first need to create it.

$ kubectl create ns vault

Finally, we can install the latest version of the Vault server and run it in development mode.

$ helm install vault hashicorp/vault \
    --set "server.dev.enabled=true" \
    -n vault

We can verify the installation by displaying the list of running pods in the vault namespace. As you can see, the Vault Agent injector is installed by the Helm chart, so you can try using it as well. If you wish, just go to this tutorial prepared by HashiCorp.

$ kubectl get pod -n vault
NAME                                    READY   STATUS     RESTARTS   AGE
vault-0                                 1/1     Running    0          1h
vault-agent-injector-678dc584ff-wc2r7   1/1     Running    0          1h

Access Vault on Kubernetes

Before we run our application on Kubernetes, we need to configure several things on Vault. I’ll show you how to do it using the vault CLI. The simplest way to use CLI on Kubernetes is just by getting a shell of a running Vault container:

$ kubectl exec -it vault-0 -n vault -- /bin/sh

Alternatively, we can use Vault Web Console available at the 8200 port. To access it locally we should first enable port forwarding:

$ kubectl port-forward service/vault 8200:8200 -n vault

Now, you can access it locally in your web browser at http://localhost:8200. In order to log in, use the Token method (the default token value is root). Then you can do the same as with the vault CLI, but with a nice UI.

vault-kubernetes-ui-login

Configure Kubernetes authentication

Vault provides a Kubernetes authentication method that enables clients to authenticate with a Kubernetes service account token. This token is available to every single pod. Assuming you have already started an interactive shell session on the vault-0 pod just execute the following command:

$ vault auth enable kubernetes

In the next step, we are going to configure the Kubernetes authentication method. We need to set the location of the Kubernetes API, the service account token, its certificate, and the name of the Kubernetes service account issuer (required for Kubernetes 1.21+).

$ vault write auth/kubernetes/config \
    kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443" \
    token_reviewer_jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
    kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
    issuer="https://kubernetes.default.svc.cluster.local"

Ok, now something very important – you need to understand what happens here. We need to create a Vault policy that allows us to generate database credentials dynamically. We will enable the Vault database engine in the next section. For now, we are just creating a policy that will be assigned to the authentication role. The name of our Vault policy is internal-app:

$ vault policy write internal-app - <<EOF
path "database/creds/default" {
  capabilities = ["read"]
}
EOF

The next important thing is related to Kubernetes RBAC. Although the Vault server is running in the vault namespace, our sample application will be running in the default namespace. Therefore, the service account used by the application is also in the default namespace. Let’s create a ServiceAccount for the application:

$ kubectl create sa internal-app

Now, we have everything to do the last step in this section. We need to create a Vault role for the Kubernetes authentication method. In this role, we set the name and location of the Kubernetes ServiceAccount and the Vault policy created in the previous step.

$ vault write auth/kubernetes/role/internal-app \
    bound_service_account_names=internal-app \
    bound_service_account_namespaces=default \
    policies=internal-app \
    ttl=24h

After that, we may proceed with the next steps. Let’s enable the Vault database engine.

Enable Vault Database Engine

Just to clarify, we are still inside the vault-0 pod. Let’s enable the Vault database engine.

$ vault secrets enable database

Of course, we need to run a database on Kubernetes. We will use PostgreSQL, since it is supported by Vault. The full deployment manifest is available in my GitHub repository in /kubernetes/k8s/postgresql-deployment.yaml. Here’s just the Deployment object:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:latest
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: POSTGRES_PASSWORD
                  name: postgres-secret
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb
      volumes:
        - name: postgredb
          persistentVolumeClaim:
            claimName: postgres-claim

Let’s apply the whole manifest to deploy Postgres in the default namespace:

$ kubectl apply -f postgresql-deployment.yaml

Following Vault documentation, we first need to configure a plugin for the PostgreSQL database and then provide connection settings and credentials:

$ vault write database/config/postgres \
    plugin_name=postgresql-database-plugin \
    allowed_roles="default" \
    connection_url="postgresql://{{username}}:{{password}}@postgres.default:5432?sslmode=disable" \
    username="postgres" \
    password="admin123"

I have disabled SSL for the connection with Postgres by setting the property sslmode=disable. There is only one role allowed to use the Vault PostgreSQL plugin: default. The name of the role should be the same as the name passed in the allowed_roles field in the previous step. We also have to set a target database name and the SQL statements that create users with privileges. We set the max TTL of the lease to 10 minutes just to present the revocation and renewal features of Spring Cloud Vault. It means that 10 minutes after your application has started, it can no longer authenticate with the database using the original credentials.

$ vault write database/roles/default db_name=postgres \
    creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';GRANT SELECT, UPDATE, INSERT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";GRANT USAGE,  SELECT ON ALL SEQUENCES IN SCHEMA public TO \"{{name}}\";" \
    default_ttl="1m" \
    max_ttl="10m"

And that’s all on the Vault server side. Now, we can test our configuration using the vault CLI as shown below. You can log in to the database using the returned credentials. By default, they are valid for one minute (the default_ttl parameter in the previous command).

$ vault read database/creds/default

We can also verify the connection to the PostgreSQL instance in the Vault UI:

Now, we can generate new credentials just by renewing the Vault lease (vault lease renew LEASE_ID). Fortunately, Spring Cloud Vault does it automatically for our app. Let’s see how it works.

Use Spring Cloud Vault on Kubernetes

For the purpose of this demo, I created a simple Spring Boot application. It exposes REST API and connects to the PostgreSQL database. It uses Spring Data JPA to interact with the database. However, the most important thing here are the following two dependencies:

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-bootstrap</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-vault-config-databases</artifactId>
</dependency>

The first of them enables bootstrap.yml processing on application startup. The second includes the Spring Cloud Vault database engine support.

The only thing we need to do is to provide the right configuration settings. Here’s the minimal set of properties required to make it work without any errors. The following configuration is provided in the bootstrap.yml file:

spring:
  application:
    name: sample-db-vault
  datasource:
    url: jdbc:postgresql://postgres:5432/postgres #(1)
  jpa:
    hibernate:
      ddl-auto: update
  cloud:
    vault:
      config.lifecycle: #(2)
        enabled: true
        min-renewal: 10s
        expiry-threshold: 30s
      kv.enabled: false #(3)
      uri: http://vault.vault:8200 #(4)
      authentication: KUBERNETES #(5)
      postgresql: #(6)
        enabled: true
        role: default
        backend: database
      kubernetes: #(7)
        role: internal-app

Let’s analyze the configuration visible above in the details:

(1) We need to set the database connection URI, but WITHOUT any credentials. Assuming our application uses the standard properties for authentication against the database (spring.datasource.username and spring.datasource.password), we don’t need to do anything else

(2) As you probably remember, the max TTL for the database lease is 10 minutes. We enable lease renewal every 30 seconds, just for demo purposes. You will see that Spring Cloud Vault creates new credentials in PostgreSQL every 30 seconds, and the application still works without any errors

(3) Vault KV is not needed here, since I’m using only the database engine

(4) The application is going to be deployed in the default namespace, while Vault is running in the vault namespace. So, the address of Vault should include the namespace name

(5) (7) Our application uses the Kubernetes authentication method to access Vault. We just need to set the role name, which is internal-app. All other settings can be left with their default values

(6) We also need to enable PostgreSQL database backend support. The name of the backend in Vault is database, and the name of the Vault role used for that engine is default.
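The lifecycle settings from point (2) boil down to simple timing arithmetic: the client renews a lease shortly before it would expire, but never more often than the configured minimum. Here is a rough sketch of that idea (the method name and shape are mine, not the actual Spring Cloud Vault implementation):

```java
// Sketch of the lease-renewal scheduling idea behind
// spring.cloud.vault.config.lifecycle (NOT the actual Spring Cloud Vault code).
public class LeaseRenewalSketch {

    /**
     * Seconds to wait before renewing a lease, given its remaining TTL,
     * the expiry threshold, and the minimum renewal interval (all in seconds).
     */
    public static long secondsUntilRenewal(long leaseTtl, long expiryThreshold, long minRenewal) {
        // Renew 'expiryThreshold' seconds before the lease would expire...
        long delay = leaseTtl - expiryThreshold;
        // ...but never more often than 'minRenewal' allows.
        return Math.max(delay, minRenewal);
    }

    public static void main(String[] args) {
        // With default_ttl=1m and expiry-threshold=30s, renewal happens
        // roughly every 30 seconds.
        System.out.println(secondsUntilRenewal(60, 30, 10)); // 30
        // A very short lease is still renewed no sooner than min-renewal.
        System.out.println(secondsUntilRenewal(15, 30, 10)); // 10
    }
}
```

With the values used in this demo (default_ttl=1m, expiry-threshold=30s), this yields a renewal roughly every 30 seconds, which matches the behavior described above.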

Run Spring Boot application on Kubernetes

The Deployment manifest is rather simple. What is important here is that we need to use the ServiceAccount internal-app, which is used by the Vault Kubernetes authentication method.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app-deployment
spec:
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
      - name: sample-app
        image: piomin/sample-app
        ports:
        - containerPort: 8080
      serviceAccountName: internal-app

Our application requires Java 17. Since I’m using Jib Maven Plugin for building images I also have to override the default base image. Let’s use openjdk:17.0.1-slim-buster.

<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <version>3.1.4</version>
  <configuration>
    <from>
      <image>openjdk:17.0.1-slim-buster</image>
    </from>
  </configuration>
</plugin>

The repository is configured to easily deploy the application with Skaffold. Just go to the /kubernetes/sample-db-vault directory and run the following command in order to build and deploy our sample application on Kubernetes:

$ skaffold dev --port-forward

After that, you can call one of the REST endpoints to test if the application works properly:

$ curl http://localhost:8080/persons

Does everything work fine? In the background, Spring Cloud Vault creates new credentials every 30 seconds. You can easily verify it inside the PostgreSQL container. Just connect to the postgres pod and run the psql process:

$ kubectl exec svc/postgres -i -t -- psql -U postgres

Now you can list users with the \du command. Repeat it several times to see the credentials being regenerated. Of course, the application is able to renew the lease until the max TTL (10 minutes) is exceeded.

The post Vault on Kubernetes with Spring Cloud appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2021/12/30/vault-on-kubernetes-with-spring-cloud/feed/ 8 10399
Development on Kubernetes with Telepresence and Skaffold https://piotrminkowski.com/2021/12/21/development-on-kubernetes-with-telepresence-and-skaffold/ https://piotrminkowski.com/2021/12/21/development-on-kubernetes-with-telepresence-and-skaffold/#respond Tue, 21 Dec 2021 10:41:29 +0000 https://piotrminkowski.com/?p=10351 In this article, you will learn how to use Telepresence and Skaffold to improve development workflow on Kubernetes. In order to simplify the build of our Java applications, we will also use the Jib Maven plugin. All those tools give you great power to speed up your development process. That’s not my first article about […]

The post Development on Kubernetes with Telepresence and Skaffold appeared first on Piotr's TechBlog.

]]>
In this article, you will learn how to use Telepresence and Skaffold to improve the development workflow on Kubernetes. In order to simplify the build of our Java applications, we will also use the Jib Maven plugin. All those tools give you great power to speed up your development process. That’s not my first article about Skaffold. If you are not familiar with this tool, it is worth reading my article about it. Today I’ll focus on Telepresence, which is in fact one of my favorite tools. I hope that after reading this article, you will like it too 🙂

Introduction

What’s Telepresence? It’s a very simple and powerful CLI tool for fast, local development for Kubernetes. Why is it simple? Because you can do almost everything using a single command. Telepresence is a CNCF sandbox project originally created by Ambassador Labs. It lets you run and test microservices locally against a remote Kubernetes cluster. It intercepts remote traffic and sends it to your locally running instance. I won’t focus on the technical aspects. If you want to read more about them, you can refer to the following link.

Firstly, let’s analyze our case. There are three microservices: first-service, caller-service and callme-service. All of them expose a single REST endpoint GET /ping, which returns basic information about each microservice. In order to create applications, I’m using the Spring Boot framework. Our architecture is visible in the picture below. The first-service is calling the endpoint exposed by the caller-service. Then the caller-service is calling the endpoint exposed by the callme-service. Of course, we are going to deploy all the microservices on the remote Kubernetes cluster.

Assuming we have something to do with the caller-service, we are also running it locally. So finally, our goal is to forward the traffic that is sent to the caller-service on the Kubernetes cluster to our local instance. On the other hand, the local instance of the caller-service should call the instance of the callme-service running on the remote cluster. Looks hard? Let’s check it out!

telepresence-kubernetes-arch

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. The code used in this article is available in the dev branch. Then you should just follow my instructions 🙂

Prerequisites

Before we start, we need to install several tools. Of course, we also need to have a running Kubernetes cluster (or e.g. OpenShift). We will use the following CLI tools:

  1. kubectl – to interact with the Kubernetes cluster. It is also used by Skaffold
  2. skaffold – here are the installation instructions. It works perfectly fine on Linux, macOS, and Windows
  3. telepresence – here are the installation instructions. I’m not sure about Windows, since the tool is in developer preview there. However, I’m using it on macOS without any problems
  4. Maven + JDK 11 – of course, we need to build the applications locally before deploying them to Kubernetes.

Build and deploy applications on Kubernetes with Skaffold and Jib

Our applications are as simple as possible. Let’s take a look at the callme-service REST endpoint implementation. It just returns the name of the microservice and its version (v1 everywhere in this article):

@RestController
@RequestMapping("/callme")
public class CallmeController {

   private static final Logger LOGGER = 
       LoggerFactory.getLogger(CallmeController.class);

   @Autowired
   BuildProperties buildProperties;
   @Value("${VERSION}")
   private String version;

   @GetMapping("/ping")
   public String ping() {
      LOGGER.info("Ping: name={}, version={}", 
         buildProperties.getName(), version);
      return "I'm callme-service " + version;
   }
}

The endpoint visible above is called by the caller-service. Like the previous one, it prints the name of the microservice and its version, but it also appends the result received from the callme-service. It calls the callme-service endpoint using the Spring RestTemplate and the name of the Kubernetes Service.

@RestController
@RequestMapping("/caller")
public class CallerController {

   private static final Logger LOGGER = 
      LoggerFactory.getLogger(CallerController.class);

   @Autowired
   BuildProperties buildProperties;
   @Autowired
   RestTemplate restTemplate;
   @Value("${VERSION}")
   private String version;

   @GetMapping("/ping")
   public String ping() {
      LOGGER.info("Ping: name={}, version={}", 
         buildProperties.getName(), version);
      String response = restTemplate
         .getForObject("http://callme-service:8080/callme/ping", String.class);
      LOGGER.info("Calling: response={}", response);
      return "I'm caller-service " + version + ". Calling... " + response;
   }
}

Finally, let’s take a look at the implementation of the first-service @RestController. It calls the caller-service endpoint visible above.

@RestController
@RequestMapping("/first")
public class FirstController {

   private static final Logger LOGGER = 
      LoggerFactory.getLogger(FirstController.class);

   @Autowired
   BuildProperties buildProperties;
   @Autowired
   RestTemplate restTemplate;
   @Value("${VERSION}")
   private String version;

   @GetMapping("/ping")
   public String ping() {
      LOGGER.info("Ping: name={}, version={}", 
         buildProperties.getName(), version);
      String response = restTemplate
         .getForObject("http://caller-service:8080/caller/ping", String.class);
      LOGGER.info("Calling: response={}", response);
      return "I'm first-service " + version + ". Calling... " + response;
   }

}

Here’s a definition of the Kubernetes Service for e.g. callme-service:

apiVersion: v1
kind: Service
metadata:
  name: callme-service
  labels:
    app: callme-service
spec:
  type: ClusterIP
  ports:
  - port: 8080
    name: http
  selector:
    app: callme-service

You can see Kubernetes YAML manifests inside the k8s directory for every single microservice. Let’s take a look at the example Deployment manifest for the callme-service.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: callme-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: callme-service
      version: v1
  template:
    metadata:
      labels:
        app: callme-service
        version: v1
    spec:
      containers:
        - name: callme-service
          image: piomin/callme-service
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          env:
            - name: VERSION
              value: "v1"

We can deploy each microservice independently, or all of them at once. Here’s the global Skaffold configuration for the whole project. You can find it in the root directory. As you see, it uses Jib as the build tool and looks for manifests inside the k8s directory of every single module.

apiVersion: skaffold/v2beta22
kind: Config
metadata:
  name: simple-istio-services
build:
  artifacts:
    - image: piomin/first-service
      jib:
        project: first-service
    - image: piomin/caller-service
      jib:
        project: caller-service
    - image: piomin/callme-service
      jib:
        project: callme-service
  tagPolicy:
    gitCommit: {}
deploy:
  kubectl:
    manifests:
      - '*/k8s/*.yaml'

Each Maven module has to include Jib Maven plugin.

<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <version>3.1.1</version>
</plugin>

Finally, we can deploy all our microservices on Kubernetes with Skaffold. Jib works in dockerless mode, so you don’t have to run Docker on your machine. By default, it uses adoptopenjdk:11-jre as the base image, following the Java version defined in the Maven pom.xml. If you want to observe logs after running the applications on Kubernetes, just activate the --tail option.

$ skaffold run --tail

Let’s just display a list of running pods to verify if the deployment was successful:

$ kubectl get pod
NAME                              READY   STATUS    RESTARTS   AGE
caller-service-688bd76c98-2m4gp   1/1     Running   0          3m1s
callme-service-75c7cf5bf-rfx69    1/1     Running   0          3m
first-service-7698465bcb-rvf77    1/1     Running   0          3m

Using Telepresence with Kubernetes

Let the party begin! After running all the microservices on Kubernetes, we will connect Telepresence to our cluster. The following command runs the Telepresence daemon on your machine and connects it to the Kubernetes cluster (from the current kubecontext).

$ telepresence connect

If you see a similar result, it means everything went well.

telepresence-kubernetes-connect

Telepresence has already connected to your Kubernetes cluster, but it is still not intercepting any traffic from the pods. You can verify it with the following command:

$ telepresence list
caller-service: ready to intercept (traffic-agent not yet installed)
callme-service: ready to intercept (traffic-agent not yet installed)
first-service : ready to intercept (traffic-agent not yet installed)

Ok, so now let’s intercept the traffic from the caller-service.

$ telepresence intercept caller-service --port 8080:8080

Here’s my result after running the command visible above.

telepresence-kubernetes-intercept

Now, the only thing we need to do is to run the caller-service on the local machine. By default, it listens on port 8080:

$ mvn clean spring-boot:run

We can do it even smarter with a single Telepresence command instead of running them separately:

$ telepresence intercept caller-service --port 8080:8080 -- mvn clean spring-boot:run

Before we send a test request let’s analyze what happened. After running the telepresence intercept command Telepresence injects a sidecar container into the application pod. The name of this container is traffic-agent. It is responsible for intercepting the traffic that comes to the caller-service.

$ kubectl get pod caller-service-7577b9f6fd-ww7nv \
  -o jsonpath='{.spec.containers[*].name}'
caller-service traffic-agent

Ok, now let’s just call the first-service running on the remote Kubernetes cluster. I deployed it on OpenShift, so I can easily call it externally using the Route object. If you run it on plain Kubernetes, you can create an Ingress or just run the kubectl port-forward command. Alternatively, you may enable port forwarding on the skaffold run command (the --port-forward option). Anyway, let’s call the first-service /ping endpoint. Here’s my result.

Here are the application logs from the Kubernetes cluster printed by the skaffold run command. As you see it just prints the logs from callme-service and first-service:

Now, let’s take a look at the logs from the local instance of the caller-service. Telepresence intercepts the traffic and sends it to the local instance of the application. Then this instance calls the callme-service on the remote cluster 🙂

Cleanup Kubernetes environment after using Telepresence

To clean up the environment, just run the following command. It removes the sidecar container from your application pod. Note that if you started your Spring Boot application within the telepresence intercept command, you just need to kill the local process with CTRL+C (the traffic-agent container, however, stays inside the pod).

$ telepresence uninstall --agent caller-service

After that, you can call the first-service once again. Now, no requests leave the cluster anymore.

In order to shutdown the Telepresence daemon and disconnect from the Kubernetes cluster just run the following command:

$ telepresence quit

Final Thoughts

You can also easily debug your microservices locally, just by running the same telepresence intercept command and your application in debug mode. What’s important in this scenario: Telepresence does not force you to use any particular tools or IDE. You can do everything the same way as you would normally run or debug the application locally. I hope you will like this tool as much as I do 🙂

The post Development on Kubernetes with Telepresence and Skaffold appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2021/12/21/development-on-kubernetes-with-telepresence-and-skaffold/feed/ 0 10351
Spring Boot on Knative https://piotrminkowski.com/2021/03/01/spring-boot-on-knative/ https://piotrminkowski.com/2021/03/01/spring-boot-on-knative/#respond Mon, 01 Mar 2021 11:28:54 +0000 https://piotrminkowski.com/?p=9506 In this article, I’ll explain what is Knative and how to use it with Spring Boot. Although Knative is a serverless platform, we can run there any type of application (not just function). Therefore, we are going to run there a standard Spring Boot application that exposes REST API and connects to a database. Knative […]

The post Spring Boot on Knative appeared first on Piotr's TechBlog.

]]>
In this article, I’ll explain what Knative is and how to use it with Spring Boot. Although Knative is a serverless platform, we can run any type of application on it (not just functions). Therefore, we are going to run a standard Spring Boot application there that exposes a REST API and connects to a database.

Knative introduces a new way of managing your applications on Kubernetes. It extends Kubernetes with some new key features. One of the most significant of them is “scale to zero”. If Knative detects that a service is not used, it scales the number of running instances down to zero. It also provides built-in autoscaling based on concurrency or the number of requests per second. We may also take advantage of revision tracking, which is responsible for switching from one version of your application to another. With Knative you just have to focus on your core logic.

All the features I described above are provided by the component called “Knative Serving”. There are also two other components: “Eventing” and “Build”. The Build component is deprecated and has been replaced by Tekton. The Eventing component deserves attention as well; however, I’ll discuss it in more detail in a separate article.

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. Then you should just follow my instructions 🙂

I used the same application as in some of my previous articles about Spring Boot and Kubernetes. I just wanted to emphasize that you don’t have to change anything in the source code to run it on Knative as well. The only required change is in the YAML manifest.

Since Knative provides built-in autoscaling you may want to compare it with the horizontal pod autoscaler (HPA) on Kubernetes. To do that you may read the article Spring Boot Autoscaling on Kubernetes. If you are interested in how to easily deploy applications on Kubernetes read the following article about the Okteto platform.

Install Knative on Kubernetes

Of course, before we start Spring Boot development, we need to install Knative on Kubernetes. We can do it using the kubectl CLI or an operator. You can find the detailed installation instructions here. I decided to try it on OpenShift, which is probably the fastest way: I could do it with one click using the OpenShift Serverless Operator. No matter which type of installation you choose, the further steps apply everywhere.

Using Knative CLI

This step is optional. You can deploy and manage applications on Knative with the CLI. To download the CLI, go to the site https://knative.dev/docs/install/install-kn/. Then you can deploy the application using a Docker image.

$ kn service create sample-spring-boot-on-kubernetes \
   --image piomin/sample-spring-boot-on-kubernetes:latest

We can also verify a list of running services with the following command.

$ kn service list

For more advanced deployments it is more convenient to use a YAML manifest. We will build the application from the source code with Skaffold and Jib. But first, let’s take a brief look at our Spring Boot application.

Spring Boot application for Knative

As I mentioned before, we are going to create a typical Spring Boot REST-based application that connects to a Mongo database. The database is deployed on Kubernetes. Our model class uses the person collection in MongoDB. Let’s take a look at it.

@Document(collection = "person")
@Getter
@Setter
@AllArgsConstructor
@NoArgsConstructor
public class Person {

   @Id
   private String id;
   private String firstName;
   private String lastName;
   private int age;
   private Gender gender;
}

We use Spring Data MongoDB to integrate our application with the database. In order to simplify this integration we take advantage of its “repositories” feature.

public interface PersonRepository extends CrudRepository<Person, String> {
   Set<Person> findByFirstNameAndLastName(String firstName, String lastName);
   Set<Person> findByAge(int age);
   Set<Person> findByAgeGreaterThan(int age);
}
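Spring Data generates the query implementations from the method names above. As an illustration only (this is not how Spring Data works internally), the logic of findByAgeGreaterThan written out with plain Java streams would look like this:

```java
import java.util.Collection;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Plain-Java equivalent of the derived query findByAgeGreaterThan
// (illustration only; Spring Data translates the method name into a MongoDB query).
public class DerivedQuerySketch {

    static final class Person {
        final String firstName;
        final String lastName;
        final int age;

        Person(String firstName, String lastName, int age) {
            this.firstName = firstName;
            this.lastName = lastName;
            this.age = age;
        }
    }

    static Set<Person> findByAgeGreaterThan(Collection<Person> all, int age) {
        // "AgeGreaterThan" -> keep persons whose age is strictly greater
        return all.stream()
                .filter(p -> p.age > age)
                .collect(Collectors.toSet());
    }

    public static void main(String[] args) {
        List<Person> people = List.of(
                new Person("John", "Smith", 25),
                new Person("Anna", "Kowalska", 31));
        // Only Anna is strictly older than 25.
        System.out.println(findByAgeGreaterThan(people, 25).size()); // 1
    }
}
```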

Our application exposes several REST endpoints for adding, searching and updating data. Here’s the controller class implementation.

@RestController
@RequestMapping("/persons")
public class PersonController {

   private PersonRepository repository;
   private PersonService service;

   PersonController(PersonRepository repository, PersonService service) {
      this.repository = repository;
      this.service = service;
   }

   @PostMapping
   public Person add(@RequestBody Person person) {
      return repository.save(person);
   }

   @PutMapping
   public Person update(@RequestBody Person person) {
      return repository.save(person);
   }

   @DeleteMapping("/{id}")
   public void delete(@PathVariable("id") String id) {
      repository.deleteById(id);
   }

   @GetMapping
   public Iterable<Person> findAll() {
      return repository.findAll();
   }

   @GetMapping("/{id}")
   public Optional<Person> findById(@PathVariable("id") String id) {
      return repository.findById(id);
   }

   @GetMapping("/first-name/{firstName}/last-name/{lastName}")
   public Set<Person> findByFirstNameAndLastName(@PathVariable("firstName") String firstName,
			@PathVariable("lastName") String lastName) {
      return repository.findByFirstNameAndLastName(firstName, lastName);
   }

   @GetMapping("/age-greater-than/{age}")
   public Set<Person> findByAgeGreaterThan(@PathVariable("age") int age) {
      return repository.findByAgeGreaterThan(age);
   }

   @GetMapping("/age/{age}")
   public Set<Person> findByAge(@PathVariable("age") int age) {
      return repository.findByAge(age);
   }

}

We inject database connection settings and credentials using environment variables. Our application exposes endpoints for liveness and readiness health checks. The readiness endpoint verifies a connection with the Mongo database. Of course, we use the built-in feature from Spring Boot Actuator for that.

spring:
  application:
    name: sample-spring-boot-on-kubernetes
  data:
    mongodb:
      uri: mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@mongodb/${MONGO_DATABASE}

management:
  endpoints:
    web:
      exposure:
        include: "*"
  endpoint.health:
      show-details: always
      group:
        readiness:
          include: mongo
      probes:
        enabled: true

Defining Knative Service in YAML

Firstly, we need to define a YAML manifest with a Knative Service definition. It sets an autoscaling strategy using the Knative Pod Autoscaler (KPA). In order to do that, we have to add the annotation autoscaling.knative.dev/target with the number of simultaneous requests that can be processed by each instance of the application. By default, it is 100. We decrease that limit to 20 requests. Of course, we also need to set liveness and readiness probes for the container, and we refer to a Secret to inject the MongoDB settings.

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: sample-spring-boot-on-kubernetes
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/target: "20"
    spec:
      containers:
        - image: piomin/sample-spring-boot-on-kubernetes
          livenessProbe:
            httpGet:
              path: /actuator/health/liveness
          readinessProbe:
            httpGet:
              path: /actuator/health/readiness
          env:
            - name: MONGO_DATABASE
              valueFrom:
                secretKeyRef:
                  name: mongodb
                  key: database-name
            - name: MONGO_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mongodb
                  key: database-user
            - name: MONGO_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongodb
                  key: database-password

Configure Skaffold and Jib for Knative deployment

We will use Skaffold to automate the deployment of our application on Knative. Skaffold is a command-line tool that allows running an application on Kubernetes using a single command. You may read more about it in the article Local Java Development on Kubernetes. It can be easily integrated with the Jib Maven plugin. We just need to set jib as the default option in the build section of the Skaffold configuration. We can also define a list of YAML manifests applied during the deploy phase. The skaffold.yaml file should be placed in the project root directory. Here’s the current Skaffold configuration. As you see, it applies the manifest with the Knative Service definition.

apiVersion: skaffold/v2beta5
kind: Config
metadata:
  name: sample-spring-boot-on-kubernetes
build:
  artifacts:
    - image: piomin/sample-spring-boot-on-kubernetes
      jib:
        args:
          - -Pjib
  tagPolicy:
    gitCommit: {}
deploy:
  kubectl:
    manifests:
      - k8s/mongodb-deployment.yaml
      - k8s/knative-service.yaml

Skaffold activates the jib profile during the build. Within this profile, we place the jib-maven-plugin. Jib is useful for building images in dockerless mode.

<profile>
   <id>jib</id>
   <activation>
      <activeByDefault>false</activeByDefault>
   </activation>
   <build>
      <plugins>
         <plugin>
            <groupId>com.google.cloud.tools</groupId>
            <artifactId>jib-maven-plugin</artifactId>
            <version>2.8.0</version>
         </plugin>
      </plugins>
   </build>
</profile>

Finally, all we need to do is run the following command. It builds our application, creates and pushes a Docker image, and runs the application on Knative using knative-service.yaml.

$ skaffold run

Verify Spring Boot deployment on Knative

Now, we can verify our deployment on Knative. To do that let’s execute the command kn service list as shown below. We have a single Knative Service with the name sample-spring-boot-on-kubernetes.

spring-boot-knative-services

Then, let’s imagine we deploy three versions (revisions) of our application. To do that, let’s just make some changes in the source code and redeploy our service using skaffold run. Each redeployment creates a new revision of our Knative Service. However, the whole traffic is forwarded to the latest revision (the one with the -vlskg suffix).

spring-boot-knative-revisions

With Knative we can easily split traffic between multiple revisions of a single service. To do that, we need to add a traffic section to the Knative Service YAML configuration. We define a percentage of the whole traffic per revision, as shown below.

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: sample-spring-boot-on-kubernetes
spec:
  template:
    ...
  traffic:
    - revisionName: sample-spring-boot-on-kubernetes-vlskg
      percent: 60
    - revisionName: sample-spring-boot-on-kubernetes-t9zrd
      percent: 20
    - revisionName: sample-spring-boot-on-kubernetes-9xhbw
      percent: 20
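Conceptually, the traffic section above is weighted routing: each request is assigned to a revision according to the configured percentages. A minimal sketch of how such a split can be resolved (revision names copied from the example; the routing code itself is hypothetical — Knative does this in its networking layer):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Weighted revision selection mimicking the 60/20/20 split above
// (illustration only, not Knative's actual implementation).
public class TrafficSplitSketch {

    static final Map<String, Integer> WEIGHTS = new LinkedHashMap<>();
    static {
        WEIGHTS.put("sample-spring-boot-on-kubernetes-vlskg", 60);
        WEIGHTS.put("sample-spring-boot-on-kubernetes-t9zrd", 20);
        WEIGHTS.put("sample-spring-boot-on-kubernetes-9xhbw", 20);
    }

    /** Maps a number in [0, 100) to a revision according to cumulative weights. */
    static String pickRevision(int roll) {
        int cumulative = 0;
        for (Map.Entry<String, Integer> e : WEIGHTS.entrySet()) {
            cumulative += e.getValue();
            if (roll < cumulative) {
                return e.getKey();
            }
        }
        throw new IllegalArgumentException("roll must be in [0, 100)");
    }

    public static void main(String[] args) {
        System.out.println(pickRevision(0));  // the first 60% goes to ...-vlskg
        System.out.println(pickRevision(75)); // ...-t9zrd
        System.out.println(pickRevision(99)); // ...-9xhbw
    }
}
```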

Let’s take a look at the graphical representation of our current architecture. 60% of the traffic is forwarded to the latest revision, while each of the two previous revisions receives 20% of the traffic.

spring-boot-knative-openshift

Autoscaling and scale to zero

By default, Knative supports autoscaling. We may choose between two types of targets: concurrency and requests-per-second (RPS). The default target type is concurrency. As you probably remember, I have overridden the default value with the autoscaling.knative.dev/target annotation set to 20. So, our goal now is to verify autoscaling. To do that, we need to send many simultaneous requests to the application. Of course, the incoming traffic is distributed across the three different revisions of the Knative Service.
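Concurrency-based scaling can be reduced to simple arithmetic: the desired number of pods is the observed concurrency divided by the per-pod target, rounded up, and zero when there is no traffic at all. Here is a simplified sketch of that idea (the real Knative Pod Autoscaler adds averaging windows and a panic mode, so actual pod counts can differ):

```java
// Simplified model of concurrency-based autoscaling
// (NOT the actual Knative Pod Autoscaler algorithm).
public class KpaSketch {

    /** Desired replicas for a given observed concurrency and per-pod target. */
    public static int desiredReplicas(double observedConcurrency, double targetPerPod) {
        if (observedConcurrency <= 0) {
            return 0; // "scale to zero" when no requests are in flight
        }
        return (int) Math.ceil(observedConcurrency / targetPerPod);
    }

    public static void main(String[] args) {
        System.out.println(desiredReplicas(0, 20));    // 0 -> scaled to zero
        System.out.println(desiredReplicas(90, 20));   // 5
        System.out.println(desiredReplicas(100, 100)); // 1 (default target is 100)
    }
}
```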

Fortunately, we can easily simulate heavy traffic with the siege tool. We will call the GET /persons endpoint that returns all available persons. We send 150 concurrent requests with the command visible below.

$ siege http://sample-spring-boot-on-kubernetes-pminkows-serverless.apps.cluster-7260.7260.sandbox1734.opentlc.com/persons \
   -i -v -r 1000  -c 150 --no-parser

Under the hood, Knative still creates a Deployment and scales the number of running pods up or down. So, if you have three revisions of a single Service, there are three different Deployments created. Finally, I have 10 running pods for the latest revision, which receives 60% of the traffic, and 3 and 2 running pods for the previous revisions.

What happens if there is no traffic coming to the service? Knative scales the number of running pods down to zero for all the Deployments.

Conclusion

In this article, you learned how to deploy a Spring Boot application as a Knative Service using Skaffold and Jib. I explained with examples how to create a new revision of the Service and how to distribute traffic across revisions. We also tested autoscaling based on concurrent requests, and scale to zero in case of no incoming traffic. You may expect more articles about Knative soon! Not only with Spring Boot 🙂

The post Spring Boot on Knative appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2021/03/01/spring-boot-on-knative/feed/ 0 9506
Continuous Integration with Jenkins on Kubernetes https://piotrminkowski.com/2020/11/10/continuous-integration-with-jenkins-on-kubernetes/ https://piotrminkowski.com/2020/11/10/continuous-integration-with-jenkins-on-kubernetes/#comments Tue, 10 Nov 2020 14:26:32 +0000 https://piotrminkowski.com/?p=9091 Although Jenkins is a mature solution, it still can be the first choice for building CI on Kubernetes. In this article, I’ll show how to install Jenkins on Kubernetes, and use it for building a Java application with Maven. You will learn how to use and customize the Helm for this installation. We will implement […]

The post Continuous Integration with Jenkins on Kubernetes appeared first on Piotr's TechBlog.

]]>
Although Jenkins is a mature solution, it can still be the first choice for building CI on Kubernetes. In this article, I’ll show how to install Jenkins on Kubernetes and use it for building a Java application with Maven. You will learn how to use and customize the Helm chart for this installation. We will implement the typical steps for building and deploying Java applications on Kubernetes.

Here’s the architecture of our solution.

Clone the source code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my repository sample-spring-boot-on-kubernetes. Then you should just follow my instructions 🙂

Deploy Jenkins on Kubernetes with Helm

We may install the Jenkins server on Kubernetes using the Helm package manager. The official Helm chart spawns agents on Kubernetes and utilizes the Jenkins Kubernetes plugin. It comes with a default configuration. However, we may override all the properties using the JCasC (Jenkins Configuration as Code) plugin. It is worth reading more about this plugin before continuing. You can find out more about it here.

Firstly, we will add a new Helm repository and update it.

$ helm repo add jenkins https://charts.jenkins.io
$ helm repo update

Then, we may install Jenkins by executing the following command. The Jenkins instance runs in a dedicated namespace, so before installing it we need to create one with the kubectl create ns jenkins command. In order to include additional configuration, we create a YAML file and pass it with the -f option.

$ helm install -f k8s/jenkins-helm-config.yaml jenkins jenkins/jenkins -n jenkins

Customize Jenkins

We need to configure several things before creating a build pipeline. Let’s consider the following list of tasks:

  1. Customize the default agent – we need to mount a volume into the default agent pod to store workspace files and share them with the other agents.
  2. Create the maven agent – we need to define a new agent able to perform a Maven build. It should use the same persistent volume as the default agent. It should also use JDK 11, because our application is compiled with that version of Java. Finally, we will increase the default CPU and memory limits for the agent pods.
  3. Create GitHub credentials – the Jenkins pipeline needs to be able to clone the source code from GitHub.
  4. Install the Kubernetes Continuous Deploy plugin – we will use this plugin to deploy resource configurations to a Kubernetes cluster.
  5. Create kubeconfig credentials – we have to provide a configuration of our Kubernetes context.

Let's take a look at the whole Jenkins configuration file. It contains the agent and additionalAgents sections, and defines a JCasC script with the credentials definition.

agent:
  podName: default
  customJenkinsLabels: default
  volumes:
    - type: PVC
      claimName: jenkins-agent
      mountPath: /home/jenkins
      readOnly: false

additionalAgents:
  maven:
    podName: maven
    customJenkinsLabels: maven
    image: jenkins/jnlp-agent-maven
    tag: jdk11
    volumes:
      - type: PVC
        claimName: jenkins-agent
        mountPath: /home/jenkins
        readOnly: false
    resources:
      limits:
        cpu: "1"
        memory: "2048Mi"

master:
  JCasC:
    configScripts:
      creds: |
        credentials:
          system:
            domainCredentials:
              - domain:
                  name: "github.com"
                  description: "GitHub domain"
                  specifications:
                    - hostnameSpecification:
                        includes: "github.com"
                credentials:
                  - usernamePassword:
                      scope: GLOBAL
                      id: github_credentials
                      username: piomin
                      password: ${GITHUB_PASSWORD}
              - credentials:
                  - kubeconfig:
                      id: "docker-desktop"
                      kubeconfigSource:
                        directEntry:
                          content: |-
                            apiVersion: v1
                            kind: Config
                            preferences: {}
                            clusters:
                            - cluster:
                                certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01URXdPVEE1TkRjd04xb1hEVE13TVRFd056QTVORGN3TjFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTStrCnhKd3l6UVEydXNvRHh5RmwxREdwZWZQTVA0RGFVaVJsK01SQ1p1S0NFWUFkL0ZQOWtFS0RlVXMydmVURi9jMXYKUjZpTDlsMVMvdmN6REoyRXRuZUd0TXVPdWFXNnFCWkN5OFJ2NmFReHd0UEpnWVZGTHBNM2dXYmhqTHp3RXplOApEQlhxekZDZkNobXl3SkdORVdWV0s4VnBuSlpMbjRVbUZKcE5RQndsSXZwRC90UDJVUVRiRGNHYURqUE5vY2c0Cms1SmNOc3h3SDV0NkhIN0JZMW9jTEFLUUhsZ2V4V2ZObWdRRkM2UUcrRVNsWkpDVEtNdVVuM2JsRWVlYytmUWYKaVk3YmdOb3BjRThrS0lUY2pzMG95dGVyNmxuY2ZLTVBqSnc2RTNXMmpXRThKU2Z2WDE2dGVhZUZISDEyTmRqWgpWTER2ZWc3eVBsTlRmRVJld25FQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZMWjUzVEhBSXp0bHljV0NrS1hhY2l4K0Y5a1FNQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFBZWllMTRoSlZkTHF3bUY0SGVPS0ZhNXVUYUt6aXRUSElJNHJhU3cxTUppZElQTmZERwprRk5KeXM1M2I4MHMveWFXQ3BPbXdCK1dEak9hWmREQjFxcXVxU1FxeGhkNWMxU2FBZ1VXTGp5OXdLa1dPMzBTCjB3RTRlVkY3Q1c5VGpSMEoyVXV6UEVXdFBKRWF4U2xKMGhuZlQyeFYvb0N5OE9kMm9EZjZkSFRvbE5UTUEyalcKZjRZdXl3U1Z5a2RNaXZYMU5xZzdmK3RrcEVwb25PdkQ4ZmFEL2dXZmpaWHNFdHo4NXRNcTVLd2NQNUh2ZDJ0ZgoyKzBSbEtFT0pyY1dyL1lEc2w3dWdDdkFJTVk4WGdJL1E5dTZZTjAzTngzWXdSS2UrMElpSzcyOHVuNVJaVEVXCmNZNHc0YkpudlN6WWpKeUJIaHNiQVNTNzN6NndXVEo4REhKSwotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
                                server: https://kubernetes.default
                              name: docker-desktop
                            contexts:
                            - context:
                                cluster: docker-desktop
                                user: docker-desktop
                              name: docker-desktop
                            current-context: docker-desktop
                            users:
                            - name: docker-desktop
                              user:
                                client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURGVENDQWYyZ0F3SUJBZ0lJRnh2QzMyK2tPMEl3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TURFeE1Ea3dPVFEzTURkYUZ3MHlNVEV4TURrd09UUTNNekZhTURZeApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sc3dHUVlEVlFRREV4SmtiMk5yWlhJdFptOXlMV1JsCmMydDBiM0F3Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRQ3M0TXdUU3ByMkRoOTMKTlpERldsNWQyaWgwbllBdTJmTk1RYjZ2ZHR5RUVpTUVpNk5BM05qRGM4OWl5WUhOU2J4YmVNNlNUMzRlTFIwaQpXbHJJSlhhVjNBSXhnbFo4SkdqczVUSHRlM1FjNXZVSkJJWXhndFJFTFlJMGlJekpZdEhoU1NwMFU0eWNjdzl5CnVGSm1YTHVBRVdXR0tTcitVd2Y3RWtuWmJoaFRNQWI0RUF1NlR6dkpyRHhpTDAzU0UrSWhJMTJDV1Y3cVRqZ1gKdGI1OXdKcWkwK3ZJSDBSc3dxOUpnemtQTUhLNkFBZkgxbmFmZ3VCQjM2VEppcUR6YWFxV2VZTmludlIrSkVHMAptakV3NWlFN3JHdUgrZVBxSklvdTJlc1YvN1hyYWx2UEl2Zng2ajFvRWI4NWtna2RuV0JiQlNmTmJCdnhHQU1uCmdnLzdzNHdoQWdNQkFBR2pTREJHTUE0R0ExVWREd0VCL3dRRUF3SUZvREFUQmdOVkhTVUVEREFLQmdnckJnRUYKQlFjREFqQWZCZ05WSFNNRUdEQVdnQlMyZWQweHdDTTdaY25GZ3BDbDJuSXNmaGZaRURBTkJna3Foa2lHOXcwQgpBUXNGQUFPQ0FRRUFpbUg1c1JqcjB6WjlDRkU2SVVwNVRwV2pBUXhma29oQkpPOUZmRGE3N2kvR1NsYm1jcXFrCldiWUVYRkl3MU9EbFVjUy9QMXZ5d3ZwV0Y0VXNXTGhtYkF5ZkZGbXZRWFNEZHhYbTlkamI2OEVtRWFPSlI1VTYKOHJOTkR0TUVEY25sbFd2Qk1CRXBNbkNtcm9KcXo3ZzVzeDFQSmhwcDBUdUZDQTIwT2FXb3drTUNNUXRIZlhLQgpVUDA2eGxRU2o1SGNOS1BSQWFyQzBtSzZPVUhybExBcUIvOCtDQlowVUY2MXhTTGN1WFJvYU52S1ZDWHZnQy9kCkQ4ckxuWXFmbWl6WHMvcHJ3dEhsaVFBR2lmemU1MmttbTkyR2RrS2V1SmFRbmM5RWwrd2RZaUVBSHVKU1YvK04Kc2VRelpTa0ZmT2ozbHUxdWtoSDg4dGcxUUp2TkpuM1FhQT09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
                                client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBck9ETUUwcWE5ZzRmZHpXUXhWcGVYZG9vZEoyQUx0bnpURUcrcjNiY2hCSWpCSXVqClFOell3M1BQWXNtQnpVbThXM2pPa2s5K0hpMGRJbHBheUNWMmxkd0NNWUpXZkNSbzdPVXg3WHQwSE9iMUNRU0cKTVlMVVJDMkNOSWlNeVdMUjRVa3FkRk9NbkhNUGNyaFNabHk3Z0JGbGhpa3EvbE1IK3hKSjJXNFlVekFHK0JBTAp1azg3eWF3OFlpOU4waFBpSVNOZGdsbGU2azQ0RjdXK2ZjQ2FvdFByeUI5RWJNS3ZTWU01RHpCeXVnQUh4OVoyCm40TGdRZCtreVlxZzgybXFsbm1EWXA3MGZpUkJ0Sm94TU9ZaE82eHJoL25qNmlTS0x0bnJGZisxNjJwYnp5TDMKOGVvOWFCRy9PWklJSFoxZ1d3VW56V3diOFJnREo0SVArN09NSVFJREFRQUJBb0lCQVFDWEZZTGtYVEFlVit0aAo2RnRVVG96b0lxOTJjdXRDaHRHZFZGdk14dWtqTnlLSloydk9WUFBQcE5lYXN4YVFqWjlpcGFxS3JaUS8xUmVBCkhVejNXOTVPUzg5UzYyQ2Y3OFlQT3FLdXRGU2VxYTErS3drSUhobGFXQmRSeUFDYVE1VysrSTEweWt1NXNzak8KYm8zOHpaQkQ5WEF2bHF6dlJTdFZYZjlTV1doQzBlWnRKTm84QU4yZnpkdkRjUUgwOVRsejh1S05EaUNra2RYQQpHTTdZTUdoQktYWGd6YlcxSUVMejRlRUpDZDh0dklReitwcWtxRktIcHRjNnVJY1hLQjFxUGVGRDRSMm9iNUlNCnl5MUpBWlZyR0JHaUk5d1p5OFU1a253UW93emwwUTEwZXlRdUkwTG42SWthZG5SQktMRHcrczRGaE1UQVViOWYKT1NBR3JaVnRBb0dCQU9RTDJzSEN3T25KOW0xYmRiSlVLeTFsRHRsTmV4aDRKOGNERVZKR3NyRVNndlgrSi9ZZQpXb0NHZXc3cGNXakFWaWJhOUMzSFBSbEtOV2hXOExOVlpvUy9jQUxCa1lpSUZNdDlnT1NpZmtCOFhmeVJQT3hJCmNIc2ZjOXZ2OEFJcmxZVVpCNjI1ak8rUFZTOXZLOXZXUGtka3d0MlFSUHlqYlEwVS9mZmdvUWVIQW9HQkFNSVIKd0lqL3RVbmJTeTZzL1JXSlFiVmxPb1VPVjFpb0d4WmNOQjBkaktBV3YwUksvVHdldFBRSXh2dmd1d2RaVFdiTApSTGk5M3RPY3U0UXhNOFpkdGZhTnh5dWQvM2xXSHhPYzRZa0EwUHdSTzY4MjNMcElWNGxrc0tuNUN0aC80QTFJCmw3czV0bHVEbkI3dFdYTFM4MHcyYkE4YWtVZXlBbkZmWXpTTUR1a1hBb0dBSkRFaGNialg1d0t2Z21HT2gxUEcKV25qOFowNWRwOStCNkpxN0NBVENYVW5qME9pYUxQeGFQcVdaS0IreWFQNkZiYnM0SDMvTVdaUW1iNzNFaTZHVgpHS0pOUTVLMjV5VTVyNlhtYStMQ0NMZjBMcDVhUGVHdFFFMFlsU0k2UkEzb3Qrdm1CUk02bzlacW5aR1dNMWlJCkg4cUZCcWJiM0FDUDBSQ3cwY01ycTBjQ2dZRUFvMWM5cmhGTERMYStPTEx3OE1kdHZyZE00ZUNJTTk2SnJmQTkKREtScVQvUFZXQzJscG94UjBYUHh4dDRIak0vbERiZllSNFhIbm1RMGo3YTUxU1BhbTRJSk9QVHFxYjJLdW44NApkSTl6VmpWSy90WTJRYlBSdVpvOTkxSGRod3RhRU5RZ29UeVo5N3gyRXJIQ3I1cE5uTC9SZzRUZzhtOHBEek14CjFIQnR2RkVDZ1lFQTF5aHNPUDBRb3F2SHRNOUFuWVlua2JzQU12L2dqT3FrWUk5cjQ2c1V3Mnc3WHRJb1NTYlAKU0hmbGRxN0IxVXJoTmRJMFBXWXRWZ3kyV1NrN0FaeG8vUWtLZGtPbTAxS0pCY2xteW9JZDE0a0xCVkZxbUV6Rgp1c2l4MmpwdTVOTWhjUWo4aFY2Sk42aXdraHJkYjByZVpuMGo4MG1ZRE96d3hjMmpvTmxSWjN3PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
                      scope: GLOBAL

Before installing Jenkins we need to create the PersistentVolumeClaim object. This volume is used by the agents to store workspace files. Note that the hostpath storage class used below is provided by Docker Desktop; adjust it for your cluster if needed.

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: jenkins-agent
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  storageClassName: hostpath

Finally, let’s create it inside the jenkins namespace.

$ kubectl create -f k8s/jenkins-agent-pvc.yaml -n jenkins

Explore Jenkins on Kubernetes

After starting the Jenkins instance, we may log in to its web management console. The default root username is admin. In order to obtain the password, we should execute the following command.

$ kubectl get secret --namespace jenkins jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode
YdeQuJZHa1
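As a side note on why the pipeline above ends with base64 --decode: Kubernetes stores Secret values base64-encoded, so the jsonpath output must be decoded before use. A minimal self-contained sketch, using the sample password shown above (not a real secret):

```shell
# Kubernetes keeps Secret data base64-encoded; kubectl returns the raw
# encoded value, which we decode to recover the plaintext admin password.
# 'WWRlUXVKWkhhMQ==' is simply the sample value above, encoded.
encoded='WWRlUXVKWkhhMQ=='
echo "$encoded" | base64 --decode   # prints: YdeQuJZHa1
```

The same decoding applies to any key under .data in a Secret manifest.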

The Jenkins instance is available on port 8080 inside the Kubernetes cluster. Let's expose it on a local port with the kubectl port-forward command. After that, we can access it at http://localhost:8080.

$ kubectl port-forward service/jenkins 8080:8080 -n jenkins

Let's log in to the Jenkins console. After that, we can verify the correctness of our installation. Firstly, we need to navigate to "Manage Jenkins", and then to the "Manage Credentials" section. As you can see below, there are two credentials: github_credentials and docker-desktop.

jenkins-on-kubernetes-credentials

Then, let's move back to "Manage Jenkins" and go to the "Manage Nodes and Clouds" section. In the "Configure Clouds" tab, there is the Kubernetes configuration shown below. It contains two pod templates: default and maven.

jenkins-on-kubernetes-cloud

Explore a sample application

The sample application is built on top of Spring Boot. We use Maven for building it, and the Jib plugin for creating a Docker image. Thanks to that, we won't have to install any other tools to build Docker images with Jenkins.

<properties>
   <java.version>11</java.version>
</properties>
<build>
   <plugins>
      <plugin>
         <groupId>org.springframework.boot</groupId>
         <artifactId>spring-boot-maven-plugin</artifactId>
         <executions>
            <execution>
               <goals>
                  <goal>build-info</goal>
               </goals>
            </execution>
         </executions>
         <configuration>
            <excludeDevtools>false</excludeDevtools>
         </configuration>
      </plugin>
   </plugins>
</build>
<profiles>
   <profile>
      <id>jib</id>
      <activation>
         <activeByDefault>false</activeByDefault>
      </activation>
      <build>
         <plugins>
            <plugin>
               <groupId>com.google.cloud.tools</groupId>
               <artifactId>jib-maven-plugin</artifactId>
               <version>2.4.0</version>
               <configuration>
                  <to>piomin/sample-spring-boot-on-kubernetes</to>
               </configuration>
            </plugin>
         </plugins>
      </build>
   </profile>
</profiles>

The Deployment manifest contains the PIPELINE_NAMESPACE parameter. Our pipeline replaces it with the target namespace name. Therefore, we may deploy our application in multiple Kubernetes namespaces.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-spring-boot-on-kubernetes-deployment
  namespace: ${PIPELINE_NAMESPACE}
spec:
  selector:
    matchLabels:
      app: sample-spring-boot-on-kubernetes
  template:
    metadata:
      labels:
        app: sample-spring-boot-on-kubernetes
    spec:
      containers:
      - name: sample-spring-boot-on-kubernetes
        image: piomin/sample-spring-boot-on-kubernetes
        ports:
        - containerPort: 8080
        env:
          - name: MONGO_DATABASE
            valueFrom:
              configMapKeyRef:
                name: mongodb
                key: database-name
          - name: MONGO_USERNAME
            valueFrom:
              secretKeyRef:
                name: mongodb
                key: database-user
          - name: MONGO_PASSWORD
            valueFrom:
              secretKeyRef:
                name: mongodb
                key: database-password
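To make the substitution mechanism concrete, here is a minimal local sketch (an assumption for illustration, not the plugin's actual implementation): the Kubernetes Continuous Deploy plugin resolves ${PIPELINE_NAMESPACE} from the environment, and we can reproduce the same effect with sed on a trimmed-down copy of the template.

```shell
# Write a trimmed-down copy of the deployment template with the placeholder.
cat > /tmp/deployment-template.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-spring-boot-on-kubernetes-deployment
  namespace: ${PIPELINE_NAMESPACE}
EOF

# Resolve the placeholder from an environment variable, as the pipeline does.
PIPELINE_NAMESPACE=test
sed "s/\${PIPELINE_NAMESPACE}/${PIPELINE_NAMESPACE}/" /tmp/deployment-template.yaml
```

Running the sketch prints the manifest with namespace: test filled in, which is exactly what gets applied to the cluster in the "Deploy on test" stage.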

Create Jenkins pipeline on Kubernetes

Finally, we may create the Jenkins pipeline for our application. It consists of six stages. Firstly, we clone the source code repository from GitHub using the default Jenkins agent. In the next three stages we use the maven agent: we build the application, run JUnit tests, and then build a Docker image in the "dockerless" mode using the Jib Maven plugin. In the last two stages, we take advantage of the Kubernetes Continuous Deploy plugin. We use the previously created kubeconfig credentials and the deployment-template.yaml file from the source code. We just need to set the PIPELINE_NAMESPACE environment variable.

pipeline {
   agent {
      label "default"
   }
   stages {
      stage('Checkout') {
         steps {
            script {
               git url: 'https://github.com/piomin/sample-spring-boot-on-kubernetes.git', credentialsId: 'github_credentials'
            }
         }
      }
      stage('Build') {
         agent {
            label "maven"
         }
         steps {
            sh 'mvn clean compile'
         }
      }
      stage('Test') {
         agent {
            label "maven"
         }
         steps {
            sh 'mvn test'
         }
      }
      stage('Image') {
         agent {
            label "maven"
         }
         steps {
            sh 'mvn -P jib -Djib.to.auth.username=${DOCKER_LOGIN} -Djib.to.auth.password=${DOCKER_PASSWORD} compile jib:build'
         }
      }
      stage('Deploy on test') {
         steps {
            script {
               env.PIPELINE_NAMESPACE = "test"
               kubernetesDeploy kubeconfigId: 'docker-desktop', configs: 'k8s/deployment-template.yaml'
            }
         }
      }
      stage('Deploy on prod') {
         steps {
            script {
               env.PIPELINE_NAMESPACE = "prod"
               kubernetesDeploy kubeconfigId: 'docker-desktop', configs: 'k8s/deployment-template.yaml'
            }
         }
      }
   }
}

Run the pipeline

Our sample Spring Boot application connects to MongoDB. Therefore, we need to deploy a Mongo instance to the test and prod namespaces before running the pipeline. We can use the mongodb-deployment.yaml manifest from the k8s directory.

$ kubectl create ns test
$ kubectl apply -f k8s/mongodb-deployment.yaml -n test
$ kubectl create ns prod
$ kubectl apply -f k8s/mongodb-deployment.yaml -n prod

Finally, let’s run our test pipeline. It finishes successfully as shown below.

jenkins-on-kubernetes-pipeline

Now, we may check out a list of running pods in the prod namespace.

Conclusion

Helm and the JCasC plugin simplify the installation of Jenkins on Kubernetes. Also, Maven comes with a huge set of plugins that can be used with Docker and Kubernetes. In this article, I showed how to use the Jib Maven plugin to build an image of your application, and Jenkins plugins to run pipelines and deployments on Kubernetes. You can compare it with the approach based on GitLab CI presented in the article GitLab CI/CD on Kubernetes. Enjoy 🙂

The post Continuous Integration with Jenkins on Kubernetes appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2020/11/10/continuous-integration-with-jenkins-on-kubernetes/feed/ 8 9091
Gitlab CI/CD on Kubernetes https://piotrminkowski.com/2020/10/19/gitlab-ci-cd-on-kubernetes/ https://piotrminkowski.com/2020/10/19/gitlab-ci-cd-on-kubernetes/#comments Mon, 19 Oct 2020 07:20:44 +0000 https://piotrminkowski.com/?p=9002 You can use GitLab CI/CD to build and deploy your applications on Kubernetes. It is not hard to integrate GitLab with Kubernetes. You can take an advantage of the GUI support to set up a connection with your Kubernetes cluster. Furthermore, GitLab CI provides a built-in container registry to store and share images. Preface In […]

The post Gitlab CI/CD on Kubernetes appeared first on Piotr's TechBlog.

]]>
You can use GitLab CI/CD to build and deploy your applications on Kubernetes. It is not hard to integrate GitLab with Kubernetes. You can take advantage of the GUI support to set up a connection with your Kubernetes cluster. Furthermore, GitLab CI provides a built-in container registry to store and share images.

Preface

In this article, I will describe all the steps required to build and deploy your Java application on Kubernetes with GitLab CI/CD. First, we are going to run an instance of the GitLab server on the local Kubernetes cluster. Then, we will use the special GitLab features in order to integrate it with Kubernetes. After that, we will create a pipeline for our Maven application. You will learn how to build it, run automated tests, build a Docker image, and finally run it on Kubernetes with GitLab CI.
In this article, I’m going to focus on simplicity. I will show you how to run GitLab CI on Kubernetes with minimal effort and resources. With this in mind, I hope it will help to form an opinion about GitLab CI/CD. The more advanced, production installation may be easily performed with Helm. I will also describe that approach in the next section.

Run GitLab on Kubernetes

In order to easily start with GitLab CI on Kubernetes, we will use its image from the Docker Hub. We may choose between the community and enterprise editions. First, we need to change the default external URL. We will also enable the container registry feature. To override both these settings we need to set the environment variable GITLAB_OMNIBUS_CONFIG, which can contain any of the GitLab configuration properties. Since I'm running the Kubernetes cluster locally, I also need to override the default URL of the Docker registry. You can easily obtain it by running the command docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' registry. The local image registry runs outside the Kubernetes cluster as a simple Docker container named registry.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitlab-deployment
spec:
  selector:
    matchLabels:
      app: gitlab
  template:
    metadata:
      labels:
        app: gitlab
    spec:
      containers:
      - name: gitlab
        image: gitlab/gitlab-ee
        env:
          - name: GITLAB_OMNIBUS_CONFIG
            value: "external_url 'http://gitlab-service.default/';gitlab_rails['registry_enabled'] = true;gitlab_rails['registry_api_url'] = \"http://172.17.0.2:5000\""
        ports:
        - containerPort: 80
          name: http
        volumeMounts:
          - mountPath: /var/opt/gitlab
            name: data
      volumes:
        - name: data
          emptyDir: {}

Then, we also create the Kubernetes Service gitlab-service. The GitLab UI is available on port 80. We will use that Service to access the GitLab UI from outside the Kubernetes cluster.

apiVersion: v1
kind: Service
metadata:
  name: gitlab-service
spec:
  type: NodePort
  selector:
    app: gitlab
  ports:
  - port: 80
    targetPort: 80
    name: http

Finally, we can apply the GitLab manifest with the Deployment and Service to Kubernetes. To do that, you just need to execute the command kubectl apply -f k8s/gitlab.yaml. Let's check the address of gitlab-service.

Of course, we may also install GitLab on Kubernetes with Helm. However, you should keep in mind that it generates a full deployment with several core and optional components. Consequently, you will have to increase the RAM assigned to your cluster to around 15 GB to be able to run it. Here's a list of the required Helm commands. For more details, you may refer to the GitLab documentation.

$ helm repo add gitlab https://charts.gitlab.io/
$ helm repo update
$ helm upgrade --install gitlab gitlab/gitlab \
  --timeout 600s \
  --set global.hosts.domain=example.com \
  --set global.hosts.externalIP=10.10.10.10 \
  --set certmanager-issuer.email=me@example.com

Clone the source code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my repository sample-spring-boot-on-kubernetes. Then you should just follow my instructions 🙂

First, let’s create a new repository on GitLab. Of course, its name is sample-spring-boot-on-kubernetes as shown below.

gitlab-on-kubernetes-create-repo

Then, you should clone my example repository and move it to your GitLab instance running on Kubernetes. Assuming the address of my local GitLab instance is http://localhost:30129, I need to execute the following command.

$ git remote add gitlab http://localhost:30129/root/sample-spring-boot-on-kubernetes.git

To clarify, let’s display a list of Git remotes for the current repository.

$ git remote -v
gitlab  http://localhost:30129/root/sample-spring-boot-on-kubernetes.git (fetch)
gitlab  http://localhost:30129/root/sample-spring-boot-on-kubernetes.git (push)
origin  https://github.com/piomin/sample-spring-boot-on-kubernetes.git (fetch)
origin  https://github.com/piomin/sample-spring-boot-on-kubernetes.git (push)

Finally, we can push the source code to the GitLab repository using the gitlab remote.

$ git push gitlab

Configure GitLab integration with Kubernetes

After logging in to the GitLab UI, you should enable local HTTP requests. To do that, you need to go to the admin section, then click "Settings" -> "Network" -> "Outbound requests". Finally, check the box "Allow requests to the local network from web hooks and services". We will use internal communication between GitLab and the Kubernetes API, and between the GitLab CI runner and the GitLab master.

Now, we may configure the connection to the Kubernetes API. To do that, you should go to the "Kubernetes" section, click "Add Kubernetes cluster", and switch to the "Connect existing cluster" tab. We need to provide some basic information about our cluster in the form. The name is required, so I'm setting the same name as my Kubernetes context. You may leave the default value in the "Environment scope" field. In the "API URL" field I'm providing the internal address of the Kubernetes API: https://kubernetes.default:443.

We also need to paste the cluster CA certificate. In order to obtain it, you should first find the secret with the default-token- prefix, and decode its certificate with the following command.

$ kubectl get secret default-token-ttswt -o jsonpath="{['data']['ca\.crt']}" | base64 --decode

Finally, we should create a special ServiceAccount for GitLab with the cluster-admin role.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gitlab-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: gitlab
    namespace: kube-system

Then you should find the secret with the gitlab prefix in the kube-system namespace and display its details. After that, we need to copy the value of the token field and paste it into the form without decoding.

$ kubectl describe secret gitlab-token-5sk2v -n kube-system

Here’s the full information about our Kubernetes cluster required by GitLab. Let’s add it by clicking “Add Kubernetes cluster”.

Once you have successfully added the new Kubernetes cluster in the GitLab UI you need to display its details. In the tab “Applications” you should find the section “GitLab runner” and install it.

The GitLab runner is automatically deployed in the gitlab-managed-apps namespace. We can verify whether it started successfully.

$ kubectl get pod -n gitlab-managed-apps
NAME                                   READY        STATUS    RESTARTS       AGE
runner-gitlab-runner-5649dbf49-5mnjv   1/1          Running   0              5m56s

The GitLab runner tries to communicate with the GitLab master. To verify that everything works fine, we need to go to the section “Overview” -> “Runners”. If you see the IP address and version number, it means that the runner is able to communicate with the master. In case of any problems, you should take a look at the pod logs.

Create application pipeline

The GitLab CI/CD configuration file, named .gitlab-ci.yml, is located in the project root directory and is automatically detected by GitLab CI. It consists of five stages and uses the Maven Docker image for executing builds. Let's take a closer look at it.

First, we run the build stage responsible for building the application from the source code. It just runs the command mvn compile. Then, we run JUnit tests using the mvn test command. If all the tests pass, we may build a Docker image with our application. We use the Jib Maven plugin for it. It is able to build an image in the docker-less mode, so we don't need a Docker daemon on the build machine. Jib builds the image and pushes it to the Docker registry. Finally, we can deploy our container on Kubernetes. To do that we use the bitnami/kubectl image, which allows us to execute kubectl commands. In the first step, we deploy the application in the test namespace. The last stage, deploy-prod, requires manual approval. Both deploy stages run only for the master branch.

image: maven:latest

stages:
  - build
  - test
  - image-build
  - deploy-tb
  - deploy-prod

build:
  stage: build
  script:
    - mvn compile

test:
  stage: test
  script:
    - mvn test

image-build:
  stage: image-build
  script:
    - mvn -s .m2/settings.xml -P jib compile jib:build

deploy-tb:
  image: bitnami/kubectl:latest
  stage: deploy-tb
  only:
    - master
  script:
    - kubectl apply -f k8s/deployment.yaml -n test

deploy-prod:
  image: bitnami/kubectl:latest
  stage: deploy-prod
  only:
    - master
  when: manual
  script:
    - kubectl apply -f k8s/deployment.yaml -n prod

We may push our application image to a remote or a local Docker registry. If you do not pass any address, by default Jib tries to push the image to the docker.io registry.

<plugin>
   <groupId>com.google.cloud.tools</groupId>
   <artifactId>jib-maven-plugin</artifactId>
   <version>2.4.0</version>
   <configuration>
      <to>piomin/sample-spring-boot-on-kubernetes</to>
   </configuration>
</plugin>

In order to push images to the docker.io registry, we need to provide the client's authentication credentials in the Maven settings.xml file.

<servers>
   <server>
      <id>registry-1.docker.io</id>
      <username>${DOCKER_LOGIN}</username>
      <password>${DOCKER_PASSWORD}</password>
   </server>
</servers>

Here’s a similar configuration, but for the local instance of the registry.

<plugin>
   <groupId>com.google.cloud.tools</groupId>
   <artifactId>jib-maven-plugin</artifactId>
   <version>2.4.0</version>
   <configuration>
      <allowInsecureRegistries>true</allowInsecureRegistries>
      <to>172.17.0.2:5000/root/sample-spring-boot-on-kubernetes</to>
   </configuration>
</plugin>

Run GitLab CI pipeline on Kubernetes

Finally, we may run our GitLab CI/CD pipeline. We can push the change in the source code or just run the build manually. You can see the result for the master branch in the picture below.

gitlab-on-kubernetes-pipeline-finished

The last stage deploy-prod requires manual approval. We may confirm it by clicking the “Play” button.

gitlab-on-kubernetes-pipeline-approval

If you push changes to a branch other than master, the pipeline runs just three stages. It builds the application, runs tests, and builds a Docker image.

gitlab-on-kubernetes-dev-build

You can also take advantage of the integrated container registry. You just need to set the right name of a Docker image. It should contain the GitLab owner name and image name. In that case, it is root/sample-spring-boot-on-kubernetes. I’m using my local Docker registry available at 172.17.0.2:5000.

Conclusion

GitLab seems to be a very interesting tool for building CI/CD processes on Kubernetes. It provides built-in integration with Kubernetes and a Docker container registry, and its documentation is excellent. In this article, I tried to show you that it is relatively easy to build CI/CD pipelines for Maven applications on Kubernetes. If you are interested in building a full CI/CD environment, you may refer to the article How to setup continuous delivery environment. Enjoy 🙂

The post Gitlab CI/CD on Kubernetes appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2020/10/19/gitlab-ci-cd-on-kubernetes/feed/ 2 9002
Guide to Quarkus on Kubernetes https://piotrminkowski.com/2020/08/10/guide-to-quarkus-on-kubernetes/ https://piotrminkowski.com/2020/08/10/guide-to-quarkus-on-kubernetes/#comments Mon, 10 Aug 2020 15:30:42 +0000 http://piotrminkowski.com/?p=8336 Quarkus is usually described as a Kubernetes-native Java framework. It allows us to automatically generate Kubernetes resources based on the defaults and user-provided configuration. It also provides an extension for building and pushing container images. Quarkus can create a container image and push it to a registry before deploying the application to the target platform. […]

The post Guide to Quarkus on Kubernetes appeared first on Piotr's TechBlog.

]]>
Quarkus is usually described as a Kubernetes-native Java framework. It allows us to automatically generate Kubernetes resources based on the defaults and user-provided configuration. It also provides an extension for building and pushing container images. Quarkus can create a container image and push it to a registry before deploying the application to the target platform. There is also an extension that allows developers to use Kubernetes ConfigMaps as a configuration source, without having to mount them into the pod. We may also use the fabric8 Kubernetes Client directly to interact with the cluster, for example during JUnit tests.
In this guide, you will learn how to:

  • Use the Quarkus Dekorate extension to automatically generate Kubernetes manifests based on the source code and configuration
  • Build and push images to Docker registry with Jib extension
  • Deploy your application on Kubernetes without any manually created YAML in one click
  • Use Quarkus Kubernetes Config to inject configuration properties from ConfigMap

This guide is the second in a series about the Quarkus framework. If you are interested in an introduction to building Quarkus REST applications with Kotlin, you may refer to my article Guide to Quarkus with Kotlin.

Source code

The source code with the sample Quarkus applications is available on GitHub. First, you need to clone the following repository: https://github.com/piomin/sample-quarkus-applications.git. Then, you need to go to the employee-service directory. We use the same repository as in my previous article about Quarkus.

1. Dependencies

Quarkus does not implement its own mechanisms for generating Kubernetes manifests, deploying them on the platform, or building images. Instead, it adds some logic on top of existing tools. To enable the Dekorate and Jib extensions we should include the following dependencies.

<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-kubernetes</artifactId>
</dependency>
<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-container-image-jib</artifactId>
</dependency>

Jib builds optimized images for Java applications without a Docker daemon, and without deep mastery of Docker best practices. It is available as a plugin for Maven and Gradle and as a Java library. Dekorate is a Java library that makes generating and decorating Kubernetes manifests as simple as adding a dependency to your project. It can generate manifests based on the source code, annotations, and configuration properties.

2. Preparation

In the first part of my guide to Kotlin, we ran our application in development mode with an embedded H2 database. In this part of the tutorial, we will integrate our application with Postgres deployed on Kubernetes. To do that, we first need to change the configuration settings for the data source. The H2 database will be active only in dev and test mode, while the configuration of the PostgreSQL data source is based on environment variables.


# kubernetes
quarkus.datasource.db-kind=postgresql
quarkus.datasource.username=${POSTGRES_USER}
quarkus.datasource.password=${POSTGRES_PASSWORD}
quarkus.datasource.jdbc.url=jdbc:postgresql://${POSTGRES_HOST}:5432/${POSTGRES_DB}
# dev
%dev.quarkus.datasource.db-kind=h2
%dev.quarkus.datasource.username=sa
%dev.quarkus.datasource.password=password
%dev.quarkus.datasource.jdbc.url=jdbc:h2:mem:testdb
# test
%test.quarkus.datasource.db-kind=h2
%test.quarkus.datasource.username=sa
%test.quarkus.datasource.password=password
%test.quarkus.datasource.jdbc.url=jdbc:h2:mem:testdb
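When the application runs on Kubernetes, these variables are provided by the ConfigMap and Secret shown in the next section. If you want to run the packaged application against Postgres outside the cluster, you need to export them yourself first. The snippet below is a hypothetical local setup: the values mirror the sample manifests (the password is simply the decoded sample Secret), and the host is changed because the `postgres` Service name is not resolvable outside the cluster.

```shell
# Hypothetical local setup: export the variables referenced
# in the "kubernetes" profile of application.properties.
export POSTGRES_USER=quarkus
export POSTGRES_PASSWORD=admin123
export POSTGRES_DB=quarkus
export POSTGRES_HOST=localhost   # outside the cluster, Postgres is not reachable as "postgres"
```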

3. Configure Kubernetes extension

With the Quarkus Kubernetes extension we may customize the behavior of the manifest generator. To do that, we need to provide configuration settings with the prefix quarkus.kubernetes.*. There are quite a few options, like defining labels, annotations, environment variables, Secret and ConfigMap references, or mounting volumes. First, let's take a look at the Secret and ConfigMap prepared for Postgres.

apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  labels:
    app: postgres
data:
  POSTGRES_DB: quarkus
  POSTGRES_USER: quarkus
  POSTGRES_HOST: postgres
---
apiVersion: v1
kind: Secret
metadata:
  name: postgres-secret
  labels:
    app: postgres
data:
  POSTGRES_PASSWORD: YWRtaW4xMjM=
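Note that the value in the Secret is Base64-encoded, as Kubernetes requires. YWRtaW4xMjM= simply decodes to admin123 — you can produce and verify such values in the shell:

```shell
# Encode the plain-text password for use in a Secret manifest.
ENCODED=$(printf '%s' 'admin123' | base64)
echo "$ENCODED"           # YWRtaW4xMjM=

# Decode it back to verify (GNU coreutils syntax).
printf '%s' "$ENCODED" | base64 -d
```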

In this fragment of configuration, besides a simple label and annotation, we are adding references to all the keys inside postgres-config and postgres-secret.

quarkus.kubernetes.labels.app-type=demo
quarkus.kubernetes.annotations.app-type=demo
quarkus.kubernetes.env.secrets=postgres-secret
quarkus.kubernetes.env.configmaps=postgres-config

4. Build image and deploy

Before executing the build and deploy we need to apply the manifest with Postgres. Here's the Deployment definition for Postgres.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:latest
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              valueFrom:
                configMapKeyRef:
                  key: POSTGRES_DB
                  name: postgres-config
            - name: POSTGRES_USER
              valueFrom:
                configMapKeyRef:
                  key: POSTGRES_USER
                  name: postgres-config
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: POSTGRES_PASSWORD
                  name: postgres-secret
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb
      volumes:
        - name: postgredb
          persistentVolumeClaim:
            claimName: postgres-claim

Let's apply it on Kubernetes together with the required ConfigMap, Secret, PersistentVolume, and PersistentVolumeClaim. All the objects are available inside the example repository in the file employee-service/k8s/postgres-deployment.yaml.

$ kubectl apply -f employee-service/k8s/postgres-deployment.yaml

After deploying Postgres we may proceed to the main task. In order to build a Docker image with the application, we need to enable the quarkus.container-image.build option during the Maven build. If you also want to deploy and run a container with the application on your local Kubernetes instance, you need to enable the quarkus.kubernetes.deploy option.

$ mvn clean package -Dquarkus.container-image.build=true -Dquarkus.kubernetes.deploy=true

If your Kubernetes cluster is located in the hosted cloud, you should push the image to a remote Docker registry before deployment. To do that, we should also activate the quarkus.container-image.push option during the Maven build. If you do not push to the default Docker registry, you have to set the quarkus.container-image.registry parameter (e.g. quarkus.container-image.registry=gcr.io) inside the application.properties file. The only thing I need to set for building images is the following property, which is the same as my login on the docker.io site.

quarkus.container-image.group=piomin

After running the required Maven command, our application is deployed on Kubernetes. Let's take a look at what happened during the Maven build. Here's a fragment of the logs from that build. You can see that the Quarkus extension generated two files, kubernetes.yaml and kubernetes.json, inside the target/kubernetes directory. Then it proceeded to build a Docker image with our application. Because we didn't specify any base image, it takes the default one for Java 11 – fabric8/java-alpine-openjdk11-jre.

quarkus-build-image

Let's take a look at the Deployment definition automatically generated by Quarkus.

  1. It adds some annotations like port or path to metrics endpoint used by Prometheus to monitor application and enabled scraping. It also adds Git commit id, repository URL, and our custom annotation defined in application.properties.
  2. It adds labels with the application name, version (taken from Maven pom.xml), and our custom label app-type.
  3. It injects Kubernetes namespace name into the container.
  4. It injects the reference to the postgres-secret defined in application.properties.
  5. It injects the reference to the postgres-config defined in application.properties.
  6. The name of the image is automatically created. It is based on Maven artifactId and version.
  7. The definition of the liveness and readiness probes is generated if the Maven module quarkus-smallrye-health is present.

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations: # (1)
    prometheus.io/path: /metrics
    prometheus.io/port: 8080
    app.quarkus.io/commit-id: f6ae37288ed445177f23291c921c6099cfc58c6e
    app.quarkus.io/vcs-url: https://github.com/piomin/sample-quarkus-applications.git
    app.quarkus.io/build-timestamp: 2020-08-10 - 13:22:32 +0000
    app-type: demo
    prometheus.io/scrape: "true"
  labels: # (2)
    app.kubernetes.io/name: employee-service
    app.kubernetes.io/version: 1.1
    app-type: demo
  name: employee-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: employee-service
      app.kubernetes.io/version: 1.1
  template:
    metadata:
      annotations:
        prometheus.io/path: /metrics
        prometheus.io/port: 8080
        app.quarkus.io/commit-id: f6ae37288ed445177f23291c921c6099cfc58c6e
        app.quarkus.io/vcs-url: https://github.com/piomin/sample-quarkus-applications.git
        app.quarkus.io/build-timestamp: 2020-08-10 - 13:22:32 +0000
        app-type: demo
        prometheus.io/scrape: "true"
      labels:
        app.kubernetes.io/name: employee-service
        app.kubernetes.io/version: 1.1
        app-type: demo
    spec:
      containers:
      - env:
        - name: KUBERNETES_NAMESPACE # (3)
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        envFrom:
        - secretRef: # (4)
            name: postgres-secret
        - configMapRef: # (5)
            name: postgres-config
        image: piomin/employee-service:1.1 # (6)
        imagePullPolicy: IfNotPresent
        livenessProbe: # (7)
          failureThreshold: 3
          httpGet:
            path: /health/live
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 0
          periodSeconds: 30
          successThreshold: 1
          timeoutSeconds: 10
        name: employee-service
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /health/ready
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 0
          periodSeconds: 30
          successThreshold: 1
          timeoutSeconds: 10
      serviceAccount: employee-service

Once the image has been built, it is available in the local registry. Quarkus automatically deploys it to the current cluster using the already generated Kubernetes manifests.

quarkus-build-maven

Here's the list of pods in the default namespace.

quarkus-pods

5. Using Kubernetes Config extension

With the Kubernetes Config extension you can use a ConfigMap as a configuration source, without having to mount it into the pod with the application. To use that extension, we need to include the following Maven dependency.


<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-kubernetes-config</artifactId>
</dependency>

This extension works directly with the Kubernetes API using the Fabric8 KubernetesClient. That's why we should set the proper permissions for the ServiceAccount. Fortunately, all the required configuration is automatically generated by the Quarkus Kubernetes extension. The RoleBinding object is applied automatically if the quarkus-kubernetes-config module is present.

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  annotations:
    prometheus.io/path: /metrics
    prometheus.io/port: 8080
    app.quarkus.io/commit-id: a5da459af01637657ebb0ec3a606eb53d13b8524
    app.quarkus.io/vcs-url: https://github.com/piomin/sample-quarkus-applications.git
    app.quarkus.io/build-timestamp: 2020-08-10 - 14:25:20 +0000
    app-type: demo
    prometheus.io/scrape: "true"
  labels:
    app.kubernetes.io/name: employee-service
    app.kubernetes.io/version: 1.1
    app-type: demo
  name: employee-service:view
roleRef:
  kind: ClusterRole
  apiGroup: rbac.authorization.k8s.io
  name: view
subjects:
- kind: ServiceAccount
  name: employee-service
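Once the application is deployed, you can check that the ServiceAccount really received read access to ConfigMaps with kubectl. The namespace and ServiceAccount name below are assumptions based on the manifests above:

```shell
# Should print "yes" if the view role was bound correctly.
kubectl auth can-i get configmaps \
  --as=system:serviceaccount:default:employee-service
```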

Here's our example ConfigMap that contains a single property, property1.


apiVersion: v1
kind: ConfigMap
metadata:
  name: employee-config
data:
  application.properties: |-
    property1=one

The same property is defined inside application.properties available on the classpath, but there it has a different value.

property1=test

Before deploying a new version of the application, we need to add the following properties. The first of them enables Kubernetes ConfigMap injection, while the second specifies the name of the injected ConfigMap.

quarkus.kubernetes-config.enabled=true
quarkus.kubernetes-config.config-maps=employee-config

Finally, we just need to implement a simple endpoint that injects and returns the configuration property.

@ConfigProperty(name = "property1")
lateinit var property1: String

@GET
@Path("/property1")
fun property1(): String = property1

The properties obtained from the ConfigMap have a higher priority than (and therefore override) any properties of the same name found in application.properties on the classpath. Let's test it.
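Conceptually, the config sources form an ordered chain and the first source that defines a key wins. The plain Java sketch below illustrates that lookup order — it is an illustration only, not Quarkus' actual implementation, and the class and source names are made up:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Optional;

// Illustration only: mimics how a higher-priority source (the ConfigMap)
// shadows a lower-priority one (application.properties on the classpath).
public class ConfigPrecedence {

    // Sources ordered from highest to lowest priority.
    private static final Map<String, Map<String, String>> SOURCES = new LinkedHashMap<>();
    static {
        SOURCES.put("configmap:employee-config", Map.of("property1", "one"));
        SOURCES.put("classpath:application.properties", Map.of("property1", "test"));
    }

    static Optional<String> resolve(String key) {
        // Walk the sources in priority order; the first match wins.
        return SOURCES.values().stream()
                .filter(m -> m.containsKey(key))
                .map(m -> m.get(key))
                .findFirst();
    }

    public static void main(String[] args) {
        // The ConfigMap value wins over the classpath one.
        System.out.println(resolve("property1").orElse("<missing>")); // prints "one"
    }
}
```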

quarkus-config

The post Guide to Quarkus on Kubernetes appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2020/08/10/guide-to-quarkus-on-kubernetes/feed/ 3 8336
Simplify development on Kubernetes with Dekorate, Skaffold and Spring Boot https://piotrminkowski.com/2020/06/08/simplify-development-on-kubernetes-with-dekorate-skaffold-and-spring-boot/ https://piotrminkowski.com/2020/06/08/simplify-development-on-kubernetes-with-dekorate-skaffold-and-spring-boot/#comments Mon, 08 Jun 2020 10:42:12 +0000 http://piotrminkowski.com/?p=8091 Although Kubernetes is a great solution for managing containerized applications, scaling, and automating deployment, a local development on it may be a painful experience. A typical workflow includes several steps like checking the functionality of the code locally, building and tagging a docker image, creating a deployment configuration, and finally deploying everything on Kubernetes. In […]

The post Simplify development on Kubernetes with Dekorate, Skaffold and Spring Boot appeared first on Piotr's TechBlog.

]]>
Although Kubernetes is a great solution for managing containerized applications, scaling, and automating deployment, local development on it may be a painful experience. A typical workflow includes several steps, like checking the functionality of the code locally, building and tagging a Docker image, creating a deployment configuration, and finally deploying everything on Kubernetes. In this article, I'm going to show how to use some tools together to simplify that process.
I have already described how to use tools such as Skaffold and Jib for local Java and Spring Boot development on Kubernetes in one of my previous articles, Local Java Development on Kubernetes. I have also described what the Dekorate framework is and how to use it together with Spring Boot for building and deploying applications on OpenShift in the article Deploying Spring Boot application on OpenShift with Dekorate. Using all these tools together may be a great combination! Let's proceed with the details.

Example

The sample Spring Boot application prepared for usage with Skaffold is available in my GitHub repository https://github.com/piomin/sample-springboot-dekorate-istio.git. Assuming you already have Skaffold installed, to deploy it on Kubernetes you just need to execute the command skaffold dev --port-forward in the root directory.

Dependencies

We are using version 2.2.7 of Spring Boot with JDK 11. To enable generating manifests during the Maven build with Dekorate, we need to include the kubernetes-spring-starter library. To use Dekorate annotations in our application, we should also include the kubernetes-annotations dependency.

<parent>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-parent</artifactId>
   <version>2.2.7.RELEASE</version>
</parent>

<properties>
   <java.version>11</java.version>
</properties>

<dependencies>
   <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-web</artifactId>
   </dependency>
   <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-actuator</artifactId>
   </dependency>
   <dependency>
      <groupId>io.dekorate</groupId>
      <artifactId>kubernetes-spring-starter</artifactId>
      <version>0.12.2</version>
   </dependency>
   <dependency>
      <groupId>io.dekorate</groupId>
      <artifactId>kubernetes-annotations</artifactId>
      <version>0.12.2</version>
   </dependency>
</dependencies>

To enable Jib we just need to include its Maven plugin. Of course, we also need the Spring Boot Maven Plugin, which is responsible for generating the uber-jar with the application.


<build>
   <plugins>
      <plugin>
         <groupId>org.springframework.boot</groupId>
         <artifactId>spring-boot-maven-plugin</artifactId>
         <executions>
            <execution>
               <goals>
                  <goal>build-info</goal>
               </goals>
            </execution>
         </executions>
      </plugin>
      <plugin>
         <groupId>com.google.cloud.tools</groupId>
         <artifactId>jib-maven-plugin</artifactId>
         <version>2.3.0</version>
      </plugin>
   </plugins>
</build>
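With the plugin on the classpath, Skaffold can delegate image building to Jib, but you can also invoke Jib directly from Maven — for example, to build the image into a local Docker daemon. The image name below mirrors the one used in skaffold.yaml:

```shell
# Build the application image locally with Jib (no Dockerfile needed).
mvn compile jib:dockerBuild -Dimage=minkowp/sample-springboot-dekorate-istio
```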

Integrating Skaffold with Dekorate

Dekorate generates Kubernetes manifests during the compilation phase of the Maven build. The default target directory for those manifests is target/classes/META-INF/dekorate. I didn't find any description in the Dekorate documentation of how to override that path, but it is not a problem for us. We may as well set that directory in the Skaffold configuration.
Dekorate generates manifests in two variants: YAML and JSON. During tests, I had some problems with the YAML manifests generated by Dekorate, so I decided to switch to JSON. It worked perfectly fine. Here's our configuration in skaffold.yaml. As you see, I'm enabling integration with Jib and adding a new search path for Kubernetes manifests, target/classes/META-INF/dekorate/*.json, in the deploy section. I have also left the default path for manifests, which is the k8s directory.

apiVersion: skaffold/v2alpha1
kind: Config
build:
  artifacts:
    - image: minkowp/sample-springboot-dekorate-istio
      jib: {}
  tagPolicy:
    gitCommit: {}
deploy:
  kubectl:
    manifests:
      - target/classes/META-INF/dekorate/*.json
      - k8s/*.yaml

Using Dekorate annotations with Spring Boot

As you probably know, Spring Boot is an annotation-based framework, so the most suitable way for a developer to declare resources on a target platform is, of course, through annotations. In that case, Dekorate is a perfect solution. It has built-in support for Spring Boot – like detection of health check endpoints configured using Spring Boot Actuator. The rest may be configured using its annotations.
The following code snippet illustrates how to use Dekorate annotations. First, you should annotate the main class with @KubernetesApplication. We have quite a few options here, but I'll show you the most typical features. We can set the number of pods for a single Deployment (replicas). We may also inject environment variables into the container or provide a reference to a property in a ConfigMap. In this case, we are using the property propertyFromMap inside the sample-configmap ConfigMap. We can also expose our application through a Service; the exposed port is 8080. Each Deployment may be labelled using the labels declared inside the labels field. It is also possible to set some JVM parameters for our application using the @JvmOptions annotation.

@SpringBootApplication
@KubernetesApplication(replicas = 2,
        envVars = { @Env(name = "propertyEnv", value = "Hello from env!"),
                    @Env(name = "propertyFromMap", value = "property1", configmap = "sample-configmap") },
        expose = true,
        ports = @Port(name = "http", containerPort = 8080),
        labels = @Label(key = "version", value = "v1"))
@JvmOptions(server = true, xmx = 256, gc = GarbageCollector.SerialGC)
public class DekorateIstioApplication {

    public static void main(String[] args) {
        SpringApplication.run(DekorateIstioApplication.class, args);
    }

}

A sample ConfigMap should be placed inside the k8s directory. Assuming that it has not been deployed on Kubernetes yet, it is the only manifest we need to create manually.

apiVersion: v1
kind: ConfigMap
metadata:
  name: sample-configmap
data:
  property1: I'm ConfigMap property

That process is completely transparent for us, but just for clarification let's take a look at the manifests generated by Dekorate. As you see, instead of 114 lines of YAML code we just had to declare two annotations with some fields.

apiVersion: v1
kind: Service
metadata:
  annotations:
    app.dekorate.io/commit-id: 25e1ac563793a33d5229ef860a5a1ae169aa62ec
    app.dekorate.io/vcs-url: https://github.com/piomin/sample-springboot-dekorate-istio.git
  labels:
    app.kubernetes.io/name: sample-springboot-dekorate-istio
    version: v1
    app.kubernetes.io/version: 1.0
  name: sample-springboot-dekorate-istio
spec:
  ports:
  - name: http
    port: 8080
    targetPort: 8080
  selector:
    app.kubernetes.io/name: sample-springboot-dekorate-istio
    version: v1
    app.kubernetes.io/version: 1.0
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    app.dekorate.io/commit-id: 25e1ac563793a33d5229ef860a5a1ae169aa62ec
    app.dekorate.io/vcs-url: https://github.com/piomin/sample-springboot-dekorate-istio.git
  labels:
    app.kubernetes.io/name: sample-springboot-dekorate-istio
    version: v1
    app.kubernetes.io/version: 1.0
  name: sample-springboot-dekorate-istio
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: sample-springboot-dekorate-istio
      version: v1
      app.kubernetes.io/version: 1.0
  template:
    metadata:
      annotations:
        app.dekorate.io/commit-id: 25e1ac563793a33d5229ef860a5a1ae169aa62ec
        app.dekorate.io/vcs-url: https://github.com/piomin/sample-springboot-dekorate-istio.git
      labels:
        app.kubernetes.io/name: sample-springboot-dekorate-istio
        version: v1
        app.kubernetes.io/version: 1.0
    spec:
      containers:
      - env:
        - name: KUBERNETES_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: propertyEnv
          value: Hello from env!
        - name: propertyFromMap
          valueFrom:
            configMapKeyRef:
              key: property1
              name: sample-configmap
              optional: false
        - name: JAVA_OPTS
          value: -Xmx=256M -XX:+UseSerialGC -server
        - name: JAVA_OPTIONS
          value: -Xmx=256M -XX:+UseSerialGC -server
        image: minkowp/sample-springboot-dekorate-istio:1.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /actuator/info
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 0
          periodSeconds: 30
          successThreshold: 1
          timeoutSeconds: 10
        name: sample-springboot-dekorate-istio
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /actuator/health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 0
          periodSeconds: 30
          successThreshold: 1
          timeoutSeconds: 10
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app.kubernetes.io/name: sample-springboot-dekorate-istio
    version: v1
    app.kubernetes.io/version: 1.0
  name: sample-springboot-dekorate-istio
spec:
  rules:
  - host: ""
    http:
      paths:
      - backend:
          serviceName: sample-springboot-dekorate-istio
          servicePort: 8080
        path: /

Here’s the structure of source files in our application.

development-on-kubernetes-dekorate-skaffold-springboot-sourcecode

Deploy Spring Boot on Kubernetes

After executing the command skaffold dev --port-forward, Skaffold continuously listens for changes in the source code. Thanks to the --port-forward option we may easily test the application on port 8080. Here's the status of our Deployment on Kubernetes.

development-on-kubernetes-dekorate-skaffold-springboot-deployment

There is a simple endpoint that returns the properties injected into the container directly and from the ConfigMap.

@RestController
@RequestMapping("/sample")
public class SampleController {

    @Value("${propertyFromMap}")
    String propertyFromMap;
    @Value("${propertyEnv}")
    String propertyEnv;

    @GetMapping("/properties")
    public String getProperties() {
        return "propertyFromMap=" + propertyFromMap + ", propertyEnv=" + propertyEnv;
    }

}

We can easily verify it.
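For example, with the port forwarded locally by Skaffold, a single curl call is enough (the values come from sample-configmap and the @Env annotation shown earlier):

```shell
curl http://localhost:8080/sample/properties
# propertyFromMap=I'm ConfigMap property, propertyEnv=Hello from env!
```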

development-on-kubernetes-dekorate-skaffold-springboot-curl

Summary

Spring Boot development on Kubernetes may be a pleasure when using the right tools. Dekorate is a very interesting option for developers working with annotation-based microservice frameworks. Using it together with Skaffold may additionally speed up your local development of Spring Boot applications.

The post Simplify development on Kubernetes with Dekorate, Skaffold and Spring Boot appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2020/06/08/simplify-development-on-kubernetes-with-dekorate-skaffold-and-spring-boot/feed/ 1 8091