Maven Archives - Piotr's TechBlog
https://piotrminkowski.com/tag/maven/
Java, Spring, Kotlin, microservices, Kubernetes, containers

SBOM with Spring Boot
https://piotrminkowski.com/2024/09/05/sbom-with-spring-boot/
Thu, 05 Sep 2024 16:09:48 +0000

This article will teach you how to leverage SBOM support in Spring Boot to implement security checks for your apps. A Software Bill of Materials (SBOM) lists all the open-source and third-party components in your app's codebase. As a result, it allows us to perform vulnerability scanning, license checks, and risk analysis. Spring Boot 3.3 introduces built-in support for generating SBOMs during the app build and exposing them through an actuator endpoint. In this article, we will analyze an app based on the latest version of Spring Boot and another one using outdated versions of the libraries. You will see how to use the snyk CLI to verify the generated SBOM files.

It is the first article on my blog after a long holiday break. I hope you enjoy it. If you are interested in Spring Boot, you can find several other posts about it on my blog. I can recommend my article about another fresh security-related Spring Boot feature, which describes SSL certificate hot reload on Kubernetes.

Source Code

If you would like to try this exercise by yourself, you may always take a look at my source code. Today you will have to clone two sample Git repositories. The first one contains automatically updated source code of microservices based on the latest version of the Spring Boot framework. The second repository contains an archived version of microservices based on the earlier, unsupported version of Spring Boot. Once you clone both of these repositories, you just need to follow my instructions.

By the way, you can verify SBOMs generated for your Spring Boot apps in various ways. I decided to use the snyk CLI for that. Alternatively, you can use the web version of the Snyk SBOM checker available here. In order to install the snyk CLI on your machine, you need to follow its documentation. I used Homebrew to install it on my macOS:

$ brew tap snyk/tap
$ brew install snyk

Enable SBOM Support in Spring Boot

By default, Spring Boot supports the CycloneDX format for generating SBOMs. In order to enable it, we need to include the cyclonedx-maven-plugin in the project root pom.xml.

<build>
  <plugins>
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
    </plugin>
    <plugin>
      <groupId>org.cyclonedx</groupId>
      <artifactId>cyclonedx-maven-plugin</artifactId>
    </plugin>
  </plugins>
</build>

There are several microservices defined in the same Git repository. They are all using the same root pom.xml. Each of them defines its list of dependencies. For this exercise, we need to have at least the Spring Boot Web and Actuator starters. However, let’s take a look at the whole list of dependencies for the employee-service (one of our sample microservices):

<dependencies>
  <dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-config</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
  </dependency>
  <dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-tracing-bridge-otel</artifactId>
  </dependency>
  <dependency>
    <groupId>io.opentelemetry</groupId>
    <artifactId>opentelemetry-exporter-zipkin</artifactId>
  </dependency>
  <dependency>
    <groupId>io.github.openfeign</groupId>
    <artifactId>feign-micrometer</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springdoc</groupId>
    <artifactId>springdoc-openapi-starter-webmvc-api</artifactId>
    <version>2.6.0</version>
  </dependency>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <scope>test</scope>
  </dependency>
  <dependency>
    <groupId>org.instancio</groupId>
    <artifactId>instancio-junit</artifactId>
    <version>4.8.1</version>
    <scope>test</scope>
  </dependency>
</dependencies>

After including the cyclonedx-maven-plugin, we need to execute the mvn package command in the repository root directory:

$ mvn clean package -DskipTests

The plugin will generate SBOM files for all the existing microservices and place them in the target/classes/META-INF/sbom directory for each Maven module.

[image: spring-boot-sbom-maven]

The generated SBOM file will always be placed inside the JAR file as well. Let’s take a look at the location of the SBOM file inside the employee-service uber JAR.
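Because the SBOM ships inside the artifact, the app can also read it from the classpath at runtime. Below is a minimal sketch using only the JDK; it assumes the file sits under the META-INF/sbom path mentioned above and returns an empty Optional when the resource is absent (for example, when you run the class outside the packaged app):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.Optional;

public class SbomReader {

    // Reads the CycloneDX SBOM embedded by the build plugin; the path matches
    // the target/classes/META-INF/sbom location described above
    static Optional<String> readEmbeddedSbom() {
        try (InputStream in = SbomReader.class
                .getResourceAsStream("/META-INF/sbom/application.cdx.json")) {
            if (in == null) {
                return Optional.empty();
            }
            return Optional.of(new String(in.readAllBytes(), StandardCharsets.UTF_8));
        } catch (IOException e) {
            return Optional.empty();
        }
    }

    public static void main(String[] args) {
        System.out.println(readEmbeddedSbom()
                .map(s -> "SBOM found, " + s.length() + " bytes")
                .orElse("no SBOM on the classpath"));
    }
}
```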

In order to expose the actuator SBOM endpoint we need to include the following configuration property. Since our configuration is stored by the Spring Cloud Config server, we need to put such a property in the YAML files inside the config-service/src/main/resources/config directory.

management:
  endpoints:
    web:
      exposure:
        include: health,sbom

Then, let’s start the config-service with the following Maven command:

$ cd config-service
$ mvn clean spring-boot:run

After that, we can start our sample microservice. It loads the configuration properties from the config-service and listens on a dynamically generated port number. For me, it is 53498. In order to see the contents of the generated SBOM file, we need to call the GET /actuator/sbom/application path.
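To grab that SBOM programmatically, the JDK's built-in HTTP client is enough. The sketch below only builds the request; port 53498 is just the dynamically assigned port from my run, so treat the base URL as a placeholder:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class SbomEndpoint {

    // Builds a GET request for the actuator SBOM endpoint; the caller decides
    // how to send it (e.g. HttpClient.newHttpClient().send(...))
    static HttpRequest sbomRequest(String baseUrl) {
        return HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/actuator/sbom/application"))
                .GET()
                .build();
    }

    public static void main(String[] args) {
        // The port below is the dynamically generated one from my run
        HttpRequest req = sbomRequest("http://localhost:53498");
        System.out.println(req.method() + " " + req.uri());
    }
}
```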

Generate and Verify SBOMs with the Snyk CLI

The exact structure of the SBOM file is not very important from our perspective. We need a tool that allows us to verify components and dependencies published inside that file. As I mentioned before, we can use the snyk CLI for that. We will examine the file generated in the repository root directory. Here’s the snyk command that allows us to print all the detected vulnerabilities in the SBOM file:

$ snyk sbom test \
   --file=target/classes/META-INF/sbom/application.cdx.json \
   --experimental

Here’s the report created as the output of the command executed above. As you can see, there are two detected issues related to the included dependencies. Of course, I’m not including those dependencies directly in the Maven pom.xml. They were automatically included by the Spring Boot starters used by the microservices. By the way, I was not even aware that Spring Boot includes kotlin-stdlib even though I’m not using any Kotlin library directly in the app.

[image: spring-boot-sbom-snyk]

Although there are two issues detected in the report, it doesn’t look very bad. Now, let’s try to analyze something more outdated. I have already mentioned my old repository with microservices: sample-spring-microservices. It is already archived and uses Spring Boot 1.5. If we don’t want to modify anything there, we can also use the snyk CLI to generate the SBOM instead of the Maven plugin. Since built-in SBOM support arrived with Spring Boot 3.3, there is no point in including the plugin in apps on version 1.5. Here’s the snyk command that generates an SBOM for all the projects inside the repository and exports it to the application.cdx.json file:

$ snyk sbom --format=cyclonedx1.4+json --all-projects > application.cdx.json

Then, let’s examine the SBOM file using the same command as before:

$ snyk sbom test --file=application.cdx.json --experimental

Now, the results are much more pessimistic. There are 211 detected issues, including 6 critical.

Final Thoughts

SBOMs allow organizations to identify and address potential security risks more effectively. Spring Boot’s support for generating SBOM files simplifies incorporating them into the organization’s software development life cycle.

Kubernetes Testing with CircleCI, Kind, and Skaffold
https://piotrminkowski.com/2023/11/28/kubernetes-testing-with-circleci-kind-and-skaffold/
Tue, 28 Nov 2023 13:04:18 +0000

In this article, you will learn how to use tools like Kind or Skaffold to build integration tests on CircleCI for apps running on Kubernetes. Our main goal in this exercise is to build the app image and verify the Deployment on Kubernetes in the CircleCI pipeline. Skaffold and the Jib Maven plugin build the image from the source and deploy it on Kind using YAML manifests. Finally, we will run some load tests on the deployed app using the Grafana k6 tool and its integration with CircleCI.

If you want to build and run tests against Kubernetes, you can read my article about integration tests with JUnit. On the other hand, if you are looking for other tools for testing in a Kubernetes-native environment, you can refer to my article about Testkube.

Introduction

Before we start, let’s do a brief introduction. There are three simple Spring Boot apps that communicate with each other. The first-service app calls the endpoint exposed by the caller-service app, and then the caller-service app calls the endpoint exposed by the callme-service app. The diagram visible below illustrates that architecture.

[image: kubernetes-circleci-arch]

So in short, our goal is to deploy all the sample apps on Kind during the CircleCI build and then test the communication by calling the endpoint exposed by the first-service through the Kubernetes Service.

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. It contains three apps: first-service, caller-service, and callme-service. The main Skaffold config manifest is available in the project root directory. Required Kubernetes YAML manifests are always placed inside the k8s directory. Once you take a look at the source code, you should just follow my instructions. Let’s begin.

Our sample Spring Boot apps are very simple. Each of them exposes a single “ping” endpoint over HTTP and calls the “ping” endpoints exposed by the other apps. Here’s the @RestController in the first-service app:

@RestController
@RequestMapping("/first")
public class FirstController {

   private static final Logger LOGGER = LoggerFactory
      .getLogger(FirstController.class);

   @Autowired
   Optional<BuildProperties> buildProperties;
   @Autowired
   RestTemplate restTemplate;
   @Value("${VERSION}")
   private String version;

   @GetMapping("/ping")
   public String ping() {
      LOGGER.info("Ping: name={}, version={}", buildProperties.isPresent() 
         ? buildProperties.get().getName() : "first-service", version);
      String response = restTemplate.getForObject(
         "http://caller-service:8080/caller/ping", String.class);
      LOGGER.info("Calling: response={}", response);
      return "I'm first-service " + version + ". Calling... " + response;
   }

}

Here’s the @RestController inside the caller-service app. The endpoint is called by the first-service app through the RestTemplate bean.

@RestController
@RequestMapping("/caller")
public class CallerController {

   private static final Logger LOGGER = LoggerFactory
      .getLogger(CallerController.class);

   @Autowired
   Optional<BuildProperties> buildProperties;
   @Autowired
   RestTemplate restTemplate;
   @Value("${VERSION}")
   private String version;

   @GetMapping("/ping")
   public String ping() {
      LOGGER.info("Ping: name={}, version={}", buildProperties.isPresent()
         ? buildProperties.get().getName() : "caller-service", version);
      String response = restTemplate.getForObject(
         "http://callme-service:8080/callme/ping", String.class);
      LOGGER.info("Calling: response={}", response);
      return "I'm caller-service " + version + ". Calling... " + response;
   }

}

Finally, here’s the @RestController inside the callme-service app. It also exposes a single GET /callme/ping endpoint called by the caller-service app:

@RestController
@RequestMapping("/callme")
public class CallmeController {

   private static final Logger LOGGER = LoggerFactory
      .getLogger(CallmeController.class);
   private static final String INSTANCE_ID = UUID.randomUUID().toString();
   private Random random = new Random();

   @Autowired
   Optional<BuildProperties> buildProperties;
   @Value("${VERSION}")
   private String version;

   @GetMapping("/ping")
   public String ping() {
      LOGGER.info("Ping: name={}, version={}", buildProperties.isPresent() 
         ? buildProperties.get().getName() : "callme-service", version);
      return "I'm callme-service " + version;
   }

}

Build and Deploy Images with Skaffold and Jib

Firstly, let’s take a look at the main Maven pom.xml in the project root directory. We use the latest version of Spring Boot and the latest LTS version of Java for compilation. All three app modules inherit settings from the parent pom.xml. In order to build the image with Maven we are including jib-maven-plugin. Since it is still using Java 17 in the default base image, we need to override this behavior with the <from>.<image> tag. We will declare eclipse-temurin:21-jdk-ubi9-minimal as the base image. Note that jib-maven-plugin is activated only if we enable the jib Maven profile during the build.

<modelVersion>4.0.0</modelVersion>

<parent>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-parent</artifactId>
  <version>3.2.0</version>
  <relativePath />
</parent>

<groupId>pl.piomin.services</groupId>
<artifactId>sample-istio-services</artifactId>
<version>1.1.0</version>
<packaging>pom</packaging>

<properties>
  <java.version>21</java.version>
</properties>

<modules>
  <module>caller-service</module>
  <module>callme-service</module>
  <module>first-service</module>
</modules>

<profiles>
  <profile>
    <id>jib</id>
    <build>
      <plugins>
        <plugin>
          <groupId>com.google.cloud.tools</groupId>
          <artifactId>jib-maven-plugin</artifactId>
          <version>3.4.0</version>
          <configuration>
            <from>
              <image>eclipse-temurin:21-jdk-ubi9-minimal</image>
            </from>
          </configuration>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>

Now, let’s take a look at the main skaffold.yaml file. Skaffold builds the image using Jib support and deploys all three apps on Kubernetes using manifests available in the k8s/deployment.yaml file inside each app module. Skaffold disables JUnit tests for Maven and activates the jib profile. It is also able to deploy Istio objects after activating the istio Skaffold profile. However, we won’t use it today.

apiVersion: skaffold/v4beta5
kind: Config
metadata:
  name: simple-istio-services
build:
  artifacts:
    - image: piomin/first-service
      jib:
        project: first-service
        args:
          - -Pjib
          - -DskipTests
    - image: piomin/caller-service
      jib:
        project: caller-service
        args:
          - -Pjib
          - -DskipTests
    - image: piomin/callme-service
      jib:
        project: callme-service
        args:
          - -Pjib
          - -DskipTests
  tagPolicy:
    gitCommit: {}
manifests:
  rawYaml:
    - '*/k8s/deployment.yaml'
deploy:
  kubectl: {}
profiles:
  - name: istio
    manifests:
      rawYaml:
        - k8s/istio-*.yaml
        - '*/k8s/deployment-versions.yaml'
        - '*/k8s/istio-*.yaml'

Here’s the typical Deployment for our apps. The app is running on port 8080.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: first-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: first-service
  template:
    metadata:
      labels:
        app: first-service
    spec:
      containers:
        - name: first-service
          image: piomin/first-service
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          env:
            - name: VERSION
              value: "v1"

For testing purposes, we need to expose the first-service outside of the Kind cluster. In order to do that, we will use a Kubernetes NodePort Service. Our app will be available on port 30000.

apiVersion: v1
kind: Service
metadata:
  name: first-service
  labels:
    app: first-service
spec:
  type: NodePort
  ports:
  - port: 8080
    name: http
    nodePort: 30000
  selector:
    app: first-service

Note that all the other Kubernetes Services (“caller-service” and “callme-service”) are exposed only internally using the default ClusterIP type.
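For comparison, a minimal sketch of such an internal Service for the caller-service could look as follows (the type line could even be omitted, since ClusterIP is the default):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: caller-service
  labels:
    app: caller-service
spec:
  type: ClusterIP
  ports:
    - port: 8080
      name: http
  selector:
    app: caller-service
```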

How It Works

In this section, we will discuss how we would run the whole process locally. Of course, our goal is to configure it as the CircleCI pipeline. In order to expose the Kubernetes Service outside Kind, we need to define the extraPortMappings section in the cluster configuration manifest. As you probably remember, we are exposing our app on port 30000. The following file is available in the repository under the k8s/kind-cluster-test.yaml path:

apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 30000
        hostPort: 30000
        listenAddress: "0.0.0.0"
        protocol: tcp

Assuming we already installed kind CLI on our machine, we need to execute the following command to create a new cluster:

$ kind create cluster --name c1 --config k8s/kind-cluster-test.yaml

You should have the same result as visible on my screen:

We have a single-node Kind cluster ready. There is a single c1-control-plane container running on Docker. As you can see, it exposes port 30000 outside of the cluster:

The Kubernetes context is automatically switched to kind-c1. So now, we just need to run the following command from the repository root directory to build and deploy the apps:

$ skaffold run

If you see a similar output in the skaffold run logs, it means that everything works fine.

[image: kubernetes-circleci-skaffold]

We can verify the list of Kubernetes Services. The first-service is exposed on port 30000 as expected.

$ kubectl get svc
NAME             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
caller-service   ClusterIP   10.96.47.193   <none>        8080/TCP         2m24s
callme-service   ClusterIP   10.96.98.53    <none>        8080/TCP         2m24s
first-service    NodePort    10.96.241.11   <none>        8080:30000/TCP   2m24s

Assuming you have already installed the Grafana k6 tool locally, you may run load tests using the following command:

$ k6 run first-service/src/test/resources/k6/load-test.js

That’s all. Now, let’s define the same actions with the CircleCI workflow.

Test Kubernetes Deployment with the CircleCI Workflow

The CircleCI config.yml file should be placed in the .circleci directory. We are doing two things in our pipeline. In the first step, we are executing Maven unit tests without the Kubernetes cluster. That’s why we need a standard executor with OpenJDK 21 and the maven orb. In order to run Kind during the CircleCI build, we need access to the Docker daemon. Therefore, we use the latest version of the ubuntu-2204 machine.

version: 2.1

orbs:
  maven: circleci/maven@1.4.1

executors:
  jdk:
    docker:
      - image: 'cimg/openjdk:21.0'
  machine_executor_amd64:
    machine:
      image: ubuntu-2204:2023.10.1
    environment:
      architecture: "amd64"
      platform: "linux/amd64"

After that, we can proceed to the job declaration. The name of our job is deploy-k8s. It uses the already-defined machine executor. Let’s discuss the required steps after running a standard checkout command:

  1. We need to install the kubectl CLI and copy it to the /usr/local/bin directory. Skaffold uses kubectl to interact with the Kubernetes cluster.
  2. After that, we have to install the skaffold CLI.
  3. Our job also requires the kind CLI to be able to create or delete Kind clusters on Docker…
  4. … and the Grafana k6 CLI to run load tests against the app deployed on the cluster
  5. There is a good chance that this step won’t be required once CircleCI releases a new version of the ubuntu-2204 machine (probably 2024.1.1 according to the release strategy). For now, ubuntu-2204 provides OpenJDK 17, so we need to install OpenJDK 21 to successfully build the app from the source code.
  6. After installing all the required tools, we can create a new Kubernetes cluster with the kind create cluster command.
  7. Once a cluster is ready, we can deploy our apps using the skaffold run command.
  8. Once the apps are running on the cluster, we can proceed to the tests phase. We are running the test defined inside the first-service/src/test/resources/k6/load-test.js file.
  9. After doing all the required steps, it is important to remove the Kind cluster.

jobs:
  deploy-k8s:
    executor: machine_executor_amd64
    steps:
      - checkout
      - run: # (1)
          name: Install Kubectl
          command: |
            curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
            chmod +x kubectl
            sudo mv ./kubectl /usr/local/bin/kubectl
      - run: # (2)
          name: Install Skaffold
          command: |
            curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64
            chmod +x skaffold
            sudo mv skaffold /usr/local/bin
      - run: # (3)
          name: Install Kind
          command: |
            [ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64
            chmod +x ./kind
            sudo mv ./kind /usr/local/bin/kind
      - run: # (4)
          name: Install Grafana K6
          command: |
            sudo gpg -k
            sudo gpg --no-default-keyring --keyring /usr/share/keyrings/k6-archive-keyring.gpg --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys C5AD17C747E3415A3642D57D77C6C491D6AC1D69
            echo "deb [signed-by=/usr/share/keyrings/k6-archive-keyring.gpg] https://dl.k6.io/deb stable main" | sudo tee /etc/apt/sources.list.d/k6.list
            sudo apt-get update
            sudo apt-get install k6
      - run: # (5)
          name: Install OpenJDK 21
          command: |
            java -version
            sudo apt-get update && sudo apt-get install openjdk-21-jdk
            sudo update-alternatives --set java /usr/lib/jvm/java-21-openjdk-amd64/bin/java
            sudo update-alternatives --set javac /usr/lib/jvm/java-21-openjdk-amd64/bin/javac
            java -version
            export JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64
      - run: # (6)
          name: Create Kind Cluster
          command: |
            kind create cluster --name c1 --config k8s/kind-cluster-test.yaml
      - run: # (7)
          name: Deploy to K8s
          command: |
            export JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64
            skaffold run
      - run: # (8)
          name: Run K6 Test
          command: |
            kubectl get svc
            k6 run first-service/src/test/resources/k6/load-test.js
      - run: # (9)
          name: Delete Kind Cluster
          command: |
            kind delete cluster --name c1

Here’s the definition of our load test. It has to be written in JavaScript. It defines some thresholds, like the maximum rate of failed requests or the maximum response time for 95% of requests. As you can see, we are testing the http://localhost:30000/first/ping endpoint:

import { sleep } from 'k6';
import http from 'k6/http';

export const options = {
  duration: '60s',
  vus: 10,
  thresholds: {
    http_req_failed: ['rate<0.25'],
    http_req_duration: ['p(95)<1000'],
  },
};

export default function () {
  http.get('http://localhost:30000/first/ping');
  sleep(2);
}

Finally, here is the last part of the CircleCI config file. It defines the pipeline workflow. In the first step, we are running tests with Maven. After that, we proceed to the deploy-k8s job.

workflows:
  build-and-deploy:
    jobs:
      - maven/test:
          name: test
          executor: jdk
      - deploy-k8s:
          requires:
            - test

Once we push a change to the sample Git repository, we trigger a new CircleCI build. You can verify it yourself on my CircleCI project page.

As you can see, all the pipeline steps have finished successfully.

[image: kubernetes-circleci-build]

We can display logs for every single step. Here are the logs from the k6 load test step.

There were some errors during the warm-up. However, the test shows that our scenario works on the Kubernetes cluster.

Final Thoughts

CircleCI is one of the most popular CI/CD platforms. Personally, I’m using it to run builds and tests for all my demo repositories on GitHub. For the sample projects dedicated to Kubernetes, I want to verify such steps as building images with Jib, Kubernetes deployment scripts, or Skaffold configuration. This article shows how to easily perform such tests with CircleCI and a Kubernetes cluster running on Kind. Hope it helps 🙂

Testing Java Apps on Kubernetes with Testkube
https://piotrminkowski.com/2023/11/27/testing-java-apps-on-kubernetes-with-testkube/
Mon, 27 Nov 2023 09:32:12 +0000

In this article, you will learn how to automatically test Java apps on Kubernetes with Testkube. We will build the tests for a typical Spring REST-based app. In the first scenario, Testkube runs the JUnit tests using its Maven support. After that, we will run load tests against the running instance of our app using the Grafana k6 tool. Once again, Testkube provides a standard mechanism for that, no matter which tool we use for testing.

If you are interested in testing on Kubernetes you can also read my article about integration tests with JUnit. There is also a post about contract testing on Kubernetes with Microcks available here.

Introduction

Testkube is a Kubernetes-native test orchestration and execution framework. It allows us to run automated tests inside the Kubernetes cluster. It supports several popular testing or build tools like JMeter, Grafana k6, and Maven. We can easily integrate it with CI/CD pipelines or GitOps workflows. We can manage Testkube by using the CRD objects directly, with the CLI, or through the UI dashboard. Let’s check how it works.

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that, you need to clone my GitHub repository. It contains only a single app. Once you clone it, you can go to the src/test directory. There you will find both the JUnit tests written in Kotlin and the k6 tests written in JavaScript. After that, you should just follow my instructions. Let’s begin.

Run Testkube on Kubernetes

In the first step, we are going to install Testkube on Kubernetes using its Helm chart. Let’s add the kubeshop Helm repository and fetch the latest chart info:

$ helm repo add kubeshop https://kubeshop.github.io/helm-charts
$ helm repo update

Then, we can install Testkube in the testkube namespace by executing the following helm command:

$ helm install testkube kubeshop/testkube \
    --create-namespace --namespace testkube

This will add custom resource definitions (CRD), RBAC roles, and role bindings to the Kubernetes cluster. This installation requires having cluster administrative rights.

Once the installation is finished, we can verify the list of pods running in the testkube namespace. The testkube-api-server and testkube-dashboard are the most important components. However, there are also some additional tools installed, like a MongoDB database or MinIO.

$ oc get po -n testkube
NAME                                                    READY   STATUS    RESTARTS        AGE
testkube-api-server-d4d7f9f8b-xpxc9                     1/1     Running   1 (6h17m ago)   6h18m
testkube-dashboard-64578877c7-xghsz                     1/1     Running   0               6h18m
testkube-minio-testkube-586877d8dd-8pmmj                1/1     Running   0               6h18m
testkube-mongodb-dfd8c7878-wzkbp                        1/1     Running   0               6h18m
testkube-nats-0                                         3/3     Running   0               6h18m
testkube-nats-box-567d94459d-6gc4d                      1/1     Running   0               6h18m
testkube-operator-controller-manager-679b998f58-2sv2x   2/2     Running   0               6h18m

We can also install the testkube CLI on our laptop. It is not required, but we will use it during the exercise just to try the full spectrum of options. You can find the CLI installation instructions here. I’m installing it on macOS:

$ brew install testkube

Once the installation is finished, you can run the testkube version command to see that warm “Hello” screen 🙂

[image: testkube-kubernetes-cli]

Run Maven Tests with Testkube

Firstly, let’s take a look at the JUnit tests inside our sample Spring Boot app. We are using the TestRestTemplate bean to call all the exposed REST endpoints. There are three JUnit tests for adding, updating, and deleting the Person objects.

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@TestMethodOrder(MethodOrderer.OrderAnnotation::class)
class PersonControllerTests {

   @Autowired
   lateinit var template: TestRestTemplate

   @Test
   @Order(1)
   fun shouldAddPerson() {
      var person = Instancio.of(Person::class.java)
         .ignore(Select.field("id"))
         .create()
      person = template
         .postForObject("/persons", person, Person::class.java)
      Assertions.assertNotNull(person)
      Assertions.assertNotNull(person.id)
      Assertions.assertEquals(1001, person.id)
   }

   @Test
   @Order(2)
   fun shouldUpdatePerson() {
      var person = Instancio.of(Person::class.java)
         .set(Select.field("id"), 1)
         .create()
      template.put("/persons", person)
      var personRemote = template
         .getForObject("/persons/{id}", Person::class.java, 1)
      Assertions.assertNotNull(personRemote)
      Assertions.assertEquals(person.age, personRemote.age)
   }

   @Test
   @Order(3)
   fun shouldDeletePerson() {
      template.delete("/persons/{id}", 1)
      val person = template
         .getForObject("/persons/{id}", Person::class.java, 1)
      Assertions.assertNull(person)
   }

}

We are using Maven as a build tool. The current version of Spring Boot is 3.2.0. The JDK version used for compilation is 17. Here’s a fragment of our pom.xml in the repository root directory:

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>3.2.0</version>
  </parent>
  <groupId>pl.piomin.services</groupId>
  <artifactId>sample-spring-kotlin-microservice</artifactId>
  <version>1.5.3</version>

  <properties>
    <java.version>17</java.version>
    <kotlin.version>1.9.21</kotlin.version>
  </properties>

  <dependencies>
    ...   
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-test</artifactId>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.instancio</groupId>
      <artifactId>instancio-junit</artifactId>
      <version>3.6.0</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>

Testkube provides the Executor CRD for defining a way of running each test. There are several default executors, one for each supported build or test tool. We can display the list of provided executors by running the testkube get executor command. You will see the list of all tools supported by Testkube. Of course, the most interesting executors for us are k6-executor and maven-executor.

$ testkube get executor

Context:  (1.16.8)   Namespace: testkube
----------------------------------------

  NAME                 | URI | LABELS
-----------------------+-----+-----------------------------------
  artillery-executor   |     |
  curl-executor        |     |
  cypress-executor     |     |
  ginkgo-executor      |     |
  gradle-executor      |     |
  jmeter-executor      |     |
  jmeterd-executor     |     |
  k6-executor          |     |
  kubepug-executor     |     |
  maven-executor       |     |
  playwright-executor  |     |
  postman-executor     |     |
  soapui-executor      |     |
  tracetest-executor   |     |
  zap-executor         |     |

By default, maven-executor uses JDK 11 for running Maven tests. Moreover, there are still no images for running tests against JDK 19+. For me, this is quite a big drawback since the latest LTS version of Java is 21. The maven-executor-jdk17 Executor contains the name of the image to run (1) and a list of supported test types (2).

apiVersion: executor.testkube.io/v1
kind: Executor
metadata:
  name: maven-executor-jdk17
  namespace: testkube
spec:
  args:
    - '--settings'
    - <settingsFile>
    - <goalName>
    - '-Duser.home'
    - <mavenHome>
  command:
    - mvn
  content_types:
    - git-dir
    - git
  executor_type: job
  features:
    - artifacts
  # (1)
  image: kubeshop/testkube-maven-executor:jdk17 
  meta:
    docsURI: https://kubeshop.github.io/testkube/test-types/executor-maven
    iconURI: maven
  # (2)
  types:
    - maven:jdk17/project
    - maven:jdk17/test
    - maven:jdk17/integration-test

Now, we just need to define the Test object that references maven-executor-jdk17 through the type parameter. Of course, we also need to set the address of the Git repository and the name of the branch.

apiVersion: tests.testkube.io/v3
kind: Test
metadata:
  name: sample-spring-kotlin
  namespace: testkube
spec:
  content:
    repository:
      branch: master
      type: git
      uri: https://github.com/piomin/sample-spring-kotlin-microservice.git
    type: git
  type: maven:jdk17/test
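
If you define more than one Test, Testkube can also group them into a TestSuite resource and run them together. Here is a rough sketch of such a manifest referencing our test; treat the field names as an assumption, since the TestSuite schema may differ between Testkube versions:

```yaml
# hypothetical TestSuite grouping our Maven test (schema may vary per Testkube version)
apiVersion: tests.testkube.io/v3
kind: TestSuite
metadata:
  name: sample-suite
  namespace: testkube
spec:
  steps:
    - stopOnFailure: true
      execute:
        - test: sample-spring-kotlin
```

You could then run the whole suite, presumably with a command like testkube run testsuite sample-suite.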

Finally, we can run the sample-spring-kotlin test using the following command:

$ testkube run test sample-spring-kotlin
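
After triggering the run, the CLI can also be used to follow its status and fetch logs. A sketch under the assumption that the CLI exposes executions via the get subcommand:

```shell
# list recent executions of all tests (subcommand names may vary per Testkube version)
$ testkube get executions
# show the status and output of a single execution by its name
$ testkube get execution <execution-name>
```

The <execution-name> placeholder stands for the identifier printed by the run command.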

Using UI Dashboard

First of all, let’s expose the Testkube UI dashboard on a local port. The dashboard also requires a connection to the testkube-api-server from the web browser. After exposing both services with the following port-forward commands, we can access the dashboard at the http://localhost:8080 address:

$ kubectl port-forward svc/testkube-dashboard 8080 -n testkube
$ kubectl port-forward svc/testkube-api-server 8088 -n testkube

Once we access the Testkube dashboard we will see a list of all defined tests:

testkube-kubernetes-ui

Then, we can click the selected test tile to see its details. You will be redirected to the history of previous executions available in the “Recent executions” tab. There are six previous executions of our sample-spring-kotlin test. Two of them finished successfully; the other four failed.

Let’s take a look at the logs of the last execution. As you see, all three JUnit tests were successful.

testkube-kubernetes-test-logs

Run Load Tests with Testkube and Grafana k6

In this section, we will create tests for the instance of our sample app running on Kubernetes. So, in the first step, we need to deploy the app. Here’s the Deployment manifest. We can apply it to the default namespace. The manifest uses the latest image of the sample app available in the registry under the quay.io/pminkows/sample-kotlin-spring:1.5.3 address.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-kotlin-spring
  labels:
    app: sample-kotlin-spring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-kotlin-spring
  template:
    metadata:
      labels:
        app: sample-kotlin-spring
    spec:
      containers:
      - name: sample-kotlin-spring
        image: quay.io/pminkows/sample-kotlin-spring:1.5.3
        ports:
        - containerPort: 8080

Let’s also create the Kubernetes Service that exposes app pods internally:

apiVersion: v1
kind: Service
metadata:
  name: sample-kotlin-spring
spec:
  selector:
    app: sample-kotlin-spring
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
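
Before pointing any test at that address, we can quickly check that the Service responds from inside the cluster. A sketch using a throwaway curl pod (the curlimages/curl image is an assumption, any image with curl will do):

```shell
# run a one-off pod, call the service through its internal DNS name, then clean up
$ kubectl run curl-check --rm -it --restart=Never --image=curlimages/curl -- \
    curl -s http://sample-kotlin-spring.default.svc:8080/persons/1
```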

After that, we can proceed to the Test manifest. This time, we don’t have to override the default executor, since the k6 version is not important. The test source is located inside the sample Git repository in the src/test/resources/k6/load-tests-get.js (1) file in the master branch. In that case, the repository type is git (2). The k6 test should run for 10 seconds using 5 virtual users (3). We also need to set the address of a target service as the PERSONS_URI environment variable (4). Of course, we are testing through the Kubernetes Service visible internally under the sample-kotlin-spring.default.svc host and port 8080. The type of the test is k6/script (5).

apiVersion: tests.testkube.io/v3
kind: Test
metadata:
  labels:
    executor: k6-executor
    test-type: k6-script
  name: load-tests-gets
  namespace: testkube
spec:
  content:
    repository:
      branch: master
      # (1)
      path: src/test/resources/k6/load-tests-get.js
      # (2) 
      type: git
      uri: https://github.com/piomin/sample-spring-kotlin-microservice.git
    type: git
  executionRequest:
    # (3)
    args:
      - '-u'
      - '5'
      - '-d'
      - 10s
    # (4)
    variables:
      PERSONS_URI:
        name: PERSONS_URI
        type: basic
        value: http://sample-kotlin-spring.default.svc:8080
        valueFrom: {}
  # (5)
  type: k6/script

Let’s take a look at the k6 test file written in JavaScript. As I mentioned before, you can find it in the src/test/resources/k6/load-tests-get.js file. The test calls the GET /persons/{id} endpoint. It sets a random number between 1 and 1000 as the id path parameter and reads the target service URL from the PERSONS_URI environment variable.

import http from 'k6/http';
import { check } from 'k6';
import { randomIntBetween } from 'https://jslib.k6.io/k6-utils/1.2.0/index.js';

export default function () {
  const id = randomIntBetween(1, 1000);
  const res = http.get(`${__ENV.PERSONS_URI}/persons/${id}`);
  check(res, {
    'is status 200': (res) => res.status === 200,
    'body size is > 0': (r) => r.body.length > 0,
  });
}
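
Before handing the script over to Testkube, you can run it locally with the k6 CLI against a port-forwarded instance of the app. The flags mirror the args from the Test manifest (-u sets the number of virtual users, -d the duration):

```shell
# expose the service locally, then run the same script k6 will execute in the cluster
$ kubectl port-forward svc/sample-kotlin-spring 8080 &
$ k6 run -u 5 -d 10s -e PERSONS_URI=http://localhost:8080 \
    src/test/resources/k6/load-tests-get.js
```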

Finally, we can run the load-tests-gets test with the following command:

$ testkube run test load-tests-gets

As with the Maven test, we can verify the execution history in the Testkube dashboard:

We can also display all the logs from the test:

Final Thoughts

Testkube provides a unified way to run Kubernetes tests for several of the most popular testing tools. It may be a part of your CI/CD pipeline or a GitOps process. Honestly, I’m still not very convinced that I need a dedicated Kubernetes-native solution for automated tests instead of e.g. a stage in my pipeline that runs test commands. However, you can also use Testkube to execute load or integration tests against an app running on Kubernetes. It is possible to schedule them periodically. Thanks to that, you can verify your apps continuously using a single, central tool.

The post Testing Java Apps on Kubernetes with Testkube appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2023/11/27/testing-java-apps-on-kubernetes-with-testkube/feed/ 0 14684
Manage Multiple GitHub Repositories with Renovate and CircleCI https://piotrminkowski.com/2023/01/12/manage-multiple-github-repositories-with-renovate-and-circleci/ https://piotrminkowski.com/2023/01/12/manage-multiple-github-repositories-with-renovate-and-circleci/#comments Thu, 12 Jan 2023 11:37:55 +0000 https://piotrminkowski.com/?p=13895 In this article, you will learn how to automatically update your GitHub repositories with Renovate and CircleCI. The problem we will try to solve today is strictly related to my blogging. As I always attach code examples to my posts, I have a lot of repositories to manage. I know that sometimes it is more […]

The post Manage Multiple GitHub Repositories with Renovate and CircleCI appeared first on Piotr's TechBlog.

]]>
In this article, you will learn how to automatically update your GitHub repositories with Renovate and CircleCI. The problem we will try to solve today is strictly related to my blogging. As I always attach code examples to my posts, I have a lot of repositories to manage. I know that sometimes it is more convenient to have a single repo for all the demos, but I prefer not to. Each of my repos is related to a particular technology or even to the specific case it demonstrates.

Let’s consider the problem with that approach. I usually share the same repository across multiple articles if they are closely related to each other. But despite that, I have more than 100 repositories with code examples. Once I create a repository, I usually don’t have time to keep it up to date. I need a tool that will do that automatically for me. This, however, forces me to improve automated tests. If I configure a tool that automatically updates the code in GitHub repositories, I need to verify that the change is valid and will not break the demo app.

There is another problem related to that. A classic of the genre – a lack of automated tests… I was always focusing on creating the example app to show the use case described in the post, not on building valuable tests. It’s time to fix that! This is my first New Year’s resolution 🙂 As you probably guessed, my work is still in progress. But even now, I can show you which tools I’m using for that and how to configure them. I will also share some first thoughts. Let’s begin!

First Problem: Unmaintained Repositories

Did you ever try to run an app from source code created some years ago? In theory, everything should go fine. But in practice, several things may have changed. I may use a different version of e.g. Java or Maven than before. Even if I have automated tests, they may not work fine, especially since I didn’t use any tool to run the build and tests remotely. Of course, I don’t have that many old, unmaintained repositories. Sometimes, I was updating them manually, in particular the more popular ones shared across several articles.

Let’s just take a look at this example. It is from the following repository. I’m trying to generate a class definition from the Protocol Buffers schema file. As you see, the plugin used for that is not able to find the protoc executable. Honestly, I don’t remember how it worked before. Maybe I installed something on my laptop… Anyway, the solution was to use another plugin that doesn’t require any additional executables. Of course, I needed to do it manually.

Let’s analyze another example. This time it fails during integration tests from another repository. The test is trying to connect to the Docker container. The problem here is that I was using Windows some years ago and Docker Toolbox was, by default, available under the 192.168.99.100 address. I should not leave such an address in the test. However, once again, I was just running all the tests locally, and at that time they finished successfully.

By the way, moving such a test to the CircleCI pipeline is not a simple thing to do. In order to run some containers (pact-broker with postgresql) before the pipeline, I decided to use Docker Compose. To run containers with Docker Compose, I had to enable remote Docker for CircleCI as described here.

Second Problem: Updating Dependencies

If you manage application repositories that use several libraries, you probably know that an update is sometimes not just a formality. Even if that’s a patch or a minor update. Although my applications are usually not very complicated, the update of the Spring Boot version may be challenging. In the following example of Netflix DGS usage (GraphQL framework), I tried to update from the 2.4.2 to the 2.7.7 version. Here’s the result.

In that particular case, my app was initializing the H2 database with some data from the data.sql file. But since one of the 2.4.X Spring Boot versions, the records from data.sql are loaded before database schema initialization. The solution is to replace that file with the import.sql script or add the property spring.jpa.defer-datasource-initialization=true to the application properties. After choosing the second option, we solved the problem… and then another one occurred. This time it is related to the Netflix DGS and GraphQL Java libraries, as described here.
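
The property-based workaround mentioned above is a one-line change in the application configuration:

```properties
# defer loading data.sql until Hibernate has created the schema
spring.jpa.defer-datasource-initialization=true
```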

Currently, according to the comments, there is no perfect solution to that problem with Maven. Probably I will have to wait for the next release of Netflix DGS or until they propose the right solution.

Let’s analyze another example – once again with a Spring Boot update. This time it is related to Spring Data and Embedded Mongo. The case is very interesting since it fails only on the remote builder. When I run the tests on my local machine, everything works perfectly fine.

A similar issue has been described here. However, the described solution doesn’t help me anymore. Probably I will decide to migrate my tests to Testcontainers. By the way, it is also a very interesting example, since it has an impact only on the tests. So, even with a high level of automation, you will still need to do some manual work.

Third Problem: Lack of Automated Tests

It is some kind of paradox – although I write a lot about continuous delivery and tests, I have a lot of repositories without any tests. Of course, when I was creating real applications for several companies, I was adding many tests to ensure they would work fine in production. But even for simple demo apps it is worth adding several tests that verify if everything works fine. In that case, I don’t have many small unit tests but rather a test that runs the whole app and verifies e.g. all the endpoints. Fortunately, frameworks like Spring Boot or Quarkus provide intuitive tools for that. There are helpers for almost all popular solutions. Here’s my @SpringBootTest for GraphQL queries.

@SpringBootTest(webEnvironment = 
      SpringBootTest.WebEnvironment.RANDOM_PORT)
public class EmployeeQueryResolverTests {

    @Autowired
    GraphQLTestTemplate template;

    @Test
    void employees() throws IOException {
        Employee[] employees = template
           .postForResource("employees.graphql")
           .get("$.data.employees", Employee[].class);
        Assertions.assertTrue(employees.length > 0);
    }

    @Test
    void employeeById() throws IOException {
        Employee employee = template
           .postForResource("employeeById.graphql")
           .get("$.data.employee", Employee.class);
        Assertions.assertNotNull(employee);
        Assertions.assertNotNull(employee.getId());
    }

    @Test
    void employeesWithFilter() throws IOException {
        Employee[] employees = template
           .postForResource("employeesWithFilter.graphql")
           .get("$.data.employeesWithFilter", Employee[].class);
        Assertions.assertTrue(employees.length > 0);
    }
}

In the previous test, I’m using an in-memory H2 database in the background. If I want to test something with a “real” database, I can use Testcontainers. This tool runs the required container on Docker during the test. In the following example, we run PostgreSQL. After that, the Spring Boot application automatically connects to the database thanks to the @DynamicPropertySource annotation that sets the generated URL as a Spring property.

@SpringBootTest(webEnvironment = 
      SpringBootTest.WebEnvironment.RANDOM_PORT)
@Testcontainers
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
public class PersonControllerTests {

   @Autowired
   TestRestTemplate restTemplate;

   @Container
   static PostgreSQLContainer<?> postgres = 
      new PostgreSQLContainer<>("postgres:15.1")
           .withExposedPorts(5432);

   @DynamicPropertySource
   static void registerPostgresProperties(DynamicPropertyRegistry registry) {
       registry.add("spring.datasource.url", 
          postgres::getJdbcUrl);
       registry.add("spring.datasource.username", 
          postgres::getUsername);
       registry.add("spring.datasource.password", 
          postgres::getPassword);
   }

   @Test
   @Order(1)
   void add() {
       Person person = Instancio.of(Person.class)
               .ignore(Select.field("id"))
               .create();
       person = restTemplate
          .postForObject("/persons", person, Person.class);
       Assertions.assertNotNull(person);
       Assertions.assertNotNull(person.getId());
   }

   @Test
   @Order(2)
   void updateAndGet() {
       final Integer id = 1;
       Person person = Instancio.of(Person.class)
               .set(Select.field("id"), id)
               .create();
       restTemplate.put("/persons", person);
       Person updated = restTemplate
          .getForObject("/persons/{id}", Person.class, id);
       Assertions.assertNotNull(updated);
       Assertions.assertNotNull(updated.getId());
       Assertions.assertEquals(id, updated.getId());
   }

   @Test
   @Order(3)
   void getAll() {
       Person[] persons = restTemplate
          .getForObject("/persons", Person[].class);
       Assertions.assertEquals(1, persons.length);
   }

   @Test
   @Order(4)
   void deleteAndGet() {
       restTemplate.delete("/persons/{id}", 1);
       Person person = restTemplate
          .getForObject("/persons/{id}", Person.class, 1);
       Assertions.assertNull(person);
   }

}

In some cases, we may have multiple applications (or microservices) communicating with each other. We can mock that communication with libraries like Mockito. On the other hand, we can simulate real HTTP traffic with libraries like Hoverfly or WireMock. Here’s an example with Hoverfly and the Spring Boot Test module.

@SpringBootTest(properties = { "POD_NAME=abc", "POD_NAMESPACE=default"}, 
   webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@ExtendWith(HoverflyExtension.class)
public class CallerControllerTests {

   @LocalServerPort
   int port;
   @Autowired
   TestRestTemplate restTemplate;

   @Test
   void ping(Hoverfly hoverfly) {
      String msg = "callme-service v1.0-SNAPSHOT (id=1): abc in default";
      hoverfly.simulate(dsl(
            service("http://callme-service.serverless.svc.cluster.local")
               .get("/callme/ping")
               .willReturn(success(msg, "text/plain"))));

      String response = restTemplate
         .getForObject("/caller/ping", String.class);
      assertNotNull(response);

      String c = "caller-service(id=1): abc in default is calling " + msg;
      assertEquals(c, response);
   }
}

Of course, these are just examples of tests. There are a lot of different tests and technologies used across all my repositories. Some others will be added in the near future 🙂 Now, let’s get to the point.

Choosing the Right Tools

As mentioned in the introduction, I will use CircleCI and Renovate for managing my GitHub repositories. CircleCI is probably the most popular choice for running builds of open-source projects stored in GitHub repositories. GitHub also provides a tool for updating dependencies called Dependabot. However, Renovate has some significant advantages over Dependabot. It provides a lot of configuration options, may be run anywhere (including Kubernetes – more details here), and can also integrate with GitLab or Bitbucket. We will also use SonarCloud for static code quality analysis.

Renovate is able to analyze not only the descriptors of traditional package managers like npm, Maven, or Gradle but also e.g. CircleCI configuration files or Docker image tags. Here’s a list of requirements that the tool needs to meet:

  1. It should be able to perform different actions depending on the dependency update type (major, patch, or minor)
  2. It needs to create PR on change and auto-merge it only if the build performed by CircleCI finishes successfully. Therefore it needs to wait for the status of that build
  3. Auto-merge should not be enabled for major updates. They require approval from the repository admin

Renovate meets all these requirements. We can also easily install Renovate on GitHub and use it to update CircleCI configuration files inside repositories. In order to install Renovate on GitHub, you need to go to the marketplace. After you install it, go to Settings, and then the Applications menu item. In order to set the list of repositories enabled for Renovate, click the Configure button.

Then in the Repository Access section, you can enable all your repositories or choose several from the whole list.

github-renovate-circleci-conf

Configure Renovate and CircleCI inside the GitHub Repository

Each GitHub repository has to contain CircleCI and Renovate configuration files. Renovate tries to detect the renovate.json file in the repository root directory. We don’t need to provide many configuration settings to achieve the expected results. By default, Renovate creates a pull request once it detects a new version of a dependency but does not auto-merge it. We want to auto-merge all non-major changes. Therefore, we need to set a list of all update types merged automatically (minor, patch, pin, and digest).

By default, Renovate creates a PR just after it creates a branch with the new version of the dependency. Because we are auto-merging all non-major PRs, we need to force Renovate to create them only after the build on CircleCI finishes successfully. Once all the tests on the newly created branch pass, Renovate creates a PR and auto-merges it if it does not contain major changes. Otherwise, it leaves the PR for approval. To achieve that, we need to set the prCreation property to not-pending. Here’s the renovate.json file I’m using for all my GitHub repositories.

{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": [
    "config:base",":dependencyDashboard"
  ],
  "packageRules": [
    {
      "matchUpdateTypes": ["minor", "patch", "pin", "digest"],
      "automerge": true
    }
  ],
  "prCreation": "not-pending"
}

The CircleCI configuration is stored in the .circleci/config.yml file. I mostly use Maven as a build tool. Here’s a typical CircleCI configuration file for my repositories. It defines two jobs: a standard maven/test job for building the project and running unit tests and a job for running SonarQube analysis.

version: 2.1

jobs:
  analyze:
    docker:
      - image: 'cimg/openjdk:17.0'
    steps:
      - checkout
      - run:
          name: Analyze on SonarCloud
          command: mvn verify sonar:sonar

executors:
  j17:
    docker:
      - image: 'cimg/openjdk:17.0'

orbs:
  maven: circleci/maven@1.4.0

workflows:
  maven_test:
    jobs:
      - maven/test:
          executor: j17
      - analyze:
          context: SonarCloud

By default, CircleCI runs builds in Docker containers. However, this approach is not suitable everywhere. For Testcontainers we need a machine executor that has full access to the Docker process. Thanks to that, it is able to run additional containers, e.g. with databases, during tests.

version: 2.1

jobs:
  analyze:
    docker:
      - image: 'cimg/openjdk:11.0'
    steps:
      - checkout
      - run:
          name: Analyze on SonarCloud
          command: mvn verify sonar:sonar -DskipTests

orbs:
  maven: circleci/maven@1.3.0

executors:
  machine_executor_amd64:
    machine:
      image: ubuntu-2204:2022.04.2
    environment:
      architecture: "amd64"
      platform: "linux/amd64"

workflows:
  maven_test:
    jobs:
      - maven/test:
          executor: machine_executor_amd64
      - analyze:
          context: SonarCloud

Finally, the last part of the configuration – an integration between CircleCI and SonarCloud. We need to add some properties to the Maven pom.xml to enable the SonarCloud context.

<properties>
  <sonar.projectKey>piomin_sample-spring-redis</sonar.projectKey>
  <sonar.organization>piomin</sonar.organization>
  <sonar.host.url>https://sonarcloud.io</sonar.host.url>
</properties>
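
The sonar:sonar goal also needs a SonarCloud token at analysis time. On CircleCI, the SonarCloud context typically injects it as an environment variable; when running the analysis locally you might pass it explicitly (the SONAR_TOKEN variable name is an assumption based on the common convention):

```shell
# run the SonarCloud analysis locally with an explicit token
$ SONAR_TOKEN=<your-token> mvn verify sonar:sonar
```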

How It Works

Let’s verify how it works. Once you provide the required configuration for Renovate, CircleCI, and SonarCloud in your GitHub repository the process starts. Renovate initially detects a list of required dependency updates. Since I enabled the dependency dashboard, Renovate immediately creates an issue with a list of changes as shown below. It just provides a summary view showing a list of changes in the dependencies.

github-renovate-circleci-dashboard

Here’s a list of detected package managers in this repository. Besides Maven and CircleCI, there are also a Dockerfile and a GitLab CI configuration file.

Some pull requests have already been auto-merged by Renovate, because the build on CircleCI finished successfully.

github-renovate-circleci-pr

Some other pull requests are still in the Open state – waiting for approval (a major update from Java 11 to Java 17) or for a fix because the build on CircleCI failed.

We can go into the details of the selected PR. Let’s do that for the first PR (#11) on the list visible above. Renovate is trying to update Spring Boot from 2.6.1 to the latest 2.7.7. It created the branch renovate/spring-boot that contains the required changes.

github-renovate-circleci-pr-details

The PR could be merged automatically. However, the build failed, so it didn’t happen.

github-renovate-circleci-pr-checks

We can go to the details of the build. As you see in the CircleCI dashboard, all the tests failed. In this particular case, I have already tried to fix it by updating the version of Embedded Mongo. However, it didn’t solve the problem.

Here’s a list of commits in the master branch. As you see, Renovate automatically updates the repository after the build of the particular branch finishes successfully.

As you see, each time a new branch is created, CircleCI runs a build to verify that it does not break the tests.

github-renovate-circleci-builds

Conclusion

I have some conclusions after making the described changes in my repository:

  1. Include automated tests in your projects even if you are creating an app for a demo showcase, not for production usage. It will help you get back to a project after some time. It will also ensure that everything works fine in your demo and help other people when using it.
  2. All these tools – Renovate, CircleCI, or SonarCloud – can be easily used with your GitHub project for free. You don’t need to spend a lot of time configuring them, but the effect can be significant.
  3. Keeping the repositories up to date is important. Sometimes people wrote to me that something didn’t work properly in my examples. Even now, I found some small bugs in the code logic. Thanks to the described approach, I hope to deliver better-quality examples to you – my blog readers.

If you see something like that on a repository’s main page, it means that I have already reviewed the project and added all the mechanisms described in this article.

The post Manage Multiple GitHub Repositories with Renovate and CircleCI appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2023/01/12/manage-multiple-github-repositories-with-renovate-and-circleci/feed/ 1 13895
Kubernetes CI/CD with Tekton and ArgoCD https://piotrminkowski.com/2021/08/05/kubernetes-ci-cd-with-tekton-and-argocd/ https://piotrminkowski.com/2021/08/05/kubernetes-ci-cd-with-tekton-and-argocd/#comments Thu, 05 Aug 2021 14:07:18 +0000 https://piotrminkowski.com/?p=10011 In this article, you will learn how to configure the CI/CD process on Kubernetes using Tekton and ArgoCD. The first question may be – do we really need both these tools to achieve that? Of course, no. Since Tekton is a cloud-native CI/CD tool you may use only it to build your pipelines on Kubernetes. […]

The post Kubernetes CI/CD with Tekton and ArgoCD appeared first on Piotr's TechBlog.

]]>
In this article, you will learn how to configure the CI/CD process on Kubernetes using Tekton and ArgoCD. The first question may be – do we really need both these tools to achieve that? Of course, no. Since Tekton is a cloud-native CI/CD tool you may use only it to build your pipelines on Kubernetes. However, a modern way of building the CD process should follow the GitOps pattern. It means that we store a configuration of the application in Git – the same as a source code. The CD process should react to changes in this configuration, and then apply them to the Kubernetes cluster. Here comes Argo CD.

In the next part of this article, we will build a sample CI/CD process for a Java application. Some steps of that process will be managed by Tekton, and others by ArgoCD. Let’s take a look at the diagram below. In the first step, we are going to clone the Git repository with the application source code. Then we will run JUnit tests. After that, we will trigger a source code analysis with SonarQube. Finally, we will build the application image. All these steps are a part of the continuous integration process. Argo CD is responsible for the deployment phase. Also, in case of any changes in the configuration, it synchronizes the state of the application on Kubernetes with the Git repository.

tekton-argocd-pipeline

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that, you need to clone my GitHub repository. This time there is a second repository – dedicated to storing the configuration independently of the application source code. You can clone the following repository and go to the cicd/apps/sample-spring-kotlin directory. After that, you should just follow my instructions. Let’s begin.

Prerequisites

Before we begin, we need to install Tekton and Argo CD on Kubernetes. We can do this in several ways. The simplest one is using OpenShift operators. Their names may be a bit confusing: Red Hat OpenShift Pipelines installs Tekton, while Red Hat OpenShift GitOps installs Argo CD.

tekton-argocd-openshift

Build CI Pipeline with Tekton

The idea behind Tekton is very simple and typical for the CI/CD approach: we build pipelines. A pipeline consists of several independent steps called tasks. To run a pipeline, we create a PipelineRun object. It manages the PipelineResources passed to tasks as inputs and outputs. Tekton executes each task in its own Kubernetes pod. For more details, you may visit the Tekton documentation site.
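To illustrate these concepts, here is a minimal standalone Task and a TaskRun that executes it. This is only an illustrative sketch and is not part of our pipeline:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: hello
spec:
  steps:
    # Each step runs as a container inside the task's pod
    - name: say-hello
      image: alpine
      script: echo "Hello from Tekton"
---
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: hello-run
spec:
  taskRef:
    name: hello
```

Applying both manifests creates a pod that runs the single step and finishes; `kubectl get taskrun hello-run` then shows its status.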

Here's the definition of our pipeline. Before adding tasks, we define workspaces at the global pipeline level. We need one workspace for keeping the application source code during the build and another to store SonarQube settings. These workspaces are required by particular tasks.

apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: sample-java-pipeline
spec:
  tasks:
    ...
  workspaces:
    - name: source-dir
    - name: sonar-settings

For typical operations like a Git clone or a Maven build, we may use predefined tasks. We can find them on Tekton Hub. If you are testing Tekton on OpenShift, some of them are already available there as ClusterTasks right after the installation with the operator.

Task 1: Clone Git repository

Here's the first step of our pipeline. It references the git-clone ClusterTask. We need to pass the address of the GitHub repository and the name of the branch. We also have to assign our workspace to the task under the name output, which that task requires.

- name: git-clone
  params:
    - name: url
      value: 'https://github.com/piomin/sample-spring-kotlin-microservice.git'
    - name: revision
      value: master
  taskRef:
    kind: ClusterTask
    name: git-clone
  workspaces:
    - name: output
      workspace: source-dir

Task 2: Run JUnit tests with Maven

In the next step, we run JUnit tests. This time we also use a predefined ClusterTask, called maven. To run JUnit tests, we set the GOALS parameter to test. This task requires two workspaces: one with the source code and a second with Maven settings. Because we do not override any Maven configuration, I'm simply passing the source code workspace there as well.

- name: junit-tests
  params:
    - name: GOALS
      value:
        - test
  runAfter:
    - git-clone
  taskRef:
    kind: ClusterTask
    name: maven
  workspaces:
    - name: source
      workspace: source-dir
    - name: maven-settings
      workspace: source-dir

Task 3: Execute Sonarqube scanning

The next two steps will be a little more complicated. To run SonarQube scanning, we first need to import the sonarqube-scanner task from Tekton Hub.

$ kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/main/task/sonarqube-scanner/0.1/sonarqube-scanner.yaml

After that, we may refer to the imported task. We should set the address of our SonarQube instance in the SONAR_HOST_URL parameter and the unique name of the project in the SONAR_PROJECT_KEY parameter. The task takes two input workspaces. The first of them contains the source code, while the second may contain a properties file to override some SonarQube settings. Since it is not possible to pass the SonarQube organization name in the task parameters, we will have to do that using the sonar-project.properties file.

- name: sonarqube
  params:
    - name: SONAR_HOST_URL
      value: 'https://sonarcloud.io'
    - name: SONAR_PROJECT_KEY
      value: sample-spring-boot-kotlin
  runAfter:
    - junit-tests
  taskRef:
    kind: Task
    name: sonarqube-scanner
  workspaces:
    - name: source-dir
      workspace: source-dir
    - name: sonar-settings
      workspace: sonar-settings

Task 4: Get the version of the application from pom.xml

In the next step, we retrieve the version number of our application. We will use the version property available inside the Maven pom.xml file. To read its value, we execute the evaluate goal provided by the Maven Help Plugin. Then we emit that version as a task result. Since there is no such predefined task available on Tekton Hub, we will create our own custom task.

The definition of our custom task is visible below. After executing the mvn help:evaluate command, we set its output as a task result with the name version.

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: maven-get-project-version
spec:
  workspaces:
    - name: source
  params:
    - name: MAVEN_IMAGE
      type: string
      description: Maven base image
      default: gcr.io/cloud-builders/mvn@sha256:57523fc43394d6d9d2414ee8d1c85ed7a13460cbb268c3cd16d28cfb3859e641
    - name: CONTEXT_DIR
      type: string
      description: >-
        The context directory within the repository for sources on
        which we want to execute maven goals.
      default: "."
  results:
    - description: Project version read from pom.xml
      name: version
  steps:
    - name: mvn-command
      image: $(params.MAVEN_IMAGE)
      workingDir: $(workspaces.source.path)/$(params.CONTEXT_DIR)
      script: |
        #!/usr/bin/env bash
        VERSION=$(/usr/bin/mvn help:evaluate -Dexpression=project.version -q -DforceStdout)
        echo -n $VERSION | tee $(results.version.path)

Then let's create the task on Kubernetes. The YAML manifest is available in the GitHub repository under the cicd/pipelines/ directory.

$ kubectl apply -f cicd/pipelines/tekton-maven-version.yaml
$ kubectl get task
NAME                        AGE
maven-get-project-version   17h
sonarqube-scanner           2d19h

Finally, we just need to refer to the already created task and set a workspace with the application source code.

- name: get-version
  runAfter:
    - sonarqube
  taskRef:
    kind: Task
    name: maven-get-project-version
  workspaces:
    - name: source
      workspace: source-dir

Task 5: Build and push image

Finally, we may proceed to the last step of our pipeline: building the application image and pushing it to the registry. The output image will be tagged with the Maven version number. That's why our task refers to the result emitted by the previous task using the following notation: tasks.get-version.results.version. This value is passed as an input parameter to the jib-maven task, which is responsible for building our image in Dockerless mode.

- name: build-and-push-image
  params:
    - name: IMAGE
      value: >-
        image-registry.openshift-image-registry.svc:5000/piotr-cicd/sample-spring-kotlin:$(tasks.get-version.results.version)
  runAfter:
    - get-version
  taskRef:
    kind: ClusterTask
    name: jib-maven
  workspaces:
    - name: source
      workspace: source-dir

Run Tekton pipeline

Before we run the pipeline, we need to create two resources to use as workspaces. The application source code will be saved on a persistent volume, so we should create a PVC first.

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: tekton-workspace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi

The SonarQube properties file may be passed to the pipeline as a ConfigMap. Because I'm going to perform the source code analysis on the cloud instance, I also need to set an organization name.

kind: ConfigMap
apiVersion: v1
metadata:
  name: sonar-properties
data:
  sonar-project.properties: sonar.organization=piomin

Finally, we can start our pipeline, for example from the OpenShift console. Alternatively, we could simply create a PipelineRun object ourselves.
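A hand-written PipelineRun for our pipeline might look as follows. This is a sketch based on the workspaces created above; the pipeline service account name is assumed to be the default one provided on OpenShift:

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  # generateName lets Tekton append a random suffix for each run
  generateName: sample-java-pipeline-run-
spec:
  pipelineRef:
    name: sample-java-pipeline
  serviceAccountName: pipeline
  workspaces:
    - name: source-dir
      persistentVolumeClaim:
        claimName: tekton-workspace
    - name: sonar-settings
      configMap:
        name: sonar-properties
```

Because of generateName, the manifest has to be submitted with `kubectl create -f` rather than `kubectl apply -f`.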

Now our pipeline is running. Let's take a look at the logs of the junit-tests task. As you can see, three JUnit tests were executed, and all of them finished successfully.

tekton-argocd-logs

Then we can go to the SonarCloud site and see the source code analysis report.

Let's also verify the list of available images. The tag of our application image matches the version set in the Maven pom.xml.

$ oc get is
NAME                   IMAGE REPOSITORY                                                                   TAGS    UPDATED
sample-spring-kotlin   image-registry.openshift-image-registry.svc:5000/piotr-cicd/sample-spring-kotlin   1.3.0   7 minutes ago

Trigger pipeline on GitHub push

In the previous section, we started a pipeline on demand. What about running it after pushing source code changes to the GitHub repository? Fortunately, Tekton provides a built-in mechanism for that. We need to define a Trigger and an EventListener. First, we create the TriggerTemplate object. It defines the PipelineRun object, which references our sample-java-pipeline.

apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerTemplate
metadata:
  name: sample-github-template
spec:
  params:
    - default: main
      description: The git revision
      name: git-revision
    - description: The git repository url
      name: git-repo-url
  resourcetemplates:
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        generateName: sample-java-pipeline-run-
      spec:
        pipelineRef:
          name: sample-java-pipeline
        serviceAccountName: pipeline
        workspaces:
          - name: source-dir
            persistentVolumeClaim:
              claimName: tekton-workspace
          - configMap:
              name: sonar-properties
            name: sonar-settings

The ClusterTriggerBinding is already available: on OpenShift there is a dedicated definition for GitHub push events. Thanks to that, our EventListener may simply refer to that binding and the already created TriggerTemplate.

apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: sample-github-listener
spec:
  serviceAccountName: pipeline
  triggers:
    - bindings:
        - kind: ClusterTriggerBinding
          ref: github-push
      name: trigger-1
      template:
        ref: sample-github-template

After creating the EventListener, Tekton automatically creates a Kubernetes Service that receives the push events.

$ oc get svc
NAME                                  TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
el-sample-github-listener             ClusterIP   172.30.88.73   <none>        8080/TCP   2d1h

On OpenShift, we can easily expose the service outside the cluster as a Route object.

$ oc expose svc el-sample-github-listener
$ oc get route
NAME         HOST/PORT                                                             PATH   SERVICES                    PORT            TERMINATION   WILDCARD
el-example   el-sample-github-listener-piotr-cicd.apps.qyt1tahi.eastus.aroapp.io          el-sample-github-listener   http-listener                 None

After exposing the service, we can go to our GitHub repository and define a webhook. In your repository go to Settings -> Webhooks -> Add webhook. Then paste the address of your Route, choose application/json as the content type, and select the push event to be sent.

tekton-argocd-ui

Now, you just need to push any change to your GitHub repository.

Continuous Delivery with ArgoCD

The application's Kubernetes deployment.yaml manifest is available in the GitHub repository under the cicd/apps/sample-spring-kotlin directory. It is very simple: it contains only the Deployment and Service definitions. The Deployment manifest refers to version 1.3.0 of the image.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-spring-kotlin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-spring-kotlin
  template:
    metadata:
      labels:
        app: sample-spring-kotlin
    spec:
      containers:
      - name: sample-spring-kotlin
        image: image-registry.openshift-image-registry.svc:5000/piotr-cicd/sample-spring-kotlin:1.3.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: sample-spring-kotlin
spec:
  type: ClusterIP
  selector:
    app: sample-spring-kotlin
  ports:
  - port: 8080

Now, we can switch to Argo CD. We can create a new application there using the UI or a YAML manifest. We will use the default settings, so the only things we need to set are the address of the GitHub repository, the path to the Kubernetes manifests, and the target namespace on the cluster. With the default manual sync policy, synchronization between the GitHub repository and Kubernetes needs to be triggered manually.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sample-spring-kotlin
spec:
  destination:
    name: ''
    namespace: piotr-cicd
    server: 'https://kubernetes.default.svc'
  source:
    path: cicd/apps/sample-spring-kotlin
    repoURL: 'https://github.com/piomin/openshift-quickstart.git'
    targetRevision: HEAD
  project: default

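Our Application relies on the default manual sync policy. As an optional alternative, Argo CD's standard syncPolicy field can enable automated synchronization with pruning and self-healing; the following fragment is a sketch to be merged into the Application spec above:

```yaml
spec:
  syncPolicy:
    automated:
      # remove resources deleted from the Git repository
      prune: true
      # revert manual changes made directly on the cluster
      selfHeal: true
```

With this policy in place, the manual SYNC step described later would no longer be necessary.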
Let's take a look at the Argo CD UI console. Here's the state after the initial synchronization. Argo CD has already deployed version 1.3.0 of our application built by the Tekton pipeline.

Now, let's release a new version of the application. We change the version in the Maven pom.xml from 1.3.0 to 1.4.0. This change, the commit id, and the message are visible in the picture below. The push to the GitHub repository triggers a run of the sample-java-pipeline via the webhook.

tekton-argocd-sync

After a successful run, our pipeline builds and pushes a new version of the application image to the registry. The most recently tagged version of the image is 1.4.0, as shown below.

$ oc get is
NAME                   IMAGE REPOSITORY                                                                   TAGS                 UPDATED
sample-spring-kotlin   image-registry.openshift-image-registry.svc:5000/piotr-cicd/sample-spring-kotlin   1.4.0,1.3.0  17 seconds ago

After that, we switch to the configuration repository. We are going to change the version of the target image. We will also increase the number of running pods.

Argo CD automatically detects changes in the GitHub repository and sets the status to OutOfSync. It highlights the objects that have been changed by the last commit.

Now, the only thing we need to do is click the SYNC button. After that, Argo CD creates a new revision with the latest image and runs two application pods instead of a single one.

Final Thoughts

Tekton and Argo CD may be used together to successfully design and run CI/CD processes on Kubernetes. Argo CD watches cluster objects stored in a Git repository and manages their creation, update, and deletion. Tekton is a CI/CD tool that handles all parts of the development lifecycle, from building images to deploying cluster objects. You can easily run and manage both on OpenShift. If you want to compare the approach described here, based on Tekton and Argo CD, with Jenkins, you may read my article Continuous Integration with Jenkins on Kubernetes.

The post Kubernetes CI/CD with Tekton and ArgoCD appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2021/08/05/kubernetes-ci-cd-with-tekton-and-argocd/feed/ 11 10011
Continuous Integration with Jenkins on Kubernetes https://piotrminkowski.com/2020/11/10/continuous-integration-with-jenkins-on-kubernetes/ https://piotrminkowski.com/2020/11/10/continuous-integration-with-jenkins-on-kubernetes/#comments Tue, 10 Nov 2020 14:26:32 +0000 https://piotrminkowski.com/?p=9091 Although Jenkins is a mature solution, it still can be the first choice for building CI on Kubernetes. In this article, I’ll show how to install Jenkins on Kubernetes, and use it for building a Java application with Maven. You will learn how to use and customize the Helm for this installation. We will implement […]

The post Continuous Integration with Jenkins on Kubernetes appeared first on Piotr's TechBlog.

]]>
Although Jenkins is a mature solution, it can still be the first choice for building CI on Kubernetes. In this article, I'll show how to install Jenkins on Kubernetes and use it for building a Java application with Maven. You will learn how to use and customize the Helm chart for this installation. We will implement the typical steps for building and deploying Java applications on Kubernetes.

Here’s the architecture of our solution.

Clone the source code

If you would like to try it yourself, you may always take a look at my source code. To do that, clone my repository sample-spring-boot-on-kubernetes. Then just follow my instructions 🙂

Deploy Jenkins on Kubernetes with Helm

We may install the Jenkins server on Kubernetes using the Helm package manager. The official Helm chart spawns agents on Kubernetes and utilizes the Jenkins Kubernetes Plugin. It comes with a default configuration, but we may override all of its properties using the JCasC (Jenkins Configuration as Code) Plugin. It is worth reading more about this plugin before continuing.

Firstly, we will add a new Helm repository and update it.

$ helm repo add jenkins https://charts.jenkins.io
$ helm repo update

Then, we may install Jenkins by executing the following command. The Jenkins instance runs in a dedicated namespace, so before installing it we need to create one with the kubectl create ns jenkins command. To include additional configuration, we create a YAML file and pass it with the -f option.

$ helm install -f k8s/jenkins-helm-config.yaml jenkins jenkins/jenkins -n jenkins

Customize Jenkins

We need to configure several things before creating a build pipeline. Let’s consider the following list of tasks:

  1. Customize the default agent – we need to mount a volume into the default agent pod to store workspace files and share them with the other agents.
  2. Create the maven agent – we need to define a new agent able to perform a Maven build. It should use the same persistent volume as the default agent. It should also use JDK 11, because our application is compiled with that version of Java. Finally, we will increase the default CPU and memory limits for the agent pods.
  3. Create GitHub credentials – the Jenkins pipeline needs to be able to clone the source code from GitHub.
  4. Install the Kubernetes Continuous Deploy plugin – we will use this plugin to deploy resource configurations to a Kubernetes cluster.
  5. Create kubeconfig credentials – we have to provide the configuration of our Kubernetes context.

Let's take a look at the whole Jenkins configuration file. It contains the agent and additionalAgents sections, and defines a JCasC script with the credentials definition.

agent:
  podName: default
  customJenkinsLabels: default
  volumes:
    - type: PVC
      claimName: jenkins-agent
      mountPath: /home/jenkins
      readOnly: false

additionalAgents:
  maven:
    podName: maven
    customJenkinsLabels: maven
    image: jenkins/jnlp-agent-maven
    tag: jdk11
    volumes:
      - type: PVC
        claimName: jenkins-agent
        mountPath: /home/jenkins
        readOnly: false
    resources:
      limits:
        cpu: "1"
        memory: "2048Mi"

master:
  JCasC:
    configScripts:
      creds: |
        credentials:
          system:
            domainCredentials:
              - domain:
                  name: "github.com"
                  description: "GitHub domain"
                  specifications:
                    - hostnameSpecification:
                        includes: "github.com"
                credentials:
                  - usernamePassword:
                      scope: GLOBAL
                      id: github_credentials
                      username: piomin
                      password: ${GITHUB_PASSWORD}
              - credentials:
                  - kubeconfig:
                      id: "docker-desktop"
                      kubeconfigSource:
                        directEntry:
                          content: |-
                            apiVersion: v1
                            kind: Config
                            preferences: {}
                            clusters:
                            - cluster:
                                certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01URXdPVEE1TkRjd04xb1hEVE13TVRFd056QTVORGN3TjFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTStrCnhKd3l6UVEydXNvRHh5RmwxREdwZWZQTVA0RGFVaVJsK01SQ1p1S0NFWUFkL0ZQOWtFS0RlVXMydmVURi9jMXYKUjZpTDlsMVMvdmN6REoyRXRuZUd0TXVPdWFXNnFCWkN5OFJ2NmFReHd0UEpnWVZGTHBNM2dXYmhqTHp3RXplOApEQlhxekZDZkNobXl3SkdORVdWV0s4VnBuSlpMbjRVbUZKcE5RQndsSXZwRC90UDJVUVRiRGNHYURqUE5vY2c0Cms1SmNOc3h3SDV0NkhIN0JZMW9jTEFLUUhsZ2V4V2ZObWdRRkM2UUcrRVNsWkpDVEtNdVVuM2JsRWVlYytmUWYKaVk3YmdOb3BjRThrS0lUY2pzMG95dGVyNmxuY2ZLTVBqSnc2RTNXMmpXRThKU2Z2WDE2dGVhZUZISDEyTmRqWgpWTER2ZWc3eVBsTlRmRVJld25FQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZMWjUzVEhBSXp0bHljV0NrS1hhY2l4K0Y5a1FNQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFBZWllMTRoSlZkTHF3bUY0SGVPS0ZhNXVUYUt6aXRUSElJNHJhU3cxTUppZElQTmZERwprRk5KeXM1M2I4MHMveWFXQ3BPbXdCK1dEak9hWmREQjFxcXVxU1FxeGhkNWMxU2FBZ1VXTGp5OXdLa1dPMzBTCjB3RTRlVkY3Q1c5VGpSMEoyVXV6UEVXdFBKRWF4U2xKMGhuZlQyeFYvb0N5OE9kMm9EZjZkSFRvbE5UTUEyalcKZjRZdXl3U1Z5a2RNaXZYMU5xZzdmK3RrcEVwb25PdkQ4ZmFEL2dXZmpaWHNFdHo4NXRNcTVLd2NQNUh2ZDJ0ZgoyKzBSbEtFT0pyY1dyL1lEc2w3dWdDdkFJTVk4WGdJL1E5dTZZTjAzTngzWXdSS2UrMElpSzcyOHVuNVJaVEVXCmNZNHc0YkpudlN6WWpKeUJIaHNiQVNTNzN6NndXVEo4REhKSwotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
                                server: https://kubernetes.default
                              name: docker-desktop
                            contexts:
                            - context:
                                cluster: docker-desktop
                                user: docker-desktop
                              name: docker-desktop
                            current-context: docker-desktop
                            users:
                            - name: docker-desktop
                              user:
                                client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURGVENDQWYyZ0F3SUJBZ0lJRnh2QzMyK2tPMEl3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TURFeE1Ea3dPVFEzTURkYUZ3MHlNVEV4TURrd09UUTNNekZhTURZeApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sc3dHUVlEVlFRREV4SmtiMk5yWlhJdFptOXlMV1JsCmMydDBiM0F3Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRQ3M0TXdUU3ByMkRoOTMKTlpERldsNWQyaWgwbllBdTJmTk1RYjZ2ZHR5RUVpTUVpNk5BM05qRGM4OWl5WUhOU2J4YmVNNlNUMzRlTFIwaQpXbHJJSlhhVjNBSXhnbFo4SkdqczVUSHRlM1FjNXZVSkJJWXhndFJFTFlJMGlJekpZdEhoU1NwMFU0eWNjdzl5CnVGSm1YTHVBRVdXR0tTcitVd2Y3RWtuWmJoaFRNQWI0RUF1NlR6dkpyRHhpTDAzU0UrSWhJMTJDV1Y3cVRqZ1gKdGI1OXdKcWkwK3ZJSDBSc3dxOUpnemtQTUhLNkFBZkgxbmFmZ3VCQjM2VEppcUR6YWFxV2VZTmludlIrSkVHMAptakV3NWlFN3JHdUgrZVBxSklvdTJlc1YvN1hyYWx2UEl2Zng2ajFvRWI4NWtna2RuV0JiQlNmTmJCdnhHQU1uCmdnLzdzNHdoQWdNQkFBR2pTREJHTUE0R0ExVWREd0VCL3dRRUF3SUZvREFUQmdOVkhTVUVEREFLQmdnckJnRUYKQlFjREFqQWZCZ05WSFNNRUdEQVdnQlMyZWQweHdDTTdaY25GZ3BDbDJuSXNmaGZaRURBTkJna3Foa2lHOXcwQgpBUXNGQUFPQ0FRRUFpbUg1c1JqcjB6WjlDRkU2SVVwNVRwV2pBUXhma29oQkpPOUZmRGE3N2kvR1NsYm1jcXFrCldiWUVYRkl3MU9EbFVjUy9QMXZ5d3ZwV0Y0VXNXTGhtYkF5ZkZGbXZRWFNEZHhYbTlkamI2OEVtRWFPSlI1VTYKOHJOTkR0TUVEY25sbFd2Qk1CRXBNbkNtcm9KcXo3ZzVzeDFQSmhwcDBUdUZDQTIwT2FXb3drTUNNUXRIZlhLQgpVUDA2eGxRU2o1SGNOS1BSQWFyQzBtSzZPVUhybExBcUIvOCtDQlowVUY2MXhTTGN1WFJvYU52S1ZDWHZnQy9kCkQ4ckxuWXFmbWl6WHMvcHJ3dEhsaVFBR2lmemU1MmttbTkyR2RrS2V1SmFRbmM5RWwrd2RZaUVBSHVKU1YvK04Kc2VRelpTa0ZmT2ozbHUxdWtoSDg4dGcxUUp2TkpuM1FhQT09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
                                client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBck9ETUUwcWE5ZzRmZHpXUXhWcGVYZG9vZEoyQUx0bnpURUcrcjNiY2hCSWpCSXVqClFOell3M1BQWXNtQnpVbThXM2pPa2s5K0hpMGRJbHBheUNWMmxkd0NNWUpXZkNSbzdPVXg3WHQwSE9iMUNRU0cKTVlMVVJDMkNOSWlNeVdMUjRVa3FkRk9NbkhNUGNyaFNabHk3Z0JGbGhpa3EvbE1IK3hKSjJXNFlVekFHK0JBTAp1azg3eWF3OFlpOU4waFBpSVNOZGdsbGU2azQ0RjdXK2ZjQ2FvdFByeUI5RWJNS3ZTWU01RHpCeXVnQUh4OVoyCm40TGdRZCtreVlxZzgybXFsbm1EWXA3MGZpUkJ0Sm94TU9ZaE82eHJoL25qNmlTS0x0bnJGZisxNjJwYnp5TDMKOGVvOWFCRy9PWklKSFoxZ1d3VW56V2diOFJnREo0SVArN09NSVFJREFRQUJBb0lCQVFDWEZZTGtYVEFlVit0aAo2RnRVVG96b0lxOTJjdXRDaHRHZFZGdk14dWtqTnlLSloydk9WUFBQcE5lYXN4YVFqWjlpcGFxS3JaUS8xUmVBCkhVejNXOTVPUzg5UzYyQ2Y3OFlQT3FLdXRGU2VxYTErS3drSUhobGFXQmRSeUFDYVE1VysrSTEweWt1NXNzak8KYm8zOHpaQkQ5WEF2bHF6dlJTdFZYZjlTV1doQzBlWnRKTm84QU4yZnpkdkRjUUgwOVRsejh1S05EaUNra2RYQQpHTTdZTUdoQktYWGd6YlcxSUVMejRlRUpDZDh0dklReitwcWtxRktIcHRjNnVJY1hLQjFxUGVGRDRSMm9iNUlNCnl5MUpBWlZyR0JHaUk5d1p5OFU1a253UW93emwwUTEwZXlRdUkwTG42SWthZG5SQktMRHcrczRGaE1UQVViOWYKT1NBR3JaVnRBb0dCQU9RTDJzSEN3T25KOW0xYmRiSlVLeTFsRHRsTmV4aDRKOGNERVZKR3NyRVNndlgrSi9ZZQpXb0NHZXc3cGNXakFWaWJhOUMzSFBSbEtOV2hXOExOVlpvUy9jQUxCa1lpSUZNdDlnT1NpZmtCOFhmeVJQT3hJCmNIc2ZjOXZ2OEFJcmxZVVpCNjI1ak8rUFZTOXZLOXZXUGtka3d0MlFSUHlqYlEwVS9mZmdvUWVIQW9HQkFNSVIKd0lqL3RVbmJTeTZzL1JXSlFiVmxPb1VPVjFpb0d4WmNOQjBkaktBV3YwUksvVHdldFBRSXh2dmd1d2RaVFdiTApSTGk5M3RPY3U0UXhNOFpkdGZhTnh5dWQvM2xXSHhPYzRZa0EwUHdSTzY4MjNMcElWNGxrc0tuNUN0aC80QTFJCmw3czV0bHVEbkI3dFdYTFM4MHcyYkE4YWtVZXlBbkZmWXpTTUR1a1hBb0dBSkRFaGNialg1d0t2Z21HT2gxUEcKV25qOFowNWRwOStCNkpxN0NBVENYVW5qME9pYUxQeGFQcVdaS0IreWFQNkZiYnM0SDMvTVdaUW1iNzNFaTZHVgpHS0pOUTVLMjV5VTVyNlhtYStMQ0NMZjBMcDVhUGVHdFFFMFlsU0k2UkEzb3Qrdm1CUk02bzlacW5aR1dNMWlJCkg4cUZCcWJiM0FDUDBSQ3cwY01ycTBjQ2dZRUFvMWM5cmhGTERMYStPTEx3OE1kdHZyZE00ZUNJTTk2SnJmQTkKREtScVQvUFZXQzJscG94UjBYUHh4dDRIak0vbERiZllSNFhIbm1RMGo3YTUxU1BhbTRJSk9QVHFxYjJLdW44NApkSTl6VmpWSy90WTJRYlBSdVpvOTkxSGRod3RhRU5RZ29UeVo5N3gyRXJIQ3I1cE5uTC9SZzRUZzhtOHBEek14CjFIQnR2RkVDZ1lFQTF5aHNPUDBRb3F2SHRNOUFuWVlua2JzQU12L2dqT3FrWUk5cjQ2c1V3Mnc3WHRJb1NTYlAKU0hmbGRxN0IxVXJoTmRJMFBXWXRWZ3kyV1NrN0FaeG8vUWtLZGtPbTAxS0pCY2xteW9JZDE0a0xCVkZxbUV6Rgp1c2l4MmpwdTVOTWhjUWo4aFY2Sk42aXdraHJkYjByZVpuMGo4MG1ZRE96d3hjMmpvTmxSWjN3PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
                      scope: GLOBAL

Before installing Jenkins we need to create the PersistentVolumeClaim object. This volume is used by the agents to store workspace files.

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: jenkins-agent
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  storageClassName: hostpath

Finally, let’s create it inside the jenkins namespace.

$ kubectl create -f k8s/jenkins-agent-pvc.yaml

Explore Jenkins on Kubernetes

After starting the Jenkins instance, we may log in to its web management console. The default admin username is admin. To obtain the password, we execute the following command.

$ kubectl get secret --namespace jenkins jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode
YdeQuJZHa1

The Jenkins instance is available on port 8080 inside the Kubernetes cluster. Let's expose it on a local port with the kubectl port-forward command. After that, we can access it at http://localhost:8080.

$ kubectl port-forward service/jenkins 8080:8080 -n jenkins

Let's log in to the Jenkins console and verify the correctness of our installation. First, navigate to "Manage Jenkins" and then to the "Manage Credentials" section. As you can see below, there are two credentials: github_credentials and docker-desktop.

jenkins-on-kubernetes-credentials

Then, let's move back to "Manage Jenkins" and go to the "Manage Nodes and Clouds" section. In the "Configure Clouds" tab, there is the Kubernetes configuration shown below. It contains two pod templates: default and maven.

jenkins-on-kubernetes-cloud

Explore a sample application

The sample application is built on top of Spring Boot, and we use Maven for building it. We use the Jib plugin for creating a Docker image. Thanks to that, we won't have to install any additional tools to build Docker images with Jenkins.

<properties>
   <java.version>11</java.version>
</properties>
<build>
   <plugins>
      <plugin>
         <groupId>org.springframework.boot</groupId>
         <artifactId>spring-boot-maven-plugin</artifactId>
         <executions>
            <execution>
               <goals>
                  <goal>build-info</goal>
               </goals>
            </execution>
         </executions>
         <configuration>
            <excludeDevtools>false</excludeDevtools>
         </configuration>
      </plugin>
   </plugins>
</build>
<profiles>
   <profile>
      <id>jib</id>
      <activation>
         <activeByDefault>false</activeByDefault>
      </activation>
      <build>
         <plugins>
            <plugin>
               <groupId>com.google.cloud.tools</groupId>
               <artifactId>jib-maven-plugin</artifactId>
               <version>2.4.0</version>
               <configuration>
                  <to>piomin/sample-spring-boot-on-kubernetes</to>
               </configuration>
            </plugin>
         </plugins>
      </build>
   </profile>
</profiles>

The Deployment manifest contains the PIPELINE_NAMESPACE parameter. Our pipeline replaces it with the target namespace name, so we can deploy our application to multiple Kubernetes namespaces.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-spring-boot-on-kubernetes-deployment
  namespace: ${PIPELINE_NAMESPACE}
spec:
  selector:
    matchLabels:
      app: sample-spring-boot-on-kubernetes
  template:
    metadata:
      labels:
        app: sample-spring-boot-on-kubernetes
    spec:
      containers:
      - name: sample-spring-boot-on-kubernetes
        image: piomin/sample-spring-boot-on-kubernetes
        ports:
        - containerPort: 8080
        env:
          - name: MONGO_DATABASE
            valueFrom:
              configMapKeyRef:
                name: mongodb
                key: database-name
          - name: MONGO_USERNAME
            valueFrom:
              secretKeyRef:
                name: mongodb
                key: database-user
          - name: MONGO_PASSWORD
            valueFrom:
              secretKeyRef:
                name: mongodb
                key: database-password

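The substitution the plugin performs can be reproduced locally, which is handy for checking the template before a pipeline run. Here is a minimal sketch using sed on an inline copy of the template header (the plugin itself resolves the variables from the build environment, so this is only an illustration):

```shell
# Write a minimal copy of the template header to a temporary file
cat > /tmp/deployment-template.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-spring-boot-on-kubernetes-deployment
  namespace: ${PIPELINE_NAMESPACE}
EOF

# Substitute the placeholder the same way the plugin would
PIPELINE_NAMESPACE=test
sed "s/\${PIPELINE_NAMESPACE}/${PIPELINE_NAMESPACE}/" /tmp/deployment-template.yaml
```

The rendered output contains `namespace: test` and could be piped straight into `kubectl apply -f -`.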
Create Jenkins pipeline on Kubernetes

Finally, we may create the Jenkins pipeline for our application. It consists of six stages. First, we clone the source code repository from GitHub using the default Jenkins agent. The next three stages use the maven agent. In the second stage, we build the application. Then, we run JUnit tests. After that, we build a Docker image in "Dockerless" mode using the Maven Jib plugin. In the last two stages, we take advantage of the Kubernetes Continuous Deploy plugin. We use the already created kubeconfig credentials and the deployment-template.yaml file from the source code. We just need to set the PIPELINE_NAMESPACE environment variable.

pipeline {
   agent {
      label "default"
   }
   stages {
      stage('Checkout') {
         steps {
            script {
               git url: 'https://github.com/piomin/sample-spring-boot-on-kubernetes.git', credentialsId: 'github_credentials'
            }
         }
      }
      stage('Build') {
         agent {
            label "maven"
         }
         steps {
            sh 'mvn clean compile'
         }
      }
      stage('Test') {
         agent {
            label "maven"
         }
         steps {
            sh 'mvn test'
         }
      }
      stage('Image') {
         agent {
            label "maven"
         }
         steps {
            sh 'mvn -P jib -Djib.to.auth.username=${DOCKER_LOGIN} -Djib.to.auth.password=${DOCKER_PASSWORD} compile jib:build'
         }
      }
      stage('Deploy on test') {
         steps {
            script {
               env.PIPELINE_NAMESPACE = "test"
               kubernetesDeploy kubeconfigId: 'docker-desktop', configs: 'k8s/deployment-template.yaml'
            }
         }
      }
      stage('Deploy on prod') {
         steps {
            script {
               env.PIPELINE_NAMESPACE = "prod"
               kubernetesDeploy kubeconfigId: 'docker-desktop', configs: 'k8s/deployment-template.yaml'
            }
         }
      }
   }
}

Run the pipeline

Our sample Spring Boot application connects to MongoDB. Therefore, we need to deploy the Mongo instance to the test and prod namespaces before running the pipeline. We can use manifest mongodb-deployment.yaml in k8s directory.

$ kubectl create ns test
$ kubectl apply -f k8s/mongodb-deployment.yaml -n test
$ kubectl create ns prod
$ kubectl apply -f k8s/mongodb-deployment.yaml -n prod

Finally, let’s run our test pipeline. It finishes successfully as shown below.

[Image: jenkins-on-kubernetes-pipeline]

Now, we may check out a list of running pods in the prod namespace.
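If you prefer the command line over a dashboard, a generic check could look like this (pod names, counts, and ages will differ in your cluster):

```shell
$ kubectl get pod -n prod
```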

Conclusion

Helm and the JCasC plugin simplify the installation of Jenkins on Kubernetes. Also, Maven comes with a huge set of plugins that may be used with Docker and Kubernetes. In this article, I showed how to use the Jib Maven Plugin to build an image of your application, and Jenkins plugins to run pipelines and deployments on Kubernetes. You can compare this approach with the GitLab CI one presented in the article GitLab CI/CD on Kubernetes. Enjoy 🙂

The post Continuous Integration with Jenkins on Kubernetes appeared first on Piotr's TechBlog.

Gitlab CI/CD on Kubernetes https://piotrminkowski.com/2020/10/19/gitlab-ci-cd-on-kubernetes/ Mon, 19 Oct 2020 07:20:44 +0000
You can use GitLab CI/CD to build and deploy your applications on Kubernetes. It is not hard to integrate GitLab with Kubernetes. You can take advantage of the GUI support to set up a connection with your Kubernetes cluster. Furthermore, GitLab CI provides a built-in container registry to store and share images.

Preface

In this article, I will describe all the steps required to build and deploy your Java application on Kubernetes with GitLab CI/CD. First, we are going to run an instance of the GitLab server on the local Kubernetes cluster. Then, we will use the special GitLab features in order to integrate it with Kubernetes. After that, we will create a pipeline for our Maven application. You will learn how to build it, run automated tests, build a Docker image, and finally run it on Kubernetes with GitLab CI.
In this article, I’m going to focus on simplicity. I will show you how to run GitLab CI on Kubernetes with minimal effort and resources. With this in mind, I hope it will help you form an opinion about GitLab CI/CD. A more advanced, production-grade installation may easily be performed with Helm. I will also describe that approach in the next section.

Run GitLab on Kubernetes

In order to easily start with GitLab CI on Kubernetes, we will use its image from the Docker Hub. We may choose between community and enterprise editions. First, we need to change the default external URL. We will also enable the container registry feature. To override both these settings we need to add the environment variable GITLAB_OMNIBUS_CONFIG. It can contain any of the GitLab configuration properties. Since I’m running the Kubernetes cluster locally, I also need to override the default URL of the Docker registry. You can easily get it by running the command docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' registry. The local image registry is running outside the Kubernetes cluster as a simple Docker container with the name registry.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitlab-deployment
spec:
  selector:
    matchLabels:
      app: gitlab
  template:
    metadata:
      labels:
        app: gitlab
    spec:
      containers:
      - name: gitlab
        image: gitlab/gitlab-ee
        env:
          - name: GITLAB_OMNIBUS_CONFIG
            value: "external_url 'http://gitlab-service.default/';gitlab_rails['registry_enabled'] = true;gitlab_rails['registry_api_url'] = \"http://172.17.0.2:5000\""
        ports:
        - containerPort: 80
          name: http
        volumeMounts:
          - mountPath: /var/opt/gitlab
            name: data
      volumes:
        - name: data
          emptyDir: {}

Then we will also create the Kubernetes service gitlab-service. The GitLab UI is available on port 80. We will use that service to access the GitLab UI from outside the Kubernetes cluster.

apiVersion: v1
kind: Service
metadata:
  name: gitlab-service
spec:
  type: NodePort
  selector:
    app: gitlab
  ports:
  - port: 80
    targetPort: 80
    name: http

Finally, we can apply the GitLab manifest with the Deployment and Service to Kubernetes. To do that, you just need to execute the command kubectl apply -f k8s/gitlab.yaml. Let's check the address of gitlab-service.
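For example, using kubectl (since gitlab-service is of type NodePort, Kubernetes assigns it a port from the 30000-32767 range by default; the exact value will differ in your cluster):

```shell
$ kubectl get svc gitlab-service
```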

Of course, we may also install GitLab on Kubernetes with Helm. However, you should keep in mind that it will generate a full deployment with several core and optional components. Consequently, you will have to increase the RAM assigned to your cluster to around 15 GB to be able to run it. Here's the list of required Helm commands. For more details, you may refer to the GitLab documentation.

$ helm repo add gitlab https://charts.gitlab.io/
$ helm repo update
$ helm upgrade --install gitlab gitlab/gitlab \
  --timeout 600s \
  --set global.hosts.domain=example.com \
  --set global.hosts.externalIP=10.10.10.10 \
  --set certmanager-issuer.email=me@example.com

Clone the source code

If you would like to try it yourself, you can always take a look at my source code. In order to do that, you need to clone my repository sample-spring-boot-on-kubernetes. Then you should just follow my instructions 🙂

First, let’s create a new repository on GitLab. Of course, its name is sample-spring-boot-on-kubernetes as shown below.

[Image: gitlab-on-kubernetes-create-repo]

Then, you should clone my example repository and push it to your GitLab instance running on Kubernetes. Assuming the address of my local GitLab instance is http://localhost:30129, I need to execute the following command.

$ git remote add gitlab http://localhost:30129/root/sample-spring-boot-on-kubernetes.git

To clarify, let’s display a list of Git remotes for the current repository.

$ git remote -v
gitlab  http://localhost:30129/root/sample-spring-boot-on-kubernetes.git (fetch)
gitlab  http://localhost:30129/root/sample-spring-boot-on-kubernetes.git (push)
origin  https://github.com/piomin/sample-spring-boot-on-kubernetes.git (fetch)
origin  https://github.com/piomin/sample-spring-boot-on-kubernetes.git (push)

Finally, we can push the source code to the GitLab repository using the gitlab remote.

$ git push gitlab

Configure GitLab integration with Kubernetes

After logging in to the GitLab UI, you should enable local HTTP requests. To do that, you need to go to the admin section. Then click “Settings” -> “Network” -> “Outbound requests”. Finally, you need to check the box “Allow requests to the local network from web hooks and services”. We will use internal communication between GitLab and the Kubernetes API, and between the GitLab CI runner and the GitLab master.

Now, we may configure the connection to the Kubernetes API. To do that, you should go to the section “Kubernetes”, then click “Add Kubernetes cluster”, and finally switch to the tab “Connect existing cluster”. We need to provide some basic information about our cluster in the form. The name is required, and therefore I’m setting the same name as my Kubernetes context. You may leave the default value in the “Environment scope” field. In the “API URL” field I’m providing the internal address of the Kubernetes API: https://kubernetes.default:443.

We also need to paste the cluster CA certificate. In order to obtain it, you should first find the secret with the prefix default-token-, and then extract and decode the certificate with the following command.

$ kubectl get secret default-token-ttswt -o jsonpath="{['data']['ca\.crt']}" | base64 --decode

Finally, we should create a special ServiceAccount for GitLab with the cluster-admin role.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gitlab-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: gitlab
    namespace: kube-system

Then you should find the secret with the prefix gitlab in the kube-system namespace and display its details. After that, we need to copy the value of the token field and paste it into the form without decoding.

$ kubectl describe secret gitlab-token-5sk2v -n kube-system

Here’s the full information about our Kubernetes cluster required by GitLab. Let’s add it by clicking “Add Kubernetes cluster”.

Once you have successfully added the new Kubernetes cluster in the GitLab UI, you need to display its details. In the tab “Applications”, you should find the section “GitLab runner” and install it.

The GitLab runner is automatically deployed in the namespace gitlab-managed-apps. We can verify that it has started successfully.

$ kubectl get pod -n gitlab-managed-apps
NAME                                   READY        STATUS    RESTARTS       AGE
runner-gitlab-runner-5649dbf49-5mnjv   1/1          Running   0              5m56s

The GitLab runner tries to communicate with the GitLab master. To verify that everything works fine, we need to go to the section “Overview” -> “Runners”. If you see the IP address and version number, it means that the runner is able to communicate with the master. In case of any problems, you should take a look at the pod logs.

Create application pipeline

The GitLab CI/CD configuration file is available in the project root directory. Its name is .gitlab-ci.yml, and it is automatically detected by GitLab CI. It consists of five stages and uses the Maven Docker image for executing builds. Let's take a closer look at it.

First, we run the build stage, responsible for building the application from the source code. It just runs the command mvn compile. Then, we run JUnit tests using the mvn test command. If all the tests pass, we may build a Docker image with our application. We use the Jib Maven plugin for it. It is able to build an image in docker-less mode, so we don't need a running Docker daemon. Jib builds the image and pushes it to the Docker registry. Finally, we can deploy our container on Kubernetes. To do that, we use the bitnami/kubectl image, which allows us to execute kubectl commands. In the first step, we deploy the application in the test namespace. The last stage, deploy-prod, requires manual approval. Both deploy stages are allowed only for the master branch.

image: maven:latest

stages:
  - build
  - test
  - image-build
  - deploy-tb
  - deploy-prod

build:
  stage: build
  script:
    - mvn compile

test:
  stage: test
  script:
    - mvn test

image-build:
  stage: image-build
  script:
    - mvn -s .m2/settings.xml -P jib compile jib:build

deploy-tb:
  image: bitnami/kubectl:latest
  stage: deploy-tb
  only:
    - master
  script:
    - kubectl apply -f k8s/deployment.yaml -n test

deploy-prod:
  image: bitnami/kubectl:latest
  stage: deploy-prod
  only:
    - master
  when: manual
  script:
    - kubectl apply -f k8s/deployment.yaml -n prod

We may push our application image to a remote or a local Docker registry. If you do not pass any address, by default Jib tries to push the image to the docker.io registry.

<plugin>
   <groupId>com.google.cloud.tools</groupId>
   <artifactId>jib-maven-plugin</artifactId>
   <version>2.4.0</version>
   <configuration>
      <to>piomin/sample-spring-boot-on-kubernetes</to>
   </configuration>
</plugin>

In order to push images to the docker.io registry, we need to provide client authentication credentials in the Maven settings.xml file.

<servers>
   <server>
      <id>registry-1.docker.io</id>
      <username>${DOCKER_LOGIN}</username>
      <password>${DOCKER_PASSWORD}</password>
   </server>
</servers>

Here’s a similar configuration, but for the local instance of the registry.

<plugin>
   <groupId>com.google.cloud.tools</groupId>
   <artifactId>jib-maven-plugin</artifactId>
   <version>2.4.0</version>
   <configuration>
      <allowInsecureRegistries>true</allowInsecureRegistries>
      <to>172.17.0.2:5000/root/sample-spring-boot-on-kubernetes</to>
   </configuration>
</plugin>

Run GitLab CI pipeline on Kubernetes

Finally, we may run our GitLab CI/CD pipeline. We can push the change in the source code or just run the build manually. You can see the result for the master branch in the picture below.

[Image: gitlab-on-kubernetes-pipeline-finished]

The last stage deploy-prod requires manual approval. We may confirm it by clicking the “Play” button.

[Image: gitlab-on-kubernetes-pipeline-approval]

If you push changes to another branch than master, the pipeline will run just three stages. It will build the application, run tests, and build a Docker image.

[Image: gitlab-on-kubernetes-dev-build]

You can also take advantage of the integrated container registry. You just need to set the right name of a Docker image. It should contain the GitLab owner name and image name. In that case, it is root/sample-spring-boot-on-kubernetes. I’m using my local Docker registry available at 172.17.0.2:5000.

Conclusion

GitLab seems to be a very interesting tool for building CI/CD processes on Kubernetes. It provides built-in integration with Kubernetes and a Docker container registry. Its documentation is of very high quality. In this article, I tried to show you that it is relatively easy to build CI/CD pipelines for Maven applications on Kubernetes. If you are interested in building a full CI/CD environment, you may refer to the article How to setup continuous delivery environment. Enjoy 🙂

The post Gitlab CI/CD on Kubernetes appeared first on Piotr's TechBlog.

Guide to Quarkus on Kubernetes https://piotrminkowski.com/2020/08/10/guide-to-quarkus-on-kubernetes/ Mon, 10 Aug 2020 15:30:42 +0000
Quarkus is usually described as a Kubernetes-native Java framework. It allows us to automatically generate Kubernetes resources based on the defaults and user-provided configuration. It also provides an extension for building and pushing container images. Quarkus can create a container image and push it to a registry before deploying the application to the target platform. It also provides an extension that allows developers to use Kubernetes ConfigMap as a configuration source, without having to mount them into the pod. We may use fabric8 Kubernetes Client directly to interact with the cluster, for example during JUnit tests.
In this guide, you will learn how to:

  • Use the Quarkus Dekorate extension to automatically generate Kubernetes manifests based on the source code and configuration
  • Build and push images to Docker registry with Jib extension
  • Deploy your application on Kubernetes without any manually created YAML in one click
  • Use Quarkus Kubernetes Config to inject configuration properties from ConfigMap

This guide is the second in a series about the Quarkus framework. If you are interested in an introduction to building Quarkus REST applications with Kotlin, you may refer to my article Guide to Quarkus with Kotlin.

Source code

The source code with the sample Quarkus applications is available on GitHub. First, you need to clone the following repository: https://github.com/piomin/sample-quarkus-applications.git. Then, you need to go to the employee-service directory. We use the same repository as in my previous article about Quarkus.

1. Dependencies

Quarkus does not implement its own mechanisms for generating Kubernetes manifests, deploying them on the platform, or building images. Instead, it adds some logic to existing tools. To enable the Dekorate and Jib extensions, we should include the following dependencies.

<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-kubernetes</artifactId>
</dependency>
<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-container-image-jib</artifactId>
</dependency>

Jib builds optimized images for Java applications without a Docker daemon and without deep mastery of Docker best practices. It is available as a plugin for Maven and Gradle, and as a Java library. Dekorate is a Java library that makes generating and decorating Kubernetes manifests as simple as adding a dependency to your project. It may generate manifests based on the source code, annotations, and configuration properties.

2. Preparation

In the first part of my guide to Quarkus with Kotlin, we ran our application in development mode with an embedded H2 database. In this part of the tutorial, we will integrate our application with Postgres deployed on Kubernetes. To do that, we first need to change the configuration settings for the data source. The H2 database will be active only in dev and test mode. The configuration of the PostgreSQL data source is based on environment variables.


# kubernetes
quarkus.datasource.db-kind=postgresql
quarkus.datasource.username=${POSTGRES_USER}
quarkus.datasource.password=${POSTGRES_PASSWORD}
quarkus.datasource.jdbc.url=jdbc:postgresql://${POSTGRES_HOST}:5432/${POSTGRES_DB}
# dev
%dev.quarkus.datasource.db-kind=h2
%dev.quarkus.datasource.username=sa
%dev.quarkus.datasource.password=password
%dev.quarkus.datasource.jdbc.url=jdbc:h2:mem:testdb
# test
%test.quarkus.datasource.db-kind=h2
%test.quarkus.datasource.username=sa
%test.quarkus.datasource.password=password
%test.quarkus.datasource.jdbc.url=jdbc:h2:mem:testdb

3. Configure Kubernetes extension

With the Quarkus Kubernetes extension, we may customize the behavior of the manifest generator. To do that, we need to provide configuration settings with the prefix quarkus.kubernetes.*. There are quite a lot of options, like defining labels, annotations, environment variables, Secret and ConfigMap references, or mounting volumes. First, let's take a look at the Secret and ConfigMap prepared for Postgres.

apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  labels:
    app: postgres
data:
  POSTGRES_DB: quarkus
  POSTGRES_USER: quarkus
  POSTGRES_HOST: postgres
---
apiVersion: v1
kind: Secret
metadata:
  name: postgres-secret
  labels:
    app: postgres
data:
  POSTGRES_PASSWORD: YWRtaW4xMjM=
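
As a side note, the values in a Secret's data section must be Base64-encoded. The password above can be produced (and verified) like this, assuming the plaintext is admin123:

```shell
# Encode the plaintext password for the Secret manifest
# (-n prevents the trailing newline from being encoded)
echo -n 'admin123' | base64
# -> YWRtaW4xMjM=

# Decode to verify the value stored in postgres-secret
echo 'YWRtaW4xMjM=' | base64 --decode
# -> admin123
```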

In this fragment of the configuration, besides a simple label and annotation, we add references to all the keys inside postgres-config and postgres-secret.

quarkus.kubernetes.labels.app-type=demo
quarkus.kubernetes.annotations.app-type=demo
quarkus.kubernetes.env.secrets=postgres-secret
quarkus.kubernetes.env.configmaps=postgres-config

4. Build image and deploy

Before executing the build and deploy, we need to apply the manifest with Postgres. Here's the Deployment definition for Postgres.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:latest
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              valueFrom:
                configMapKeyRef:
                  key: POSTGRES_DB
                  name: postgres-config
            - name: POSTGRES_USER
              valueFrom:
                configMapKeyRef:
                  key: POSTGRES_USER
                  name: postgres-config
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: POSTGRES_PASSWORD
                  name: postgres-secret
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb
      volumes:
        - name: postgredb
          persistentVolumeClaim:
            claimName: postgres-claim

Let’s apply it on Kubernetes together with the required ConfigMap, Secret, PersistentVolume, and PersistentVolumeClaim. All the objects are available inside the example repository in the file employee-service/k8s/postgres-deployment.yaml.

$ kubectl apply -f employee-service/k8s/postgres-deployment.yaml

After deploying Postgres, we may proceed to the main task. In order to build a Docker image with the application, we need to enable the option quarkus.container-image.build during the Maven build. If you also want to deploy and run a container with the application on your local Kubernetes instance, you need to enable the option quarkus.kubernetes.deploy.

$ mvn clean package -Dquarkus.container-image.build=true -Dquarkus.kubernetes.deploy=true

If your Kubernetes cluster is hosted in the cloud, you should push the image to a remote Docker registry before deployment. To do that, we should also activate the option quarkus.container-image.push during the Maven build. If you do not push to the default Docker registry, you have to set the parameter quarkus.container-image.registry (e.g. gcr.io) inside the application.properties file. The only thing I need to set for building images is the following property, which is the same as my login to the docker.io site.

quarkus.container-image.group=piomin
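Putting the options together, a single build that creates the image, pushes it to a registry, and deploys the application could look like this (a sketch based on the flags described above; adjust them to your setup):

```shell
$ mvn clean package \
  -Dquarkus.container-image.build=true \
  -Dquarkus.container-image.push=true \
  -Dquarkus.kubernetes.deploy=true
```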

After running the required Maven command, our application is deployed on Kubernetes. Let's take a look at what happened during the Maven build. Here's a fragment of the logs from that build. You can see that the Quarkus extension generated two files, kubernetes.yaml and kubernetes.json, inside the target/kubernetes directory. Then it proceeded to build a Docker image with our application. Because we didn't specify any base image, it takes the default one for Java 11: fabric8/java-alpine-openjdk11-jre.

[Image: quarkus-build-image]

Let’s take a look at the Deployment definition automatically generated by Quarkus.

  1. It adds some annotations, like the port and path of the metrics endpoint used by Prometheus to monitor the application, and enables scraping. It also adds the Git commit id, the repository URL, and our custom annotation defined in application.properties.
  2. It adds labels with the application name, the version (taken from the Maven pom.xml), and our custom label app-type.
  3. It injects the Kubernetes namespace name into the container.
  4. It injects the reference to the postgres-secret defined in application.properties.
  5. It injects the reference to the postgres-config defined in application.properties.
  6. The name of the image is created automatically. It is based on the Maven artifactId and version.
  7. The definitions of the liveness and readiness probes are generated if the Maven module quarkus-smallrye-health is present.

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations: # (1)
    prometheus.io/path: /metrics
    prometheus.io/port: 8080
    app.quarkus.io/commit-id: f6ae37288ed445177f23291c921c6099cfc58c6e
    app.quarkus.io/vcs-url: https://github.com/piomin/sample-quarkus-applications.git
    app.quarkus.io/build-timestamp: 2020-08-10 - 13:22:32 +0000
    app-type: demo
    prometheus.io/scrape: "true"
  labels: # (2)
    app.kubernetes.io/name: employee-service
    app.kubernetes.io/version: 1.1
    app-type: demo
  name: employee-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: employee-service
      app.kubernetes.io/version: 1.1
  template:
    metadata:
      annotations:
        prometheus.io/path: /metrics
        prometheus.io/port: 8080
        app.quarkus.io/commit-id: f6ae37288ed445177f23291c921c6099cfc58c6e
        app.quarkus.io/vcs-url: https://github.com/piomin/sample-quarkus-applications.git
        app.quarkus.io/build-timestamp: 2020-08-10 - 13:22:32 +0000
        app-type: demo
        prometheus.io/scrape: "true"
      labels:
        app.kubernetes.io/name: employee-service
        app.kubernetes.io/version: 1.1
        app-type: demo
    spec:
      containers:
      - env:
        - name: KUBERNETES_NAMESPACE # (3)
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        envFrom:
        - secretRef: # (4)
            name: postgres-secret
        - configMapRef: # (5)
            name: postgres-config
        image: piomin/employee-service:1.1 # (6)
        imagePullPolicy: IfNotPresent
        livenessProbe: # (7)
          failureThreshold: 3
          httpGet:
            path: /health/live
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 0
          periodSeconds: 30
          successThreshold: 1
          timeoutSeconds: 10
        name: employee-service
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /health/ready
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 0
          periodSeconds: 30
          successThreshold: 1
          timeoutSeconds: 10
      serviceAccount: employee-service

Once the image has been built, it is available in the local registry. Quarkus automatically deploys it to the current cluster using the already generated Kubernetes manifests.

[Image: quarkus-build-maven]

Here’s the list of pods in the default namespace.

[Image: quarkus-pods]

5. Using Kubernetes Config extension

With the Kubernetes Config extension, you can use a ConfigMap as a configuration source without having to mount it into the pod with the application. To use this extension, we need to include the following Maven dependency.


<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-kubernetes-config</artifactId>
</dependency>

This extension works directly with the Kubernetes API using the fabric8 KubernetesClient. That's why we should set the proper permissions for the ServiceAccount. Fortunately, all the required configuration is automatically generated by the Quarkus Kubernetes extension. The RoleBinding object is applied automatically if the quarkus-kubernetes-config module is present.

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  annotations:
    prometheus.io/path: /metrics
    prometheus.io/port: 8080
    app.quarkus.io/commit-id: a5da459af01637657ebb0ec3a606eb53d13b8524
    app.quarkus.io/vcs-url: https://github.com/piomin/sample-quarkus-applications.git
    app.quarkus.io/build-timestamp: 2020-08-10 - 14:25:20 +0000
    app-type: demo
    prometheus.io/scrape: "true"
  labels:
    app.kubernetes.io/name: employee-service
    app.kubernetes.io/version: 1.1
    app-type: demo
  name: employee-service:view
roleRef:
  kind: ClusterRole
  apiGroup: rbac.authorization.k8s.io
  name: view
subjects:
- kind: ServiceAccount
  name: employee-service

Here’s our example ConfigMap that contains a single property property1.


apiVersion: v1
kind: ConfigMap
metadata:
  name: employee-config
data:
  application.properties: |-
    property1=one

The same property is defined inside the application.properties file available on the classpath, but there it has a different value.

property1=test

Before deploying a new version of the application, we need to add the following properties. The first of them enables Kubernetes ConfigMap injection, while the second specifies the name of the injected ConfigMap.

quarkus.kubernetes-config.enabled=true
quarkus.kubernetes-config.config-maps=employee-config

Finally, we just need to implement a simple endpoint that injects and returns the configuration property.

@ConfigProperty(name = "property1")
lateinit var property1: String

@GET
@Path("/property1")
fun property1(): String = property1

The properties obtained from the ConfigMap have a higher priority than any properties of the same name found in the application.properties file on the classpath. Let's test it.
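A quick way to verify this is to call the endpoint. This is only a sketch: the host, port, and base path depend on how the employee-service is exposed in your cluster.

```shell
$ curl http://localhost:8080/property1
one
```

The response should be the ConfigMap value one, not the classpath value test.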

[Image: quarkus-config]

The post Guide to Quarkus on Kubernetes appeared first on Piotr's TechBlog.

Guide to building Spring Boot library https://piotrminkowski.com/2020/08/04/guide-to-building-spring-boot-library/ Tue, 04 Aug 2020 08:32:58 +0000
In this article, I’m going to show you how to create and share your own custom Spring Boot library. If you decide to build such a product, you should follow some best practices recommended by the Spring team. It’s a little bit more complicated than creating a plain Java library. Finally, you should publish your artifacts somewhere to share them with the community. You probably want to get positive feedback from the community, so you should think about adding some extras. I’m also going to describe them. Let’s begin!

Examples

If you are looking for examples of simple Spring Boot libraries, you can take a look at my repositories: https://github.com/piomin/spring-boot-logging and https://github.com/piomin/spring-boot-istio.

1. Pick the right name

We should pick the right name for our library. Spring recommends creating special modules called “starters” that contain auto-configuration code and customize the infrastructure of a given technology. The name of a third-party starter should end with spring-boot-starter and start with the name of the project or something related to the technology used in the library. This is contrary to the names of all official starters, which follow the pattern spring-boot-starter-*. For example, the names of my libraries are logstash-logging-spring-boot-starter and istio-spring-boot-starter.

2. Create auto-configuration

Typically, the “starter” module is separated from the “autoconfigure” module. However, it is not required. The autoconfigure module contains everything necessary to get started. Moreover, if I’m creating a simple library that does not consist of many classes, I put everything into a single starter module. Of course, that is just my approach. You can still create a separate starter module that includes the required dependencies for the project. Most importantly, all the beans should be registered inside the auto-configuration class. Do not annotate each of your beans inside a library with @Component or @Service; define them in the auto-configuration module instead. Here’s a simple auto-configuration class from my logstash-logging-spring-boot-starter library.


@Configuration
@ConfigurationProperties(prefix = "logging.logstash")
public class SpringLoggingAutoConfiguration {

   private static final String LOGSTASH_APPENDER_NAME = "LOGSTASH";
   private String url = "localhost:8500";
   private String ignorePatterns;
   private boolean logHeaders;
   private String trustStoreLocation;
   private String trustStorePassword;

   @Value("${spring.application.name:-}")
   String name;

   @Autowired(required = false)
   Optional<RestTemplate> template;

   @Bean
   public UniqueIDGenerator generator() {
      return new UniqueIDGenerator();
   }

   @Bean
   public SpringLoggingFilter loggingFilter() {
      return new SpringLoggingFilter(generator(), ignorePatterns, logHeaders);
   }

   @Bean
   @ConditionalOnMissingBean(RestTemplate.class)
   public RestTemplate restTemplate() {
      RestTemplate restTemplate = new RestTemplate();
      List<ClientHttpRequestInterceptor> interceptorList = new ArrayList<ClientHttpRequestInterceptor>();
      interceptorList.add(new RestTemplateSetHeaderInterceptor());
      restTemplate.setInterceptors(interceptorList);
      return restTemplate;
   }

   @Bean
   @ConditionalOnProperty("logging.logstash.enabled")
   public LogstashTcpSocketAppender logstashAppender() {
      LoggerContext loggerContext = (LoggerContext) LoggerFactory.getILoggerFactory();
      LogstashTcpSocketAppender logstashTcpSocketAppender = new LogstashTcpSocketAppender();
      logstashTcpSocketAppender.setName(LOGSTASH_APPENDER_NAME);
      logstashTcpSocketAppender.setContext(loggerContext);
      logstashTcpSocketAppender.addDestination(url);
      if (trustStoreLocation != null) {
         SSLConfiguration sslConfiguration = new SSLConfiguration();
         KeyStoreFactoryBean factory = new KeyStoreFactoryBean();
         factory.setLocation(trustStoreLocation);
         if (trustStorePassword != null)
            factory.setPassword(trustStorePassword);
         sslConfiguration.setTrustStore(factory);
         logstashTcpSocketAppender.setSsl(sslConfiguration);
      }
      LogstashEncoder encoder = new LogstashEncoder();
      encoder.setContext(loggerContext);
      encoder.setIncludeContext(true);
      encoder.setCustomFields("{\"appname\":\"" + name + "\"}");
      encoder.start();
      logstashTcpSocketAppender.setEncoder(encoder);
      logstashTcpSocketAppender.start();
      loggerContext.getLogger(Logger.ROOT_LOGGER_NAME).addAppender(logstashTcpSocketAppender);
      return logstashTcpSocketAppender;
   }
}
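On the consumer side, the fields bound through @ConfigurationProperties(prefix = "logging.logstash") above become regular Spring Boot properties. Here’s a sketch of an application.yml that an application using the starter could provide; the property names are derived directly from the fields of the class above, while the values are just examples:

```yaml
logging:
  logstash:
    enabled: true                    # activates the appender (@ConditionalOnProperty)
    url: logstash.example.com:5000   # TCP destination in host:port form (example value)
    log-headers: true                # binds to the logHeaders field
    ignore-patterns: /actuator.*     # binds to the ignorePatterns field
```

Thanks to Spring Boot’s relaxed binding, the kebab-case keys above map to the camelCase fields of the auto-configuration class.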

To enable auto-configuration for the custom library, we need to create the file spring.factories in the /src/main/resources/META-INF directory. It contains a list of auto-configuration classes.

org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
pl.piomin.logging.config.SpringLoggingAutoConfiguration
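A note for readers on newer Spring Boot versions: starting with Spring Boot 2.7, the recommended registration file is META-INF/spring/org.springframework.boot.autoconfigure.AutoConfiguration.imports, with one fully qualified auto-configuration class per line (the spring.factories mechanism for auto-configurations was removed in Spring Boot 3.0):

```
pl.piomin.logging.config.SpringLoggingAutoConfiguration
```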

 

3. Process annotations

Spring is an annotation-based framework. If you are creating a custom library, you will usually define some annotations used to enable or disable its features. With Spring Boot you can easily process such annotations. Here’s my custom annotation used to enable the Istio client on application startup. I’m following the pattern widely used in Spring Cloud.

@Target({ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
@Documented
public @interface EnableIstio {
    int timeout() default 0;
    String version() default "";
    int weight() default 0;
    int numberOfRetries() default 0;
    int circuitBreakerErrors() default 0;
}
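Before looking at the processing side, here is how a user of the library would apply the annotation, and how its attributes can be read back at runtime via reflection. This is a framework-free sketch: the SampleApplication class and the simplified copy of the annotation are hypothetical; only the attribute names come from the definition above.

```java
import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Simplified copy of the @EnableIstio annotation defined above.
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Documented
@interface EnableIstio {
    int timeout() default 0;
    String version() default "";
    int weight() default 0;
}

// Hypothetical application class, annotated the way a library user would do it.
@EnableIstio(version = "v1", weight = 50)
class SampleApplication {
}

public class EnableIstioDemo {
    public static void main(String[] args) {
        // This mirrors what the annotation processor does on startup:
        // find the annotation on a bean's class and read its attributes.
        EnableIstio annotation = SampleApplication.class.getAnnotation(EnableIstio.class);
        System.out.println(annotation.version());  // prints "v1"
        System.out.println(annotation.weight());   // prints "50"
        System.out.println(annotation.timeout());  // prints "0" (the default)
    }
}
```

The RUNTIME retention policy on the annotation is what makes this possible; in the library itself, the lookup is done through the Spring context instead of plain reflection, as shown below.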

I need to process the annotation defined above only once, on startup. That’s why I’m creating a bean that implements the ApplicationListener interface to catch the ContextRefreshedEvent emitted by Spring Boot.

public class ApplicationStartupListener implements
      ApplicationListener<ContextRefreshedEvent> {
   private ApplicationContext context;
   private EnableIstioAnnotationProcessor processor;
   public ApplicationStartupListener(ApplicationContext context,
         EnableIstioAnnotationProcessor processor) {
      this.context = context;
      this.processor = processor;
   }
   @Override
   public void onApplicationEvent(ContextRefreshedEvent contextRefreshedEvent) {
      Optional<EnableIstio> annotation =
            context.getBeansWithAnnotation(EnableIstio.class).keySet().stream()
            .map(key -> context.findAnnotationOnBean(key, EnableIstio.class))
            .findFirst();
      annotation.ifPresent(enableIstio -> processor.process(enableIstio));
   }
}

 

4. Spring Boot library dependencies

Our library should reference only those artifacts or starters that are necessary for its implementation. Here’s the minimal set of artifacts required for my istio-spring-boot-starter. Besides the Spring and Spring Boot libraries, I only use the Kubernetes and Istio Java clients. We may also declare a reference to spring-boot-starter-parent.

<dependencies>
   <dependency>
      <groupId>me.snowdrop</groupId>
      <artifactId>istio-client</artifactId>
      <version>${istio-client.version}</version>
   </dependency>
   <dependency>
      <groupId>io.fabric8</groupId>
      <artifactId>kubernetes-client</artifactId>
      <version>${kubernetes-client.version}</version>
   </dependency>
   <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-context-support</artifactId>
      <version>${spring.version}</version>
      <scope>provided</scope>
   </dependency>
   <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-autoconfigure</artifactId>
      <version>${spring.boot.version}</version>
      <scope>provided</scope>
   </dependency>
</dependencies>

 

5. Publish

Typically, the implementation process of a Spring Boot library is divided into two phases. In the first phase, we implement the mechanisms specific to our library. In the second, we take care of following the Spring Boot best practices. Assuming we have already finished both, we may publish our custom starter to share it with the community. In my opinion, the best way to do that is to publish it to the Maven Central repository.
You must go through several steps to publish JAR files to Maven Central. The necessary steps are listed below. For a more detailed description, you may refer to the article How to Publish Your Artifacts to Maven Central on DZone.
Here’s the list of prerequisites:

  • Create an account at Sonatype (https://oss.sonatype.org/)
  • Claim your product’s namespace by creating an issue in Sonatype’s Jira
  • Generate PGP private/public key pair to sign your JAR files
  • Publish your public key to one of the GPG key servers

After completing all the required steps, you may proceed to the configuration in the Maven POM file. You need to include two sets of settings there. The first one contains the necessary information about the project, its author, and the source code repository.

<name>logstash-logging-spring-boot-starter</name>
<description>Library for HTTP logging with Spring Boot</description>
<url>https://github.com/piomin/spring-boot-logging</url>
<developers>
   <developer>
      <name>Piotr Mińkowski</name>
      <email>piotr.minkowski@gmail.com</email>
      <url>https://github.com/piomin</url>
   </developer>
</developers>
<licenses>
   <license>
      <name>MIT License</name>
      <url>http://www.opensource.org/licenses/mit-license.php</url>
      <distribution>repo</distribution>
   </license>
</licenses>
<scm>
   <connection>scm:git:git://github.com/piomin/spring-boot-logging.git</connection>
   <developerConnection>scm:git:git@github.com:piomin/spring-boot-logging.git</developerConnection>
   <url>https://github.com/piomin/spring-boot-logging</url>
</scm>
<distributionManagement>
   <snapshotRepository>
      <id>ossrh</id>
      <url>https://oss.sonatype.org/content/repositories/snapshots</url>
   </snapshotRepository>
   <repository>
      <id>ossrh</id>
      <url>https://oss.sonatype.org/service/local/staging/deploy/maven2/</url>
   </repository>
</distributionManagement>
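For the deployment to work, the ossrh repository id used above must also match a server entry with your Sonatype account credentials in your local ~/.m2/settings.xml. Here’s a sketch with placeholder values:

```xml
<settings>
   <servers>
      <server>
         <id>ossrh</id>
         <username>your-sonatype-username</username>
         <password>your-sonatype-password</password>
      </server>
   </servers>
</settings>
```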

In the second step, we need to add some Maven plugins for signing the JAR file and for attaching source code and Javadoc JARs. Here’s the required list of plugins, activated only when the release Maven profile is enabled.

<profiles>
   <profile>
      <id>release</id>
      <build>
         <plugins>
            <plugin>
               <groupId>org.apache.maven.plugins</groupId>
               <artifactId>maven-gpg-plugin</artifactId>
               <version>1.6</version>
               <executions>
                  <execution>
                     <id>sign-artifacts</id>
                     <phase>verify</phase>
                     <goals>
                        <goal>sign</goal>
                     </goals>
                  </execution>
               </executions>
            </plugin>
            <plugin>
               <groupId>org.apache.maven.plugins</groupId>
               <artifactId>maven-source-plugin</artifactId>
               <version>3.2.1</version>
               <executions>
                  <execution>
                     <id>attach-sources</id>
                     <goals>
                        <goal>jar-no-fork</goal>
                     </goals>
                  </execution>
               </executions>
            </plugin>
            <plugin>
               <groupId>org.apache.maven.plugins</groupId>
               <artifactId>maven-javadoc-plugin</artifactId>
               <version>3.2.0</version>
               <executions>
                  <execution>
                     <id>attach-javadocs</id>
                     <goals>
                        <goal>jar</goal>
                     </goals>
                  </execution>
               </executions>
            </plugin>
         </plugins>
      </build>
   </profile>
</profiles>

Finally, you need to execute the command mvn clean deploy -P release and visit the Sonatype site to confirm the publication of your library.

spring-boot-library-sonatype

6. Promote your Spring Boot library

Congratulations! You have just published your first Spring Boot library. But the question is: what’s next? You probably would like to encourage people to try it. Of course, you can advertise it on social media or write articles on dev portals. But my first advice is to take care of how it is presented. If you are storing your source code on GitHub, prepare a README file with a detailed description of your library. It is also worth adding some tags that describe your project.

spring-boot-library-github-2

It is relatively easy to integrate your GitHub repository with third-party tools used for continuous integration or static source code analysis. Thanks to that, you can continuously improve your library. Moreover, you can add badges to your repository that show you are using such tools. In my repositories spring-boot-logging and spring-boot-istio, I have already added badges with the Maven release version, the CircleCI build status, and SonarCloud analysis reports. Looks fine? 🙂

spring-boot-library-github-1

Conclusion

In this article, I described the process of creating a Spring Boot library from beginning to end. You can take a look at my libraries https://github.com/piomin/spring-boot-logging and https://github.com/piomin/spring-boot-istio if you are looking for simple examples. Of course, there are many other third-party Spring Boot starters published on GitHub that you can also examine. If you are interested in building your own Spring Boot library, you should learn more about auto-configuration: A Magic Around Spring Boot Auto Configuration.

The post Guide to building Spring Boot library appeared first on Piotr's TechBlog.

Development on Kubernetes with Okteto and Spring Boot https://piotrminkowski.com/2020/06/15/development-on-kubernetes-with-okteto-and-spring-boot/ Mon, 15 Jun 2020 10:27:51 +0000

Okteto Platform seems to be an interesting alternative to other, more popular tools that simplify development on Kubernetes. In comparison to tools like Skaffold or Draft, the idea behind Okteto is to move development entirely to Kubernetes. What does that mean in practice? Let’s check it out.
Okteto decouples development from deployment. This means that we can deploy our application using kubectl, Helm, or even Skaffold, and then use Okteto for implementing and testing application components. Of course, we may also do the whole process using just the Okteto CLI commands. This simple idea – “code locally, run and debug directly on Okteto Cloud” – accelerates your local development, reduces local setup, and eliminates integration issues.
In this article, I’m going to show you how to use Okteto for the development of a Spring Boot application that connects to a MongoDB database running on Kubernetes. The sample application will be built using Maven, which is supported by Okteto. There is a small catch in this solution – it is a paid service. However, you may take advantage of the free developer plan, which includes 2 CPUs and 4GB RAM and sleeps after 24h. The Developer PRO plan with 4 CPUs and 8GB RAM is available as a 14-day free trial.

I have already described another useful tool for deploying applications on Kubernetes – Skaffold. If you are interested in more details, you may refer to the article Local Java Development on Kubernetes.

Example

The source code of the sample Spring Boot application that connects to MongoDB is as usual available in my GitHub repository: https://github.com/piomin/sample-spring-boot-on-kubernetes.git.

Getting started with Okteto

You should start by downloading the Okteto CLI. Like other similar tools, it is available as a single executable file that needs to be placed on your PATH. After that, you should execute the command okteto login, which adds your Okteto account credentials to the kubeconfig file and sets it as the current context. Now you can use kubectl to interact with your new remote Kubernetes cluster.

okteto-login

Okteto configuration

In fact, to start development with Okteto you just need to add the file okteto.yml with the required configuration to the root directory of your application. There are plenty of configuration options, but at minimum you should set the name of your application and the image used for development. What is very important here: it is not the Docker image with just your application. When Okteto development mode is initialized for your application, it replaces your application container with a development container that contains development tools (e.g. Maven and a JDK, or npm, Python, the Go compiler, debuggers, etc.). In this case we are using a special Docker image dedicated to Maven: okteto/maven. We are also setting the Maven command for starting our Spring Boot application, some environment variables with the MongoDB credentials and database name, and a port forwarding option.


name: employee-service
image: okteto/maven
command: ["mvn", "spring-boot:run"]
workdir: /app
environment:
  - MONGO_USERNAME=okteto
  - MONGO_DATABASE=okteto
  - MONGO_PASSWORD=okteto
forward:
  - 8080:8080
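As a side note, the command field can be adjusted for remote debugging. Here’s a hypothetical variant of the manifest above that starts the JVM with a debug agent and forwards the debugger port as well; port 5005 and the debug arguments are my choice, not part of the original example:

```yaml
name: employee-service
image: okteto/maven
command: ["mvn", "spring-boot:run", "-Dspring-boot.run.jvmArguments=-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005"]
workdir: /app
forward:
  - 8080:8080
  - 5005:5005
```

With that in place, you can attach your IDE’s remote debugger to localhost:5005 while the application runs on the remote cluster.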

Now, all you need to do is execute the command okteto up in the root directory of your application. Okteto automatically creates the Deployment, performs the Maven build, and deploys your Spring Boot application on a remote Kubernetes cluster.

Spring Boot development

The Spring Boot application is started on Okteto Cloud by executing the Maven goal spring-boot:run. Before that, Okteto runs the full Maven build. Okteto listens for changes in the code and synchronizes them to your remote development environment. Of course, such a change should not trigger the full Maven build again – as it would, for example, in Skaffold. To enable that behavior, the only thing we need to do is add Spring Boot Devtools to the project dependencies.

<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-devtools</artifactId>
</dependency>

Applications that use spring-boot-devtools automatically restart whenever files on the classpath change. With Okteto, a change in the local file system triggers a restart of the application running in your remote development environment. It is also important to enable automatic rebuilding of the application on file change in your local IDE. If you are using IntelliJ IDEA, you should enable the option Build project automatically in the Compiler section and also enable an additional registry key. Here’s an article with a detailed description of all the required steps:
https://mkyong.com/spring-boot/intellij-idea-spring-boot-template-reload-is-not-working/.

Implementation of Spring Boot for Okteto

We are building a simple application that uses MongoDB as a backend store and exposes REST endpoints. Here’s the list of required Maven dependencies.

<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-data-mongodb</artifactId>
</dependency>
<dependency>
   <groupId>org.projectlombok</groupId>
   <artifactId>lombok</artifactId>
</dependency>

We are using the Spring Data repository pattern for the integration with MongoDB. We add some custom find methods to the list of methods inherited from CrudRepository in Spring Data Mongo.

public interface PersonRepository extends CrudRepository<Person, String> {

   Set<Person> findByFirstNameAndLastName(String firstName, String lastName);
   Set<Person> findByAge(int age);
   Set<Person> findByAgeGreaterThan(int age);

}
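The repository above assumes a Person document class, which is not shown in this article. Here’s a hypothetical sketch of it: the field names follow the repository’s query methods (findByFirstNameAndLastName, findByAge), and the id is a String because the repository extends CrudRepository<Person, String>. The real class in the repository uses Lombok and Spring Data’s @Document/@Id annotations, omitted here to keep the sketch plain, dependency-free Java.

```java
// Hypothetical sketch of the Person document used by PersonRepository.
public class Person {

    private String id; // mapped with @Id in the real document class
    private String firstName;
    private String lastName;
    private int age;

    public Person() {
    }

    public Person(String firstName, String lastName, int age) {
        this.firstName = firstName;
        this.lastName = lastName;
        this.age = age;
    }

    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }
    public String getLastName() { return lastName; }
    public void setLastName(String lastName) { this.lastName = lastName; }
    public int getAge() { return age; }
    public void setAge(int age) { this.age = age; }
}
```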

The database connection settings are configured in application.yml. We are using the environment variables injected into the container, as set in the okteto.yml configuration file.

spring:
  application:
    name: sample-spring-boot-on-kubernetes
  data:
    mongodb:
      uri: mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@mongodb/${MONGO_DATABASE}
management:
  endpoints:
    web:
      exposure:
        include: "*"
  endpoint:
    health:
      show-details: ALWAYS

We inject the repository into the controller and implement simple CRUD methods exposed as HTTP endpoints.

@RestController
@RequestMapping("/persons")
public class PersonController {

   private PersonRepository repository;

   PersonController(PersonRepository repository) {
      this.repository = repository;
   }

   @PostMapping
   public Person add(@RequestBody Person person) {
      return repository.save(person);
   }

   @PutMapping
   public Person update(@RequestBody Person person) {
      return repository.save(person);
   }

   @DeleteMapping("/{id}")
   public void delete(@PathVariable("id") String id) {
      repository.deleteById(id);
   }

   @GetMapping
   public Iterable<Person> findAll() {
      return repository.findAll();
   }

   @GetMapping("/{id}")
   public Optional<Person> findById(@PathVariable("id") String id) {
      return repository.findById(id);
   }

   @GetMapping("/first-name/{firstName}/last-name/{lastName}")
   public Set<Person> findByFirstNameAndLastName(@PathVariable("firstName") String firstName,
         @PathVariable("lastName") String lastName) {
      return repository.findByFirstNameAndLastName(firstName, lastName);
   }

   @GetMapping("/age-greater-than/{age}")
   public Set<Person> findByAgeGreaterThan(@PathVariable("age") int age) {
      return repository.findByAgeGreaterThan(age);
   }

   @GetMapping("/age/{age}")
   public Set<Person> findByAge(@PathVariable("age") int age) {
      return repository.findByAge(age);
   }

}

Deploy Spring Boot on Okteto

Before deploying our example application on Okteto, we will initialize MongoDB there. Okteto provides a web-based management console. Besides managing the currently developed applications, we can easily deploy some predefined software, like databases or message brokers. For example, you can run MongoDB with a single click.

okteto-spring-boot-mongodb

Of course, in the background Okteto creates the Deployment and the required Secrets.

okteto-mongo-kubectl

Now, we can finally initialize our remote development environment on Okteto Cloud. The command okteto up creates a new Deployment if it does not exist. Then it enables port forwarding on localhost:8080 and starts the Maven build as shown below.

okteto-spring-boot-up

After running the command okteto up, we have two Deployment objects and two running pods.

okteto-pods

We can verify the status of the environment using Okteto Web UI.

okteto-webui

Our application exposes the Actuator endpoints and the /persons endpoints. We can access them at https://sample-spring-boot-on-kubernetes-piomin.cloud.okteto.net. Since we have already enabled port forwarding, we can also just call them using the localhost address.

okteto-spring-boot-curls

Now, we are ready to send some test requests. Let’s add two persons using POST /persons endpoint.

okteto-testadd

Finally let’s call GET /persons method to verify if two Persons has been succesfully saved in Mongo.

okteto-curls-find

Conclusion

With Okteto you don’t need Docker or Kubernetes installed on your machine. You can focus on development in your favorite IDE, the same as you would without using any cloud platform. By defining a single Okteto configuration file and then running a single command with the Okteto CLI, you are able to prepare your remote development environment. In fact, you don’t have to know much about Kubernetes to start developing with Okteto, but when you need them, you can still take advantage of all Kubernetes features and resources.

The post Development on Kubernetes with Okteto and Spring Boot appeared first on Piotr's TechBlog.
