paketo Archives - Piotr's TechBlog

Quarkus with Buildpacks and OpenShift Builds

In this article, you will learn how to build Quarkus application images using Cloud Native Buildpacks and OpenShift Builds. Some time ago, I published a blog post about building with OpenShift Builds based on the Shipwright project. At that time, Cloud Native Buildpacks were not supported at the OpenShift Builds level; they were only supported in a community project. I demonstrated how to add the appropriate build strategy yourself and use it to build an image for a Spring Boot application. However, since version 1.6, OpenShift Builds supports building with Cloud Native Buildpacks. Currently, Quarkus, Go, Node.js, and Python are supported. In this article, we will focus on Quarkus and also examine the built-in support for Buildpacks within Quarkus itself.

Source Code

Feel free to use my source code if you’d like to try it out yourself. To do that, clone my sample GitHub repository and then just follow my instructions.
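For reference, getting the code locally comes down to cloning the repository (the URL is the same one used later in the Shipwright Build manifest):

$ git clone https://github.com/piomin/sample-quarkus-microservice.git
$ cd sample-quarkus-microservice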

Quarkus Buildpacks Extension

Recently, support for Cloud Native Buildpacks in Quarkus has been significantly enhanced. Here you can access the repository containing the source code for the Paketo Quarkus buildpack. To implement this solution, add one dependency to your application.

<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-container-image-buildpack</artifactId>
</dependency>

Next, run the build command with Maven and activate the quarkus.container-image.build parameter. Also, set the appropriate Java version needed for your application. For the sample Quarkus application in this article, the Java version is 21.

mvn clean package \
  -Dquarkus.container-image.build=true \
  -Dquarkus.buildpack.builder-env.BP_JVM_VERSION=21

To build, you need Docker or Podman running. Here’s the output from the command run earlier.

As you can see, Quarkus uses, among others, the Paketo Quarkus buildpack mentioned earlier.

The new image is now available for use.

$ docker images sample-quarkus/person-service:1.0.0-SNAPSHOT
REPOSITORY                      TAG              IMAGE ID       CREATED        SIZE
sample-quarkus/person-service   1.0.0-SNAPSHOT   e0b58781e040   45 years ago   160MB
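If you just want to verify the image locally before moving to OpenShift, you can run it like any other container. This is a quick sketch assuming the service starts without any external configuration:

$ docker run --rm -p 8080:8080 sample-quarkus/person-service:1.0.0-SNAPSHOT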

Quarkus with OpenShift Builds and Shipwright

Install the OpenShift Builds Operator

Now, we will move the image building process to the OpenShift cluster. OpenShift offers built-in support for creating container images directly within the cluster through OpenShift Builds, using the BuildConfig solution. For more details, please refer to my previous article. However, in this article, we explore a new technology for building container images called OpenShift Builds with Shipwright. To enable this solution on OpenShift, you need to install the following operator.

After installing this operator, you will see a new item in the “Build” menu called “Shipwright”. Switch to it, then select the “ClusterBuildStrategies” tab. There are two strategies on the list designed for Cloud Native Buildpacks. We are interested in the buildpacks strategy.

Create and Run Build with Shipwright

Finally, we can create the Shipwright Build object. It contains three sections. In the first step, we define the address of the container image repository where we will push our output image. For simplicity, we will use the internal registry provided by the OpenShift cluster itself. In the source section, we specify the repository address where the application source code is located. In the last section, we need to set the build strategy. We choose the previously mentioned buildpacks strategy for Cloud Native Buildpacks. Some parameters need to be set for the buildpacks strategy: run-image and cnb-builder-image. The cnb-builder-image indicates the name of the builder image containing the buildpacks. The run-image refers to the base image used to run the application. We will also activate the buildpacks Maven profile during the build to set the Quarkus property that switches from fast-jar to uber-jar packaging.

apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: buildpack-quarkus-build
spec:
  env:
    - name: BP_JVM_VERSION
      value: '21'
  output:
    image: 'image-registry.openshift-image-registry.svc:5000/builds/sample-quarkus-microservice:1.0'
  paramValues:
    - name: run-image
      value: 'paketobuildpacks/run-java-21-ubi9-base:latest'
    - name: cnb-builder-image
      value: 'paketobuildpacks/builder-jammy-java-tiny:latest'
    - name: env-vars
      values:
        - value: BP_MAVEN_ADDITIONAL_BUILD_ARGUMENTS=-Pbuildpacks
  retention:
    atBuildDeletion: true
  source:
    git:
      url: 'https://github.com/piomin/sample-quarkus-microservice.git'
    type: Git
  strategy:
    kind: ClusterBuildStrategy
    name: buildpacks

Here’s the Maven buildpacks profile that sets a single Quarkus property quarkus.package.jar.type. We must change it to uber-jar, because the paketobuildpacks/builder-jammy-java-tiny builder expects a single jar instead of the multi-folder layout used by the default fast-jar format. Of course, I would prefer to use the paketocommunity/builder-ubi-base builder, which can recognize the fast-jar format. However, at this time, it does not function correctly with OpenShift Builds.

<profiles>
  <profile>
    <id>buildpacks</id>
    <activation>
      <property>
        <name>buildpacks</name>
      </property>
    </activation>
    <properties>
      <quarkus.package.jar.type>uber-jar</quarkus.package.jar.type>
    </properties>
  </profile>
</profiles>

To start the build, you can use the OpenShift console or execute the following command:

shp build run buildpack-quarkus-build --follow

We can switch to the OpenShift Console. As you can see, our build is running.

The history of such builds is available on OpenShift. You can also review the build logs.
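You can check the same information from the command line with the Shipwright CLI. Here is a rough sketch; the BuildRun name below is only an example of the generated name:

$ shp buildrun list
$ shp buildrun logs buildpack-quarkus-build-xxxxx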

Finally, you should see your image in the list of OpenShift internal image streams.

$ oc get imagestream
NAME                          IMAGE REPOSITORY                                                                                                    TAGS                UPDATED
sample-quarkus-microservice   default-route-openshift-image-registry.apps.pminkows.95az.p1.openshiftapps.com/builds/sample-quarkus-microservice   1.2,0.0.1,1.1,1.0   13 hours ago

Conclusion

OpenShift Builds with Shipwright lets you perform the entire application image build process on the OpenShift cluster in a standardized manner. Cloud Native Buildpacks are a popular mechanism for building images without writing a Dockerfile yourself. In this case, support for Buildpacks on the OpenShift side is an interesting alternative to the Source-to-Image approach.

Java Flight Recorder on Kubernetes

In this article, you will learn how to continuously monitor apps on Kubernetes with Java Flight Recorder and Cryostat. Java Flight Recorder (JFR) is a tool for collecting diagnostic and profiling data generated by the Java app. It is designed for use even in heavily loaded production environments since it causes almost no performance overhead. We can say that Java Flight Recorder acts similarly to an airplane’s black box. Even if the JVM crashes, we can analyze the diagnostic data collected just before the failure. This fact makes JFR especially usable in an environment with many running apps – like Kubernetes.

Assuming that we are running many Java apps on Kubernetes, we should be interested in a tool that helps to automatically gather the data generated by Java Flight Recorder. Here comes Cryostat. It allows us to securely manage JFR recordings for containerized Java workloads. With the built-in discovery mechanism, it can detect all the apps that expose JFR data. Depending on the use case, we can store and analyze recordings directly on the Kubernetes cluster using the Cryostat Dashboard, or export recorded data to perform a more in-depth analysis.

If you are interested in more topics related to Java apps on Kubernetes, you can take a look at some other posts on my blog. The following article describes a list of best practices for running Java apps on Kubernetes. You can also read e.g. about how to resize the CPU limit to speed up Java startup on Kubernetes here.

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that, you need to clone my GitHub repository and go to the callme-service directory. After that, you should just follow my instructions. Let’s begin.

Install Cryostat on Kubernetes

In the first step, we install Cryostat on Kubernetes using its operator. In order to use and manage operators on Kubernetes, we should have the Operator Lifecycle Manager (OLM) installed on the cluster. The operator-sdk binary provides a command to easily install and uninstall OLM:

$ operator-sdk olm install

Alternatively, you can use Helm chart for Cryostat installation on Kubernetes. Firstly, let’s add the following repository:
$ helm repo add openshift https://charts.openshift.io/

Then, install the chart with the following command:
$ helm install my-cryostat openshift/cryostat --version 0.4.0

Once the OLM is running on our cluster, we can proceed to the Cryostat installation. We can find the required YAML manifest with the Subscription declaration in the Operator Hub. Let’s just apply the manifest to the target cluster with the following command:

$ kubectl create -f https://operatorhub.io/install/cryostat-operator.yaml

By default, this operator will be installed in the operators namespace and will be usable from all namespaces in the cluster. After installation, we can verify if the operator works fine by executing the following command:

$ kubectl get csv -n operators

In order to simplify the Cryostat installation process, we can use OpenShift. With OpenShift we don’t need to install OLM, since it is already there. We just need to find the “Red Hat build of Cryostat” operator in the Operator Hub and install it using OpenShift Console. By default, the operator is available in the openshift-operators namespace.

Then, let’s create a namespace dedicated to running Cryostat and our sample app. The name of the namespace is demo-jfr.

$ kubectl create ns demo-jfr

Cryostat recommends using cert-manager for traffic encryption. In our exercise, we disable that integration for simplification purposes. However, in a production environment, you should install cert-manager unless you use another solution for encrypting traffic. In order to run Cryostat in the selected namespace, we need to create the Cryostat object. The parameter spec.enableCertManager should be set to false.

apiVersion: operator.cryostat.io/v1beta1
kind: Cryostat
metadata:
  name: cryostat-sample
  namespace: demo-jfr
spec:
  enableCertManager: false
  eventTemplates: []
  minimal: false
  reportOptions:
    replicas: 0
  storageOptions:
    pvc:
      annotations: {}
      labels: {}
      spec: {}
  trustedCertSecrets: []

If everything goes fine, you should see the following pod in the demo-jfr namespace:

$ kubectl get po -n demo-jfr
NAME                               READY   STATUS    RESTARTS   AGE
cryostat-sample-5c57c9b8b8-smzx9   3/3     Running   0          60s

Here’s a list of Kubernetes Services. The Cryostat Dashboard is exposed by the cryostat-sample Service under the 8181 port.

$ kubectl get svc -n demo-jfr
NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
cryostat-sample           ClusterIP   172.31.56.83    <none>        8181/TCP,9091/TCP   70m
cryostat-sample-grafana   ClusterIP   172.31.155.26   <none>        3000/TCP            70m

We can access the Cryostat dashboard using the Kubernetes Ingress or OpenShift Route. Currently, there are no apps to monitor.
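For a quick look without creating an Ingress or Route, you can also forward the dashboard port locally. A minimal sketch using the Service shown above:

$ kubectl port-forward svc/cryostat-sample 8181:8181 -n demo-jfr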

Create Sample Java App

We build a sample Java app using the Spring Boot framework. Our app exposes a single REST endpoint. As you see the endpoint implementation is very simple. The pingWithRandomDelay() method adds a random delay between 0 and 3 seconds and returns the string. However, there is one interesting thing inside that method. We are creating the ProcessingEvent object (1). Then, we call its begin method just before sleeping the thread (2). After the method is resumed we call the commit method on the ProcessingEvent object (3). In this inconspicuous way, we are generating our first custom JFR event. This event aims to monitor the processing time of our method.

@RestController
@RequestMapping("/callme")
public class CallmeController {

   private static final Logger LOGGER = LoggerFactory.getLogger(CallmeController.class);

   private Random random = new Random();
   private AtomicInteger index = new AtomicInteger();

   // build metadata injected by Spring Boot, used in the log statement below
   @Autowired
   private Optional<BuildProperties> buildProperties;

   @Value("${VERSION}")
   private String version;

   @GetMapping("/ping-with-random-delay")
   public String pingWithRandomDelay() throws InterruptedException {
      int r = random.nextInt(3000);
      int i = index.incrementAndGet();
      ProcessingEvent event = new ProcessingEvent(i); // (1)
      event.begin(); // (2)
      LOGGER.info("Ping with random delay: id={}, name={}, version={}, delay={}", i,
             buildProperties.isPresent() ? buildProperties.get().getName() : "callme-service", version, r);
      Thread.sleep(r);
      event.commit(); // (3)
      return "I'm callme-service " + version;
   }

}

Let’s switch to the ProcessingEvent implementation. Our custom event needs to extend the jdk.jfr.Event abstract class. It contains a single parameter, id. We can use some additional labels to improve the event presentation in JFR graphical tools. The event will be visible under the name set in the @Name annotation and under the category set in the @Category annotation. We also need to annotate the parameter with @Label to make it visible as part of the event.

@Name("ProcessingEvent")
@Category("Custom Events")
@Label("Processing Time")
public class ProcessingEvent extends Event {
    @Label("Event ID")
    private Integer id;

    public ProcessingEvent(Integer id) {
        this.id = id;
    }

    public Integer getId() {
        return id;
    }

    public void setId(Integer id) {
        this.id = id;
    }
}

Of course, our app will generate a lot of standard JFR events useful for profiling and monitoring. But we could also monitor our custom event.

Build App Image and Deploy on Kubernetes

Once we finish the implementation, we may build the container image of our Spring Boot app. Spring Boot comes with a feature for building container images based on the Cloud Native Buildpacks. In the Maven pom.xml you will find a dedicated profile under the build-image id. Once you activate such a profile, it will build the image using the Paketo builder-jammy-base image.

<profile>
  <id>build-image</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-maven-plugin</artifactId>
        <configuration>
          <image>
            <builder>paketobuildpacks/builder-jammy-base:latest</builder>
            <name>piomin/${project.artifactId}:${project.version}</name>
          </image>
        </configuration>
        <executions>
          <execution>
            <goals>
              <goal>build-image</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</profile>

Before running the build we should start Docker on the local machine. After that, we should execute the following Maven command:

$ mvn clean package -Pbuild-image -DskipTests

With the build-image profile activated, Spring Boot Maven Plugin builds the image of our app. You should have a similar result as shown below. In my case, the image tag is piomin/callme-service:1.2.1.

By default, Paketo Java Buildpacks uses BellSoft Liberica JDK. With the Paketo BellSoft Liberica Buildpack, we can easily enable Java Flight Recorder for the container using the BPL_JFR_ENABLED environment variable. In order to expose data for Cryostat, we also need to enable the JMX port. In theory, we could use BPL_JMX_ENABLED and BPL_JMX_PORT environment variables for that. However, that option includes some additional configuration to the java command parameters that break the Cryostat discovery. This issue has been already described here. Therefore we will use the JAVA_TOOL_OPTIONS environment variable to set the required JVM parameters directly on the running command.

Instead of exposing the JMX port for discovery, we can include the Cryostat agent in the app dependencies. In that case, we should set the address of the Cryostat API in the Kubernetes Deployment manifest. However, I prefer an approach that doesn’t require any changes on the app side.

Now, let’s get back to Cryostat app discovery. Cryostat is able to automatically detect pods with a JMX port exposed. It requires a specific configuration of the Kubernetes Service. We need to set the name of the port to jfr-jmx. In theory, we can expose JMX on any port we want, but for me anything other than 9091 caused discovery problems on Cryostat. In the Deployment definition, we have to set the BPL_JFR_ENABLED env to true, and JAVA_TOOL_OPTIONS to -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.port=9091.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: callme-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: callme-service
  template:
    metadata:
      labels:
        app: callme-service
    spec:
      containers:
        - name: callme-service
          image: piomin/callme-service:1.2.1
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
            - containerPort: 9091
          env:
            - name: VERSION
              value: "v1"
            - name: BPL_JFR_ENABLED
              value: "true"
            - name: JAVA_TOOL_OPTIONS
              value: "-Dcom.sun.management.jmxremote.port=9091 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false"
---
apiVersion: v1
kind: Service
metadata:
  name: callme-service
  labels:
    app: callme-service
spec:
  type: ClusterIP
  ports:
  - port: 8080
    name: http
  - port: 9091
    name: jfr-jmx
  selector:
    app: callme-service

Let’s apply our deployment manifest to the demo-jfr namespace:

$ kubectl apply -f k8s/deployment-jfr.yaml -n demo-jfr

Here’s a list of pods of our callme-service app:

$ kubectl get po -n demo-jfr -l app=callme-service -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP            NODE
callme-service-6bc5745885-kvqfr   1/1     Running   0          31m   10.134.0.29   worker-cluster-lvsqq-1
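To make the custom ProcessingEvent events show up in the recordings, you can generate some traffic against the endpoint. A simple sketch using port forwarding:

$ kubectl port-forward svc/callme-service 8080:8080 -n demo-jfr
$ curl http://localhost:8080/callme/ping-with-random-delay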

Using Cryostat with JFR

View Default Dashboards

Cryostat automatically detects all the pods related to the Kubernetes Service that expose the JMX port. Once we switch to the Cryostat Dashboard, we will see the name of our pod in the “Target” dropdown. The default dashboard shows diagrams illustrating CPU load, heap memory usage, and a number of running Java threads.

java-flight-recorder-kubernetes-dashboard

Then, we can go to the “Recordings” section. It shows a list of active recordings made by Java Flight Recorder for our app running on Kubernetes. By default, Cryostat creates and starts a single recording per each detected target.

We can expand the selected record to see a detailed view. It provides a summarized panel divided into several different categories like heap, memory leak, or exceptions. It highlights warnings with a yellow color and problems with a red color.

java-flight-recorder-kubernetes-panel

We can display a detailed description of each case. We just need to click on the selected field with a problem name. The detailed description will appear in the context menu.

java-flight-recorder-kubernetes-description

Create and Use a Custom Event Template

We can create a custom recording strategy by defining a new event template. Firstly, we need to go to the “Events” section, and then to the “Event Templates” tab. There are three built-in templates. We can use each of them as a base for our custom template. After deciding which of them to choose we can download it to our laptop. The default file extension is *.jfc.

java-flight-recorder-kubernetes-event-templates

In order to edit the *.jfc files we need a special tool called JDK Mission Control. Each vendor provides such a tool for their distribution of JDK. In our case, it is BellSoft Liberica. Once we download and install Liberica Mission Control on the laptop we should go to Window -> Flight Recording Template Manager.

java-flight-recorder-kubernetes-mission-control

With the Flight Recording Template Manager, we can import and edit an exported event template. I chose more detailed monitoring for “Garbage Collection”, “Allocation Profiling”, “Compiler”, and “Thread Dump”.

java-flight-recorder-kubernetes-template-manager

Once a new template is ready, we should save it under the selected name. For me, it is the “Continuous Detailed” name. After that, we need to export the template to the file.

Then, we need to switch to the Cryostat Dashboard. We have to import the newly created template exported to the *.jfc file.

Once you import the template, you should see a new strategy in the “Event Templates” section.

We can create a recording based on our custom “Continuous_Detailed” template. After some time, Cryostat should gather data generated by the Java Flight Recorder for the app running on Kubernetes. However, this time we want to make some advanced analysis using Liberica Mission Control rather than just the Cryostat Dashboard. Therefore, we will export the recording to a *.jfr file. Such a file may then be imported into the JDK Mission Control tool.
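As a side note, if you only need a quick look at the recorded events without a GUI, the jfr command-line tool shipped with recent JDKs can print them. A small sketch filtering on our custom event (the file name is just an example):

$ jfr print --events ProcessingEvent recording.jfr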

Use the JDK Mission Control Tool

Let’s open the exported *.jfr file with Liberica Mission Control. Once we do it, we can analyze all the important aspects related to the performance of our Java app. We can display a table with memory allocation per the object type.

We can display a list of running Java threads.

Finally, we go to the “Event Browser” section. In the “Custom Events” category we should find our custom event under the name determined by the @Label annotation on the ProcessingEvent class. We can see the history of all generated JFR events together with the duration, start time, and the name of the processing thread.

Final Thoughts

Cryostat helps you to manage Java Flight Recorder on Kubernetes at scale. It provides a graphical dashboard that allows monitoring of all the Java workloads that expose JFR data over JMX. The important thing is that even after an app crash we can export the archived monitoring report and analyze it using advanced tools like JDK Mission Control.

OpenShift Builds with Shipwright and Cloud Native Buildpacks

In this article, you will learn how to build the images of your apps on OpenShift with Shipwright and Cloud Native Buildpacks. Cloud Native Buildpacks allow us to easily build container images from the app source code. On the other hand, Shipwright is a Kubernetes-native tool that provides mechanisms for declaring and reusing several build image strategies. Such a strategy is correlated with the tool used for building images. Shipwright supports Kaniko, OpenShift Source-to-Image (S2I), Buildah, ko, BuildKit, and of course, Cloud Native Buildpacks.

You can also run Cloud Native Buildpacks on “vanilla” Kubernetes together with e.g. Tekton. In the following article on my blog, you will find information on how to use Buildpacks in your Tekton CI pipeline for building Java app images.

Introduction

If you already have some experience with OpenShift, you probably know that it comes with its own mechanism for building container images. Such a build is fully executed on the OpenShift cluster. OpenShift provides the BuildConfig CRD responsible for a build configuration. There are three primary build strategies available: Docker, S2I, and custom build. By the way, you can build the image in any way you want and still run such a container on OpenShift. However, today we are discussing the OpenShift recommended approach for building container images called “OpenShift Builds”.

Recently, Red Hat has introduced a new supported way of using “OpenShift Builds” based on the Shipwright project. Therefore, you can currently find two chapters in the OpenShift documentation: “Builds with BuildConfig” and “Builds with Shipwright”. Maybe Shipwright will become a single, default strategy in the future, but for now, we can decide which of them to use.

In this article, I’ll show you how to install and manage Shipwright on OpenShift. By default, Red Hat supports Source-to-Image (S2I) and Buildah build strategies. We will create our own strategy for using the Cloud Native Buildpacks.

Install Builds for Red Hat OpenShift Operator

In order to enable Shipwright builds on OpenShift, we need to install the operator called “Builds for Red Hat OpenShift”. It depends on the OpenShift Pipelines operator. However, we don’t have to take care of it, since that operator is automatically installed together with the “Builds for Red Hat OpenShift” operator. In the OpenShift Console go to Operators -> Operator Hub, then find and install the required operator.

openshift-shipwright-operators

After that, we need to create the ShipwrightBuild object. It needs to contain the name of the target namespace for running Shipwright in the targetNamespace field. In my case, the target namespace is openshift-builds.

apiVersion: operator.shipwright.io/v1alpha1
kind: ShipwrightBuild
metadata:
  name: openshift-builds
spec:
  targetNamespace: openshift-builds

We can easily create the object on the operator page in OpenShift Console. If the status is “Ready” it means that Shipwright is running on our cluster.

Let’s display a list of pods inside the openshift-builds namespace. As you see, two pods are running there.

$ oc get po -n openshift-builds
NAME                                           READY   STATUS    RESTARTS   AGE
shipwright-build-controller-5d5b4f7d97-llj4f   1/1     Running   0          2m31s
shipwright-build-webhook-76775cf988-pjk6k      1/1     Running   0          2m31s

This step is optional. We can install the Shipwright CLI on our laptop to interact with the cluster. The release binary is available in the following repository under the Releases section. Once you install it, you can execute the following command for verification:

$ shp version

OpenShift Console supports Shipwright Builds. Once you install and configure the operator, you will have a dedicated space inside the “Builds” section. You can create a new build, run a build, and display a history of previous runs.

openshift-shipwright-builds-section

Configure Build Strategy for Buildpacks

Before we define any build, we need to configure the build strategy. From a high-level perspective, a build strategy defines how to build an application with an image-building tool. We can create the BuildStrategy object for a namespace-scoped strategy, and the ClusterBuildStrategy object for a cluster-wide strategy. The “Builds for Red Hat OpenShift” operator comes with two predefined global strategies: buildah and source-to-image.

However, our goal is to create a strategy for Cloud Native Buildpacks. Fortunately, we can just copy such a strategy from the samples provided in the Shipwright repository on GitHub. We won’t get into the details of that strategy implementation. The YAML manifest is visible below. It uses the paketobuildpacks/builder-jammy-full:latest image during the build. We should apply that manifest to our OpenShift cluster.

apiVersion: shipwright.io/v1beta1
kind: ClusterBuildStrategy
metadata:
  name: buildpacks-v3
spec:
  volumes:
    - name: platform-env
      emptyDir: {}
  parameters:
    - name: platform-api-version
      description: The referenced version is the minimum version that all relevant buildpack implementations support.
      default: "0.7"
  steps:
    - name: build-and-push
      image: docker.io/paketobuildpacks/builder-jammy-full:latest
      env: 
        - name: CNB_PLATFORM_API
          value: $(params.platform-api-version)
        - name: PARAM_SOURCE_CONTEXT
          value: $(params.shp-source-context)
        - name: PARAM_OUTPUT_IMAGE
          value: $(params.shp-output-image)
      command:
        - /bin/bash
      args:
        - -c
        - |
          set -euo pipefail

          echo "> Processing environment variables..."
          ENV_DIR="/platform/env"

          envs=($(env))

          # Denying the creation of non required files from system environments.
          # The creation of a file named PATH (corresponding to PATH system environment)
          # caused failure for python source during pip install (https://github.com/Azure-Samples/python-docs-hello-world)
          block_list=("PATH" "HOSTNAME" "PWD" "_" "SHLVL" "HOME" "")

          for env in "${envs[@]}"; do
            blocked=false

            IFS='=' read -r key value string <<< "$env"

            for str in "${block_list[@]}"; do
              if [[ "$key" == "$str" ]]; then
                blocked=true
                break
              fi
            done

            if [ "$blocked" == "false" ]; then
              path="${ENV_DIR}/${key}"
              echo -n "$value" > "$path"
            fi
          done

          LAYERS_DIR=/tmp/.shp/layers
          CACHE_DIR=/tmp/.shp/cache

          mkdir -p "$CACHE_DIR" "$LAYERS_DIR"

          function announce_phase {
            printf "===> %s\n" "$1" 
          }

          announce_phase "ANALYZING"
          /cnb/lifecycle/analyzer -layers="$LAYERS_DIR" "${PARAM_OUTPUT_IMAGE}"

          announce_phase "DETECTING"
          /cnb/lifecycle/detector -app="${PARAM_SOURCE_CONTEXT}" -layers="$LAYERS_DIR"

          announce_phase "RESTORING"
          /cnb/lifecycle/restorer -cache-dir="$CACHE_DIR" -layers="$LAYERS_DIR"

          announce_phase "BUILDING"
          /cnb/lifecycle/builder -app="${PARAM_SOURCE_CONTEXT}" -layers="$LAYERS_DIR"

          exporter_args=( -layers="$LAYERS_DIR" -report=/tmp/report.toml -cache-dir="$CACHE_DIR" -app="${PARAM_SOURCE_CONTEXT}")
          grep -q "buildpack-default-process-type" "$LAYERS_DIR/config/metadata.toml" || exporter_args+=( -process-type web ) 

          announce_phase "EXPORTING"
          /cnb/lifecycle/exporter "${exporter_args[@]}" "${PARAM_OUTPUT_IMAGE}"

          # Store the image digest
          grep digest /tmp/report.toml | tail -n 1 | tr -d ' \"\n' | sed s/digest=// > "$(results.shp-image-digest.path)"
      volumeMounts:
        - mountPath: /platform/env
          name: platform-env
      resources:
        limits:
          cpu: 500m
          memory: 1Gi
        requests:
          cpu: 250m
          memory: 65Mi
  securityContext:
    runAsUser: 1001
    runAsGroup: 1000

As you see, a new build strategy is available under the buildpacks-v3 name.

openshift-shipwright-strategies
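You can verify it from the command line as well. Listing the cluster-wide strategies should now show buildpacks-v3 next to the predefined buildah and source-to-image strategies:

$ oc get clusterbuildstrategy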

Create Build for the Sample App

Once we create a strategy, we can proceed to the build creation. We will push the image to the registry on quay.io, which requires an existing account there. The quay.io registry allows us to create a robot account within our account. Then we may use it to authenticate against the repository during the build. Firstly, go to your account settings and find the “Robot Accounts” section there. If you don’t have any robot account, you need to create one. On the existing account, choose the settings icon, and then the “View Credentials” item in the context menu.

In the “Kubernetes Secret” section download the YAML manifest containing the authentication credentials.

Our robot account must have write access to the target repository. In my case, it is the pminkows/sample-kotlin-spring repository.

Once you download the manifest you have to apply it to the OpenShift cluster. Let’s also create a new project with the demo-builds name.

$ oc new-project demo-builds
$ oc apply -f pminkows-piomin-secret.yml

The manifest contains a single Secret with our Quay robot account authentication credentials.

openshift-shipwright-pull-secret

Finally, we can create the Build object. It contains three sections. In the first of them, we are defining the address of the container image repository to push our output image (1). We need to authenticate against the repository with the previously created pminkows-piomin-pull-secret Secret. In the source section, we should provide the address of the repository with the app source code (2). The repository with a sample Spring Boot app is located on my GitHub account. In the last section, we need to set the build strategy (3). We choose the previously created buildpacks-v3 strategy for Cloud Native Buildpacks.

apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: sample-spring-kotlin-build
  namespace: demo-builds
spec:
  # (1)
  output:
    image: quay.io/pminkows/sample-kotlin-spring:1.0-shipwright
    pushSecret: pminkows-piomin-pull-secret
  # (2)
  source:
    git:
      url: https://github.com/piomin/sample-spring-kotlin-microservice.git
  # (3)
  strategy:
    name: buildpacks-v3
    kind: ClusterBuildStrategy

Once you apply the sample-spring-kotlin-build Build object you can switch to the “Builds” section in the OpenShift Console. In the “Shipwright Builds” tab, we can display a list of builds in the demo-builds namespace. In order to run the build, we just need to choose the “Start” option in the context menu.

openshift-shipwright-build-run

Running the Shipwright Build on OpenShift

Before we run the build, we have to set some permissions for the ServiceAccount. By default, Shipwright uses the pipeline ServiceAccount to run builds. The pipeline account should already have been created automatically in the demo-builds namespace during project creation.

Shipwright requires a privileged SCC assigned to the pipeline ServiceAccount to run the build pods. Let’s add the required permission with the following commands:

$ oc adm policy add-scc-to-user privileged -z pipeline -n demo-builds
$ oc adm policy add-role-to-user edit -z pipeline -n demo-builds

We can start the build in the OpenShift Console or with the Shipwright CLI. Firstly, let’s display a list of builds using the following shp command:

$ shp build list
NAME				OUTPUT							STATUS
sample-spring-kotlin-build	quay.io/pminkows/sample-kotlin-spring:1.0-shipwright	all validations succeeded

Let’s run the build with the following command:

$ shp build run sample-spring-kotlin-build --follow

We can switch to the OpenShift Console. As you see, our build is running.

Shipwright runs the pod with Paketo Buildpacks Builder according to the strategy.

Here’s the output of the shp command. Once the image is ready, it is pushed to the Quay registry.

The build has been finished successfully. We can see it in the Shipwright “Builds” section in the OpenShift Console (status Succeeded).

openshift-shipwright-history

Let’s switch to the quay.io dashboard. As you see, the image tag 1.0-shipwright is already there.

Of course, we can build many times. The history of executions is available in the BuildRuns tab for each Build.

Final Thoughts

With Shipwright, you can easily switch between different build image strategies on OpenShift. You are not limited only to the Source-to-Image (S2I) or Docker strategy; you can use different tools for that, including e.g. Cloud Native Buildpacks. For now, the strategy based on Buildpacks is not supported by Red Hat as part of the Shipwright builds feature. However, I hope it will change soon 🙂

Which JDK to Choose on Kubernetes

In this article, we will make a performance comparison between several of the most popular JDK implementations for an app running on Kubernetes. This post also addresses some questions and concerns about the Twitter post you can see below. I compared Oracle JDK with Eclipse Temurin. The result was quite surprising for me, so I decided to tweet it to get some opinions and feedback.

jdk-kubernetes-tweet

Unfortunately, those results were wrong. Or maybe I should say, they were not averaged well enough. After this publication, I also received interesting material presented at the London Java Community. It compares the performance of the Payara application server running on various JDKs. Here’s the link to that presentation (~1h). The results shown there seem to confirm my results. Or at least they confirm the general rule – there are some performance differences between OpenJDK implementations. Let’s check it out.

This time I’ll do a very accurate comparison with several repeats to get reproducible results. I’ll test the following JVM implementations:

  • Adoptium Eclipse Temurin
  • Alibaba Dragonwell
  • Amazon Corretto
  • Azul Zulu
  • BellSoft Liberica
  • IBM Semeru OpenJ9
  • Oracle JDK
  • Microsoft OpenJDK

For all the tests I’ll use Paketo Java buildpack. We can easily switch between several JVM implementations with Paketo. I’ll test a simple Spring Boot 3 app that uses Spring Data to interact with the Mongo database. Let’s proceed to the details!
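To give an idea of how that switching works with the pack CLI: you can put a specific JVM vendor buildpack in front of the standard Java buildpack, roughly as in the sketch below. The buildpack IDs follow the Paketo documentation; treat this as an illustration rather than the exact commands used for these tests:

$ pack build piomin/sample-spring-boot-on-kubernetes:corretto \
    --buildpack paketo-buildpacks/amazon-corretto \
    --buildpack paketo-buildpacks/java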

If you have already built images with Dockerfile it is possible that you were using the official OpenJDK base image from the Docker Hub. However, currently, the announcement on the image site says that it is officially deprecated and all users should find suitable replacements. In this article, we will compare all the most popular replacements, so I hope it may help you to make a good choice 🙂

Testing Environment

Before we run tests it is important to have a provisioned environment. I’ll run all the tests locally. In order to build images, I’m going to use Paketo Buildpacks. Here are some details of my environment:

  1. Machine: MacBook Pro 32G RAM Intel 
  2. OS: macOS Ventura 13.1
  3. Kubernetes (v1.25.2) on Docker Desktop: 14G RAM + 4vCPU

We will use Java 17 for app compilation. In order to run load tests, I’m going to leverage the k6 tool. Our app is written in Spring Boot. It connects to the Mongo database running on the same instance of Kubernetes. Each time I’m testing a new JVM provider I’m removing the previous version of the app and database. Then I’m deploying the new, full configuration once again. We will measure the following parameters:

  1. App startup time (the best result and average) – we will read it directly from the Spring Boot logs
  2. Throughput – with k6 we will simulate 5 and 10 virtual users. It will measure the number of processed requests
  3. The size of the image
  4. The RAM memory consumed by the pod during the load tests. Basically, we will execute the kubectl top pod command

We will also set the memory limit for the container to 1G. In our load tests, the app will insert data into the Mongo database. It is exposing the REST endpoint invoked during the tests. To measure startup time as accurately as possible I’ll restart the app several times.

Let’s take a look at the Deployment YAML manifest. It injects credentials to the Mongo database and sets the memory limit to 1G (as I already mentioned):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-spring-boot-on-kubernetes-deployment
spec:
  selector:
    matchLabels:
      app: sample-spring-boot-on-kubernetes
  template:
    metadata:
      labels:
        app: sample-spring-boot-on-kubernetes
    spec:
      containers:
      - name: sample-spring-boot-on-kubernetes
        image: piomin/sample-spring-boot-on-kubernetes
        ports:
        - containerPort: 8080
        env:
          - name: MONGO_DATABASE
            valueFrom:
              configMapKeyRef:
                name: mongodb
                key: database-name
          - name: MONGO_USERNAME
            valueFrom:
              secretKeyRef:
                name: mongodb
                key: database-user
          - name: MONGO_PASSWORD
            valueFrom:
              secretKeyRef:
                name: mongodb
                key: database-password
          - name: MONGO_URL
            value: mongodb
        readinessProbe:
          httpGet:
            port: 8080
            path: /readiness
            scheme: HTTP
          timeoutSeconds: 1
          periodSeconds: 10
          successThreshold: 1
          failureThreshold: 3
        resources:
          limits:
            memory: 1024Mi
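As a rough sketch of the measurement loop itself, the restarts, startup time readings, and memory checks can all be done with standard kubectl commands against this Deployment:

$ kubectl rollout restart deployment sample-spring-boot-on-kubernetes-deployment
$ kubectl logs -l app=sample-spring-boot-on-kubernetes | grep "Started"
$ kubectl top pod -l app=sample-spring-boot-on-kubernetes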

Source Code and Images

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. You will also find all the images in my Docker Hub repository piomin/sample-spring-boot-on-kubernetes. Every single image is tagged with the vendor’s name.

Our Spring Boot app exposes several endpoints, but I’ll test the POST /persons endpoint for inserting data into Mongo. In the integration with Mongo, I’m using the Spring Data MongoDB project and its CRUD repository pattern.

// controller

@RestController
@RequestMapping("/persons")
public class PersonController {

   private PersonRepository repository;

   PersonController(PersonRepository repository) {
      this.repository = repository;
   }

   @PostMapping
   public Person add(@RequestBody Person person) {
      return repository.save(person);
   }

   // other endpoints implementation
}


// repository

public interface PersonRepository extends CrudRepository<Person, String> {

   Set<Person> findByFirstNameAndLastName(String firstName, 
                                          String lastName);
   Set<Person> findByAge(int age);
   Set<Person> findByAgeGreaterThan(int age);

}

The Size of the Image

The size of the image is the simplest option to measure. If you would like to check what exactly is inside the image, you can use the dive tool. The difference in size between vendors results from the number of Java tools and binaries included inside. From my perspective, the smaller the size, the better. I’d rather not use anything that is inside the image. Of course, except all the stuff required to run my app successfully. But you may have a different case. Anyway, here’s the content of the app image for the Oracle JDK after executing the dive piomin/sample-spring-boot-on-kubernetes:oracle command. As you see, the JDK takes up most of the space.

jdk-kubernetes-dive

On the other hand, we can analyze the smallest image. I think this explains the differences in image size, since Zulu ships a JRE, not the whole JDK.

Here are the results, ordered from the smallest image to the biggest.

  • Azul Zulu: 271MB
  • IBM Semeru OpenJ9: 275MB
  • Eclipse Temurin: 286MB
  • BellSoft Liberica: 286MB
  • Oracle OpenJDK: 446MB
  • Alibaba Dragonwell: 459MB
  • Microsoft OpenJDK: 461MB
  • Amazon Corretto: 463MB

Let’s visualize our first results. I think it shows well which images contain a JDK and which a JRE.

jdk-kubernetes-memory

Startup Time

Honestly, it is not very easy to measure startup time, since the differences between the vendors are not large. Also, subsequent results for the same provider may differ a lot. For example, on the first try the app starts in 5.8s, and after a pod restart in 8.4s. My methodology was pretty simple. I restarted the app several times for each JDK provider to measure the average startup time and the fastest startup in the series. Then I repeated the same exercise again to verify if the results are repeatable. The proportions between the first and second series of startup times between corresponding vendors were similar. In fact, the difference between the fastest and the slowest average startup time is not large. I got the best result for Eclipse Temurin (7.2s) and the worst for IBM Semeru OpenJ9 (9.05s).

Let’s see the full list of results. It shows the average startup time of the application from the fastest one.

  • Eclipse Temurin: 7.20s
  • Oracle OpenJDK: 7.22s
  • Amazon Corretto: 7.27s
  • BellSoft Liberica: 7.44s
  • Azul Zulu: 7.77s
  • Alibaba Dragonwell: 8.03s
  • Microsoft OpenJDK: 8.18s
  • IBM Semeru OpenJ9: 9.05s

Once again, here’s the graphical representation of our results. The differences between vendors are sometimes rather cosmetic. Maybe, if I repeated the same exercise once again from the beginning, the results would be quite different.

jdk-kubernetes-startup

As I mentioned before, I also measured the fastest attempt. This time the best top 3 are Eclipse Temurin, Amazon Corretto, and BellSoft Liberica.

  • Eclipse Temurin: 5.6s
  • Amazon Corretto: 5.95s
  • BellSoft Liberica: 6.05s
  • Oracle OpenJDK: 6.1s
  • Azul Zulu: 6.2s
  • Alibaba Dragonwell: 6.45s
  • Microsoft OpenJDK: 6.9s
  • IBM Semeru OpenJ9: 7.85s

Memory

I’m measuring the memory usage of the app under heavy load with a test simulating 10 users continuously sending requests. It gives me a really large throughput at the level of the app – around 500 requests per second. The results are in line with the expectations. Almost all the vendors have very similar memory usage except IBM Semeru, which uses the OpenJ9 JVM. In theory, OpenJ9 should also give us a better startup time. However, in my case, the significant difference is just in the memory footprint. For IBM Semeru the memory usage is around 135MB, while for other vendors it varies in the range of 210-230MB.

  • IBM Semeru OpenJ9: 135M
  • Oracle OpenJDK: 211M
  • Azul Zulu: 215M
  • Alibaba Dragonwell: 216M
  • BellSoft Liberica: 219M
  • Microsoft OpenJDK: 219M
  • Amazon Corretto: 220M
  • Eclipse Temurin: 230M

Here’s the graphical visualization of our results:

Throughput

In order to generate high incoming traffic to the app I used the k6 tool. It allows us to create tests in JavaScript. Here’s the implementation of our test. It is calling the HTTP POST /persons endpoint with input data in JSON. Then it verifies if the request has been successfully processed on the server side.

import http from 'k6/http';
import { check } from 'k6';

export default function () {

  const payload = JSON.stringify({
      firstName: 'aaa',
      lastName: 'bbb',
      age: 50,
      gender: 'MALE'
  });

  const params = {
    headers: {
      'Content-Type': 'application/json',
    },
  };

  const res = http.post(`http://localhost:8080/persons`, payload, params);

  check(res, {
    'is status 200': (res) => res.status === 200,
    'body size is > 0': (r) => r.body.length > 0,
  });
}

Here’s the k6 command for running our test. It is possible to define the duration and number of simultaneous virtual users. In the first step, I’m simulating 5 virtual users:

$ k6 run -d 90s -u 5 load-tests.js

Then, I’m running the tests for 10 virtual users twice per vendor.

$ k6 run -d 90s -u 10 load-tests.js

Here are the sample results printed after executing the k6 test:

I repeated the exercise per the JDK vendor. Here are the throughput results for 5 virtual users:

  • BellSoft Liberica: 451req/s
  • Amazon Corretto: 433req/s
  • IBM Semeru OpenJ9: 432req/s
  • Oracle OpenJDK: 420req/s
  • Microsoft OpenJDK: 418req/s
  • Azul Zulu: 414req/s
  • Eclipse Temurin: 407req/s
  • Alibaba Dragonwell: 405req/s

Here are the throughput results for 10 virtual users:

  • Eclipse Temurin: 580req/s
  • Azul Zulu: 567req/s
  • Microsoft OpenJDK: 561req/s
  • Oracle OpenJDK: 561req/s
  • IBM Semeru OpenJ9: 552req/s
  • Amazon Corretto: 552req/s
  • Alibaba Dragonwell: 551req/s
  • BellSoft Liberica: 540req/s

Final Thoughts

After repeating the load tests several times, I need to admit that there are no significant differences in performance between all the JDK vendors. We were using the same JVM settings for testing (set by the Paketo Buildpack). Probably, the more tests I ran, the more similar the results between different vendors would be. So, in summary, the results from my tweet have not been confirmed. Ok, so let’s get back to the question – which JDK to choose on Kubernetes?

It probably depends somewhat on where you are running your cluster. If, for example, it’s Kubernetes EKS on AWS, it’s worth using Amazon Corretto. However, if you are looking for the smallest image size, you should choose between Azul Zulu, IBM Semeru, BellSoft Liberica, and Adoptium Eclipse Temurin. Additionally, IBM Semeru will consume significantly less memory than the other distributions, since it is built on top of OpenJ9.

Don’t forget about best practices when deploying Java apps on Kubernetes. Here’s my article about it.

Spring Boot on Kubernetes with Buildpacks and Skaffold

In this article, you will learn how to run a Spring Boot application on Kubernetes using Buildpacks and Skaffold. Since version 2.3, Spring Boot supports Buildpacks. The main goal of a Buildpack is to detect how to transform source code into a runnable image. Once we create an image, we may run it on Kubernetes. Fortunately, we may use Skaffold to automate the whole process, starting from the source code and ending with a Kubernetes deployment. Skaffold supports builds with Cloud Native Buildpacks, which require only a local Docker daemon.

In order to deploy Spring Boot on Kubernetes, we may also use Skaffold with Jib Maven Plugin. Jib is a tool from Google that builds optimized images without a Docker daemon. To look at how to integrate it with Skaffold in a Kubernetes deployment process you should read the article Local Java Development on Kubernetes. Furthermore, we may include a tool called Dekorate. It generates Kubernetes manifests based on a source code. You will find interesting details about it here.

Step 1. Build a Docker image with Spring Boot and Buildpacks

We don’t have to change anything in the build process to create a Docker image of a Spring Boot application. Of course, we need to declare spring-boot-maven-plugin in Maven pom.xml as always.

<build>
   <plugins>
      <plugin>
         <groupId>org.springframework.boot</groupId>
         <artifactId>spring-boot-maven-plugin</artifactId>
         <executions>
            <execution>
               <goals>
                  <goal>build-info</goal>
               </goals>
            </execution>
         </executions>
      </plugin>
   </plugins>
</build>

Since Spring Boot 2.3+ includes buildpack support directly for Maven, you may just type a single command to build a Docker image. So, let’s run the following command.

$ mvn package spring-boot:build-image

By default, Spring Boot uses the Paketo Java buildpack to create Docker images. The name of the builder image is paketobuildpacks/builder:base. We can verify this in the build logs visible below.

spring-boot-buildpacks-build-image

To clarify, this step is not required to deploy our application on Kubernetes. I just wanted to demonstrate how to use Cloud Native Buildpacks with Spring Boot. However, we should remember the name of a builder image. We will use the same builder in Step 3.

Step 2. Create Spring Boot application

I use the latest stable version of Spring Boot. For Step 1, we need at least version 2.3. For the next steps, it is not required. Moreover, we will use JDK 11 for compilation.

<parent>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-parent</artifactId>
   <version>2.4.1</version>
</parent>

<groupId>pl.piomin.samples</groupId>
<artifactId>sample-spring-boot-on-kubernetes</artifactId>
<version>1.0-SNAPSHOT</version>

<properties>
   <java.version>11</java.version>
</properties>

We will create a simple web application that exposes a REST API and connects to the Mongo database. Therefore, we need to include the following dependencies.

<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-data-mongodb</artifactId>
</dependency>

Here’s a controller class. It implements methods for adding, updating, deleting, and searching data. There are multiple find methods available.

@RestController
@RequestMapping("/persons")
public class PersonController {

   private PersonRepository repository;

   PersonController(PersonRepository repository) {
      this.repository = repository;
   }

   @PostMapping
   public Person add(@RequestBody Person person) {
      return repository.save(person);
   }

   @PutMapping
   public Person update(@RequestBody Person person) {
      return repository.save(person);
   }

   @DeleteMapping("/{id}")
   public void delete(@PathVariable("id") String id) {
      repository.deleteById(id);
   }

   @GetMapping
   public Iterable<Person> findAll() {
      return repository.findAll();
   }

   @GetMapping("/{id}")
   public Optional<Person> findById(@PathVariable("id") String id) {
      return repository.findById(id);
   }

   @GetMapping("/first-name/{firstName}/last-name/{lastName}")
   public Set<Person> findByFirstNameAndLastName(@PathVariable("firstName") String firstName,
         @PathVariable("lastName") String lastName) {
      return repository.findByFirstNameAndLastName(firstName, lastName);
   }

   @GetMapping("/age-greater-than/{age}")
   public Set<Person> findByAgeGreaterThan(@PathVariable("age") int age) {
      return repository.findByAgeGreaterThan(age);
   }

   @GetMapping("/age/{age}")
   public Set<Person> findByAge(@PathVariable("age") int age) {
      return repository.findByAge(age);
   }

}

Step 3. Configure Skaffold

Firstly, we need to create a Skaffold configuration file in the project root directory. To do that we may execute the following command.

$ skaffold init --XXenableBuildpacksInit

At a minimum, we need to set the name of the builder in the buildpacks section. Let’s configure the same builder as used by Spring Boot – paketobuildpacks/builder:base. We will also override the default location of Kubernetes manifests. Skaffold will detect the two YAML manifests k8s/mongodb-deployment.yaml and k8s/deployment.yaml instead of the whole k8s directory.

apiVersion: skaffold/v2beta5
kind: Config
metadata:
  name: sample-spring-boot-on-kubernetes
build:
  artifacts:
    - image: piomin/sample-spring-boot-on-kubernetes
      buildpacks:
        builder: paketobuildpacks/builder:base
  tagPolicy:
    gitCommit: {}
deploy:
  kubectl:
    manifests:
      - k8s/mongodb-deployment.yaml
      - k8s/deployment.yaml

Step 4. Deploy on Kubernetes

Finally, we may proceed to the last step – deployment. The deployment manifest is visible below. Since our application connects to the Mongo database, we inject its address and login credentials using ConfigMap and Secret references.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-spring-boot-on-kubernetes-deployment
spec:
  selector:
    matchLabels:
      app: sample-spring-boot-on-kubernetes
  template:
    metadata:
      labels:
        app: sample-spring-boot-on-kubernetes
    spec:
      containers:
      - name: sample-spring-boot-on-kubernetes
        image: piomin/sample-spring-boot-on-kubernetes
        ports:
        - containerPort: 8080
        env:
          - name: MONGO_DATABASE
            valueFrom:
              configMapKeyRef:
                name: mongodb
                key: database-name
          - name: MONGO_USERNAME
            valueFrom:
              secretKeyRef:
                name: mongodb
                key: database-user
          - name: MONGO_PASSWORD
            valueFrom:
              secretKeyRef:
                name: mongodb
                key: database-password
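The manifest above assumes that a ConfigMap and a Secret named mongodb already exist in the cluster (in the sample repository they are presumably defined alongside the database in k8s/mongodb-deployment.yaml). If you wanted to create them by hand, it would look roughly like the commands below; the values here are placeholders, not the ones used in the repository:

$ kubectl create configmap mongodb --from-literal=database-name=sample-db
$ kubectl create secret generic mongodb \
    --from-literal=database-user=sample-user \
    --from-literal=database-password=sample-password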

Let’s deploy our Spring Boot application using the following command. With the port-forward option enabled, we will be able to test HTTP endpoints using local port 8080.

$ skaffold dev --port-forward

As you see below, our sample application has successfully started on Kubernetes.

spring-boot-buildpacks-skaffold

Let’s verify a list of deployments.

Finally, we may send some test requests using the http://localhost:8080 address.
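For instance, here is a quick sketch of adding a new person via the REST API shown earlier. The field names follow the repository methods used by the controller, so adjust them if your Person model differs:

$ curl -X POST http://localhost:8080/persons \
    -H "Content-Type: application/json" \
    -d '{"firstName":"John","lastName":"Smith","age":30}'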
