Docker Archives - Piotr's TechBlog https://piotrminkowski.com/tag/docker/ Java, Spring, Kotlin, microservices, Kubernetes, containers Fri, 06 Feb 2026 10:00:55 +0000 en-US hourly 1 https://wordpress.org/?v=6.9.1 https://i0.wp.com/piotrminkowski.com/wp-content/uploads/2020/08/cropped-me-2-tr-x-1.png?fit=32%2C32&ssl=1 Docker Archives - Piotr's TechBlog https://piotrminkowski.com/tag/docker/ 32 32 181738725 Spring AI with External MCP Servers https://piotrminkowski.com/2026/02/06/spring-ai-with-external-mcp-servers/ https://piotrminkowski.com/2026/02/06/spring-ai-with-external-mcp-servers/#respond Fri, 06 Feb 2026 10:00:53 +0000 https://piotrminkowski.com/?p=15974 This article explains how to integrate Spring AI with external MCP servers that provide APIs for popular tools such as GitHub and SonarQube. Spring AI provides built-in support for MCP clients and servers. In this article, we will use only the Spring MCP client. If you are interested in more details on building MCP servers, […]

The post Spring AI with External MCP Servers appeared first on Piotr's TechBlog.

]]>
This article explains how to integrate Spring AI with external MCP servers that provide APIs for popular tools such as GitHub and SonarQube. Spring AI provides built-in support for MCP clients and servers. In this article, we will use only the Spring MCP client. If you are interested in more details on building MCP servers, please refer to the following post on my blog. MCP has recently become very popular, and you can easily find an MCP server implementation for almost any existing technology.

You can actually run MCP servers in many different ways. Ultimately, they are just ordinary applications whose task is to make a given tool available via an API compatible with the MCP protocol. The most popular AI IDE tools, such as Cloud Code, Codex, and Cursor, make it easy to run any MCP server. I will take a slightly unusual approach and use the support provided with Docker Desktop, namely the MCP Toolkit.

My idea for today is to build a simple Spring AI application that communicates with MCP servers for GitHub, SonarQube, and CircleCI to retrieve information about my repositories and projects hosted on those platforms. The Docker MCP Toolkit provides a single gateway that distributes incoming requests among running MCP servers. Let’s see how it works in practice!

Source Code

Feel free to use my source code if you’d like to try it out yourself. To do that, you must clone my sample GitHub repository. Then you should only follow my instructions. This repository contains several sample applications. The correct application for this article is in the spring-ai-mcp/external-mcp-sample-client directory.

Getting Started with Docker MCP Toolkit

First, run your Docker Desktop. You can find more than 300 popular MCP servers to run in the “Catalog” bookmark. Next, you should search for SonarQube, CircleCI, and GitHub Official servers (note that there are additional GitHub servers). To be honest, I encountered unexpected issues running the CircleCI server, so for now, I based the application on MCP communication with GitHub and SonarCloud.

spring-ai-mcp-docker-toolkit

Each MCP server usually requires configuration, such as your authorization token or service address. Therefore, before adding a server to Docker Toolkit, you must first configure it as described below. Only then should you click the “Add MCP server” button.

spring-ai-mcp-sonarqube-server

For the GitHub MCP server, in addition to entering the token itself, you must also authorize it via OAuth. Here, too, the MCP Toolkit provides graphical support. After entering the token, go to the “OAuth” tab to complete the process.

This is what your final result should look like before moving on to implementing the Spring Boot application. You have added two MCP servers, which together offer 65 tools.

To make both MCP servers available outside of Docker, you need to run the Docker MCP gateway. In the default stdio mode, the API is not exposed outside Docker. Therefore, you need to change the mode to streaming using the transport parameter, as shown below. The gateway is exposed on port 8811.

docker mcp gateway run --port 8811 --transport streaming
ShellSession

This is what it looks like after launch. Additionally, the Docker MCP gateway is secured by an API token. This will require appropriate settings on the MCP client side in the Spring AI application.

spring-ai-mcp-docker-gateway-start

Integrate Spring AI with External MCP Clients

Prepare the MCP Client with Spring AI

Let’s move on to implementing our sample application. We need to include the Spring AI MCP client and the library that communicates with the LLM model. For me, it’s OpenAI, but you can use many other options available through Spring AI’s integration with popular chat models.

<dependencies>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-starter-mcp-client-webflux</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-starter-model-openai</artifactId>
  </dependency>
</dependencies>
XML

Our MCP client must authenticate itself to the Docker MCP gateway using an API token. Therefore, we need to modify the Spring WebClient used by Spring AI to communicate with MCP servers. It is best to use the ExchangeFilterFunction interface to create an HTTP filter that adds the appropriate Authorization header with the bearer token to the outgoing request. The token will be injected from the application properties.

@Component
public class McpSyncClientExchangeFilterFunction implements ExchangeFilterFunction {

    @Value("${mcp.token}")
    private String token;

    @Override
    public Mono<ClientResponse> filter(ClientRequest request, 
                                       ExchangeFunction next) {

            var requestWithToken = ClientRequest.from(request)
                    .headers(headers -> headers.setBearerAuth(token))
                    .build();
            return next.exchange(requestWithToken);

    }

}
Java

Then, let’s set the previously implemented filter for the default WebClient builder.

@SpringBootApplication
public class ExternalMcpSampleClient {

    public static void main(String[] args) {
        SpringApplication.run(ExternalMcpSampleClient.class, args);
    }

    @Bean
    WebClient.Builder webClientBuilder(McpSyncClientExchangeFilterFunction filterFunction) {
        return WebClient.builder()
                .filter(filterFunction);
    }
}
Java

After that, we must configure the MCP gateway address and token in the application properties. To achieve that, we must use the spring.ai.mcp.client.streamable-http.connections property. The MCP gateway listens on port 8811. The token value will be read from the MCP_TOKEN environment variable.

spring.ai.mcp.client.streamable-http.connections:
  docker-mcp-gateway:
    url: http://localhost:8811

mcp.token: ${MCP_TOKEN}
YAML

Implement Application Logic with Spring AI and OpenAI Support

The concept behind the sample application is quite simple. It involves creating a @RestController per tool provided by each MCP server. For each, I will create a simple prompt to request the number of repositories or projects in my account on a given platform. Let’s start with SonCloud. Each implementation uses the Spring AI ToolCallbackProvider bean to enable the available MCP server to communicate with the LLM model.

@RestController
@RequestMapping("/sonarcloud")
public class SonarCloudController {

    private final static Logger LOG = LoggerFactory
        .getLogger(SonarCloudController.class);
    private final ChatClient chatClient;

    public SonarCloudController(ChatClient.Builder chatClientBuilder,
                                ToolCallbackProvider tools) {
        this.chatClient = chatClientBuilder
                .defaultToolCallbacks(tools)
                .build();
    }

    @GetMapping("/count")
    String countRepositories() {
        PromptTemplate pt = new PromptTemplate("""
                How many projects in Sonarcloud do I have ?
                """);
        Prompt p = pt.create();
        return this.chatClient.prompt(p)
                .call()
                .content();
    }

}
Java

Below is a very similar implementation for GitHub MCP. This controller is exposed under the /github context path.

@RestController
@RequestMapping("/github")
public class GitHubController {

    private final static Logger LOG = LoggerFactory
        .getLogger(GitHubController.class);
    private final ChatClient chatClient;

    public GitHubController(ChatClient.Builder chatClientBuilder,
                            ToolCallbackProvider tools) {
        this.chatClient = chatClientBuilder
                .defaultToolCallbacks(tools)
                .build();
    }

    @GetMapping("/count")
    String countRepositories() {
        PromptTemplate pt = new PromptTemplate("""
                How many repositories in GitHub do I have ?
                """);
        Prompt p = pt.create();
        return this.chatClient.prompt(p)
                .call()
                .content();
    }

}
Java

Finally, there is the controller implementation for CircleCI MCP. It is available externally under the /circleci context path.

@RestController
@RequestMapping("/circleci")
public class CircleCIController {

    private final static Logger LOG = LoggerFactory
        .getLogger(CircleCIController.class);
    private final ChatClient chatClient;

    public CircleCIController(ChatClient.Builder chatClientBuilder,
                              ToolCallbackProvider tools) {
        this.chatClient = chatClientBuilder
                .defaultToolCallbacks(tools)
                .build();
    }

    @GetMapping("/count")
    String countRepositories() {
        PromptTemplate pt = new PromptTemplate("""
                How many projects in CircleCI do I have ?
                """);
        Prompt p = pt.create();
        return this.chatClient.prompt(p)
                .call()
                .content();
    }

}
Java

The last controller implementation is a bit more complex. First, I need to instruct the LLM model to generate project names in SonarQube and specify my GitHub username. This will not be part of the main prompt. Rather, it will be the system role, which guides the AI’s behavior and response style. Therefore, I’ll create the SystemPromptTemplate first. The user role prompt accepts an input parameter specifying the name of my GitHub repository. The response should combine data on the last commit in a given repository with the status of the most recent SonarQube analysis. In this case, the LLM will need to communicate with two MCP servers running with Docker MCP simultaneously.

@RestController
@RequestMapping("/global")
public class GlobalController {

    private final static Logger LOG = LoggerFactory
        .getLogger(CircleCIController.class);
    private final ChatClient chatClient;

    public GlobalController(ChatClient.Builder chatClientBuilder,
                            ToolCallbackProvider tools) {
        this.chatClient = chatClientBuilder
                .defaultToolCallbacks(tools)
                .build();
    }

    @GetMapping("/status/{repo}")
    String repoStatus(@PathVariable String repo) {
        SystemPromptTemplate st = new SystemPromptTemplate("""
                My username in GitHub is piomin.
                Each my project key in SonarCloud contains the prefix with my organization name and _ char.
                """);
        var stMsg = st.createMessage();

        PromptTemplate pt = new PromptTemplate("""
                When was the last commit made in my GitHub repository {repo} ?
                What is the latest analyze status in SonarCloud for that repo ?
                """);
        var usMsg = pt.createMessage(Map.of("repo", repo));

        Prompt prompt = new Prompt(List.of(usMsg, stMsg));
        return this.chatClient.prompt(prompt)
                .call()
                .content();
    }
}
Java

Before running the app, we must set two required environment variables that contain the OpenAI and Docker MCP gateway tokens.

export MCP_TOKEN=by1culxc6sctmycxtyl9xh7499mb8pctbsdb3brha1hvmm4d8l
export SPRING_AI_OPENAI_API_KEY=<YOUR_OPEN_AI_TOKEN>
Plaintext

Finally, we can run our Spring Boot app with the following command.

mvn spring-boot:run
ShellSession

Firstly, I’m going to ask about the number of my GitHub repositories.

curl http://localhost:8080/github/count
ShellSession

Then, I can check the number of projects in my SonarCloud account.

curl http://localhost:8080/github/sonarcloud
ShellSession

Finally, I can choose a specific repository and verify the last commit and the current analysis status in SonarCloud.

curl http://localhost:8080/global/status/sample-spring-boot-kafka
ShellSession

Here’s the LLM answer for my sample-spring-boot-kafka repository. You can perform the same exercise for your repositories and projects.

Conclusion

Spring AI, combined with the MCP client, opens a powerful path toward building truly tool-aware AI applications. By using the Docker MCP Gateway, we can easily host and manage MCP servers such as GitHub or SonarQube consistently and reproducibly, without tightly coupling them to our application runtime. Docker provides a user-friendly interface for managing MCP servers, giving users access to everything through a single MCP gateway. This approach appears to have advantages, particularly during application development.

The post Spring AI with External MCP Servers appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2026/02/06/spring-ai-with-external-mcp-servers/feed/ 0 15974
A Book: Hands-On Java with Kubernetes https://piotrminkowski.com/2025/12/08/a-book-hands-on-java-with-kubernetes/ https://piotrminkowski.com/2025/12/08/a-book-hands-on-java-with-kubernetes/#respond Mon, 08 Dec 2025 16:05:58 +0000 https://piotrminkowski.com/?p=15892 My book about Java and Kubernetes has finally been published! The book “Hands-On Java with Kubernetes” is the result of several months of work and, in fact, a summary of my experiences over the last few years of research and development. In this post, I want to share my thoughts on this book, explain why […]

The post A Book: Hands-On Java with Kubernetes appeared first on Piotr's TechBlog.

]]>
My book about Java and Kubernetes has finally been published! The book “Hands-On Java with Kubernetes” is the result of several months of work and, in fact, a summary of my experiences over the last few years of research and development. In this post, I want to share my thoughts on this book, explain why I chose to write and publish it, and briefly outline its content and concept. To purchase the latest version, go to this link.

Here is a brief overview of all my published books.

Motivation

I won’t hide that this post is mainly directed at my blog subscribers and people who enjoy reading it and value my writing style. As you know, all posts and content on my blog, along with sample application repositories on GitHub, are always accessible to you for free. Over the past eight years, I have worked to publish high-quality content on my blog, and I plan to keep doing so. It is a part of my life, a significant time commitment, but also a lot of fun and a hobby.

I want to explain why I decided to write this book, why now, and why in this way. But first, a bit of background. I wrote my last book first, then my first book, over seven years ago. It focused on topics I was mainly involved with at the time, specifically Spring Boot and Spring Cloud. Since then, a lot of time has passed, and much has changed – not only in the technology itself but also a little in my personal life. Today, I am more involved in Kubernetes and container topics than, for example, Spring Cloud. For years, I have been helping various organizations transition from traditional application architectures to cloud-native models based on Kubernetes. Of course, Java remains my main area of expertise. Besides Spring Boot, I also really like the Quarkus framework. You can read a lot about both in my book on Kubernetes.

Based on my experience over the past few years, involving development teams is a key factor in the success of the Kubernetes platform within an organization. Ultimately, it is the applications developed by these teams that are deployed there. For developers to be willing to use Kubernetes, it must be easy for them to do so. That is why I persuade organizations to remove barriers to using Kubernetes and to design it in a way that makes it easier for development teams. On my blog and in this book, I aim to demonstrate how to quickly and simply launch applications on Kubernetes using frameworks such as Spring Boot and Quarkus.

It’s an unusual time to publish a book. AI agents are producing more and more technical content online. More often than not, instead of grabbing a book, people turn to an AI chatbot for a quick answer, though not always the best one. Still, a book that thoroughly introduces a topic and offers a step-by-step guide remains highly valuable.

Content of the Book

This book demonstrates that Java is an excellent choice for building applications that run on Kubernetes. In the first chapter, I’ll show you how to quickly build your application, create its image, and run it on Kubernetes without writing a single line of YAML or Dockerfile. This chapter also covers the minimum Kubernetes architecture you must understand to manage applications effectively in this environment. The second chapter, on the other hand, demonstrates how to effectively organize your local development environment to work with a Kubernetes cluster. You’ll see several options for running a distribution of your cluster locally and learn about the essential set of tools you should have. The third chapter outlines best practices for building applications on the Kubernetes platform. Most of the presented requirements are supported by simple examples and explanations of the benefits of meeting them. The fourth chapter presents the most valuable tools for the inner development loop with Kubernetes. After reading the first four chapters, you will understand the main Kubernetes components related to application management, enabling you to navigate the platform efficiently. You’ll also learn to leverage Spring Boot and Quarkus features to adapt your application to Kubernetes requirements.

In the following chapters, I will focus on the benefits of migrating applications to Kubernetes. The first area to cover is security. Chapter five discusses mechanisms and tools for securing applications running in a cluster. Chapter six describes Spring and Quarkus projects that enable native integration with the Kubernetes API from within applications. In chapter seven, you’ll learn about the service mesh tool and the benefits of using it to manage HTTP traffic between microservices. Chapter eight addresses the performance and scalability of Java applications in a Kubernetes environment. Chapter Eight demonstrates how to design a CI/CD process that runs entirely within the cluster, leveraging Kubernetes-native tools for pipeline building and the GitOps approach. This book also covers AI. In the final, ninth chapter, you’ll learn how to run a simple Java application that integrates with an AI model deployed on Kubernetes.

Publication

I decided to publish my book on Leanpub. Leanpub is a platform for writing, publishing, and selling books, especially popular among technical content authors. I previously published a book with Packt, but honestly, I was alone during the writing process. Leanpub is similar but offers several key advantages over publishers like Packt. First, it allows you to update content collaboratively with readers and keep it current. Even though my book is finished, I don’t rule out adding more chapters, such as on AI on Kubernetes. I also look forward to your feedback and plan to improve the content and examples in the repository continuously. Overall, this has been another exciting experience related to publishing technical content.

And when you buy such a book, you can be sure that most of the royalties go to me as the author, unlike with other publishers, where most of the royalties go to them as promoters. So, I’m looking forward to improving my book with you!

Conclusion

My book aims to bring together all the most interesting elements surrounding Java application development on Kubernetes. It is intended not only for developers but also for architects and DevOps teams who want to move to the Kubernetes platform.

The post A Book: Hands-On Java with Kubernetes appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2025/12/08/a-book-hands-on-java-with-kubernetes/feed/ 0 15892
Quarkus with Buildpacks and OpenShift Builds https://piotrminkowski.com/2025/11/19/quarkus-with-buildpacks-and-openshift-builds/ https://piotrminkowski.com/2025/11/19/quarkus-with-buildpacks-and-openshift-builds/#respond Wed, 19 Nov 2025 08:50:04 +0000 https://piotrminkowski.com/?p=15806 In this article, you will learn how to build Quarkus application images using Cloud Native Buildpacks and OpenShift Builds. Some time ago, I published a blog post about building with OpenShift Builds based on the Shipwright project. At that time, Cloud Native Buildpacks were not supported at the OpenShift Builds level. It was only supported […]

The post Quarkus with Buildpacks and OpenShift Builds appeared first on Piotr's TechBlog.

]]>
In this article, you will learn how to build Quarkus application images using Cloud Native Buildpacks and OpenShift Builds. Some time ago, I published a blog post about building with OpenShift Builds based on the Shipwright project. At that time, Cloud Native Buildpacks were not supported at the OpenShift Builds level. It was only supported in the community project. I demonstrated how to add the appropriate build strategy yourself and use it to build an image for a Spring Boot application. However, OpenShift Builds, since version 1.6, support building with Cloud Native Buildpacks. Currently, Quarkus, Go, Node.js, and Python are supported. In this article, we will focus on Quarkus and also examine the built-in support for Buildpacks within Quarkus itself.

Source Code

Feel free to use my source code if you’d like to try it out yourself. To do that, you must clone my sample GitHub repository. Then you should only follow my instructions.

Quarkus Buildpacks Extension

Recently, support for Cloud Native Buildpacks in Quarkus has been significantly enhanced. Here you can access the repository containing the source code for the Paketo Quarkus buildpack. To implement this solution, add one dependency to your application.

<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-container-image-buildpack</artifactId>
</dependency>
XML

Next, run the build command with Maven and activate the quarkus.container-image.build parameter. Also, set the appropriate Java version needed for your application. For the sample Quarkus application in this article, the Java version is 21.

mvn clean package \
  -Dquarkus.container-image.build=true \
  -Dquarkus.buildpack.builder-env.BP_JVM_VERSION=21
ShellSession

To build, you need Docker or Podman running. Here’s the output from the command run earlier.

As you can see, Quarkus uses, among other buildpacks, the buildpack as mentioned earlier.

The new image is now available for use.

$ docker images sample-quarkus/person-service:1.0.0-SNAPSHOT
REPOSITORY                      TAG              IMAGE ID       CREATED        SIZE
sample-quarkus/person-service   1.0.0-SNAPSHOT   e0b58781e040   45 years ago   160MB
ShellSession

Quarkus with OpenShift Builds Shipwright

Install the Openshift Build Operator

Now, we will move the image building process to the OpenShift cluster. OpenShift offers built-in support for creating container images directly within the cluster through OpenShift Builds, using the BuildConfig solution. For more details, please refer to my previous article. However, in this article, we explore a new technology for building container images called OpenShift Builds with Shipwright. To enable this solution on OpenShift, you need to install the following operator.

After installing this operator, you will see a new item in the “Build” menu called “Shiwright”. Switch to it, then select the “ClusterBuildStrategies” tab. There are two strategies on the list designed for Cloud Native Buildpacks. We are interested in the buildpacks strategy.

Create and Run Build with Shipwright

Finally, we can create the Shiwright Build object. It contains three sections. In the first step, we define the address of the container image repository where we will push our output image. For simplicity, we will use the internal registry provided by the OpenShift cluster itself. In the source section, we specify the repository address where the application source code is located. In the last section, we need to set the build strategy. We chose the previously mentioned buildpacks strategy for Cloud Native Buildpacks. Some parameters need to be set for the buildpacks strategy: run-image and cnb-builder-image. The cnb-builder-image indicates the name of the builder image containing the buildpacks. The run-image refers to a base image used to run the application. We will also activate the buildpacks Maven profile during the build to set the Quarkus property that switches from fast-jar to uber-jar packaging.

apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: buildpack-quarkus-build
spec:
  env:
    - name: BP_JVM_VERSION
      value: '21'
  output:
    image: 'image-registry.openshift-image-registry.svc:5000/builds/sample-quarkus-microservice:1.0'
  paramValues:
    - name: run-image
      value: 'paketobuildpacks/run-java-21-ubi9-base:latest'
    - name: cnb-builder-image
      value: 'paketobuildpacks/builder-jammy-java-tiny:latest'
    - name: env-vars
      values:
        - value: BP_MAVEN_ADDITIONAL_BUILD_ARGUMENTS=-Pbuildpacks
  retention:
    atBuildDeletion: true
  source:
    git:
      url: 'https://github.com/piomin/sample-quarkus-microservice.git'
    type: Git
  strategy:
    kind: ClusterBuildStrategy
    name: buildpacks
YAML

Here’s the Maven buildpacks profile that sets a single Quarkus property quarkus.package.jar.type. We must change it to uber-jar, because the paketobuildpacks/builder-jammy-java-tiny builder expects a single jar instead of the multi-folder layout used by the default fast-jar format. Of course, I would prefer to use the paketocommunity/builder-ubi-base builder, which can recognize the fast-jar format. However, at this time, it does not function correctly with OpenShift Builds.

<profiles>
  <profile>
    <id>buildpacks</id>
    <activation>
      <property>
        <name>buildpacks</name>
      </property>
    </activation>
    <properties>
      <quarkus.package.jar.type>uber-jar</quarkus.package.jar.type>
    </properties>
  </profile>
</profiles>
XML

To start the build, you can use the OpenShift console or execute the following command:

shp build run buildpack-quarkus-build --follow
ShellSession

We can switch to the OpenShift Console. As you can see, our build is running.

The history of such builds is available on OpenShift. You can also review the build logs.

Finally, you should see your image in the list of OpenShift internal image streams.

$ oc get imagestream
NAME                          IMAGE REPOSITORY                                                                                                    TAGS                UPDATED
sample-quarkus-microservice   default-route-openshift-image-registry.apps.pminkows.95az.p1.openshiftapps.com/builds/sample-quarkus-microservice   1.2,0.0.1,1.1,1.0   13 hours ago
ShellSession

Conclusion

OpenShift Build Shipwright lets you perform the entire application image build process on the OpenShift cluster in a standardized manner. Cloud Native Buildpacks is a popular mechanism for building images without writing a Dockerfile yourself. In this case, support for Buildpacks on the OpenShift side is an interesting alternative to the Source-to-Image approach.

The post Quarkus with Buildpacks and OpenShift Builds appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2025/11/19/quarkus-with-buildpacks-and-openshift-builds/feed/ 0 15806
Spring Boot on Kubernetes with Eclipse JKube https://piotrminkowski.com/2024/10/03/spring-boot-on-kubernetes-with-eclipse-jkube/ https://piotrminkowski.com/2024/10/03/spring-boot-on-kubernetes-with-eclipse-jkube/#comments Thu, 03 Oct 2024 16:51:39 +0000 https://piotrminkowski.com/?p=15398 This article will teach you how to use the Eclipse JKube project to build images and generate Kubernetes manifests for the Spring Boot application. Eclipse JKube is a collection of plugins and libraries that we can use to build container images using Docker, Jib, or source-2-image (S2I) build strategies. It also generates and deploys Kubernetes […]

The post Spring Boot on Kubernetes with Eclipse JKube appeared first on Piotr's TechBlog.

]]>
This article will teach you how to use the Eclipse JKube project to build images and generate Kubernetes manifests for the Spring Boot application. Eclipse JKube is a collection of plugins and libraries that we can use to build container images using Docker, Jib, or source-2-image (S2I) build strategies. It also generates and deploys Kubernetes and OpenShift manifests at compile time. We can include it as the Maven or Gradle plugin to use it during our build process. On the other hand, Spring Boot doesn’t provide any built-in tools to simplify deployment to Kubernetes. It only provides the build-image goal within the Spring Boot Maven and Gradle plugins dedicated to building container images with Cloud Native Buildpacks. Let’s check out how Eclipse JKube can simplify our interaction with Kubernetes. By the way, it also provides tools for watching, debugging, and logging. 

You can find other interesting articles on my blog if you are interested in the tools for generating Kubernetes manifests. Here’s the article that shows how to use the Dekorate library to generate Kubernetes manifests for the Spring Boot app.

Source Code

If you would like to try this exercise by yourself, you may always take a look at my source code. First, you need to clone the following GitHub repository. It contains several sample Java applications for a Kubernetes showcase. You must go to the “inner-dev-loop” directory, to proceed with exercise. Then you should follow my further instructions.

Prerequisites

Before we start the development, we must install some tools on our laptops. Of course, we should have Maven and at least Java 21 installed. We must also have access to the container engine (like Docker or Podman) and a Kubernetes cluster. I have everything configured on my local machine using Podman Desktop and Minikube. Finally, we need to install the Helm CLI. It can be used to deploy the Postgres database on Kubernetes using the popular Bitnami Helm chart. In summary, we need to have:

  • Maven
  • OpenJDK 21+
  • Podman or Docker
  • Kubernetes
  • Helm CLI

Once we have everything in place, we can proceed to the next steps.

Create Spring Boot Application

In this exercise, we create a typical Spring Boot application that connects to the relational database and exposes REST endpoints for the basic CRUD operations. Both the application and database will run on Kubernetes. We install the Postgres database using the Bitnami Helm chart. To build and deploy the application in the Kubernetes cluster, we will use Maven and Eclipse JKube features. First, let’s take a look at the source code of our application. Here’s the list of included dependencies. It’s worth noting that Spring Boot Actuator is responsible for generating Kubernetes liveness and readiness health checks. JKube will be able to detect it and generate the required elements in the Deployment manifest.

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>org.springdoc</groupId>
    <artifactId>springdoc-openapi-starter-webmvc-ui</artifactId>
    <version>2.6.0</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-devtools</artifactId>
    <scope>runtime</scope>
    <optional>true</optional>
</dependency>
XML

Here’s our Person entity class:

@Entity
public class Person {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    private String firstName;
    private String lastName;
    private int age;
    @Enumerated(EnumType.STRING)
    private Gender gender;
    private Integer externalId;

    // getters and setters
}
Java

We use the well-known Spring Data repository pattern to implement the data access layer. Here’s our PersonRepository interface. There is an additional method for searching persons by age.

public interface PersonRepository extends CrudRepository<Person, Long> {
    List<Person> findByAgeGreaterThan(int age);
}
Java

Finally, we can implement the REST controller using the previously created PersonRepository to interact with the database.

@RestController
@RequestMapping("/persons")
public class PersonController {

    private static final Logger LOG = LoggerFactory
       .getLogger(PersonController.class);
    private final PersonRepository repository;

    public PersonController(PersonRepository repository) {
        this.repository = repository;
    }

    @GetMapping
    public List<Person> getAll() {
        LOG.info("Get all persons");
        return (List<Person>) repository.findAll();
    }

    @GetMapping("/{id}")
    public Person getById(@PathVariable("id") Long id) {
        LOG.info("Get person by id={}", id);
        return repository.findById(id).orElseThrow();
    }

    @GetMapping("/age/{age}")
    public List<Person> getByAgeGreaterThan(@PathVariable("age") int age) {
        LOG.info("Get person by age={}", age);
        return repository.findByAgeGreaterThan(age);
    }

    @DeleteMapping("/{id}")
    public void deleteById(@PathVariable("id") Long id) {
        LOG.info("Delete person by id={}", id);
        repository.deleteById(id);
    }

    @PostMapping
    public Person addNew(@RequestBody Person person) {
        LOG.info("Add new person: {}", person);
        return repository.save(person);
    }
    
}
Java

Here’s the full list of configuration properties. The database name and connection credentials are configured through environment variables: DATABASE_NAME, DATABASE_USER, and DATABASE_PASS. We should enable the exposure of the Kubernetes liveness and readiness health checks. After that, we include the database component status in the readiness probe.

spring:
  application:
    name: inner-dev-loop
  datasource:
    url: jdbc:postgresql://person-db-postgresql:5432/${DATABASE_NAME}
    username: ${DATABASE_USER}
    password: ${DATABASE_PASS}
  jpa:
    hibernate:
      ddl-auto: create
    properties:
      hibernate:
        show_sql: true
        format_sql: true

management:
  info.java.enabled: true
  endpoints:
    web:
      exposure:
        include: "*"
  endpoint.health:
    show-details: always
    group:
      readiness:
        include: db
    probes:
      enabled: true
YAML

Install Postgres on Kubernetes

We begin our interaction with Kubernetes with the database installation. We use the Bitnami Helm chart for that. In the first step, we must add the Bitnami repository:

helm repo add bitnami https://charts.bitnami.com/bitnami 
ShellSession

Then, we can install the Postgres chart under the person-db name. During the installation, we create the spring user and the database under the same name.

helm install person-db bitnami/postgresql \
   --set auth.username=spring \
   --set auth.database=spring
ShellSession

Postgres is accessible inside the cluster under the person-db-postgresql name.

$ kubectl get svc
NAME                      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
person-db-postgresql      ClusterIP   10.96.115.19   <none>        5432/TCP   23s
person-db-postgresql-hl   ClusterIP   None           <none>        5432/TCP   23s
ShellSession

The Helm chart generates a Kubernetes Secret with the same person-db-postgresql name. The password for the spring user is automatically generated during installation. We can retrieve that password from the password field.

$ kubectl get secret person-db-postgresql -o yaml
apiVersion: v1
data:
  password: UkRaalNYU3o3cA==
  postgres-password: a1pBMFFuOFl3cQ==
kind: Secret
metadata:
  annotations:
    meta.helm.sh/release-name: person-db
    meta.helm.sh/release-namespace: demo
  creationTimestamp: "2024-10-03T14:39:19Z"
  labels:
    app.kubernetes.io/instance: person-db
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: postgresql
    app.kubernetes.io/version: 17.0.0
    helm.sh/chart: postgresql-16.0.0
  name: person-db-postgresql
  namespace: demo
  resourceVersion: "61646"
  uid: 00b0cf7e-8521-4f53-9c69-cfd3c942004c
type: Opaque
ShellSession
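As with any Kubernetes Secret, the password value is Base64-encoded. We can decode the value shown above locally, or fetch and decode it in one step straight from the cluster (the commented command assumes your kubectl context points at the right cluster and namespace):

```shell
# Decode the Base64-encoded password value from the Secret shown above
echo 'UkRaalNYU3o3cA==' | base64 -d
# → RDZjSXSz7p

# Or fetch and decode it directly from the cluster:
# kubectl get secret person-db-postgresql -o jsonpath='{.data.password}' | base64 -d
```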

JKube with Spring Boot in Action

With JKube, we can build a Spring Boot app image and deploy it on Kubernetes with a single command, without creating any YAML manifest or Dockerfile. To use Eclipse JKube, we must include the org.eclipse.jkube:kubernetes-maven-plugin plugin in the Maven pom.xml. The plugin configuration contains values for the two environment variables, DATABASE_USER and DATABASE_NAME, required by our Spring Boot application. We also set the memory and CPU requests for the Deployment, which is obviously a good practice.

<plugin>
    <groupId>org.eclipse.jkube</groupId>
    <artifactId>kubernetes-maven-plugin</artifactId>
    <version>1.17.0</version>
    <configuration>
        <resources>
            <controller>
                <env>
                    <DATABASE_USER>spring</DATABASE_USER>
                    <DATABASE_NAME>spring</DATABASE_NAME>
                </env>
                <containerResources>
                    <requests>
                        <memory>256Mi</memory>
                        <cpu>200m</cpu>
                    </requests>
                </containerResources>
            </controller>
        </resources>
    </configuration>
</plugin>
XML

We can use the resource fragments to generate a more advanced YAML manifest with properties not covered by the plugin’s XML fields. Such a fragment of the YAML manifest must be placed in the src/main/jkube directory. In our case, the password to the database must be injected from the Kubernetes person-db-postgresql Secret generated by the Bitnami Helm chart. Here’s the fragment of Deployment YAML in the deployment.yml file:

spec:
  template:
    spec:
      containers:
        - env:
          - name: DATABASE_PASS
            valueFrom:
              secretKeyRef:
                key: password
                name: person-db-postgresql
src/main/jkube/deployment.yml

If we want to build the image and generate Kubernetes manifests without applying them to the cluster, we can use the k8s:build and k8s:resource goals during the Maven build.

mvn clean package -DskipTests k8s:build k8s:resource
ShellSession
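The generated manifests can then be inspected before deployment. By default, JKube writes them to the build output directory (the path below is assumed from the plugin defaults; adjust it if you customized the target directory):

```shell
# List the manifests generated by k8s:resource (available after the Maven build)
ls target/classes/META-INF/jkube/ 2>/dev/null \
  || echo "no manifests yet - run the Maven build first"
```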

Let’s take a look at the logs from the k8s:build phase. JKube reads the image group from the last part of the Maven group ID and replaces the version that contains the -SNAPSHOT suffix with the latest tag.

spring-boot-jkube-build

Here are the logs from the k8s:resource phase. As you can see, JKube reads the Spring Boot management.health.probes.enabled configuration property and includes the /actuator/health/liveness and /actuator/health/readiness endpoints as the probes.

spring-boot-jkube-resource

Here’s the Deployment object generated by the JKube plugin for our Spring Boot application.

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    jkube.eclipse.org/scm-url: https://github.com/spring-projects/spring-boot/inner-dev-loop
    jkube.eclipse.org/scm-tag: HEAD
    jkube.eclipse.org/git-commit: 92b2e11f7ddb134323133aee0daa778135500113
    jkube.eclipse.org/git-url: https://github.com/piomin/kubernetes-quickstart.git
    jkube.eclipse.org/git-branch: master
  labels:
    app: inner-dev-loop
    provider: jkube
    version: 1.0-SNAPSHOT
    group: pl.piomin
    app.kubernetes.io/part-of: pl.piomin
    app.kubernetes.io/managed-by: jkube
    app.kubernetes.io/name: inner-dev-loop
    app.kubernetes.io/version: 1.0-SNAPSHOT
  name: inner-dev-loop
spec:
  replicas: 1
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: inner-dev-loop
      provider: jkube
      group: pl.piomin
      app.kubernetes.io/name: inner-dev-loop
      app.kubernetes.io/part-of: pl.piomin
      app.kubernetes.io/managed-by: jkube
  template:
    metadata:
      annotations:
        jkube.eclipse.org/scm-url: https://github.com/spring-projects/spring-boot/inner-dev-loop
        jkube.eclipse.org/scm-tag: HEAD
        jkube.eclipse.org/git-commit: 92b2e11f7ddb134323133aee0daa778135500113
        jkube.eclipse.org/git-url: https://github.com/piomin/kubernetes-quickstart.git
        jkube.eclipse.org/git-branch: master
      labels:
        app: inner-dev-loop
        provider: jkube
        version: 1.0-SNAPSHOT
        group: pl.piomin
        app.kubernetes.io/part-of: pl.piomin
        app.kubernetes.io/managed-by: jkube
        app.kubernetes.io/name: inner-dev-loop
        app.kubernetes.io/version: 1.0-SNAPSHOT
    spec:
      containers:
      - env:
        - name: DATABASE_PASS
          valueFrom:
            secretKeyRef:
              key: password
              name: person-db-postgresql
        - name: KUBERNETES_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: DATABASE_NAME
          value: spring
        - name: DATABASE_USER
          value: spring
        - name: HOSTNAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        image: piomin/inner-dev-loop:latest
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /actuator/health/liveness
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 180
          successThreshold: 1
        name: spring-boot
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        - containerPort: 9779
          name: prometheus
          protocol: TCP
        - containerPort: 8778
          name: jolokia
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /actuator/health/readiness
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 10
          successThreshold: 1
        securityContext:
          privileged: false
YAML

In order to deploy the application to Kubernetes, we need to add the k8s:apply goal to the previously executed command.

mvn clean package -DskipTests k8s:build k8s:resource k8s:apply
ShellSession

After that, JKube applies the generated YAML manifests to the cluster.

We can verify that the application is running by displaying the list of pods:

$ kubectl get pod
NAME                              READY   STATUS    RESTARTS   AGE
inner-dev-loop-5cbcf7dfc6-wfdr6   1/1     Running   0          17s
person-db-postgresql-0            1/1     Running   0          106m
ShellSession

It is also possible to display the application logs by executing the following command:

mvn k8s:log
ShellSession

Here’s the output after running the mvn k8s:log command:

spring-boot-jkube-log

We can also perform other operations, like undeploying the app from the Kubernetes cluster with the k8s:undeploy goal.

Final Thoughts

Eclipse JKube simplifies Spring Boot deployment on a Kubernetes cluster. Besides the presented features, it also provides mechanisms for the inner development loop with the k8s:watch and k8s:remote-dev goals.

The post Spring Boot on Kubernetes with Eclipse JKube appeared first on Piotr's TechBlog.

Multi-node Kubernetes Cluster with Minikube https://piotrminkowski.com/2024/07/09/multi-node-kubernetes-cluster-with-minikube/ https://piotrminkowski.com/2024/07/09/multi-node-kubernetes-cluster-with-minikube/#comments Tue, 09 Jul 2024 10:20:59 +0000 https://piotrminkowski.com/?p=15346 This article will teach you how to run and manage a multi-node Kubernetes cluster locally with Minikube. We will run this cluster on Docker. After that, we will enable some useful add-ons, install Kubernetes-native tools for monitoring and observability, and run a sample app that requires storage. You can compare this article with a similar […]

This article will teach you how to run and manage a multi-node Kubernetes cluster locally with Minikube. We will run this cluster on Docker. After that, we will enable some useful add-ons, install Kubernetes-native tools for monitoring and observability, and run a sample app that requires storage. You can compare this article with a similar post about the Azure Kubernetes Service.

Prerequisites

Before you begin, you need to install Docker on your local machine. Then you need to download and install Minikube. On macOS, we can do it using the Homebrew command as shown below:

$ brew install minikube
ShellSession

Once we have successfully installed Minikube, we can use its CLI. Let’s verify the version used in this article:

$ minikube version
minikube version: v1.33.1
commit: 5883c09216182566a63dff4c326a6fc9ed2982ff
ShellSession

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that, you need to clone my GitHub repository. This time, we won’t work much with the source code. However, you can access the repository with the sample Spring Boot app that uses storage exposed on the Kubernetes cluster. Once you clone the repository, go to the volumes/files-app directory. Then you should follow my instructions.

Create a Multi-node Kubernetes Cluster with Minikube

In order to create a multi-node Kubernetes cluster with Minikube, we need to use the --nodes or -n parameter in the minikube start command. Additionally, we can increase the default value of memory and CPUs reserved for the cluster with the --memory and --cpus parameters. Here’s the required command to execute:

$ minikube start --memory='12g' --cpus='4' -n 3
ShellSession

By the way, if you increase the resources assigned to the Minikube instance, you should also take care of resource reservations for Docker.

Once we run the minikube start command, the cluster creation begins. If everything goes fine, you should see a similar output.

minikube-kubernetes-create

Now, we can use Minikube with the kubectl tool:

$ kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:52879
CoreDNS is running at https://127.0.0.1:52879/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
ShellSession

We can display a list of running nodes:

$ kubectl get nodes
NAME           STATUS   ROLES           AGE     VERSION
minikube       Ready    control-plane   4h10m   v1.30.0
minikube-m02   Ready    <none>          4h9m    v1.30.0
minikube-m03   Ready    <none>          4h9m    v1.30.0
ShellSession

Sample Spring Boot App

Our Spring Boot app is simple. It exposes some REST endpoints for file-based operations on the target directory attached as a mounted volume. In order to expose REST endpoints, we need to include the Spring Boot Web starter. We will build the image using the Jib Maven plugin.

<properties>
  <spring-boot.version>3.3.1</spring-boot.version>
</properties>

<dependencies>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
  </dependency>
</dependencies>

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-dependencies</artifactId>
      <version>${spring-boot.version}</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>

<build>
  <plugins>
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
    </plugin>
    <plugin>
      <groupId>com.google.cloud.tools</groupId>
      <artifactId>jib-maven-plugin</artifactId>
      <version>3.4.3</version>
    </plugin>
  </plugins>
</build>
XML

Let’s take a look at the main @RestController in our app. It exposes endpoints for listing all the files inside the target directory (GET /files/all), another one for creating a new file (POST /files/{name}), and also for adding a new string line to the existing file (POST /files/{name}/line).

package pl.piomin.services.files.controller;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.web.bind.annotation.*;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.List;

import static java.nio.file.Files.list;
import static java.nio.file.Files.writeString;

@RestController
@RequestMapping("/files")
public class FilesController {

    private static final Logger LOG = LoggerFactory.getLogger(FilesController.class);

    @Value("${MOUNT_PATH:/mount/data}")
    String root;

    @GetMapping("/all")
    public List<String> files() throws IOException {
        return list(Path.of(root)).map(Path::toString).toList();
    }

    @PostMapping("/{name}")
    public String createFile(@PathVariable("name") String name) throws IOException {
        return Files.createFile(Path.of(root + "/" + name)).toString();
    }

    @PostMapping("/{name}/line")
    public void addLine(@PathVariable("name") String name,
                        @RequestBody String line) {
        try {
            writeString(Path.of(root + "/" + name), line, StandardOpenOption.APPEND);
        } catch (IOException e) {
            LOG.error("Error while writing to file", e);
        }
    }
}
Java

Usually, I deploy apps on Kubernetes with Skaffold. This time, however, there are some issues with the integration between the multi-node Minikube cluster and Skaffold. You can find a detailed description of those issues here. Therefore, we build the image directly with the Jib Maven plugin and then just run the app with the kubectl CLI.

Install Addons and Tools

Minikube comes with a set of predefined add-ons for Kubernetes. We can install each of them with a single minikube addons enable <ADDON_NAME> command. Although there are several addons available, we still need to install some useful Kubernetes-native tools, like Prometheus, using for example a Helm chart. In order to list all available addons, we should execute the following command:

$ minikube addons list
ShellSession

Install Addon for Storage

The default storage provider in Minikube doesn’t support multi-node clusters. It also doesn’t implement the CSI interface and is not able to handle volume snapshots. Fortunately, Minikube offers the csi-hostpath-driver addon for deploying the “CSI Hostpath Driver”. Since this addon is disabled by default, we need to enable it.

$ minikube addons enable csi-hostpath-driver
ShellSession

Then, we can set the csi-hostpath-driver as the default storage class for dynamic volume claims.

$ kubectl patch storageclass csi-hostpath-sc -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
$ kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
ShellSession

Install Monitoring Stack with Helm

The monitoring stack is not available as an add-on. However, we can easily install it using a Helm chart. We will use the official kube-prometheus-stack community chart for that. Firstly, let’s add the required repository.

$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
ShellSession

Then, we can install the Prometheus monitoring stack in the monitoring namespace by executing the following command:

$ helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  -n monitoring --create-namespace
ShellSession

Once you install Prometheus on your Minikube cluster, you can take advantage of several default metrics exposed by this tool. For example, the Lens IDE automatically integrates with Prometheus and displays graphs with the cluster overview.

minikube-kubernetes-cluster-metrics

We can also see the visualization of resource usage for all running pods, deployments, or stateful sets.

minikube-kubernetes-pod-metrics

Install Postgres with Helm

We will also install the Postgres database for multi-node cluster testing purposes. Once again, there is a Helm chart that simplifies Postgres installation on Kubernetes. It is published in the Bitnami repository. Firstly, let’s add the required repository:

$ helm repo add bitnami https://charts.bitnami.com/bitnami
ShellSession

Then, we can install Postgres in the db namespace. We increase the default number of instances to 3.

$ helm install postgresql bitnami/postgresql \
  --set readReplicas.replicaCount=3 \
  -n db --create-namespace
ShellSession

The chart creates the StatefulSet object with 3 replicas.

$ kubectl get statefulset -n db
NAME         READY   AGE
postgresql   3/3     55m
ShellSession

We can display a list of running pods. As you see, Kubernetes scheduled 2 pods on the minikube-m02 node, and a single pod on the minikube node.

$ kubectl get po -n db -o wide
NAME           READY   STATUS    RESTARTS   AGE   IP            NODE 
postgresql-0   1/1     Running   0          56m   10.244.1.9    minikube-m02
postgresql-1   1/1     Running   0          23m   10.244.1.10   minikube-m02
postgresql-2   1/1     Running   0          23m   10.244.0.4    minikube
ShellSession

Under the hood, there are 3 persistent volumes created. They use the default csi-hostpath-sc storage class and the RWO access mode.

$ kubectl get pvc -n db
NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      VOLUMEATTRIBUTESCLASS   AGE
data-postgresql-0   Bound    pvc-e9b55ce8-978a-44ae-8fab-d5d6f911f1f9   8Gi        RWO            csi-hostpath-sc   <unset>                 65m
data-postgresql-1   Bound    pvc-d93af9ad-a034-4fbb-8377-f39005cddc99   8Gi        RWO            csi-hostpath-sc   <unset>                 32m
data-postgresql-2   Bound    pvc-b683f1dc-4cd9-466c-9c99-eb0d356229c3   8Gi        RWO            csi-hostpath-sc   <unset>                 32m
ShellSession

Build and Deploy Sample Spring Boot App on Minikube

In the first step, we build the app image. We use the Jib Maven plugin for that. I’m pushing the image to my own Docker registry under the piomin name, so you should change it to your own registry account.

$ cd volumes/files-app
$ mvn clean compile jib:build -Dimage=piomin/files-app:latest
ShellSession

The image is successfully pushed to the remote registry and is available under the piomin/files-app:latest tag.

Let’s create a new namespace on Minikube. We will run our app in the demo namespace.

$ kubectl create ns demo
ShellSession

Then, let’s create the PersistentVolumeClaim. Since we will run multiple app pods distributed across all the Kubernetes nodes, and the same volume is shared between all the instances, we need the ReadWriteMany access mode.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
  namespace: demo
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
YAML

Let’s apply the manifest and verify that the PVC has been created and bound:

$ kubectl get pvc -n demo
NAME   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      VOLUMEATTRIBUTESCLASS   AGE
data   Bound    pvc-08fe242a-6599-4282-b03c-ee38e092431e   1Gi        RWX            csi-hostpath-sc
ShellSession

After that, we can deploy our app. In order to spread the pods across all the cluster nodes, we need to define the podAntiAffinity rule (1). It allows running only a single app pod on each node. The deployment also mounts the data volume into all the app pods (2) (3).

apiVersion: apps/v1
kind: Deployment
metadata:
  name: files-app
  namespace: demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: files-app
  template:
    metadata:
      labels:
        app: files-app
    spec:
      # (1)
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - files-app
              topologyKey: "kubernetes.io/hostname"
      containers:
      - name: files-app
        image: piomin/files-app:latest
        imagePullPolicy: Always
        resources:
          requests:
            memory: 200Mi
            cpu: 100m
        ports:
        - containerPort: 8080
        env:
          - name: MOUNT_PATH
            value: /mount/data
        # (2)
        volumeMounts:
          - name: data
            mountPath: /mount/data
      # (3)
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: data
YAML

After deploying the app, let’s verify the list of running pods.

$ kubectl get po -n demo
NAME                         READY   STATUS    RESTARTS   AGE
files-app-84897d9b57-5qqdr   0/1     Pending   0          36m
files-app-84897d9b57-7gwgp   1/1     Running   0          36m
files-app-84897d9b57-bjs84   0/1     Pending   0          36m
ShellSession

Although we created the RWX volume, only a single pod is running. As you can see, the CSI Hostpath Driver doesn’t fully support the read-write-many mode on Minikube.

In order to solve that problem, we can enable the Storage Provisioner Gluster addon in Minikube.

$ minikube addons enable storage-provisioner-gluster
ShellSession

After enabling it, several new pods are running in the storage-gluster namespace.

$ kubectl -n storage-gluster get pods
NAME                                       READY   STATUS    RESTARTS   AGE
glusterfile-provisioner-79cf7f87d5-87p57   1/1     Running   0          5m25s
glusterfs-d8pfp                            1/1     Running   0          5m25s
glusterfs-mp2qx                            1/1     Running   0          5m25s
glusterfs-rlnxz                            1/1     Running   0          5m25s
heketi-778d755cd-jcpqb                     1/1     Running   0          5m25s
ShellSession

Also, there is a new default StorageClass with the glusterfile name.

$ kubectl get sc
NAME                    PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
csi-hostpath-sc         hostpath.csi.k8s.io        Delete          Immediate           false                  20h
glusterfile (default)   gluster.org/glusterfile    Delete          Immediate           false                  19s
standard                k8s.io/minikube-hostpath   Delete          Immediate           false                  21h
ShellSession

Once we redeploy our app and recreate the PVC using a new default storage class, we can expose our sample Spring Boot app as a Kubernetes service:

apiVersion: v1
kind: Service
metadata:
  name: files-app
spec:
  selector:
    app: files-app
  ports:
  - port: 8080
    protocol: TCP
    name: http
  type: ClusterIP
YAML

Then, let’s enable port forwarding for that service to access it over the localhost:8080:

$ kubectl port-forward svc/files-app 8080 -n demo
ShellSession

Finally, we can run some tests to list and create some files on the target volume:

$ curl http://localhost:8080/files/all
[]

$ curl http://localhost:8080/files/test1.txt -X POST
/mount/data/test1.txt

$ curl http://localhost:8080/files/test2.txt -X POST
/mount/data/test2.txt

$ curl http://localhost:8080/files/all
["/mount/data/test1.txt","/mount/data/test2.txt"]

$ curl http://localhost:8080/files/test1.txt/line -X POST -d "hello1"
$ curl http://localhost:8080/files/test1.txt/line -X POST -d "hello2"
ShellSession

Finally, we can verify the content of a particular file inside the volume.
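Since the app appends each line with StandardOpenOption.APPEND and without a trailing newline, the two POST requests above leave the concatenated content hello1hello2 in the file. We can simulate that behavior locally, and check the real file through kubectl exec (the pod name in the commented command is taken from an earlier listing; yours will differ):

```shell
# Simulate what POST /files/{name} and POST /files/{name}/line do on the volume
dir=$(mktemp -d)
touch "$dir/test1.txt"                  # POST /files/test1.txt
printf 'hello1' >> "$dir/test1.txt"     # POST /files/test1.txt/line -d "hello1"
printf 'hello2' >> "$dir/test1.txt"     # POST /files/test1.txt/line -d "hello2"
cat "$dir/test1.txt"                    # → hello1hello2

# On the cluster, check the file directly inside one of the app pods:
# kubectl exec -n demo files-app-84897d9b57-7gwgp -- cat /mount/data/test1.txt
```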

Final Thoughts

In this article, I wanted to share my experience working with the multi-node Kubernetes cluster simulation on Minikube. It was a very quick introduction. I hope it helps 🙂

Migrate from Kubernetes to OpenShift in the GitOps Way https://piotrminkowski.com/2024/04/15/migrate-from-kubernetes-to-openshift-in-the-gitops-way/ https://piotrminkowski.com/2024/04/15/migrate-from-kubernetes-to-openshift-in-the-gitops-way/#comments Mon, 15 Apr 2024 12:09:50 +0000 https://piotrminkowski.com/?p=15190 In this article, you will learn how to migrate your apps from Kubernetes to OpenShift in the GitOps way using tools like Kustomize, Helm, operators, and Argo CD. We will discuss the best practices in that area. This requires us to avoid approaches like starting a pod in the privileged mode. We will focus not […]

In this article, you will learn how to migrate your apps from Kubernetes to OpenShift in the GitOps way using tools like Kustomize, Helm, operators, and Argo CD. We will discuss the best practices in that area. This requires us to avoid approaches like starting a pod in the privileged mode. We will focus not just on running your custom apps, but mostly on the popular pieces of cloud-native or legacy software including:

  • Argo CD
  • Istio
  • Apache Kafka
  • Postgres
  • HashiCorp Vault
  • Prometheus
  • Redis
  • Cert Manager

Finally, we will migrate our sample Spring Boot app. I will also show you how to build such an app on Kubernetes and OpenShift in the same way using the Shipwright tool. However, before we start, let’s discuss some differences between “vanilla” Kubernetes and OpenShift.

Introduction

What are the key differences between Kubernetes and OpenShift? That’s probably the first question you will ask yourself when considering migration from Kubernetes. Today, I will focus only on those aspects that impact running the apps from our list. First of all, OpenShift is built on top of Kubernetes and is fully compatible with Kubernetes APIs and resources. If you can do something on Kubernetes, you can do it on OpenShift in the same way, as long as it doesn’t compromise the security policy. OpenShift comes with additional security policies out of the box. For example, by default, it won’t allow you to run containers with the root user.

Security aside, the mere fact that you can do something doesn’t mean that you should do it that way. So, while you can run images from Docker Hub, Red Hat provides many supported container images built from Red Hat Enterprise Linux. You can find a full list of supported images here. Although you can install popular software on OpenShift using Helm charts, Red Hat provides various supported Kubernetes operators for that. With those operators, you can be sure that the installation will go without any problems and that the solution is better integrated with OpenShift. We will analyze all those things based on the examples from the tools list.

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that, you need to clone my GitHub repository. I will explain the structure of our sample in detail later. After cloning the Git repository, you should just follow my instructions.

Install Argo CD

Use Official Helm Chart

In the first step, we will install Argo CD on OpenShift. I’m assuming that on Kubernetes, you’re using the official Helm chart for that. In order to install that chart, we need to add the following Helm repository:

$ helm repo add argo https://argoproj.github.io/argo-helm
ShellSession

Then, we can install Argo CD in the argocd namespace on OpenShift with the following command. The Argo CD Helm chart provides some parameters dedicated to OpenShift. We need to enable arbitrary UIDs for the repo server by setting the openshift.enabled property to true. If we want to access the Argo CD dashboard from outside the cluster, we should expose it as a Route. In order to do that, we need to enable the server.route.enabled property and set the hostname using the server.route.hostname parameter (piomin.eastus.aroapp.io is my OpenShift domain).

$ helm install argocd argo/argo-cd -n argocd --create-namespace \
    --set openshift.enabled=true \
    --set server.route.enabled=true \
    --set server.route.hostname=argocd.apps.piomin.eastus.aroapp.io
ShellSession

After that, we can access the Argo CD dashboard using the Route address as shown below. The admin user password may be taken from the argocd-initial-admin-secret Secret generated by the Helm chart.
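The password is stored base64-encoded under the password key of that Secret, so retrieving it boils down to a jsonpath lookup plus a base64 decode. A hedged sketch (the Secret and key names assume the chart defaults):

```shell
# On a real cluster you would run (assumption: default chart settings):
#   oc get secret argocd-initial-admin-secret -n argocd \
#     -o jsonpath='{.data.password}' | base64 -d
# The decode step itself, demonstrated on a sample value (not a real password):
encoded=$(printf 'swordfish' | base64)   # what the Secret actually stores
printf '%s' "$encoded" | base64 -d       # what you paste into the login form
echo
```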

Use the OpenShift GitOps Operator (Recommended Way)

The solution presented in the previous section works fine. However, it is not the optimal approach for OpenShift. In that case, the better idea is to use the OpenShift GitOps Operator. Firstly, we should find the "Red Hat OpenShift GitOps" operator in the "Operator Hub" section of the OpenShift Console. Then, we have to install the operator.

During the installation, the operator automatically creates the Argo CD instance in the openshift-gitops namespace.

The OpenShift GitOps operator automatically exposes the Argo CD dashboard through a Route. It is also integrated with OpenShift auth, so we can use cluster credentials to sign in.

kubernetes-to-openshift-argocd

Install Redis, Postgres and Apache Kafka

OpenShift Support in Bitnami Helm Charts

Firstly, let's assume that we use Bitnami Helm charts to install all three tools from the section title (Redis, Postgres, Kafka) on Kubernetes. Fortunately, the latest versions of the Bitnami Helm charts provide out-of-the-box compatibility with the OpenShift platform. Let's analyze what that means.

Beginning with version 4.11, OpenShift introduces a new Security Context Constraint (SCC) called restricted-v2. In OpenShift, security context constraints allow us to control the permissions assigned to pods. The restricted-v2 SCC includes the minimal set of privileges usually required for a generic workload to run. It is the most restrictive policy that matches the current Pod Security Standards. As I mentioned before, the latest versions of the most popular Bitnami Helm charts support the restricted-v2 SCC. We can check which charts support this feature by verifying whether they provide the global.compatibility.openshift.adaptSecurityContext parameter. The default value of that parameter is auto, which means it is applied only if the detected running cluster is OpenShift.
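As a hedged illustration, the switch can also be pinned explicitly in a values file instead of relying on auto-detection:

```yaml
# Hypothetical values fragment: force the OpenShift-compatible security
# contexts regardless of the detected cluster type.
global:
  compatibility:
    openshift:
      adaptSecurityContext: force   # auto | force | disabled
```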

So, in short, we don't have to change anything in the Helm chart configuration used on Kubernetes to make it also work on OpenShift. However, that doesn't mean we won't change the configuration at all. Let's analyze it tool by tool.

Install Redis on OpenShift with Helm Chart

In the first step, let’s add the Bitnami Helm repository with the following command:

$ helm repo add bitnami https://charts.bitnami.com/bitnami
ShellSession

Then, we can install a Redis cluster with a single master node and three replicas in the redis namespace using the following command:

$ helm install redis bitnami/redis -n redis --create-namespace
ShellSession

After installing the chart, we can display the list of pods running in the redis namespace:

$ oc get po
NAME               READY   STATUS    RESTARTS   AGE
redis-master-0     1/1     Running   0          5m31s
redis-replicas-0   1/1     Running   0          5m31s
redis-replicas-1   1/1     Running   0          4m44s
redis-replicas-2   1/1     Running   0          4m3s
ShellSession

Let's take a look at the securityContext section inside one of the Redis cluster pods. It contains fields characteristic of the restricted-v2 SCC, which removes runAsUser, runAsGroup, and fsGroup and lets the platform use its allowed default IDs.
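A sketch of what this typically looks like in the rendered pod spec (illustrative; the concrete UID is assigned from the project's allowed range by the platform):

```yaml
# Illustrative container securityContext under the restricted-v2 SCC:
# note the absence of explicit runAsUser/runAsGroup/fsGroup values.
securityContext:
  runAsNonRoot: true
  allowPrivilegeEscalation: false
  capabilities:
    drop: ["ALL"]
  seccompProfile:
    type: RuntimeDefault
```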

kubernetes-to-openshift-security-context

However, let’s stop for a moment to analyze the current situation. We installed Redis on OpenShift using the Bitnami Helm chart. By default, this chart is based on the Redis Debian image provided by Bitnami in the Docker Hub.

On the other hand, Red Hat provides its build of Redis image based on RHEL 9. Consequently, this image would be more suitable for running on OpenShift.

kubernetes-to-openshift-redis

In order to use a different Redis image with the Bitnami Helm chart, we need to override the registry, repository, and tag fields in the image section. The full address of the current latest Red Hat Redis image is registry.redhat.io/rhel9/redis-7:1-16. In order to make the Bitnami chart work with that image, we also need to override the default data path to /var/lib/redis/data and disable the read-only root filesystem in the container security context for the replica pods.

image:
  tag: 1-16
  registry: registry.redhat.io
  repository: rhel9/redis-7

master:
  persistence:
    path: /var/lib/redis/data

replica:
  persistence:
    path: /var/lib/redis/data
  containerSecurityContext:
    readOnlyRootFilesystem: false
YAML

Install Postgres on OpenShift with Helm Chart

With Postgres, everything is very similar to Redis. The Bitnami Helm chart also supports the OpenShift restricted-v2 SCC, and Red Hat provides a Postgres image based on RHEL 9. Once again, we need to override some chart parameters to adapt to an image different from the default one provided by Bitnami.

image:
  tag: 1-54
  registry: registry.redhat.io
  repository: rhel9/postgresql-15

primary:
  containerSecurityContext:
    readOnlyRootFilesystem: false
  persistence:
    mountPath: /var/lib/pgsql
  extraEnvVars:
    - name: POSTGRESQL_ADMIN_PASSWORD
      value: postgresql123

postgresqlDataDir: /var/lib/pgsql/data
YAML

Of course, we can consider switching to one of the available Postgres operators. From the "Operator Hub" section we can install Postgres using, for example, the Crunchy Data or EDB operators. However, these are not operators provided by Red Hat. You can use them on "vanilla" Kubernetes as well, in which case the migration to OpenShift also won't be complicated.

Install Kafka on OpenShift with the Strimzi Operator

The situation is slightly different in the case of Apache Kafka. Of course, we can use the Kafka Helm chart provided by Bitnami. However, Red Hat provides a supported version of Kafka through the Strimzi operator. This operator is a part of the Red Hat product ecosystem and is available commercially as AMQ Streams. In order to install Kafka with AMQ Streams on OpenShift, we need to install the operator first.

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: amq-streams
  namespace: openshift-operators
  annotations:
    argocd.argoproj.io/sync-wave: "2"
spec:
  channel: stable
  installPlanApproval: Automatic
  name: amq-streams
  source: redhat-operators
  sourceNamespace: openshift-marketplace
YAML

Once we install the operator with the Strimzi CRDs, we can provision a Kafka instance on OpenShift. In order to do that, we need to define the Kafka object. The name of the cluster is my-cluster. We should install it only after a successful installation of the operator CRDs, so we set a higher value of the Argo CD sync-wave parameter than for the amq-streams Subscription object. Argo CD should also ignore missing CRDs installed by the operator during sync, thanks to the SkipDryRunOnMissingResource option.

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  namespace: kafka
  annotations:
    argocd.argoproj.io/sync-wave: "3"
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
spec:
  kafka:
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
      default.replication.factor: 3
      min.insync.replicas: 2
      inter.broker.protocol.version: '3.6'
    storage:
      type: persistent-claim
      size: 5Gi
      deleteClaim: true
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
    version: 3.6.0
    replicas: 3
  entityOperator:
    topicOperator: {}
    userOperator: {}
  zookeeper:
    storage:
      type: persistent-claim
      deleteClaim: true
      size: 2Gi
    replicas: 3
YAML
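Because the Kafka object above enables the topic and user entity operators, topics can be managed declaratively as well. A hedged example KafkaTopic for the my-cluster instance (the topic name and sizing are hypothetical):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: orders                       # hypothetical topic name
  namespace: kafka
  labels:
    strimzi.io/cluster: my-cluster   # must match the name of the Kafka object
spec:
  partitions: 3
  replicas: 3
```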

GitOps Strategy for Kubernetes and OpenShift

In this section, we will focus on comparing the differences in GitOps manifests between Kubernetes and OpenShift. We will use Kustomize to configure two overlays: openshift and kubernetes. Here's the structure of our configuration repository:

.
├── base
│   ├── kustomization.yaml
│   └── namespaces.yaml
└── overlays
    ├── kubernetes
    │   ├── kustomization.yaml
    │   ├── namespaces.yaml
    │   ├── values-cert-manager.yaml
    │   └── values-vault.yaml
    └── openshift
        ├── cert-manager-operator.yaml
        ├── kafka-operator.yaml
        ├── kustomization.yaml
        ├── service-mesh-operator.yaml
        ├── values-postgres.yaml
        ├── values-redis.yaml
        └── values-vault.yaml
ShellSession

Configuration for Kubernetes

In addition to the previously discussed tools, we will also install “cert-manager”, Prometheus, and Vault using Helm charts. Kustomize allows us to define a list of managed charts using the helmCharts section. Here’s the kustomization.yaml file containing a full set of installed charts:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../../base
  - namespaces.yaml

helmCharts:
  - name: redis
    repo: https://charts.bitnami.com/bitnami
    releaseName: redis
    namespace: redis
  - name: postgresql
    repo: https://charts.bitnami.com/bitnami
    releaseName: postgresql
    namespace: postgresql
  - name: kafka
    repo: https://charts.bitnami.com/bitnami
    releaseName: kafka
    namespace: kafka
  - name: cert-manager
    repo: https://charts.jetstack.io
    releaseName: cert-manager
    namespace: cert-manager
    valuesFile: values-cert-manager.yaml
  - name: vault
    repo: https://helm.releases.hashicorp.com
    releaseName: vault
    namespace: vault
    valuesFile: values-vault.yaml
  - name: prometheus
    repo: https://prometheus-community.github.io/helm-charts
    releaseName: prometheus
    namespace: prometheus
  - name: istiod
    repo: https://istio-release.storage.googleapis.com/charts
    releaseName: istio
    namespace: istio-system
overlays/kubernetes/kustomization.yaml

For some of them, we need to override the default Helm parameters. Here's the values-vault.yaml file with the parameters for Vault. We enable the development mode and the UI dashboard:

server:
  dev:
    enabled: true
ui:
  enabled: true
overlays/kubernetes/values-vault.yaml

Let’s also customize the default behavior of the “cert-manager” chart with the following values:

installCRDs: true
startupapicheck:
  enabled: false
overlays/kubernetes/values-cert-manager.yaml

Configuration for OpenShift

Then, we can switch to the configuration for OpenShift. Vault still has to be installed with the Helm chart, but for "cert-manager" we can use the operator provided by Red Hat. Since OpenShift comes with built-in Prometheus, we don't need to install it. We will also replace the Istio Helm chart with the Red Hat-supported OpenShift Service Mesh operator. Here's the kustomization.yaml for OpenShift:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../../base
  - kafka-operator.yaml
  - cert-manager-operator.yaml
  - service-mesh-operator.yaml

helmCharts:
  - name: redis
    repo: https://charts.bitnami.com/bitnami
    releaseName: redis
    namespace: redis
    valuesFile: values-redis.yaml
  - name: postgresql
    repo: https://charts.bitnami.com/bitnami
    releaseName: postgresql
    namespace: postgresql
    valuesFile: values-postgres.yaml
  - name: vault
    repo: https://helm.releases.hashicorp.com
    releaseName: vault
    namespace: vault
    valuesFile: values-vault.yaml
overlays/openshift/kustomization.yaml

For Vault, we should enable integration with OpenShift and support for the Route object. Red Hat provides a Vault image based on UBI in the registry.connect.redhat.com/hashicorp/vault registry. Here's the values-vault.yaml file for OpenShift:

server:
  dev:
    enabled: true
  route:
    enabled: true
    host: ""
    tls: null
  image:
    repository: "registry.connect.redhat.com/hashicorp/vault"
    tag: "1.16.1-ubi"
global:
  openshift: true
injector:
  enabled: false
overlays/openshift/values-vault.yaml

In order to install operators, we need to define at least the Subscription object. Here's the subscription for OpenShift Service Mesh. After installing the operator, we can create a control plane in the istio-system namespace using the ServiceMeshControlPlane CRD object. In order to apply that CRD only after the operator has been installed, we need to use Argo CD sync waves and define the SkipDryRunOnMissingResource parameter:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: servicemeshoperator
  namespace: openshift-operators
  annotations:
    argocd.argoproj.io/sync-wave: "2"
spec:
  channel: stable
  installPlanApproval: Automatic
  name: servicemeshoperator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
---
apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: basic
  namespace: istio-system
  annotations:
    argocd.argoproj.io/sync-wave: "3"
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
spec:
  tracing:
    type: None
    sampling: 10000
  policy:
    type: Istiod
  addons:
    grafana:
      enabled: false
    jaeger:
      install:
        storage:
          type: Memory
    kiali:
      enabled: false
    prometheus:
      enabled: false
  telemetry:
    type: Istiod
  version: v2.5
overlays/openshift/service-mesh-operator.yaml

Since the “cert-manager” operator is installed in a different namespace than openshift-operators, we also need to define the OperatorGroup object.

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-cert-manager-operator
  namespace: cert-manager
  annotations:
    argocd.argoproj.io/sync-wave: "2"
spec:
  channel: stable-v1
  installPlanApproval: Automatic
  name: openshift-cert-manager-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
---
apiVersion: operators.coreos.com/v1alpha2
kind: OperatorGroup
metadata:
  name: cert-manager-operator
  namespace: cert-manager
  annotations:
    argocd.argoproj.io/sync-wave: "2"
spec:
  targetNamespaces:
    - cert-manager
overlays/openshift/cert-manager-operator.yaml

Finally, OpenShift comes with built-in Prometheus monitoring, so we don’t need to install it.

Apply the Configuration with Argo CD

Here’s the Argo CD Application responsible for installing our sample configuration on OpenShift. We should create it in the openshift-gitops namespace.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: install
  namespace: openshift-gitops
spec:
  destination:
    server: 'https://kubernetes.default.svc'
  project: default
  source:
    path: overlays/openshift
    repoURL: 'https://github.com/piomin/kubernetes-to-openshift-argocd.git'
    targetRevision: HEAD
YAML
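The Application above requires a manual sync. If you prefer a fully automated GitOps flow, a hedged optional addition to its spec is a sync policy (field names follow the Argo CD Application API):

```yaml
# Optional fragment to add under spec: sync automatically, prune removed
# resources, and revert manual drift.
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
```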

Before that, we need to enable the Helm chart inflator generator for Kustomize in Argo CD. In order to do that, we can add the kustomizeBuildOptions parameter to the openshift-gitops ArgoCD object, as shown below.

apiVersion: argoproj.io/v1beta1
kind: ArgoCD
metadata:
  name: openshift-gitops
  namespace: openshift-gitops
spec:
  # ...
  kustomizeBuildOptions: '--enable-helm'
YAML

After creating the Argo CD Application and triggering the sync process, the installation starts on OpenShift.

kubernetes-to-openshift-gitops

Build App Images

We have installed several software solutions, including the most popular databases, message brokers, and security tools. However, now we want to build and run our own apps. How do we migrate them from Kubernetes to OpenShift? Of course, we can run the app images in exactly the same way as on Kubernetes. On the other hand, we can build them on OpenShift using the Shipwright project. We can install it on OpenShift using the "Builds for Red Hat OpenShift" operator.

kubernetes-to-openshift-shipwright

After that, we need to create the ShipwrightBuild object. It needs to contain the name of the target namespace for running Shipwright in the targetNamespace field. In my case, the target namespace is builds-demo. For a detailed description of Shipwright builds, you can refer to this article on my blog.

apiVersion: operator.shipwright.io/v1alpha1
kind: ShipwrightBuild
metadata:
  name: openshift-builds
spec:
  targetNamespace: builds-demo
YAML

With Shipwright, we can easily switch between multiple build strategies on Kubernetes and OpenShift alike. For example, on OpenShift we can use the built-in source-to-image (S2I) strategy, while on Kubernetes we can use e.g. Kaniko or Cloud Native Buildpacks.

apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: sample-spring-kotlin-build
  namespace: builds-demo
spec:
  output:
    image: quay.io/pminkows/sample-kotlin-spring:1.0-shipwright
    pushSecret: pminkows-piomin-pull-secret
  source:
    git:
      url: https://github.com/piomin/sample-spring-kotlin-microservice.git
  strategy:
    name: source-to-image
    kind: ClusterBuildStrategy
YAML
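The Build object above only describes how to build; to execute it, you create a BuildRun that references it. A hedged sketch assuming the same v1beta1 API:

```yaml
apiVersion: shipwright.io/v1beta1
kind: BuildRun
metadata:
  generateName: sample-spring-kotlin-build-   # a fresh name per run
  namespace: builds-demo
spec:
  build:
    name: sample-spring-kotlin-build
```

Since generateName produces a new name each time, create it with oc create -f rather than oc apply.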

Final Thoughts

Migration from Kubernetes to OpenShift is not a painful process. Many popular Helm charts support the OpenShift restricted-v2 SCC. Thanks to that, in some cases, you don't need to change anything. However, sometimes it's worth switching to the version of a particular tool supported by Red Hat.

The post Migrate from Kubernetes to OpenShift in the GitOps Way appeared first on Piotr's TechBlog.

Slim Docker Images for Java https://piotrminkowski.com/2023/11/07/slim-docker-images-for-java/ https://piotrminkowski.com/2023/11/07/slim-docker-images-for-java/#comments Tue, 07 Nov 2023 21:26:26 +0000 https://piotrminkowski.com/?p=14643 In this article, you will learn how to build slim Docker images for your Java apps using Alpine Linux and the jlink tool. We will leverage the latest Java 21 base images provided by Eclipse Temurin and BellSoft Liberica. We are going to compare those providers with Alpaquita Linux also delivered by BellSoft. That comparison […]

In this article, you will learn how to build slim Docker images for your Java apps using Alpine Linux and the jlink tool. We will leverage the latest Java 21 base images provided by Eclipse Temurin and BellSoft Liberica. We are going to compare those providers with Alpaquita Linux also delivered by BellSoft. That comparison will also include security scoring based on the number of vulnerabilities. As an example, we will use a simple Spring Boot app that exposes some REST endpoints.

If you are interested in Java in the containerization context, you may find some similar articles on my blog. For example, you can read how to speed up Java startup on Kubernetes with CRaC in this post. There is also an article comparing different JDK providers used by Paketo Buildpacks for running Java apps.

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. Then you need to go to the spring-microservice directory. After that, you should just follow my instructions.

Introduction

I probably don't need to convince anyone that keeping Docker images slim and light is important. It speeds up the build process and the deployment of containers. Decreasing the image size and removing unnecessary files eliminates vulnerable components and therefore reduces the risk of security issues. Usually, the first step to reduce the target image size is to choose a small base image. Our choice will not be surprising: Alpine Linux. It is a Linux distribution built around musl libc and BusyBox. The base image is only about 5 MB.

Java itself also consumes some space inside the image. Fortunately, we can reduce that size by using the jlink tool. With jlink, we can choose only the modules required by our app and link them into a runtime image. Our main goal today is to create the smallest possible Docker image for our sample Spring Boot app.

Sample Spring Boot App

As I mentioned before, our Java app is not complicated. It uses the Spring Boot Web Starter to expose REST endpoints over HTTP. I made some small improvements in the dependencies. Tomcat has been replaced with Undertow to reduce the target JAR file size. I also imported the latest version of the org.yaml:snakeyaml library to avoid a CVE issue related to the 1.x releases of that project. Of course, I'm using Java 21 for compilation:

<properties>
  <java.version>21</java.version>
</properties>

<dependencies>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <exclusions>
      <exclusion>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-tomcat</artifactId>
      </exclusion>
    </exclusions>
  </dependency>
  <dependency>
    <groupId>org.yaml</groupId>
    <artifactId>snakeyaml</artifactId>
    <version>2.2</version>
  </dependency>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-undertow</artifactId>
  </dependency>
</dependencies>

Here’s the implementation of the @RestController responsible for exposing several endpoints:

@RestController
@RequestMapping("/persons")
public class Api {

   protected Logger logger = Logger.getLogger(Api.class.getName());

   private List<Person> persons;

   public Api() {
      persons = new ArrayList<>();
      persons.add(new Person(1, "Jan", "Kowalski", 22));
      persons.add(new Person(2, "Adam", "Malinowski", 33));
      persons.add(new Person(3, "Tomasz", "Janowski", 25));
      persons.add(new Person(4, "Alina", "Iksińska", 54));
   }

   @GetMapping
   public List<Person> findAll() {
      logger.info("Api.findAll()");
      return persons;
   }

   @GetMapping("/{id}")
   public Person findById(@PathVariable("id") Integer id) {
      logger.info(String.format("Api.findById(%d)", id));
      return persons.stream()
                    .filter(p -> (p.getId().intValue() == id))
                    .findAny()
                    .orElseThrow();
   }

}

In the next step, we will prepare and build several Docker images for our Java app and compare them with each other.

Build Alpine Image with BellSoft Liberica OpenJDK

Let's take a look at the Dockerfile. We are using a feature called multi-stage Docker builds. In the first stage, we build the Java runtime for our app (1). We download and unpack the latest LTS version of OpenJDK from BellSoft (2). We need a release targeted for Alpine Linux (with the musl suffix). Then, we run the jlink command to create a custom JDK runtime image (3). In order to run the app, we need to include at least the following Java modules: java.base, java.logging, java.naming, java.desktop, jdk.unsupported (4). You can verify the list of required modules by running the jdeps command, e.g. on your JAR file. The jlink tool will place our custom JDK runtime in the springboot-runtime directory (the --output parameter).
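The jdeps step can be sketched as follows. Note that a Spring Boot fat JAR usually has to be unpacked first so jdeps can see the application classes and the libraries under BOOT-INF/lib; the paths and flags below are assumptions, not the article's exact commands:

```shell
# Hypothetical sketch: derive the --add-modules list for jlink with jdeps.
# Falls back to a message when no JDK or JAR is available.
JAR=target/spring-microservice-1.0-SNAPSHOT.jar
if command -v jdeps >/dev/null 2>&1 && [ -f "$JAR" ]; then
  rm -rf /tmp/app && mkdir -p /tmp/app
  (cd /tmp/app && jar -xf "$OLDPWD/$JAR")       # unpack the fat JAR
  jdeps --ignore-missing-deps --multi-release 21 --print-module-deps \
        --class-path '/tmp/app/BOOT-INF/lib/*' /tmp/app/BOOT-INF/classes
else
  echo "jdeps or $JAR not found - run this inside the build stage instead"
fi
```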

Finally, we can proceed to the main stage of the image build (5). We place the optimized version of the JDK in the /opt/jdk path by copying it from the directory created during the previous build stage (6). Then we just run the app using the java -jar command.

# (1)
FROM alpine:latest AS build 
ENV JAVA_HOME /opt/jdk/jdk-21.0.1
ENV PATH $JAVA_HOME/bin:$PATH

# (2)
ADD https://download.bell-sw.com/java/21.0.1+12/bellsoft-jdk21.0.1+12-linux-x64-musl.tar.gz /opt/jdk/
RUN tar -xzvf /opt/jdk/bellsoft-jdk21.0.1+12-linux-x64-musl.tar.gz -C /opt/jdk/

# (3)
RUN ["jlink", "--compress=2", \
     "--module-path", "/opt/jdk/jdk-21.0.1/jmods/", \
# (4)
     "--add-modules", "java.base,java.logging,java.naming,java.desktop,jdk.unsupported", \
     "--no-header-files", "--no-man-pages", \
     "--output", "/springboot-runtime"]

# (5)
FROM alpine:latest
# (6)
COPY --from=build  /springboot-runtime /opt/jdk 
ENV PATH=$PATH:/opt/jdk/bin
EXPOSE 8080
COPY ../target/spring-microservice-1.0-SNAPSHOT.jar /opt/app/
CMD ["java", "-showversion", "-jar", "/opt/app/spring-microservice-1.0-SNAPSHOT.jar"]

Let’s build the image by executing the following command. We are tagging the image with bellsoft and preparing it for pushing to the quay.io registry:

$ docker build -t quay.io/pminkows/spring-microservice:bellsoft . 

Here’s the result:

We can examine the image using the dive tool. If you don’t have any previous experience with dive CLI you can read more about it here. We need to run the following command to analyze the current image:

$ dive quay.io/pminkows/spring-microservice:bellsoft

Here's the result. As you can see, our image is 114 MB. Java consumes 87 MB, the app JAR file 20 MB, and Alpine Linux 7.3 MB. You can also take a look at the list of modules and the whole directory structure.

docker-images-java-dive

In the end, let’s push our image to the Quay registry. Quay will automatically perform a security scan of the image. We will discuss it later.

$ docker push quay.io/pminkows/spring-microservice:bellsoft

Build Alpine Image with Eclipse Temurin OpenJDK

Are you still not satisfied with the image size? Me too. I expected something below 100 MB. Let's experiment a little bit. I will use almost the same Dockerfile as before, but instead of BellSoft Liberica, I will download and optimize the Eclipse Temurin OpenJDK for Alpine Linux. Here's the current Dockerfile. As you can see, the only difference is in the JDK download URL.

FROM alpine:latest AS build
ENV JAVA_HOME /opt/jdk/jdk-21.0.1+12
ENV PATH $JAVA_HOME/bin:$PATH

ADD https://github.com/adoptium/temurin21-binaries/releases/download/jdk-21.0.1%2B12/OpenJDK21U-jdk_x64_alpine-linux_hotspot_21.0.1_12.tar.gz /opt/jdk/
RUN tar -xzvf /opt/jdk/OpenJDK21U-jdk_x64_alpine-linux_hotspot_21.0.1_12.tar.gz -C /opt/jdk/
RUN ["jlink", "--compress=2", \
     "--module-path", "/opt/jdk/jdk-21.0.1+12/jmods/", \
     "--add-modules", "java.base,java.logging,java.naming,java.desktop,jdk.unsupported", \
     "--no-header-files", "--no-man-pages", \
     "--output", "/springboot-runtime"]

FROM alpine:latest
COPY --from=build  /springboot-runtime /opt/jdk
ENV PATH=$PATH:/opt/jdk/bin
EXPOSE 8080
COPY ../target/spring-microservice-1.0-SNAPSHOT.jar /opt/app/
CMD ["java", "-showversion", "-jar", "/opt/app/spring-microservice-1.0-SNAPSHOT.jar"]

As before, we build the image. This time we tag it with temurin. We also need to point to a non-default Dockerfile, since this variant lives in Dockerfile_temurin:

$ docker build -f Dockerfile_temurin \
    -t quay.io/pminkows/spring-microservice:temurin .

Once the image is ready we can proceed to the next steps:

Let’s analyze it with the dive tool:

$ dive quay.io/pminkows/spring-microservice:temurin

The results look much better. The difference is, of course, in the JDK size. It takes just 64 MB instead of 87 MB as with Liberica. The total image size is 91 MB.

Finally, let’s push the image to the Quay registry for the security score comparison:

$ docker push quay.io/pminkows/spring-microservice:temurin

Build Image with BellSoft Alpaquita

BellSoft Alpaquita is a relatively new solution introduced in 2022. It is advertised as a full-featured operating system optimized for Java. We can use Alpaquita Linux in combination with Liberica JDK Lite. This time we won't create a custom JDK runtime; instead, we will use the ready-made image provided by BellSoft in their registry: bellsoft/liberica-runtime-container:jdk-21-slim-musl. It is built on top of Alpaquita Linux. Here's our Dockerfile:

FROM bellsoft/liberica-runtime-container:jdk-21-slim-musl
COPY target/spring-microservice-1.0-SNAPSHOT.jar /opt/app/
EXPOSE 8080
CMD ["java", "-showversion", "-jar", "/opt/app/spring-microservice-1.0-SNAPSHOT.jar"]

Let's build the image. The current Dockerfile is available in the repository as Dockerfile_alpaquita:

$ docker build -f Dockerfile_alpaquita \
    -t quay.io/pminkows/spring-microservice:alpaquita .

Here’s the build result:

Let's examine our image with dive once again. The current image is 125 MB. Of course, that is more than the two previous images, but still not much.

Finally, let’s push the image to the Quay registry for the security score comparison:

$ docker push quay.io/pminkows/spring-microservice:alpaquita

Now, we can switch to quay.io. In the repository view, we can compare the results of security scanning for all three images. As you can see, there are no detected vulnerabilities for the image tagged with alpaquita, and two issues for each of the other two images.

docker-images-java-quay

Paketo Buildpacks for Alpaquita

BellSoft provides a dedicated buildpack based on the Alpaquita image. As you probably know, Spring Boot offers the ability to integrate the build process with Paketo Buildpacks through the spring-boot-maven-plugin. The plugin configuration in the Maven pom.xml is visible below. We need to set bellsoft/buildpacks.builder:musl as the builder image. We can also enable jlink optimization by setting the environment variable BP_JVM_JLINK_ENABLED to true. In order to make the build work, I had to decrease the Java version to 17.

<plugin>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-maven-plugin</artifactId>
  <configuration>
    <image>
      <name>quay.io/pminkows/spring-microservice:alpaquita-pack</name>
      <builder>bellsoft/buildpacks.builder:musl</builder>
      <env>
        <BP_JVM_VERSION>17</BP_JVM_VERSION>
        <BP_JVM_JLINK_ENABLED>true</BP_JVM_JLINK_ENABLED>
        <BP_JVM_JLINK_ARGS>--no-man-pages --no-header-files --strip-debug --compress=2 --add-modules java.base,java.logging,java.naming,java.desktop,jdk.unsupported</BP_JVM_JLINK_ARGS>
      </env>
    </image>
  </configuration>
</plugin>

Let’s build the image with the following command:

$ mvn clean spring-boot:build-image

You should have a similar output if everything finishes successfully:

docker-images-java-buildpacks

After that, we can examine the image with the dive CLI. I was able to get an even smaller image than for the corresponding Dockerfile-based build with the Alpine image and BellSoft Liberica OpenJDK (103 MB vs 114 MB). However, keep in mind that this build used JDK 17 instead of JDK 21.

Finally, let’s push the image to the Quay registry:

$ docker push quay.io/pminkows/spring-microservice:alpaquita-pack

Security Scans of Java Docker Images

We can use a more advanced tool for security scanning than Quay. Personally, I’m using Advanced Cluster Security for Kubernetes. It can be used not only to monitor containers running on the Kubernetes cluster but also to watch particular images in the selected registry. We can add all our previously built images in the “Manage watched images” section.

Here’s the security report for all our Java Docker images. It looks very good. There is only one security issue detected, affecting both Alpine-based images. There are no CVEs found for the Alpaquita-based images.

docker-images-java-acs

We can get into the details of every CVE. The issue detected for both images tagged with temurin and bellsoft is related to the jackson-databind Java library used by the Spring Web dependency.

Final Thoughts

As you can see, we can easily create slim Docker images for Java apps without any advanced tools. The size of such an image can be even lower than 100MB (including a ~20MB JAR file). BellSoft Alpaquita is also a very interesting alternative to Alpine Linux. We can use it with Paketo Buildpacks and take advantage of Spring Boot’s support for building images with CNB.

The post Slim Docker Images for Java appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2023/11/07/slim-docker-images-for-java/feed/ 8 14643
Spring Cloud Kubernetes with Spring Boot 3 https://piotrminkowski.com/2023/06/08/spring-cloud-kubernetes-with-spring-boot-3/ https://piotrminkowski.com/2023/06/08/spring-cloud-kubernetes-with-spring-boot-3/#comments Thu, 08 Jun 2023 08:29:52 +0000 https://piotrminkowski.com/?p=14232 In this article, you will learn how to create, test, and run apps with Spring Cloud Kubernetes, and Spring Boot 3. You will see how to use tools like Skaffold, Testcontainers, Spring Boot Admin, and the Fabric8 client in the Kubernetes environment. The main goal of this article is to update you with the latest […]

The post Spring Cloud Kubernetes with Spring Boot 3 appeared first on Piotr's TechBlog.

]]>
In this article, you will learn how to create, test, and run apps with Spring Cloud Kubernetes and Spring Boot 3. You will see how to use tools like Skaffold, Testcontainers, Spring Boot Admin, and the Fabric8 client in the Kubernetes environment. The main goal of this article is to update you with the latest version of the Spring Cloud Kubernetes project. There are several other posts on my blog with similar content. You can refer to the following article describing the best practices for running Java apps on Kubernetes. You can also read about microservices with Spring Cloud Kubernetes in a post published some years ago. It is quite outdated, so I’ll show what has changed since then. Let’s begin!

Source Code

If you would like to try it yourself, you can always take a look at my source code. In order to do that, you need to clone my GitHub repository. Then just follow my instructions.

Firstly, let’s discuss our repository. It contains five apps. There are three microservices (employee-service, department-service, organization-service) communicating with each other through the REST client and connecting to the Mongo database. There is also the API gateway (gateway-service) created with the Spring Cloud Gateway project. Finally, the admin-service directory contains the Spring Boot Admin app used for monitoring all other apps. You can easily deploy all the apps from the source code using a single Skaffold command. If you run the following command from the repository root directory it will build the images with Jib Maven Plugin and deploy all apps on your Kubernetes cluster:

$ skaffold run

On the other hand, you can go to a particular app directory and deploy only that app using exactly the same command. All the required Kubernetes YAML manifests for each app are placed inside its k8s directory. There is also a global configuration, e.g. the Mongo deployment, in the project root k8s directory. Here’s the structure of our sample repo:

How It Works

In our sample architecture, we will use Spring Cloud Kubernetes Config for injecting configuration via ConfigMap and Secret, and Spring Cloud Kubernetes Discovery for inter-service communication with the OpenFeign client. All our apps are running within the same namespace, but we could just as well deploy them across several different namespaces and handle communication between them with OpenFeign. The only thing we would have to do in that case is set the property spring.cloud.kubernetes.discovery.all-namespaces to true. For more details, you can refer to the following article.
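Based on the property mentioned above, a cross-namespace setup could be enabled with a fragment like the following in application.yml (a sketch; only the all-namespaces property itself comes from the text above):

```yaml
spring:
  cloud:
    kubernetes:
      discovery:
        # allow the discovery client to query Services in every namespace
        all-namespaces: true
```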

In front of our services, there is an API gateway. It is a separate app, but we could just as well install it on Kubernetes using the native CRD integration. For more details, you can refer to the following post on the Spring blog. In our case, this is a standard Spring Boot 3 app that just includes and uses the Spring Cloud Gateway module. It also uses Spring Cloud Kubernetes Discovery together with Spring Cloud OpenFeign to locate and call the downstream services. Here’s a diagram that illustrates our architecture.

spring-cloud-kubernetes-arch

Using Spring Cloud Kubernetes Config

I’ll describe the implementation details using the example of department-service. It exposes some REST endpoints but also calls the endpoints exposed by employee-service. Besides the standard modules, we need to include Spring Cloud Kubernetes in the Maven dependencies. Here, we have to decide whether to use the Fabric8 client or the Kubernetes Java Client. Personally, I have experience with Fabric8, so I’ll use the spring-cloud-starter-kubernetes-fabric8-all starter to include both the config and discovery modules.

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-kubernetes-fabric8-all</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-data-mongodb</artifactId>
</dependency>

As you can see, our app connects to the Mongo database. Let’s provide the connection details and credentials required by the app. In the k8s directory, you will find the configmap.yaml file. It contains the address of Mongo and the database name. Those properties are injected into the pod as the application.properties file. And now the most important thing: the name of the ConfigMap has to be the same as the name of our app. The name of the Spring Boot app is indicated by the spring.application.name property.

kind: ConfigMap
apiVersion: v1
metadata:
  name: department
data:
  application.properties: |-
    spring.data.mongodb.host: mongodb
    spring.data.mongodb.database: admin
    spring.data.mongodb.authentication-database: admin

In the current case, the name of the app is department. Here’s the application.yml file inside the app:

spring:
  application:
    name: department

The same naming rule applies to Secret. We are keeping sensitive data like the username and password to the Mongo database inside the following Secret. You can also find that content inside the secret.yaml file in the k8s directory.

kind: Secret
apiVersion: v1
metadata:
  name: department
data:
  spring.data.mongodb.password: UGlvdF8xMjM=
  spring.data.mongodb.username: cGlvdHI=
type: Opaque
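The data values in a Secret are Base64-encoded, not encrypted. Here’s a quick sketch (plain Java, independent of Spring or Kubernetes) showing how the username value above was produced and how to decode it:

```java
import java.util.Base64;

public class SecretEncoding {

    public static void main(String[] args) {
        // Kubernetes stores Secret data values as Base64-encoded strings
        String encodedUser = Base64.getEncoder()
                .encodeToString("piotr".getBytes());
        System.out.println(encodedUser); // cGlvdHI= (as in the Secret above)

        // decoding the username value taken from the Secret manifest
        String decoded = new String(Base64.getDecoder().decode("cGlvdHI="));
        System.out.println(decoded); // piotr
    }
}
```

The same encoding can be done from the command line with `base64`; either way, anyone with read access to the Secret can trivially decode the values.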

Now, let’s proceed to the Deployment manifest. We will clarify the first two points later. Spring Cloud Kubernetes requires special privileges on Kubernetes to interact with the master API (1). We don’t have to provide a tag for the image – Skaffold will handle it (2). In order to enable loading properties from a ConfigMap, we need to set the spring.config.import=kubernetes: property (the new way) or set the spring.cloud.bootstrap.enabled property to true (the old way). Instead of using the properties directly, we will set the corresponding environment variables on the Deployment (3). By default, consuming Secrets through the API is not enabled for security reasons. In order to enable it, we will set the SPRING_CLOUD_KUBERNETES_SECRETS_ENABLEAPI environment variable to true (4).

apiVersion: apps/v1
kind: Deployment
metadata:
  name: department
  labels:
    app: department
spec:
  replicas: 1
  selector:
    matchLabels:
      app: department
  template:
    metadata:
      labels:
        app: department
    spec:
      serviceAccountName: spring-cloud-kubernetes # (1)
      containers:
      - name: department
        image: piomin/department # (2)
        ports:
        - containerPort: 8080
        env:
          - name: SPRING_CLOUD_BOOTSTRAP_ENABLED # (3)
            value: "true"
          - name: SPRING_CLOUD_KUBERNETES_SECRETS_ENABLEAPI # (4)
            value: "true"
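The environment variable names used in points (3) and (4) follow Spring Boot’s relaxed binding rules: dashes are removed, dots become underscores, and everything is upper-cased. A small sketch of that mapping (my own helper approximating the convention, not a Spring API):

```java
public class RelaxedBinding {

    // approximates Spring Boot's property-name -> env-var mapping:
    // drop dashes, turn dots into underscores, upper-case the result
    static String toEnvVar(String property) {
        return property.replace("-", "")
                       .replace(".", "_")
                       .toUpperCase();
    }

    public static void main(String[] args) {
        System.out.println(toEnvVar("spring.cloud.bootstrap.enabled"));
        // SPRING_CLOUD_BOOTSTRAP_ENABLED
        System.out.println(toEnvVar("spring.cloud.kubernetes.secrets.enableApi"));
        // SPRING_CLOUD_KUBERNETES_SECRETS_ENABLEAPI
    }
}
```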

Using Spring Cloud Kubernetes Discovery

We have already included the Spring Cloud Kubernetes Discovery module in the previous section using the spring-cloud-starter-kubernetes-fabric8-all starter. In order to provide a declarative REST client we will also include the Spring Cloud OpenFeign module:

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-openfeign</artifactId>
</dependency>

Now, we can declare the @FeignClient interface. The important thing here is the name of the discovered service. It should be the same as the name of the Kubernetes Service defined for the employee-service app.

@FeignClient(name = "employee")
public interface EmployeeClient {

    @GetMapping("/department/{departmentId}")
    List<Employee> findByDepartment(@PathVariable("departmentId") String departmentId);

    @GetMapping("/department-with-delay/{departmentId}")
    List<Employee> findByDepartmentWithDelay(@PathVariable("departmentId") String departmentId);
}

Here’s the Kubernetes Service manifest for the employee-service app. The name of the service is employee (1). The label spring-boot is set for Spring Boot Admin discovery purposes (2). You can find the following YAML in the employee-service/k8s directory.

apiVersion: v1
kind: Service
metadata:
  name: employee # (1)
  labels:
    app: employee
    spring-boot: "true" # (2)
spec:
  ports:
    - port: 8080
      protocol: TCP
  selector:
    app: employee
  type: ClusterIP

Just to clarify – here’s the implementation of the employee-service API methods called by the OpenFeign client in the department-service.

@RestController
public class EmployeeController {

    private static final Logger LOGGER = LoggerFactory
        .getLogger(EmployeeController.class);
	
    @Autowired
    EmployeeRepository repository;

    // ... other endpoints implementation 

    @GetMapping("/department/{departmentId}")
    public List<Employee> findByDepartment(@PathVariable("departmentId") String departmentId) {
        LOGGER.info("Employee find: departmentId={}", departmentId);
        return repository.findByDepartmentId(departmentId);
    }

    @GetMapping("/department-with-delay/{departmentId}")
    public List<Employee> findByDepartmentWithDelay(@PathVariable("departmentId") String departmentId) throws InterruptedException {
        LOGGER.info("Employee find: departmentId={}", departmentId);
        Thread.sleep(2000);
        return repository.findByDepartmentId(departmentId);
    }
	
}

That’s all we have to do. Now, we can just call the endpoint using the OpenFeign client from department-service. For example, on the “delayed” endpoint, we can use Spring Cloud Circuit Breaker with Resilience4J.

@RestController
public class DepartmentController {

    private static final Logger LOGGER = LoggerFactory
        .getLogger(DepartmentController.class);

    DepartmentRepository repository;
    EmployeeClient employeeClient;
    Resilience4JCircuitBreakerFactory circuitBreakerFactory;

    public DepartmentController(
        DepartmentRepository repository, 
        EmployeeClient employeeClient,
        Resilience4JCircuitBreakerFactory circuitBreakerFactory) {
            this.repository = repository;
            this.employeeClient = employeeClient;
            this.circuitBreakerFactory = circuitBreakerFactory;
    }

    @GetMapping("/{id}/with-employees-and-delay")
    public Department findByIdWithEmployeesAndDelay(@PathVariable("id") String id) {
        LOGGER.info("Department findByIdWithEmployees: id={}", id);
        Department department = repository.findById(id).orElseThrow();
        CircuitBreaker circuitBreaker = circuitBreakerFactory.create("delayed-circuit");
        List<Employee> employees = circuitBreaker.run(() ->
                employeeClient.findByDepartmentWithDelay(department.getId()));
        department.setEmployees(employees);
        return department;
    }

    @GetMapping("/organization/{organizationId}/with-employees")
    public List<Department> findByOrganizationWithEmployees(@PathVariable("organizationId") String organizationId) {
        LOGGER.info("Department find: organizationId={}", organizationId);
        List<Department> departments = repository.findByOrganizationId(organizationId);
        departments.forEach(d -> d.setEmployees(employeeClient.findByDepartment(d.getId())));
        return departments;
    }

}

Testing with Fabric8 Kubernetes

We have already finished the implementation of our service. All the Kubernetes YAML manifests are prepared and ready to deploy. Now, the question is – can we easily test that everything works fine before we proceed to the deployment on a real cluster? The answer is – yes. Moreover, we can choose between several tools. Let’s begin with the simplest option – the Kubernetes mock server. In order to use it, we need to include an additional Maven dependency:

<dependency>
  <groupId>io.fabric8</groupId>
  <artifactId>kubernetes-server-mock</artifactId>
  <version>6.7.1</version>
  <scope>test</scope>
</dependency>

Then, we can proceed to the test. In the first step, we need to provide several test annotations. Inside @SpringBootTest we should simulate the Kubernetes platform with the spring.main.cloud-platform property set to KUBERNETES (1). Normally, Spring Boot is able to autodetect whether it is running on Kubernetes. In this case, we need to “trick” it, because we are just simulating the API, not running the test on Kubernetes. We also need to enable the old way of ConfigMap injection with the spring.cloud.bootstrap.enabled=true property.

Once we annotate the test class with @EnableKubernetesMockClient (2), we can use an auto-configured static instance of the Fabric8 KubernetesClient (3). During the test, the Fabric8 library runs a web server that mocks all the API requests sent by the client. By the way, we are using Testcontainers for running Mongo (4). In the next step, we create the ConfigMap that injects the Mongo connection settings into the Spring Boot app (5). Thanks to Spring Cloud Kubernetes Config, it is automatically loaded by the app, and the app is able to connect to the Mongo database on the dynamically generated port.

Spring Cloud Kubernetes comes with an auto-configured Fabric8 KubernetesClient. We need to force it to connect to the mock API server. Therefore, we should override the kubernetes.master property used by the Fabric8 KubernetesClient with the master URL taken from the “mocked” test instance (6). Finally, we can implement the test methods in the standard way.

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT,
        properties = {
                "spring.main.cloud-platform=KUBERNETES",
                "spring.cloud.bootstrap.enabled=true"}) // (1)
@EnableKubernetesMockClient(crud = true) // (2)
@Testcontainers
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
public class EmployeeKubernetesMockTest {

    private static final Logger LOG = LoggerFactory
        .getLogger(EmployeeKubernetesMockTest.class);

    static KubernetesClient client; // (3)

    @Container // (4)
    static MongoDBContainer mongodb = new MongoDBContainer("mongo:5.0");

    @BeforeAll
    static void setup() {

        ConfigMap cm = client.configMaps()
                .create(buildConfigMap(mongodb.getMappedPort(27017)));
        LOG.info("!!! {}", cm); // (5)

        // (6)
        System.setProperty(Config.KUBERNETES_MASTER_SYSTEM_PROPERTY, 
            client.getConfiguration().getMasterUrl());
        System.setProperty(Config.KUBERNETES_TRUST_CERT_SYSTEM_PROPERTY, "true");
        System.setProperty(Config.KUBERNETES_NAMESPACE_SYSTEM_PROPERTY, "default");
    }

    private static ConfigMap buildConfigMap(int port) {
        return new ConfigMapBuilder().withNewMetadata()
                .withName("employee").withNamespace("default")
                .endMetadata()
                .addToData("application.properties",
                        """
                        spring.data.mongodb.host=localhost
                        spring.data.mongodb.port=%d
                        spring.data.mongodb.database=test
                        spring.data.mongodb.authentication-database=test
                        """.formatted(port))
                .build();
    }

    @Autowired
    TestRestTemplate restTemplate;

    @Test
    @Order(1)
    void addEmployeeTest() {
        Employee employee = new Employee("1", "1", "Test", 30, "test");
        employee = restTemplate.postForObject("/", employee, Employee.class);
        assertNotNull(employee);
        assertNotNull(employee.getId());
    }

    @Test
    @Order(2)
    void addAndThenFindEmployeeByIdTest() {
        Employee employee = new Employee("1", "2", "Test2", 20, "test2");
        employee = restTemplate.postForObject("/", employee, Employee.class);
        assertNotNull(employee);
        assertNotNull(employee.getId());
        employee = restTemplate
                .getForObject("/{id}", Employee.class, employee.getId());
        assertNotNull(employee);
        assertNotNull(employee.getId());
    }

    @Test
    @Order(3)
    void findAllEmployeesTest() {
        Employee[] employees =
                restTemplate.getForObject("/", Employee[].class);
        assertEquals(2, employees.length);
    }

    @Test
    @Order(3)
    void findEmployeesByDepartmentTest() {
        Employee[] employees =
                restTemplate.getForObject("/department/1", Employee[].class);
        assertEquals(1, employees.length);
    }

    @Test
    @Order(3)
    void findEmployeesByOrganizationTest() {
        Employee[] employees =
                restTemplate.getForObject("/organization/1", Employee[].class);
        assertEquals(2, employees.length);
    }

}

Now, after running the tests, we can take a look at the logs. As you can see, our test loads properties from the employee ConfigMap.

Finally, it is able to successfully connect to Mongo on the dynamic port and run all the tests against that instance.

Testing with Testcontainers on k3s

As I mentioned before, there are several tools we can use for testing with Kubernetes. This time we will see how to do it with Testcontainers. We have already used it in the previous section for running the Mongo database. But there is also a Testcontainers module for Rancher’s k3s Kubernetes distribution. Currently, it is in the incubating state, but that shouldn’t stop us from trying it. In order to use it in the project, we need to include the following Maven dependency:

<dependency>
  <groupId>org.testcontainers</groupId>
  <artifactId>k3s</artifactId>
  <scope>test</scope>
</dependency>

Here’s the implementation of the same tests as in the previous section, but this time with the k3s container. We don’t have to create any mocks. Instead, we create the K3sContainer object (1). Before running the tests, we need to create and initialize a KubernetesClient. The Testcontainers K3sContainer provides the getKubeConfigYaml() method for getting kubeconfig data. With the Fabric8 Config object, we can initialize the client from that kubeconfig (2) (3). After that, we create the ConfigMap with the Mongo connection details (4). Finally, we have to override the master URL for the Fabric8 client auto-configured by Spring Cloud Kubernetes. In comparison to the previous section, we also need to set the Kubernetes client certificates and keys (5).

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT,
        properties = {
                "spring.main.cloud-platform=KUBERNETES",
                "spring.cloud.bootstrap.enabled=true"})
@Testcontainers
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
public class EmployeeKubernetesTest {

   private static final Logger LOG = LoggerFactory
      .getLogger(EmployeeKubernetesTest.class);

   @Container
   static MongoDBContainer mongodb = new MongoDBContainer("mongo:5.0");
   @Container
   static K3sContainer k3s = new K3sContainer(DockerImageName
      .parse("rancher/k3s:v1.21.3-k3s1")); // (1)

   @BeforeAll
   static void setup() {
      Config config = Config
         .fromKubeconfig(k3s.getKubeConfigYaml()); // (2)
      DefaultKubernetesClient client = new 
         DefaultKubernetesClient(config); // (3)

      ConfigMap cm = client.configMaps().inNamespace("default")
         .create(buildConfigMap(mongodb.getMappedPort(27017)));
      LOG.info("!!! {}", cm); // (4)

      System.setProperty(Config.KUBERNETES_MASTER_SYSTEM_PROPERTY, 
         client.getConfiguration().getMasterUrl());
      
      // (5) 
      System.setProperty(Config.KUBERNETES_CLIENT_CERTIFICATE_DATA_SYSTEM_PROPERTY,
         client.getConfiguration().getClientCertData());
      System.setProperty(Config.KUBERNETES_CA_CERTIFICATE_DATA_SYSTEM_PROPERTY,
         client.getConfiguration().getCaCertData());
       System.setProperty(Config.KUBERNETES_CLIENT_KEY_DATA_SYSTEM_PROPERTY,
         client.getConfiguration().getClientKeyData());
      System.setProperty(Config.KUBERNETES_TRUST_CERT_SYSTEM_PROPERTY, 
         "true");
      System.setProperty(Config.KUBERNETES_NAMESPACE_SYSTEM_PROPERTY, 
         "default");
    }

    private static ConfigMap buildConfigMap(int port) {
        return new ConfigMapBuilder().withNewMetadata()
                .withName("employee").withNamespace("default")
                .endMetadata()
                .addToData("application.properties",
                        """
                        spring.data.mongodb.host=localhost
                        spring.data.mongodb.port=%d
                        spring.data.mongodb.database=test
                        spring.data.mongodb.authentication-database=test
                        """.formatted(port))
                .build();
    }

    @Autowired
    TestRestTemplate restTemplate;

    @Test
    @Order(1)
    void addEmployeeTest() {
        Employee employee = new Employee("1", "1", "Test", 30, "test");
        employee = restTemplate.postForObject("/", employee, Employee.class);
        assertNotNull(employee);
        assertNotNull(employee.getId());
    }

    @Test
    @Order(2)
    void addAndThenFindEmployeeByIdTest() {
        Employee employee = new Employee("1", "2", "Test2", 20, "test2");
        employee = restTemplate
           .postForObject("/", employee, Employee.class);
        assertNotNull(employee);
        assertNotNull(employee.getId());
        employee = restTemplate
                .getForObject("/{id}", Employee.class, employee.getId());
        assertNotNull(employee);
        assertNotNull(employee.getId());
    }

    @Test
    @Order(3)
    void findAllEmployeesTest() {
        Employee[] employees =
                restTemplate.getForObject("/", Employee[].class);
        assertEquals(2, employees.length);
    }

    @Test
    @Order(3)
    void findEmployeesByDepartmentTest() {
        Employee[] employees =
                restTemplate.getForObject("/department/1", Employee[].class);
        assertEquals(1, employees.length);
    }

    @Test
    @Order(3)
    void findEmployeesByOrganizationTest() {
        Employee[] employees =
                restTemplate.getForObject("/organization/1", Employee[].class);
        assertEquals(2, employees.length);
    }

}

Run Spring Kubernetes Apps on Minikube

In this exercise, I’m using Minikube, but you can just as well use any other distribution like Kind or k3s. Spring Cloud Kubernetes requires additional privileges on Kubernetes to be able to interact with the master API. So, before running the apps, we will create the spring-cloud-kubernetes ServiceAccount with the required privileges. Our role needs to have access to configmaps, pods, services, endpoints, and secrets. If we do not enable discovery across all namespaces (the spring.cloud.kubernetes.discovery.all-namespaces property), it can be a Role within the namespace. Otherwise, we should create a ClusterRole.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: spring-cloud-kubernetes
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: spring-cloud-kubernetes
rules:
  - apiGroups: [""]
    resources: ["configmaps", "pods", "services", "endpoints", "secrets"]
    verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: spring-cloud-kubernetes
subjects:
  - kind: ServiceAccount
    name: spring-cloud-kubernetes
    namespace: default
roleRef:
  kind: ClusterRole
  name: spring-cloud-kubernetes
  apiGroup: rbac.authorization.k8s.io
Of course, you don’t have to apply the manifests shown above yourself. As I mentioned at the beginning of the article, there is a skaffold.yaml file in the repository root directory that contains the whole configuration. It applies the manifests with the Mongo Deployment (1) and with the privileges (2) together with all the services.

apiVersion: skaffold/v4beta5
kind: Config
metadata:
  name: sample-spring-microservices-kubernetes
build:
  artifacts:
    - image: piomin/admin
      jib:
        project: admin-service
    - image: piomin/department
      jib:
        project: department-service
        args:
          - -DskipTests
    - image: piomin/employee
      jib:
        project: employee-service
        args:
          - -DskipTests
    - image: piomin/gateway
      jib:
        project: gateway-service
    - image: piomin/organization
      jib:
        project: organization-service
        args:
          - -DskipTests
  tagPolicy:
    gitCommit: {}
manifests:
  rawYaml:
    - k8s/mongodb-*.yaml # (1)
    - k8s/privileges.yaml # (2)
    - admin-service/k8s/*.yaml
    - department-service/k8s/*.yaml
    - employee-service/k8s/*.yaml
    - gateway-service/k8s/*.yaml
    - organization-service/k8s/*.yaml

All we need to do is deploy all the apps by executing the following Skaffold command:

$ skaffold dev

Once we do that, we can display a list of running pods:

kubectl get pod
NAME                            READY   STATUS    RESTARTS   AGE
admin-5f8c8498f-vtstx           1/1     Running   0          2m38s
department-746774879b-llrdn     1/1     Running   0          2m38s
employee-5bbf6b765f-7hsv7       1/1     Running   0          2m37s
gateway-578cb64558-m9n7f        1/1     Running   0          2m37s
mongodb-7f68b8b674-dbfnb        1/1     Running   0          2m38s
organization-5688c58656-bv8n6   1/1     Running   0          2m37s

We can also display a list of services. Some of them, like admin or gateway, are exposed as NodePort. Thanks to that, we can easily access them from outside of our Kubernetes cluster.

kubectl get svc
NAME           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
admin          NodePort    10.101.220.141   <none>        8080:31368/TCP   3m53s
department     ClusterIP   10.108.144.90    <none>        8080/TCP         3m52s
employee       ClusterIP   10.99.75.2       <none>        8080/TCP         3m52s
gateway        NodePort    10.96.7.237      <none>        8080:31518/TCP   3m52s
kubernetes     ClusterIP   10.96.0.1        <none>        443/TCP          38h
mongodb        ClusterIP   10.108.198.233   <none>        27017/TCP        3m53s
organization   ClusterIP   10.107.102.26    <none>        8080/TCP         3m52s

Let’s obtain the Minikube IP address on our local machine:

$ minikube ip

Now, we can use that IP address to access e.g. the Spring Boot Admin server on the target node port. For me, it’s 31368. Spring Boot Admin should successfully discover all three microservices and connect to the /actuator endpoints exposed by those apps.
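Putting the Minikube IP and the node port together, here’s a tiny sketch (plain Java; the IP value is hypothetical, the PORT(S) entry is taken from the service listing above) of deriving the access URL:

```java
public class NodePortUrl {

    public static void main(String[] args) {
        String ports = "8080:31368/TCP";     // PORT(S) value of the admin service above
        String minikubeIp = "192.168.49.2";  // hypothetical output of `minikube ip`

        // the node port sits between ':' and '/' in the PORT(S) column
        String nodePort = ports.substring(ports.indexOf(':') + 1, ports.indexOf('/'));
        System.out.println("http://" + minikubeIp + ":" + nodePort);
        // http://192.168.49.2:31368
    }
}
```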

spring-cloud-kubernetes-admin

We can go to the details of each Spring Boot app. As you can see, department-service is running on my local Minikube.

spring-cloud-kubernetes-services

Once you stop the skaffold dev command, all the apps and configuration will be removed from your Kubernetes cluster.

Final Thoughts

If you are running only Spring Boot apps on your Kubernetes cluster, Spring Cloud Kubernetes is an interesting option. It allows us to easily integrate with Kubernetes discovery, config maps, and secrets. Thanks to that we can take advantage of other Spring Cloud components like load balancer, circuit breaker, etc. However, if you are running apps written in different languages and frameworks, and using language-agnostic tools like service mesh (Istio, Linkerd), Spring Cloud Kubernetes may not be the best choice.

The post Spring Cloud Kubernetes with Spring Boot 3 appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2023/06/08/spring-cloud-kubernetes-with-spring-boot-3/feed/ 20 14232
Spring Boot Development Mode with Testcontainers and Docker https://piotrminkowski.com/2023/05/26/spring-boot-development-mode-with-testcontainers-and-docker/ https://piotrminkowski.com/2023/05/26/spring-boot-development-mode-with-testcontainers-and-docker/#comments Fri, 26 May 2023 14:26:38 +0000 https://piotrminkowski.com/?p=14207 In this article, you will learn how to use Spring Boot built-in support for Testcontainers and Docker Compose to run external services in development mode. Spring Boot introduces those features in the current latest version 3.1. Of course, you can already take advantage of Testcontainers in your Spring Boot app tests. However, the ability to […]

The post Spring Boot Development Mode with Testcontainers and Docker appeared first on Piotr's TechBlog.

]]>
In this article, you will learn how to use Spring Boot’s built-in support for Testcontainers and Docker Compose to run external services in development mode. Spring Boot introduces those features in the current latest version, 3.1. Of course, you can already take advantage of Testcontainers in your Spring Boot app tests. However, the ability to run external databases, message brokers, or other external services on app startup was something I was waiting for. Especially since a competing framework, Quarkus, already provides a similar feature called Dev Services, which I find very useful during development. Also, we should not forget about another exciting feature – the integration with Docker Compose. Let’s begin.

If you are looking for more articles related to Spring Boot 3, you can refer to the following one about microservices with Spring Cloud.

Source Code

If you would like to try it yourself, you can always take a look at my source code. Since I use Testcontainers often, you can find examples in several of my repositories. Here’s a list of the repositories we will use today:

You can clone them and then follow my instructions to see how to leverage Spring Boot’s built-in support for Testcontainers and Docker Compose in development mode.

Use Testcontainers in Tests

Let’s start with the standard usage example. The first repository contains a single Spring Boot app that connects to the Mongo database. In order to build automated tests, we have to include the following Maven dependencies:

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-test</artifactId>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>org.testcontainers</groupId>
  <artifactId>mongodb</artifactId>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>org.testcontainers</groupId>
  <artifactId>junit-jupiter</artifactId>
  <scope>test</scope>
</dependency>

Now, we can create the tests. We need to annotate our test class with @Testcontainers. Then, we have to declare the MongoDBContainer as a static field annotated with @Container. Before Spring Boot 3.1, we would have to use DynamicPropertyRegistry to set the Mongo address automatically generated by Testcontainers.

@SpringBootTest(webEnvironment = 
   SpringBootTest.WebEnvironment.RANDOM_PORT)
@Testcontainers
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
public class PersonControllerTest {

   @Container
   static MongoDBContainer mongodb = 
      new MongoDBContainer("mongo:5.0");

   @DynamicPropertySource
   static void registerMongoProperties(DynamicPropertyRegistry registry) {
      registry.add("spring.data.mongodb.uri", mongodb::getReplicaSetUrl);
   }

   // ... test methods

}

Fortunately, beginning with Spring Boot 3.1, we can simplify that notation with the @ServiceConnection annotation. Here's the full test implementation with the latest approach. It verifies some REST endpoints exposed by the app.

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@Testcontainers
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
public class PersonControllerTest {

    private static String id;

    @Container
    @ServiceConnection
    static MongoDBContainer mongodb = new MongoDBContainer("mongo:5.0");

    @Autowired
    TestRestTemplate restTemplate;

    @Test
    @Order(1)
    void add() {
        Person p = new Person(null, "Test", "Test", 100, Gender.FEMALE);
        Person personAdded = restTemplate
            .postForObject("/persons", p, Person.class);
        assertNotNull(personAdded);
        assertNotNull(personAdded.getId());
        assertEquals(p.getLastName(), personAdded.getLastName());
        id = personAdded.getId();
    }

    @Test
    @Order(2)
    void findById() {
        Person person = restTemplate
            .getForObject("/persons/{id}", Person.class, id);
        assertNotNull(person);
        assertNotNull(person.getId());
        assertEquals(id, person.getId());
    }

    @Test
    @Order(2)
    void findAll() {
        Person[] persons = restTemplate
            .getForObject("/persons", Person[].class);
        assertEquals(6, persons.length);
    }

}

Now, we can build the project with the standard Maven command. Testcontainers will automatically start the Mongo database before the tests. Of course, we need to have Docker running on our machine.

$ mvn clean package

Tests run fine. But what happens if we want to run our app locally for development? We can do it by running the app's main class directly from the IDE or with the mvn spring-boot:run Maven command. Here's our main class:

@SpringBootApplication
@EnableMongoRepositories
public class SpringBootOnKubernetesApp implements ApplicationListener<ApplicationReadyEvent> {

    public static void main(String[] args) {
        SpringApplication.run(SpringBootOnKubernetesApp.class, args);
    }

    @Autowired
    PersonRepository repository;

    @Override
    public void onApplicationEvent(ApplicationReadyEvent applicationReadyEvent) {
        if (repository.count() == 0) {
            repository.save(new Person(null, "XXX", "FFF", 20, Gender.MALE));
            repository.save(new Person(null, "AAA", "EEE", 30, Gender.MALE));
            repository.save(new Person(null, "ZZZ", "DDD", 40, Gender.FEMALE));
            repository.save(new Person(null, "BBB", "CCC", 50, Gender.MALE));
            repository.save(new Person(null, "YYY", "JJJ", 60, Gender.FEMALE));
        }
    }
}

Of course, unless we start the Mongo database, our app won't be able to connect to it. If we use Docker, we first need to execute a docker run command that starts MongoDB and exposes it on the local port, e.g.:

$ docker run -d --name mongo -p 27017:27017 mongo:5.0

spring-boot-testcontainers-logs

Use Testcontainers in Development Mode with Spring Boot

Fortunately, with Spring Boot 3.1 we can simplify that process. We don't have to start Mongo before starting the app. What we need to do is enable development mode with Testcontainers. First, we should include the following Maven dependency in the test scope:

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-testcontainers</artifactId>
  <scope>test</scope>
</dependency>

Then we need to prepare a @TestConfiguration class with the definition of the containers we want to start together with the app. In my case, it is just a single MongoDB container, as shown below:

@TestConfiguration
public class MongoDBContainerDevMode {

    @Bean
    @ServiceConnection
    MongoDBContainer mongoDBContainer() {
        return new MongoDBContainer("mongo:5.0");
    }

}

After that, we have to "override" the Spring Boot main class. It should have the same name as the main class with the Test suffix. Then we pass the current main method to the SpringApplication.from(...) method. We also need to register the @TestConfiguration class using the with(...) method.

public class SpringBootOnKubernetesAppTest {

    public static void main(String[] args) {
        SpringApplication.from(SpringBootOnKubernetesApp::main)
                .with(MongoDBContainerDevMode.class)
                .run(args);
    }

}

Finally, we can start our “test” main class directly from the IDE or we can just execute the following Maven command:

$ mvn spring-boot:test-run

Once the app starts, you will see that the Mongo container is up and running and that the connection to it has been established.

Since we are in dev mode, we will also include the Spring Boot Devtools module to automatically restart the app after a source code change.

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-devtools</artifactId>
  <optional>true</optional>
</dependency>

Let's see what happens. Once we make a change in the source code, Spring Devtools restarts both the app and the Mongo container. You can verify it in the app logs and also in the list of running Docker containers. As you can see, the Testcontainers ryuk container was started a minute ago, while Mongo was restarted together with the app 9 seconds ago.

In order to prevent restarting the container on every app restart triggered by Devtools, we need to annotate the MongoDBContainer bean with @RestartScope.

@TestConfiguration
public class MongoDBContainerDevMode {

    @Bean
    @ServiceConnection
    @RestartScope
    MongoDBContainer mongoDBContainer() {
        return new MongoDBContainer("mongo:5.0");
    }

}

Now, Devtools just restarts the app without restarting the container.

spring-boot-testcontainers-containers

Sharing Container across Multiple Apps

In the previous example, we had a single app that connects to a database running in a single container. Now, we will switch to a repository with several microservices that communicate with each other via a Kafka broker. Let's say I want to develop and test all three apps simultaneously. Of course, our services need the following Maven dependencies:

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-testcontainers</artifactId>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>org.testcontainers</groupId>
  <artifactId>kafka</artifactId>
  <version>1.18.1</version>
  <scope>test</scope>
</dependency>

Then we need to do a very similar thing as before: declare a @TestConfiguration class with the list of required containers. However, this time we need to make our Kafka container reusable across several apps. In order to do that, we invoke withReuse(true) on the KafkaContainer. By the way, it is also possible to use Kafka Raft (KRaft) mode instead of Zookeeper, which is what the withKraft() call does.

@TestConfiguration
public class KafkaContainerDevMode {

    @Bean
    @ServiceConnection
    public KafkaContainer kafka() {
        return new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:7.4.0"))
                .withKraft()
                .withReuse(true);
    }

}
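Note that withReuse(true) alone is not enough: Testcontainers also requires opting in to reuse globally on the machine, in the ~/.testcontainers.properties file. Otherwise the flag is ignored and each app would start its own broker:

```properties
# ~/.testcontainers.properties
testcontainers.reuse.enable=true
```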

The same as before, we have to create a "test" main class that uses the @TestConfiguration class. We will do the same for the two other apps in the repository: payment-service and stock-service.

public class OrderAppTest {

    public static void main(String[] args) {
        SpringApplication.from(OrderApp::main)
                .with(KafkaContainerDevMode.class)
                .run(args);
    }

}

Let's run our three microservices. Just to remind you, it is possible to run the "test" main class directly from the IDE or with the mvn spring-boot:test-run command. As you can see, I ran all three apps.

spring-boot-testcontainers-microservices

Now, if we display a list of running containers, there is only one Kafka broker shared between all the apps.

Use Spring Boot support for Docker Compose

Beginning with version 3.1, Spring Boot provides built-in support for Docker Compose. Let's switch to our last sample repository. It consists of several microservices that connect to the Mongo database and the Netflix Eureka discovery server. We can go to the directory of one of the microservices, e.g. customer-service. Assuming we include the following Maven dependency, Spring Boot looks for a Docker Compose configuration file in the current working directory. Let's activate that mechanism only for a specific Maven profile:

<profiles>
  <profile>
    <id>compose</id>
    <dependencies>
      <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-docker-compose</artifactId>
        <optional>true</optional>
      </dependency>
    </dependencies>
  </profile>
</profiles>

Our goal is to run all the required external services before running the customer-service app. The customer-service app connects to Mongo and Eureka, and calls an endpoint exposed by account-service. Here's the implementation of the REST client that communicates with account-service.

@FeignClient("account-service")
public interface AccountClient {

    @RequestMapping(method = RequestMethod.GET, value = "/accounts/customer/{customerId}")
    List<Account> getAccounts(@PathVariable("customerId") String customerId);

}

We need to prepare the docker-compose.yml with the definitions of all required containers. As you can see, there is the mongo service and two applications, discovery-service and account-service, which use local Docker images.

version: "3.8"
services:
  mongo:
    image: mongo:5.0
    ports:
      - "27017:27017"
  discovery-service:
    image: sample-spring-microservices-advanced/discovery-service:1.0-SNAPSHOT
    ports:
      - "8761:8761"
    healthcheck:
      test: curl --fail http://localhost:8761/eureka/v2/apps || exit 1
      interval: 4s
      timeout: 2s
      retries: 3
    environment:
      SPRING_PROFILES_ACTIVE: docker
  account-service:
    image: sample-spring-microservices-advanced/account-service:1.0-SNAPSHOT
    ports:
      - "8080"
    depends_on:
      discovery-service:
        condition: service_healthy
    links:
      - mongo
      - discovery-service
    environment:
      SPRING_PROFILES_ACTIVE: docker

Before we run the service, let's build the images with our apps. We could as well use the built-in Spring Boot mechanism based on Buildpacks, but I ran into some problems with it. Jib works fine in my case.

<profile>
  <id>build-image</id>
  <build>
    <plugins>
      <plugin>
        <groupId>com.google.cloud.tools</groupId>
        <artifactId>jib-maven-plugin</artifactId>
        <version>3.3.2</version>
        <configuration>
          <to>
            <image>sample-spring-microservices-advanced/${project.artifactId}:${project.version}</image>
          </to>
        </configuration>
        <executions>
          <execution>
            <goals>
              <goal>dockerBuild</goal>
            </goals>
            <phase>package</phase>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</profile>

Let’s execute the following command on the repository root directory:

$ mvn clean package -Pbuild-image -DskipTests

After a successful build, we can verify a list of available images with the docker images command. As you see, there are two images used in our docker-compose.yml file:

Finally, the only thing you need to do is run the customer-service app. Let's switch to the customer-service directory once again and execute mvn spring-boot:run with the profile that includes the spring-boot-docker-compose dependency:

$ mvn spring-boot:run -Pcompose

As you see, our app locates docker-compose.yml.

spring-boot-testcontainers-docker-compose

Once we start our app, it also starts all required containers.
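Spring Boot also exposes properties to customize this Docker Compose integration. The fragment below is an illustrative sketch (the file path is hypothetical), showing how to point at a compose file outside the working directory and control the container lifecycle:

```yaml
spring:
  docker:
    compose:
      # use a compose file from a different location than the working directory
      file: ../docker/docker-compose.yml
      # start containers on app startup and stop them on shutdown (the default)
      lifecycle-management: start-and-stop
```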

For example, we can take a look at the Eureka dashboard available at http://localhost:8761. There are two apps registered there. The account-service is running on Docker, while the customer-service has been started locally.

Final Thoughts

Spring Boot 3.1 comes with several improvements in the area of containerization. Especially the ability to run Testcontainers in development mode together with the app was something I had been waiting for. I hope this article clarifies how you can take advantage of the latest Spring Boot features for better integration with Testcontainers and Docker Compose.

The post Spring Boot Development Mode with Testcontainers and Docker appeared first on Piotr's TechBlog.

Best Practices for Java Apps on Kubernetes https://piotrminkowski.com/2023/02/13/best-practices-for-java-apps-on-kubernetes/ https://piotrminkowski.com/2023/02/13/best-practices-for-java-apps-on-kubernetes/#comments Mon, 13 Feb 2023 16:18:43 +0000 https://piotrminkowski.com/?p=13990 In this article, you will read about the best practices for running Java apps on Kubernetes. Most of these recommendations will also be valid for other languages. However, I’m considering all the rules in the scope of Java characteristics and also showing solutions and tools available for JVM-based apps. Some of these Kubernetes recommendations are […]

The post Best Practices for Java Apps on Kubernetes appeared first on Piotr's TechBlog.

In this article, you will read about the best practices for running Java apps on Kubernetes. Most of these recommendations will also be valid for other languages. However, I'm considering all the rules in the scope of Java characteristics and also showing solutions and tools available for JVM-based apps. Some of these Kubernetes recommendations are enforced by design when using the most popular Java frameworks like Spring Boot or Quarkus. I'll show you how to effectively leverage them to simplify a developer's life.

I’m writing a lot about topics related to both Kubernetes and Java. You can find many practical examples on my blog. Some time ago I published a similar article to that one – but mostly focused on best practices for microservices-based apps. You can find it here.

Don’t Set Limits Too Low

Should we set limits for Java apps on Kubernetes or not? The answer seems obvious. There are many tools that validate your Kubernetes YAML manifests, and they will certainly print a warning if you don't set CPU or memory limits. However, there are some "hot discussions" in the community about that. Here's an interesting article that recommends against setting any CPU limits. And here's another article written as a counterpoint to the previous one. They consider CPU limits, but we could just as well begin a similar discussion about memory limits, especially in the context of Java apps 🙂

However, for memory management, the recommendation seems to be quite different. Let's read another article, this time about memory limits and requests. In short, it recommends always setting the memory limit and, moreover, setting the limit the same as the request. In the context of Java apps, it is also important that we can limit memory with JVM parameters like -Xmx, -XX:MaxMetaspaceSize or -XX:ReservedCodeCacheSize. Anyway, from the Kubernetes perspective, the pod is guaranteed the resources it requests; the limit has nothing to do with that guarantee.
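Those JVM flags can be passed without rebuilding the image, for example through the standard JAVA_TOOL_OPTIONS environment variable in the pod spec. The fragment below is a hypothetical sketch (container name, image, and values are illustrative, not taken from the sample app):

```yaml
containers:
  - name: sample-app          # hypothetical container name
    image: sample-app:1.0     # hypothetical image
    env:
      # the JVM picks this variable up automatically on startup
      - name: JAVA_TOOL_OPTIONS
        value: "-Xmx512m -XX:MaxMetaspaceSize=128m -XX:ReservedCodeCacheSize=120m"
```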

It all leads me to my first recommendation today: don't set your limits too low. Even if you set a CPU limit, it shouldn't impact your app. For example, as you probably know, even if a Java app doesn't consume much CPU during normal work, it requires a lot of CPU to start fast. For my simple Spring Boot app that connects to MongoDB on Kubernetes, the difference between no limit and even 0.5 core is significant. Normally it starts in under 10 seconds:

kubernetes-java-startup

With the CPU limit set to 500 millicores, it starts in ~30 seconds:

Of course, there are some exceptions to this rule. We will discuss them in the next sections.

Beginning with Kubernetes 1.27, you may take advantage of the feature called "In-Place Vertical Pod Scaling". It allows users to resize CPU/memory resources allocated to pods without restarting the containers. Such an approach may help us speed up Java startup on Kubernetes and, at the same time, keep adequate resource limits (especially CPU limits) for the app. You can read more about it in the following article: https://piotrminkowski.com/2023/08/22/resize-cpu-limit-to-speed-up-java-startup-on-kubernetes/.
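Putting this advice together, a resources section could look like the hypothetical sketch below: a CPU request but no CPU limit (so the app can burst during startup), and a memory request somewhat below the memory limit. The exact numbers are illustrative only and must be tuned per app:

```yaml
resources:
  requests:
    cpu: 500m        # guaranteed CPU share
    memory: 384Mi    # slightly above typical usage
  limits:
    # deliberately no CPU limit: spare cycles speed up JVM startup
    memory: 512Mi
```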

Consider Memory Usage First

Let's focus just on the memory limit. If you run a Java app on Kubernetes, you have two levels at which you can limit maximum memory usage: the container and the JVM. There are also some defaults if you don't specify any JVM settings: the JVM sets its maximum heap size to approximately 25% of the available RAM if you don't set the -Xmx parameter. This value is calculated based on the memory visible inside the container. If you don't set a limit at the container level, the JVM will see the whole memory of the node.
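You can quickly check which heap limit the JVM actually derived inside a container by running a plain Java snippet like this one (no framework required):

```java
// Prints the maximum heap size the running JVM selected. With no -Xmx
// flag set, this is roughly 25% of the memory visible to the JVM,
// i.e. the container limit, or the whole node memory if no limit is set.
public class HeapInfo {
    public static void main(String[] args) {
        long maxHeapMiB = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("Max heap: " + maxHeapMiB + " MiB");
    }
}
```

Running it once with and once without a container memory limit makes the 25% default easy to observe.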

Before running the app on Kubernetes, you should at least measure how much memory it consumes under the expected load. Fortunately, there are tools that optimize memory configuration for Java apps running in containers. For example, Paketo Buildpacks comes with a built-in memory calculator. It calculates the -Xmx JVM flag using the formula Heap = Total Container Memory - Non-Heap - Headroom, where the non-heap value is calculated as Non-Heap = Direct Memory + Metaspace + Reserved Code Cache + (Thread Stack * Thread Count).
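The two formulas can be illustrated with a short calculation. The input values below are assumptions picked purely for illustration (the real calculator derives metaspace from the class count and uses its own defaults for thread count and stacks), so the result will not match the exact numbers reported for the sample app:

```java
// Sketch of the Paketo memory calculator formulas with assumed inputs.
public class MemoryCalculator {

    static long mb(long v) { return v * 1024 * 1024; }

    // Non-Heap = Direct Memory + Metaspace + Reserved Code Cache
    //            + (Thread Stack * Thread Count)
    static long nonHeap(long direct, long metaspace, long codeCache,
                        long threadStack, int threadCount) {
        return direct + metaspace + codeCache + threadStack * threadCount;
    }

    // Heap = Total Container Memory - Non-Heap - Headroom
    static long heap(long total, long nonHeap, long headroom) {
        return total - nonHeap - headroom;
    }

    public static void main(String[] args) {
        // assumed: 10M direct, 120M metaspace, 240M code cache,
        // 1M stack per thread, 250 threads, no headroom
        long nonHeap = nonHeap(mb(10), mb(120), mb(240), mb(1), 250);
        // -> -Xmx404M for a 1024M container limit with these inputs
        System.out.println("-Xmx" + heap(mb(1024), nonHeap, 0) / mb(1) + "M");
    }
}
```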

Paketo Buildpacks is currently the default option for building Spring Boot images (with the mvn spring-boot:build-image command). Let's try it for our sample app. Assuming we set the memory limit to 512M, it will calculate -Xmx at a level of 130M.

kubernetes-java-memory

Is it fine for my app? I should at least perform some load tests to verify how the app performs under heavy traffic. But once again: don't set the limits too low. For example, with a 1024M limit, -Xmx equals 650M.

As you can see, we take care of memory usage with JVM parameters. This protects us from the OOM kills described in the article mentioned in the first section. Therefore, setting the request at the same level as the limit does not make much sense. I would recommend setting the request a little higher than normal usage, let's say 20% more.

Proper Liveness and Readiness Probes

Introduction

It is essential to understand the difference between liveness and readiness probes in Kubernetes. If these probes are not implemented carefully, they can degrade the overall operation of a service, for example by causing unnecessary restarts. The third type of probe, the startup probe, is a relatively new feature in Kubernetes. It allows us to avoid setting initialDelaySeconds on liveness or readiness probes and is therefore especially useful if your app takes a long time to start. For more details about Kubernetes probes in general and best practices, I can recommend this very interesting article.

A liveness probe is used to decide whether to restart the container or not. If an application is unavailable for any reason, restarting the container sometimes can make sense. On the other hand, a readiness probe is used to decide if a container can handle incoming traffic. If a pod has been recognized as not ready, it is removed from load balancing. Failure of the readiness probe does not result in pod restart. The most typical liveness or readiness probe for web applications is realized via an HTTP endpoint.

Since subsequent failures of the liveness probe result in pod restart, it should not check the availability of your app integrations. Such things should be verified by the readiness probe.
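In a deployment manifest, these rules could translate into something like the fragment below. It is a hypothetical sketch assuming the Actuator health endpoints are exposed on a separate management port 8081; the timings are illustrative, not a recommendation:

```yaml
livenessProbe:
  httpGet:
    path: /actuator/health/liveness
    port: 8081
  periodSeconds: 10
  failureThreshold: 3
readinessProbe:
  httpGet:
    path: /actuator/health/readiness
    port: 8081
  periodSeconds: 5
startupProbe:
  # gives the app up to 30 * 5s = 150s to start before liveness applies
  httpGet:
    path: /actuator/health/liveness
    port: 8081
  periodSeconds: 5
  failureThreshold: 30
```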

Configuration Details

The good news is that the most popular Java frameworks, like Spring Boot or Quarkus, provide auto-configured implementations of both Kubernetes probes. They follow best practices, so we usually don't have to care about the basics. However, in Spring Boot, besides including the Actuator module, you need to enable them with the following property:

management:
  endpoint: 
    health:
      probes:
        enabled: true

Since Spring Boot Actuator provides several endpoints (e.g. metrics, traces), it is a good idea to expose it on a different port than the default one (usually 8080). Of course, the same rule applies to other popular Java frameworks. On the other hand, a good practice is to check your main app port, especially in the readiness probe. Since it defines whether our app is ready to process incoming requests, it should listen on the main port. It is just the opposite with the liveness probe. If, let's say, the whole worker thread pool is busy, I don't want to restart my app; I just don't want to receive incoming traffic for some time.

We can also customize other aspects of Kubernetes probes. Let's say that our app connects to an external system, but we don't verify that integration in our readiness probe, because it is not critical and doesn't have a direct impact on our operational status. Here's a configuration that includes only a selected set of integrations in the probe (1) and also exposes readiness on the main server port (2).

spring:
  application:
    name: sample-spring-boot-on-kubernetes
  data:
    mongodb:
      host: ${MONGO_URL}
      port: 27017
      username: ${MONGO_USERNAME}
      password: ${MONGO_PASSWORD}
      database: ${MONGO_DATABASE}
      authentication-database: admin

management:
  endpoint.health:
    show-details: always
    group:
      readiness:
        include: mongo # (1)
        additional-path: server:/readiness # (2)
    probes:
      enabled: true
  server:
    port: 8081

Our application can hardly ever exist without external systems like databases, message brokers, or other applications. When configuring the readiness probe, we should carefully consider the connection settings to those systems. First, think about the situation when an external service is not available: how will you handle it? I suggest decreasing the relevant timeouts to lower values, as shown below.

spring:
  application:
    name: sample-spring-kotlin-microservice
  datasource:
    url: jdbc:postgresql://postgres:5432/postgres
    username: postgres
    password: postgres123
    hikari:
      connection-timeout: 2000
      initialization-fail-timeout: 0
  jpa:
    database-platform: org.hibernate.dialect.PostgreSQLDialect
  rabbitmq:
    host: rabbitmq
    port: 5672
    connection-timeout: 2000

Choose The Right JDK

If you have already built images with a Dockerfile, it is possible that you were using the official OpenJDK base image from Docker Hub. However, the announcement on the image site currently says that it is officially deprecated and all users should find suitable replacements. I guess it may be quite confusing, so you will find a detailed explanation of the reasons here.

All right, so let's consider which alternative we should choose. Different vendors provide several replacements. If you are looking for a detailed comparison between them, you should go to the following site. It recommends using Eclipse Temurin 21.

On the other hand, the most popular image build tools like Jib or Cloud Native Buildpacks automatically choose a vendor for you. By default, Jib uses Eclipse Temurin, while Paketo Buildpacks uses Bellsoft Liberica implementation. Of course, you can easily override these settings. I think it might make sense if you, for example, run your app in the environment matched to the JDK provider, like AWS and Amazon Corretto.

Let's say we use Paketo Buildpacks and Skaffold for deploying Java apps on Kubernetes. In order to replace the default Bellsoft Liberica buildpack with another one, we just need to set it explicitly in the buildpacks section. Here's an example that leverages the Amazon Corretto buildpack.

apiVersion: skaffold/v2beta22
kind: Config
metadata:
  name: sample-spring-boot-on-kubernetes
build:
  artifacts:
    - image: piomin/sample-spring-boot-on-kubernetes
      buildpacks:
        builder: paketobuildpacks/builder:base
        buildpacks:
          - paketo-buildpacks/amazon-corretto
          - paketo-buildpacks/java
        env:
          - BP_JVM_VERSION=21

We can also easily test the performance of our apps with different JDK vendors. If you are looking for an example of such a comparison you can read my article describing such tests and results. I measured the different JDK performance for the Spring Boot 3 app that interacts with the Mongo database using several available Paketo Java Buildpacks.

Consider Migration To Native Compilation 

Native compilation is a real "game changer" in the Java world. But I can bet that not many of you use it, especially in production. Of course, there were (and still are) numerous challenges in migrating existing apps to native compilation. The static code analysis performed by GraalVM at build time can result in errors like ClassNotFound or MethodNotFound. To overcome these challenges, we need to provide several hints to let GraalVM know about the dynamic elements of the code. The number of those hints usually depends on the number of libraries and the number of language features used in the app.
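Such hints can be supplied, among other ways, in a GraalVM reflect-config.json file placed under META-INF/native-image on the classpath. The class name below is hypothetical; each entry tells the native-image tool to keep reflective access to the listed members:

```json
[
  {
    "name": "com.example.Person",
    "allDeclaredConstructors": true,
    "allDeclaredMethods": true,
    "allDeclaredFields": true
  }
]
```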

Java frameworks like Quarkus or Micronaut try to address the challenges related to native compilation by design. For example, they avoid using reflection wherever possible. Spring Boot has also greatly improved native compilation support through the Spring Native project. So, my advice in this area is: if you are creating a new application, prepare it so that it is ready for native compilation. For example, with Quarkus you can simply generate a Maven configuration that contains a dedicated profile for building a native executable.

<profiles>
  <profile>
    <id>native</id>
    <activation>
      <property>
        <name>native</name>
      </property>
    </activation>
    <properties>
      <skipITs>false</skipITs>
      <quarkus.package.type>native</quarkus.package.type>
    </properties>
  </profile>
</profiles>

Once you add it, you can run a native build with the following command:

$ mvn clean package -Pnative

Then you can analyze whether there are any issues during the build. Even if you do not run native apps in production now (for example, your organization doesn't approve it), you should include GraalVM compilation as a step in your acceptance pipeline. You can easily build a Java native image for your app with the most popular frameworks. For example, with Spring Boot you just need to provide the following configuration in your Maven pom.xml:

<plugin>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-maven-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>build-info</goal>
        <goal>build-image</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <image>
      <builder>paketobuildpacks/builder:tiny</builder>
      <env>
        <BP_NATIVE_IMAGE>true</BP_NATIVE_IMAGE>
        <BP_NATIVE_IMAGE_BUILD_ARGUMENTS>
          --allow-incomplete-classpath
        </BP_NATIVE_IMAGE_BUILD_ARGUMENTS>
      </env>
    </image>
  </configuration>
</plugin>

Configure Logging Properly

Logging is probably not the first thing you think about when writing your Java apps. However, at a global scope it becomes very important, since we need to be able to collect and store data, and finally search for and present a particular entry quickly. The best practice is to write your application logs to the standard output (stdout) and standard error (stderr) streams.

Fluentd is a popular open-source log aggregator that allows you to collect logs from the Kubernetes cluster, process them, and then ship them to a data storage backend of your choice. It integrates seamlessly with Kubernetes deployments. Fluentd tries to structure data as JSON to unify logging across different sources and destinations. Given that, probably the best approach is to produce logs in that format in the first place. With JSON we can also easily include additional fields for tagging logs and then search them in a visual tool using various criteria.

In order to format our logs to JSON readable by Fluentd we may include the Logstash Logback Encoder library in Maven dependencies.

<dependency>
   <groupId>net.logstash.logback</groupId>
   <artifactId>logstash-logback-encoder</artifactId>
   <version>7.2</version>
</dependency>

Then we just need to set a default console log appender for our Spring Boot application in the file logback-spring.xml.

<configuration>
    <appender name="consoleAppender" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
    </appender>
    <logger name="jsonLogger" additivity="false" level="DEBUG">
        <appender-ref ref="consoleAppender"/>
    </logger>
    <root level="INFO">
        <appender-ref ref="consoleAppender"/>
    </root>
</configuration>

Should we avoid using additional log appenders and print logs only to the standard output? From my experience, the answer is no. You can still use alternative mechanisms for shipping the logs, especially if you use more than one log collection stack in your organization, for example an internal stack on Kubernetes and a global stack outside of it. Personally, to avoid performance problems, I use a message broker as a proxy. In Spring Boot we can easily use RabbitMQ for that. Just include the following starter:

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-amqp</artifactId>
</dependency>

Then you need to provide a similar appender configuration in the logback-spring.xml:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>

  <springProperty name="destination" source="app.amqp.url" />

  <appender name="AMQP"
		class="org.springframework.amqp.rabbit.logback.AmqpAppender">
    <layout>
      <pattern>
{
  "time": "%date{ISO8601}",
  "thread": "%thread",
  "level": "%level",
  "class": "%logger{36}",
  "message": "%message"
}
      </pattern>
    </layout>

    <addresses>${destination}</addresses>	
    <applicationId>api-service</applicationId>
    <routingKeyPattern>logs</routingKeyPattern>
    <declareExchange>true</declareExchange>
    <exchangeName>ex_logstash</exchangeName>

  </appender>

  <root level="INFO">
    <appender-ref ref="AMQP" />
  </root>

</configuration>

Create Integration Tests

Ok, I know: it's not directly related to Kubernetes. However, since we use Kubernetes to manage and orchestrate containers, we should also run integration tests against containers. Fortunately, Java frameworks simplify that process a lot. For example, Quarkus allows us to annotate a test with @QuarkusIntegrationTest. It is a really powerful solution in conjunction with the Quarkus container build feature. We can run the tests against an already-built image containing the app. First, let's include the Quarkus Jib module:

<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-container-image-jib</artifactId>
</dependency>

Then we have to enable container builds by setting the quarkus.container-image.build property to true in the application.properties file. In the test class, we can use the @TestHTTPResource and @TestHTTPEndpoint annotations to inject the test server URL. Then we create a client with RestClientBuilder and call the service started in the container. The name of the test class is not accidental: in order to be automatically detected as an integration test, it has the IT suffix.
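In application.properties that boils down to a single line:

```properties
quarkus.container-image.build=true
```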

import java.net.URL;
import java.util.Set;

import org.eclipse.microprofile.rest.client.RestClientBuilder;
import org.junit.jupiter.api.Test;

import io.quarkus.test.common.http.TestHTTPEndpoint;
import io.quarkus.test.common.http.TestHTTPResource;
import io.quarkus.test.junit.QuarkusIntegrationTest;

import static org.junit.jupiter.api.Assertions.assertNotNull;
import static org.junit.jupiter.api.Assertions.assertTrue;

@QuarkusIntegrationTest
public class EmployeeControllerIT {

    @TestHTTPEndpoint(EmployeeController.class)
    @TestHTTPResource
    URL url;

    @Test
    void add() {
        EmployeeService service = RestClientBuilder.newBuilder()
                .baseUrl(url)
                .build(EmployeeService.class);
        Employee employee = new Employee(1L, 1L, "Josh Stevens", 
                                         23, "Developer");
        employee = service.add(employee);
        assertNotNull(employee.getId());
    }

    @Test
    void findAll() {
        EmployeeService service = RestClientBuilder.newBuilder()
                .baseUrl(url)
                .build(EmployeeService.class);
        Set<Employee> employees = service.findAll();
        assertTrue(employees.size() >= 3);
    }

    @Test
    void findById() {
        EmployeeService service = RestClientBuilder.newBuilder()
                .baseUrl(url)
                .build(EmployeeService.class);
        Employee employee = service.findById(1L);
        assertNotNull(employee.getId());
    }
}
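The EmployeeService used in the tests above is not the server-side controller, but a MicroProfile REST Client interface describing its API; RestClientBuilder generates the implementation at runtime. A minimal sketch could look as follows (the path and method signatures are assumptions derived from the test code, and the jakarta.ws.rs package applies to recent Quarkus versions – older ones use javax.ws.rs):

```java
import java.util.Set;

import jakarta.ws.rs.GET;
import jakarta.ws.rs.POST;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.PathParam;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

// Client-side view of the employee REST API
@Path("/employees")
@Produces(MediaType.APPLICATION_JSON)
public interface EmployeeService {

    @POST
    Employee add(Employee employee);

    @GET
    Set<Employee> findAll();

    @GET
    @Path("/{id}")
    Employee findById(@PathParam("id") Long id);
}
```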

You can find more details about that process in my previous article about advanced testing with Quarkus. The final effect is visible in the picture below. When we run the build with the mvn clean verify command, our integration tests are executed after the container image is built.

[Image: kubernetes-java-integration-tests – integration tests executed after the container image build]

That Quarkus feature is based on the Testcontainers framework. We can also use Testcontainers with Spring Boot. Here’s the sample test of the Spring REST app and its integration with the PostgreSQL database.

import org.instancio.Instancio;
import org.instancio.Select;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.MethodOrderer;
import org.junit.jupiter.api.Order;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.TestMethodOrder;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.web.client.TestRestTemplate;
import org.springframework.test.context.DynamicPropertyRegistry;
import org.springframework.test.context.DynamicPropertySource;
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@Testcontainers
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
public class PersonControllerTests {

    @Autowired
    TestRestTemplate restTemplate;

    @Container
    static PostgreSQLContainer<?> postgres = 
       new PostgreSQLContainer<>("postgres:15.1")
            .withExposedPorts(5432);

    @DynamicPropertySource
    static void registerPostgresProperties(DynamicPropertyRegistry registry) {
        registry.add("spring.datasource.url", postgres::getJdbcUrl);
        registry.add("spring.datasource.username", postgres::getUsername);
        registry.add("spring.datasource.password", postgres::getPassword);
    }

    @Test
    @Order(1)
    void add() {
        Person person = Instancio.of(Person.class)
                .ignore(Select.field("id"))
                .create();
        person = restTemplate.postForObject("/persons", person, Person.class);
        Assertions.assertNotNull(person);
        Assertions.assertNotNull(person.getId());
    }

    @Test
    @Order(2)
    void updateAndGet() {
        final Integer id = 1;
        Person person = Instancio.of(Person.class)
                .set(Select.field("id"), id)
                .create();
        restTemplate.put("/persons", person);
        Person updated = restTemplate.getForObject("/persons/{id}", Person.class, id);
        Assertions.assertNotNull(updated);
        Assertions.assertNotNull(updated.getId());
        Assertions.assertEquals(id, updated.getId());
    }

}
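To run the test above, the Testcontainers JUnit 5 and PostgreSQL modules (plus Instancio for generating test data) have to be present on the test classpath. A sketch of the required dependencies, assuming versions are managed through the Testcontainers BOM:

```xml
<dependency>
  <groupId>org.testcontainers</groupId>
  <artifactId>junit-jupiter</artifactId>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>org.testcontainers</groupId>
  <artifactId>postgresql</artifactId>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>org.instancio</groupId>
  <artifactId>instancio-junit</artifactId>
  <scope>test</scope>
</dependency>
```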

Final Thoughts

I hope this article helps you avoid some common pitfalls when running Java apps on Kubernetes. Treat it as a summary of recommendations I found in similar articles, combined with my own experience in that area. You may find some of these rules quite controversial – feel free to share your opinions and feedback in the comments, as it will be valuable for me as well. If you liked this article, I once again recommend reading another one from my blog, more focused on running microservices-based apps on Kubernetes: Best Practices For Microservices on Kubernetes. It also contains several useful (I hope) recommendations.

The post Best Practices for Java Apps on Kubernetes appeared first on Piotr's TechBlog.
