Kubernetes Archives - Piotr's TechBlog (https://piotrminkowski.com/tag/kubernetes/) — Java, Spring, Kotlin, microservices, Kubernetes, containers

Istio Spring Boot Library Released
https://piotrminkowski.com/2026/01/06/istio-spring-boot-library-released/
Tue, 06 Jan 2026

This article explains how to use my Spring Boot Istio library to generate and create Istio resources on a Kubernetes cluster during application startup. The library is primarily intended for development purposes: it aims to make it easy for developers to quickly launch their applications within the Istio mesh. You can also use it in production, but its main purpose is to automatically generate Istio resources from annotations in Java application code.

You can also find many other articles about Istio on my blog, for example one about tracing with Istio and Quarkus.

Source Code

Feel free to use my source code if you’d like to try it out yourself. To do that, you must clone my sample GitHub repository. Two sample applications for this exercise are available in the spring-boot-istio directory.

Prerequisites

We will start with the most straightforward Istio installation on a local Kubernetes cluster. This could be Minikube, which you can run with the following command. You can set slightly lower resource limits than I did.

minikube start --memory='8gb' --cpus='6'
ShellSession

To complete the exercise below, you need to install istioctl in addition to kubectl. You can find the available distributions of the latest versions of kubectl and istioctl on their respective release pages. I install them on my laptop using Homebrew.

$ brew install kubectl
$ brew install istioctl
ShellSession

To install Istio with default parameters, run the following command:

istioctl install
ShellSession

After a moment, Istio should be running in the istio-system namespace.

It is also worth installing Kiali to verify the Istio resources we have created. Kiali is an observability and management tool for Istio that provides a web-based dashboard for service mesh monitoring. It visualizes service-to-service traffic, validates Istio configuration, and integrates with tools like Prometheus, Grafana, and Jaeger.

kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.28/samples/addons/kiali.yaml
ShellSession

Once Kiali is successfully installed on Kubernetes, we can expose its web dashboard locally with the following istioctl command:

istioctl dashboard kiali
ShellSession

To test access to the Istio mesh from outside the cluster, you need to expose the ingress gateway. To do this, run the following command in a separate terminal and keep it running:

minikube tunnel
ShellSession

Use Spring Boot Istio Library

To test our library’s functionality, we will create a simple Spring Boot application that exposes a single REST endpoint.

@RestController
@RequestMapping("/callme")
public class CallmeController {

    private static final Logger LOGGER = LoggerFactory
        .getLogger(CallmeController.class);

    @Autowired
    Optional<BuildProperties> buildProperties;
    @Value("${VERSION}")
    private String version;

    @GetMapping("/ping")
    public String ping() {
        LOGGER.info("Ping: name={}, version={}", buildProperties.isPresent() ?
            buildProperties.get().getName() : "callme-service", version);
        return "I'm callme-service " + version;
    }
    
}
Java

Then, in addition to the standard Spring Web starter, add the istio-spring-boot-starter dependency.

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
  <groupId>com.github.piomin</groupId>
  <artifactId>istio-spring-boot-starter</artifactId>
  <version>1.2.1</version>
</dependency>
XML

Finally, we must add the @EnableIstio annotation to our application’s main class. We can also enable Istio Gateway to expose the REST endpoint outside the cluster. An Istio Gateway is a component that controls how external traffic enters or leaves a service mesh.

@SpringBootApplication
@EnableIstio(enableGateway = true)
public class CallmeApplication {

	public static void main(String[] args) {
		SpringApplication.run(CallmeApplication.class, args);
	}
	
}
Java
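For reference, the Gateway created with enableGateway = true might look roughly like the sketch below. It is based on the hostnames used later in this article; the selector and port values are my assumptions, and the exact structure produced by the library may differ.

```yaml
apiVersion: networking.istio.io/v1
kind: Gateway
metadata:
  name: callme-service-with-starter
spec:
  # Assumption: the default Istio ingress gateway selector
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - callme-service-with-starter.ext
```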

Let’s deploy our application on the Kubernetes cluster. To do this, we must first create a role with the necessary permissions to manage Istio resources in the cluster. The role must be assigned to the ServiceAccount used by the application.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: callme-service-with-starter
rules:
  - apiGroups: ["networking.istio.io"]
    resources: ["virtualservices", "destinationrules", "gateways"]
    verbs: ["create", "get", "list", "watch", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: callme-service-with-starter
subjects:
  - kind: ServiceAccount
    name: callme-service-with-starter
    namespace: spring
roleRef:
  kind: Role
  name: callme-service-with-starter
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: callme-service-with-starter
YAML

Here are the Deployment and Service resources.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: callme-service-with-starter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: callme-service-with-starter
  template:
    metadata:
      labels:
        app: callme-service-with-starter
    spec:
      containers:
        - name: callme-service-with-starter
          image: piomin/callme-service-with-starter
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          env:
            - name: VERSION
              value: "v1"
      serviceAccountName: callme-service-with-starter
---
apiVersion: v1
kind: Service
metadata:
  name: callme-service-with-starter
  labels:
    app: callme-service-with-starter
spec:
  type: ClusterIP
  ports:
  - port: 8080
    name: http
  selector:
    app: callme-service-with-starter
YAML

The application repository is configured to run with Skaffold. Of course, you can also apply the YAML manifests to the cluster with kubectl apply: simply navigate to the callme-service-with-starter/k8s directory and apply the deployment.yaml file. As part of the exercise, we will run our applications in the spring namespace.

skaffold dev -n spring
ShellSession

The sample Spring Boot application creates two Istio objects at startup: a VirtualService and a Gateway. We can verify them in the Kiali dashboard.

[Image: Kiali dashboard showing the generated Istio resources]

The default host name generated for the gateway consists of the deployment name and the .ext suffix. We can change this suffix using the domain field of the @EnableIstio annotation. Assuming the minikube tunnel command is running, you can call the service by passing the hostname in the Host header as follows:

$ curl http://localhost/callme/ping -H "Host:callme-service-with-starter.ext"
I'm callme-service v1
ShellSession

Additional Capabilities with Spring Boot Istio

The library’s behavior can be customized through the @EnableIstio annotation. For example, you can enable the fault injection mechanism using the fault field; both abort and delay faults are possible. You can make this change without redeploying the app, since the library updates the existing VirtualService.

@SpringBootApplication
@EnableIstio(enableGateway = true, fault = @Fault(percentage = 50))
public class CallmeApplication {

	public static void main(String[] args) {
		SpringApplication.run(CallmeApplication.class, args);
	}
	
}
Java
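Under the hood, the fault field maps to Istio’s HTTP fault injection API. The relevant fragment of the updated VirtualService would look something like the snippet below; the abort status code is my assumption, since the annotation above only sets the percentage.

```yaml
http:
- fault:
    abort:
      percentage:
        value: 50
      httpStatus: 503  # assumed status code
  route:
  - destination:
      host: callme-service-with-starter
```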

Now, you can call the same endpoint through the gateway. There is a 50% chance that you will receive this response.

[Image: curl response showing the injected fault]

Now let’s analyze a slightly more complex scenario. Let’s assume that we are running two different versions of the same application on Kubernetes and use Istio to manage the versioning. Traffic is forwarded to a particular version based on the X-Version header of the incoming request. If the header value is v1, the request is sent to the application Pod labeled version=v1, and similarly for v2. Here’s the annotation for the v1 application main class.

@SpringBootApplication
@EnableIstio(enableGateway = true, version = "v1",
	matches = { 
	  @Match(type = MatchType.HEADERS, key = "X-Version", value = "v1") })
public class CallmeApplication {

	public static void main(String[] args) {
		SpringApplication.run(CallmeApplication.class, args);
	}
	
}
Java

The Deployment manifest for the callme-service-with-starter-v1 should define two labels: app and version.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: callme-service-with-starter-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: callme-service-with-starter
      version: v1
  template:
    metadata:
      labels:
        app: callme-service-with-starter
        version: v1
    spec:
      containers:
        - name: callme-service-with-starter
          image: piomin/callme-service-with-starter
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          env:
            - name: VERSION
              value: "v1"
      serviceAccountName: callme-service-with-starter
YAML

Unlike the skaffold dev command, the skaffold run command simply launches the application on the cluster and terminates. Let’s first release version v1, and then move on to version v2.

skaffold run -n spring
ShellSession

Then, we can deploy the v2 version of our sample Spring Boot application. In this exercise, it is just an “artificial” version, since we deploy the same source code, but with different labels and environment variables injected in the Deployment manifest.

@SpringBootApplication
@EnableIstio(enableGateway = true, version = "v2",
	matches = { 
	  @Match(type = MatchType.HEADERS, key = "X-Version", value = "v2") })
public class CallmeApplication {

	public static void main(String[] args) {
		SpringApplication.run(CallmeApplication.class, args);
	}
	
}
Java

Here’s the callme-service-with-starter-v2 Deployment manifest.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: callme-service-with-starter-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: callme-service-with-starter
      version: v2
  template:
    metadata:
      labels:
        app: callme-service-with-starter
        version: v2
    spec:
      containers:
        - name: callme-service-with-starter
          image: piomin/callme-service-with-starter
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          env:
            - name: VERSION
              value: "v2"
      serviceAccountName: callme-service-with-starter
YAML

The Service, on the other hand, remains unchanged. It still refers to Pods labeled with app=callme-service-with-starter. This time, however, it covers the application instances of both versions, v1 and v2.

apiVersion: v1
kind: Service
metadata:
  name: callme-service-with-starter
  labels:
    app: callme-service-with-starter
spec:
  type: ClusterIP
  ports:
  - port: 8080
    name: http
  selector:
    app: callme-service-with-starter
YAML

Version v2 should be run in the same way as before, using the skaffold run command. Three Istio objects are generated during application startup: a Gateway, a VirtualService, and a DestinationRule.

[Image: Kiali view of the Gateway, VirtualService, and DestinationRule]

The generated DestinationRule contains two subsets, one for each of the v1 and v2 versions.

[Image: the generated DestinationRule with v1 and v2 subsets]
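Expressed in YAML, such a DestinationRule looks more or less as follows. This sketch only reflects the subsets referenced by the generated VirtualService; the metadata is my assumption.

```yaml
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: callme-service-with-starter
spec:
  host: callme-service-with-starter
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
```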

The automatically generated VirtualService looks as follows.

kind: VirtualService
apiVersion: networking.istio.io/v1
metadata:
  name: callme-service-with-starter-route
spec:
  hosts:
  - callme-service-with-starter
  - callme-service-with-starter.ext
  gateways:
  - callme-service-with-starter
  http:
  - match:
    - headers:
        X-Version:
          prefix: v1
    route:
    - destination:
        host: callme-service-with-starter
        subset: v1
    timeout: 6s
    retries:
      attempts: 3
      perTryTimeout: 2s
      retryOn: 5xx
  - match:
    - headers:
        X-Version:
          prefix: v2
    route:
    - destination:
        host: callme-service-with-starter
        subset: v2
      weight: 100
    timeout: 6s
    retries:
      attempts: 3
      perTryTimeout: 2s
      retryOn: 5xx
YAML

To test the versioning mechanism generated using the Spring Boot Istio library, set the X-Version header for each call.

$ curl http://localhost/callme/ping -H "Host:callme-service-with-starter.ext" -H "X-Version:v1"
I'm callme-service v1

$ curl http://localhost/callme/ping -H "Host:callme-service-with-starter.ext" -H "X-Version:v2"
I'm callme-service v2
ShellSession

Conclusion

I am still working on this library, and new features will be added in the near future. I hope it will be helpful for those of you who want to get started with Istio without getting into the details of its configuration.

Startup CPU Boost in Kubernetes with In-Place Pod Resize
https://piotrminkowski.com/2025/12/22/startup-cpu-boost-in-kubernetes-with-in-place-pod-resize/
Mon, 22 Dec 2025

This article explains how to use the In-Place Pod Resize feature in Kubernetes, combined with Kube Startup CPU Boost, to speed up Java application startup. The In-Place Update of Pod Resources feature was initially introduced in Kubernetes 1.27 as an alpha release. With version 1.35, the feature has reached GA stability. One potential use case is to set a high CPU limit only during application startup, which Java needs to launch quickly. I have already described such a scenario in my previous article. The example implemented there used the Kyverno tool. However, it was based on an alpha version of the in-place pod resize feature, so it requires a minor tweak to the Kyverno policy to align with the GA release.

The other potential solution in that context is the Vertical Pod Autoscaler. In its latest version, it supports in-place pod resize. The Vertical Pod Autoscaler (VPA) automatically adjusts CPU and memory requests/limits for pods based on their actual usage, ensuring containers receive appropriate resources. Unlike the Horizontal Pod Autoscaler (HPA), which scales the number of replicas, the VPA adjusts per-pod resources and may restart pods to apply changes. For now, the VPA does not support the startup-boost use case, but once that feature is implemented, the situation will change.
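For comparison, a basic VerticalPodAutoscaler object targeting a Deployment looks like this (a generic sketch using the standard VPA API, not a startup-boost configuration; the names are placeholders):

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: sample-app
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: sample-app
  updatePolicy:
    # "Auto" lets the VPA apply its recommendations itself
    updateMode: "Auto"
```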

On the other hand, Kube Startup CPU Boost is a dedicated feature for scenarios with high CPU requirements during app startup. It is a controller that increases CPU resource requests and limits during Kubernetes workload startup. Once the workload is up and running, the resources are set back to their original values. Let’s see how this solution works in practice!

Source Code

Feel free to use my source code if you’d like to try it out yourself. To do that, you must clone my sample GitHub repository. The sample application is based on Spring Boot and exposes several REST endpoints. However, in this exercise, we will use a ready-made image published on my Quay: quay.io/pminkows/sample-kotlin-spring:1.5.1.1.

Install Kube Startup CPU Boost

The Kubernetes cluster you are using must enable the In-Place Pod Resize feature. The activation method may vary by Kubernetes distribution. For the Minikube I am using in today’s example, it looks like this:

minikube start --memory='8gb' --cpus='6' --feature-gates=InPlacePodVerticalScaling=true
ShellSession

After that, we can proceed with installing the Kube Startup CPU Boost controller. There are several ways to do this; the easiest is with a Helm chart. Let’s add the following Helm repository:

helm repo add kube-startup-cpu-boost https://google.github.io/kube-startup-cpu-boost
ShellSession

Then, we can install the kube-startup-cpu-boost chart in the dedicated kube-startup-cpu-boost-system namespace using the following command:

helm install -n kube-startup-cpu-boost-system kube-startup-cpu-boost \
  kube-startup-cpu-boost/kube-startup-cpu-boost --create-namespace
ShellSession

If the installation was successful, you should see the following pod running in the kube-startup-cpu-boost-system namespace.

$ kubectl get pod -n kube-startup-cpu-boost-system
NAME                                                         READY   STATUS    RESTARTS   AGE
kube-startup-cpu-boost-controller-manager-75f95d5fb6-692s6   1/1     Running   0          36s
ShellSession

Install Monitoring Stack (optional)

Then, we can install the Prometheus monitoring stack. This is an optional step that lets us verify pod resource usage in graphical form. First, let’s add the following Helm repository:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
ShellSession

After that, we can install the latest version of the kube-prometheus-stack chart in the monitoring namespace.

helm install my-kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  -n monitoring --create-namespace
ShellSession

Let’s verify the installation succeeded by listing the pods running in the monitoring namespace.

$ kubectl get pods -n monitoring
NAME                                                         READY   STATUS    RESTARTS   AGE
alertmanager-my-kube-prometheus-stack-alertmanager-0         2/2     Running   0          38s
my-kube-prometheus-stack-grafana-f8bb6b8b8-mzt4l             3/3     Running   0          48s
my-kube-prometheus-stack-kube-state-metrics-99f4574c-bf5ln   1/1     Running   0          48s
my-kube-prometheus-stack-operator-6d58dd9d6c-6srtg           1/1     Running   0          48s
my-kube-prometheus-stack-prometheus-node-exporter-tdwmr      1/1     Running   0          48s
prometheus-my-kube-prometheus-stack-prometheus-0             2/2     Running   0          38s
ShellSession

Finally, we can expose the Prometheus console over localhost using the port forwarding feature:

kubectl port-forward svc/my-kube-prometheus-stack-prometheus 9090:9090 -n monitoring
ShellSession

Configure Kube Startup CPU Boost

The Kube Startup CPU Boost configuration is pretty intuitive. We need to create a StartupCPUBoost resource. It can manage multiple applications based on a given selector. In our case, it is a single sample-kotlin-spring Deployment determined by the app.kubernetes.io/name label (1). The next step is to define the resource management policy (2). The Kube Startup CPU Boost increases both request and limit by 50%. Resources should only be increased for the duration of the startup (3). Therefore, once the readiness probe succeeds, the resource level will return to its initial state. Of course, everything happens in-place without restarting the container.

apiVersion: autoscaling.x-k8s.io/v1alpha1
kind: StartupCPUBoost
metadata:
  name: sample-kotlin-spring
  namespace: demo
selector:
  matchExpressions: # (1)
  - key: app.kubernetes.io/name
    operator: In
    values: ["sample-kotlin-spring"]
spec:
  resourcePolicy: # (2)
    containerPolicies:
    - containerName: sample-kotlin-spring
      percentageIncrease:
        value: 50
  durationPolicy: # (3)
    podCondition:
      type: Ready
      status: "True"
YAML
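To make the percentage policy concrete: a container requesting 200m CPU with a 50% increase is boosted to a 300m request during startup. Here is a minimal sketch of that arithmetic (the class and method names are mine, not part of the controller):

```java
public class BoostMath {

    // Boosted CPU value in millicores for a given percentage increase,
    // e.g. 200m with a 50% increase -> 300m.
    static long boostedMillicores(long millicores, long percentage) {
        return millicores + millicores * percentage / 100;
    }

    public static void main(String[] args) {
        System.out.println(boostedMillicores(200, 50) + "m"); // prints 300m
    }
}
```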

Next, we will deploy our sample application. Here’s the Deployment manifest of our Spring Boot app. The name of the app container is sample-kotlin-spring, which matches the containerName defined inside the StartupCPUBoost object (1). Then, we set the CPU limit to 500 millicores (2). There’s also a new field, resizePolicy, which tells Kubernetes whether a change to CPU or memory can be applied in place or requires a Pod restart (3). The NotRequired value means that changing the resource limit or request will not trigger a pod restart. The Deployment object also contains a readiness probe that calls the GET /actuator/health/readiness endpoint exposed by the Spring Boot Actuator (4).

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-kotlin-spring
  namespace: demo
  labels:
    app: sample-kotlin-spring
    app.kubernetes.io/name: sample-kotlin-spring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-kotlin-spring
  template:
    metadata:
      labels:
        app: sample-kotlin-spring
        app.kubernetes.io/name: sample-kotlin-spring
    spec:
      containers:
      - name: sample-kotlin-spring # (1)
        image: quay.io/pminkows/sample-kotlin-spring:1.5.1.1
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 500m # (2)
            memory: "1Gi"
          requests:
            cpu: 200m
            memory: "256Mi"
        resizePolicy: # (3)
        - resourceName: "cpu"
          restartPolicy: "NotRequired"
        readinessProbe: # (4)
          httpGet:
            path: /actuator/health/readiness
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 15
          periodSeconds: 5
          successThreshold: 1
          failureThreshold: 3
YAML

Here are the pod requests and limits configured by Kube Startup CPU Boost. As you can see, the request is set to 300m, while the limit is completely removed.

[Image: boosted pod requests and limits]
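In YAML terms, the container’s resources section is rewritten in place during startup to something like the fragment below (a sketch; with the default controller settings the CPU limit is removed entirely for the boost window):

```yaml
resources:
  requests:
    cpu: 300m        # 200m increased by 50%
    memory: "256Mi"  # memory is left untouched
  # the CPU limit is removed while the boost is active
```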

Once the application startup process completes, Kube Startup CPU Boost restores the initial request and limit.

Now we can switch to the Prometheus console to see the history of CPU request values for our pod. As you can see, the request was temporarily increased during the pod startup.

The chart below illustrates CPU usage when the application is launched and then during normal operation.

[Image: CPU usage during startup and normal operation]

We can also define fixed resources for a target container. The CPU requests and limits of the selected container will then be set to the given values (1). If you do not want the operator to remove the CPU limit during the boost window, set the REMOVE_LIMITS environment variable to false in the kube-startup-cpu-boost-controller-manager Deployment.

apiVersion: autoscaling.x-k8s.io/v1alpha1
kind: StartupCPUBoost
metadata:
  name: sample-kotlin-spring
  namespace: demo
selector:
  matchExpressions:
  - key: app.kubernetes.io/name
    operator: In
    values: ["sample-kotlin-spring"]
spec:
  resourcePolicy:
    containerPolicies: # (1)
    - containerName: sample-kotlin-spring
      fixedResources:
        requests: "500m"
        limits: "2"
  durationPolicy:
    podCondition:
      type: Ready
      status: "True"
YAML

Conclusion

There are many ways to address application CPU demand during startup. First, you don’t need to set a CPU limit in the Deployment at all; in fact, many people believe that setting a CPU limit doesn’t make sense, albeit for different reasons. The CPU request then remains an issue, but given the short startup window, the gap between declared and actual usage isn’t significant.

Other solutions are strictly related to Java features. If we compile the application natively with GraalVM or use the CRaC feature, we significantly speed up startup and reduce CPU requirements.

Finally, several solutions rely on in-place resizing. If you use Kyverno, consider its mutate policy, which can modify resources in response to an application startup event. The Kube Startup CPU Boost tool described in this article operates similarly but is designed exclusively for this use case. In the near future, Vertical Pod Autoscaler will also offer a CPU boost via in-place resize.

A Book: Hands-On Java with Kubernetes
https://piotrminkowski.com/2025/12/08/a-book-hands-on-java-with-kubernetes/
Mon, 08 Dec 2025

My book about Java and Kubernetes has finally been published! The book “Hands-On Java with Kubernetes” is the result of several months of work and, in fact, a summary of my experiences over the last few years of research and development. In this post, I want to share my thoughts on this book, explain why I chose to write and publish it, and briefly outline its content and concept. To purchase the latest version, go to this link.

Here is a brief overview of all my published books.

Motivation

I won’t hide that this post is mainly directed at my blog subscribers and people who enjoy reading it and value my writing style. As you know, all posts and content on my blog, along with sample application repositories on GitHub, are always accessible to you for free. Over the past eight years, I have worked to publish high-quality content on my blog, and I plan to keep doing so. It is a part of my life, a significant time commitment, but also a lot of fun and a hobby.

I want to explain why I decided to write this book, why now, and why in this way. But first, a bit of background. I wrote my previous book, which was also my first, over seven years ago. It focused on topics I was mainly involved with at the time, specifically Spring Boot and Spring Cloud. Since then, a lot of time has passed, and much has changed – not only in the technology itself but also a little in my personal life. Today, I am more involved in Kubernetes and container topics than, for example, Spring Cloud. For years, I have been helping various organizations transition from traditional application architectures to cloud-native models based on Kubernetes. Of course, Java remains my main area of expertise. Besides Spring Boot, I also really like the Quarkus framework. You can read a lot about both in my book on Kubernetes.

Based on my experience over the past few years, involving development teams is a key factor in the success of the Kubernetes platform within an organization. Ultimately, it is the applications developed by these teams that are deployed there. For developers to be willing to use Kubernetes, it must be easy for them to do so. That is why I persuade organizations to remove barriers to using Kubernetes and to design it in a way that makes it easier for development teams. On my blog and in this book, I aim to demonstrate how to quickly and simply launch applications on Kubernetes using frameworks such as Spring Boot and Quarkus.

It’s an unusual time to publish a book. AI agents are producing more and more technical content online. More often than not, instead of grabbing a book, people turn to an AI chatbot for a quick answer, though not always the best one. Still, a book that thoroughly introduces a topic and offers a step-by-step guide remains highly valuable.

Content of the Book

This book demonstrates that Java is an excellent choice for building applications that run on Kubernetes. In the first chapter, I’ll show you how to quickly build your application, create its image, and run it on Kubernetes without writing a single line of YAML or Dockerfile. This chapter also covers the minimum Kubernetes architecture you must understand to manage applications effectively in this environment. The second chapter, on the other hand, demonstrates how to effectively organize your local development environment to work with a Kubernetes cluster. You’ll see several options for running a distribution of your cluster locally and learn about the essential set of tools you should have. The third chapter outlines best practices for building applications on the Kubernetes platform. Most of the presented requirements are supported by simple examples and explanations of the benefits of meeting them. The fourth chapter presents the most valuable tools for the inner development loop with Kubernetes. After reading the first four chapters, you will understand the main Kubernetes components related to application management, enabling you to navigate the platform efficiently. You’ll also learn to leverage Spring Boot and Quarkus features to adapt your application to Kubernetes requirements.

In the following chapters, I will focus on the benefits of migrating applications to Kubernetes. The first area to cover is security. Chapter five discusses mechanisms and tools for securing applications running in a cluster. Chapter six describes Spring and Quarkus projects that enable native integration with the Kubernetes API from within applications. In chapter seven, you’ll learn about service mesh tooling and the benefits of using it to manage HTTP traffic between microservices. Chapter eight addresses the performance and scalability of Java applications in a Kubernetes environment. The next chapter demonstrates how to design a CI/CD process that runs entirely within the cluster, leveraging Kubernetes-native tools for pipeline building and the GitOps approach. This book also covers AI: in the final chapter, you’ll learn how to run a simple Java application that integrates with an AI model deployed on Kubernetes.

Publication

I decided to publish my book on Leanpub. Leanpub is a platform for writing, publishing, and selling books, especially popular among technical content authors. I previously published a book with Packt, but honestly, I was alone during the writing process. Leanpub is similar but offers several key advantages over publishers like Packt. First, it allows you to update content collaboratively with readers and keep it current. Even though my book is finished, I don’t rule out adding more chapters, such as on AI on Kubernetes. I also look forward to your feedback and plan to improve the content and examples in the repository continuously. Overall, this has been another exciting experience related to publishing technical content.

And when you buy such a book, you can be sure that most of the royalties go to me as the author, unlike with other publishers, where most of the royalties go to them as promoters. So, I’m looking forward to improving my book with you!

Conclusion

My book aims to bring together all the most interesting elements surrounding Java application development on Kubernetes. It is intended not only for developers but also for architects and DevOps teams who want to move to the Kubernetes platform.

The post A Book: Hands-On Java with Kubernetes appeared first on Piotr's TechBlog.

Quarkus with Buildpacks and OpenShift Builds https://piotrminkowski.com/2025/11/19/quarkus-with-buildpacks-and-openshift-builds/ https://piotrminkowski.com/2025/11/19/quarkus-with-buildpacks-and-openshift-builds/#respond Wed, 19 Nov 2025 08:50:04 +0000 https://piotrminkowski.com/?p=15806 In this article, you will learn how to build Quarkus application images using Cloud Native Buildpacks and OpenShift Builds. Some time ago, I published a blog post about building with OpenShift Builds based on the Shipwright project. At that time, Cloud Native Buildpacks were not supported at the OpenShift Builds level. It was only supported […]

The post Quarkus with Buildpacks and OpenShift Builds appeared first on Piotr's TechBlog.

In this article, you will learn how to build Quarkus application images using Cloud Native Buildpacks and OpenShift Builds. Some time ago, I published a blog post about building with OpenShift Builds based on the Shipwright project. At that time, Cloud Native Buildpacks were not supported at the OpenShift Builds level; they were only supported in the community project. I demonstrated how to add the appropriate build strategy yourself and use it to build an image for a Spring Boot application. However, since version 1.6, OpenShift Builds supports building with Cloud Native Buildpacks. Currently, Quarkus, Go, Node.js, and Python are supported. In this article, we will focus on Quarkus and also examine the built-in support for Buildpacks within Quarkus itself.

Source Code

Feel free to use my source code if you’d like to try it out yourself. To do that, you must clone my sample GitHub repository. Then you should only follow my instructions.

Quarkus Buildpacks Extension

Recently, support for Cloud Native Buildpacks in Quarkus has been significantly enhanced. Here you can access the repository containing the source code for the Paketo Quarkus buildpack. To implement this solution, add one dependency to your application.

<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-container-image-buildpack</artifactId>
</dependency>
XML

Next, run the build command with Maven and activate the quarkus.container-image.build parameter. Also, set the appropriate Java version needed for your application. For the sample Quarkus application in this article, the Java version is 21.

mvn clean package \
  -Dquarkus.container-image.build=true \
  -Dquarkus.buildpack.builder-env.BP_JVM_VERSION=21
ShellSession

To build, you need Docker or Podman running. Here’s the output from the command run earlier.

As you can see, Quarkus uses, among others, the Paketo Quarkus buildpack mentioned earlier.

The new image is now available for use.

$ docker images sample-quarkus/person-service:1.0.0-SNAPSHOT
REPOSITORY                      TAG              IMAGE ID       CREATED        SIZE
sample-quarkus/person-service   1.0.0-SNAPSHOT   e0b58781e040   45 years ago   160MB
ShellSession
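If you want to smoke-test the freshly built image locally (a step not covered above), you can run it with Docker. This is just a sketch: the image name matches the output above, and the health endpoint is only available if the application includes the SmallRye Health extension.

```shell
# Start the container locally; Quarkus serves HTTP on port 8080 by default
docker run --rm -p 8080:8080 sample-quarkus/person-service:1.0.0-SNAPSHOT

# In another terminal, hit the health endpoint
# (available only if the quarkus-smallrye-health extension is included)
curl http://localhost:8080/q/health
```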

Quarkus with OpenShift Builds Shipwright

Install the OpenShift Builds Operator

Now, we will move the image building process to the OpenShift cluster. OpenShift offers built-in support for creating container images directly within the cluster through OpenShift Builds, using the BuildConfig solution. For more details, please refer to my previous article. However, in this article, we explore a new technology for building container images called OpenShift Builds with Shipwright. To enable this solution on OpenShift, you need to install the following operator.

After installing this operator, you will see a new item in the “Build” menu called “Shipwright”. Switch to it, then select the “ClusterBuildStrategies” tab. The list contains two strategies designed for Cloud Native Buildpacks. We are interested in the buildpacks strategy.

Create and Run Build with Shipwright

Finally, we can create the Shipwright Build object. It contains three sections. In the first step, we define the address of the container image repository where we will push our output image. For simplicity, we will use the internal registry provided by the OpenShift cluster itself. In the source section, we specify the repository address where the application source code is located. In the last section, we need to set the build strategy. We choose the previously mentioned buildpacks strategy for Cloud Native Buildpacks. Some parameters need to be set for the buildpacks strategy: run-image and cnb-builder-image. The cnb-builder-image indicates the name of the builder image containing the buildpacks. The run-image refers to a base image used to run the application. We will also activate the buildpacks Maven profile during the build to set the Quarkus property that switches from fast-jar to uber-jar packaging.

apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: buildpack-quarkus-build
spec:
  env:
    - name: BP_JVM_VERSION
      value: '21'
  output:
    image: 'image-registry.openshift-image-registry.svc:5000/builds/sample-quarkus-microservice:1.0'
  paramValues:
    - name: run-image
      value: 'paketobuildpacks/run-java-21-ubi9-base:latest'
    - name: cnb-builder-image
      value: 'paketobuildpacks/builder-jammy-java-tiny:latest'
    - name: env-vars
      values:
        - value: BP_MAVEN_ADDITIONAL_BUILD_ARGUMENTS=-Pbuildpacks
  retention:
    atBuildDeletion: true
  source:
    git:
      url: 'https://github.com/piomin/sample-quarkus-microservice.git'
    type: Git
  strategy:
    kind: ClusterBuildStrategy
    name: buildpacks
YAML

Here’s the Maven buildpacks profile that sets a single Quarkus property quarkus.package.jar.type. We must change it to uber-jar, because the paketobuildpacks/builder-jammy-java-tiny builder expects a single jar instead of the multi-folder layout used by the default fast-jar format. Of course, I would prefer to use the paketocommunity/builder-ubi-base builder, which can recognize the fast-jar format. However, at this time, it does not function correctly with OpenShift Builds.

<profiles>
  <profile>
    <id>buildpacks</id>
    <activation>
      <property>
        <name>buildpacks</name>
      </property>
    </activation>
    <properties>
      <quarkus.package.jar.type>uber-jar</quarkus.package.jar.type>
    </properties>
  </profile>
</profiles>
XML

To start the build, you can use the OpenShift console or execute the following command:

shp build run buildpack-quarkus-build --follow
ShellSession

We can switch to the OpenShift Console. As you can see, our build is running.

The history of such builds is available on OpenShift. You can also review the build logs.

Finally, you should see your image in the list of OpenShift internal image streams.

$ oc get imagestream
NAME                          IMAGE REPOSITORY                                                                                                    TAGS                UPDATED
sample-quarkus-microservice   default-route-openshift-image-registry.apps.pminkows.95az.p1.openshiftapps.com/builds/sample-quarkus-microservice   1.2,0.0.1,1.1,1.0   13 hours ago
ShellSession
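The article stops at the image stream, but as a quick follow-up you could deploy the freshly built image straight from it. A hedged sketch, assuming you are working in the builds project where the image stream lives and the tag 1.0 exists:

```shell
# Create a Deployment (and Service) from the internal image stream tag
oc new-app sample-quarkus-microservice:1.0

# Expose the service externally via an OpenShift Route
oc expose service/sample-quarkus-microservice
```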

Conclusion

OpenShift Builds with Shipwright lets you perform the entire application image build process on the OpenShift cluster in a standardized manner. Cloud Native Buildpacks are a popular mechanism for building images without writing a Dockerfile yourself. In this context, support for Buildpacks on the OpenShift side is an interesting alternative to the Source-to-Image approach.

Backstage Dynamic Plugins with Red Hat Developer Hub https://piotrminkowski.com/2025/06/13/backstage-dynamic-plugins-with-red-hat-developer-hub/ https://piotrminkowski.com/2025/06/13/backstage-dynamic-plugins-with-red-hat-developer-hub/#respond Fri, 13 Jun 2025 11:38:24 +0000 https://piotrminkowski.com/?p=15718 This article will teach you how to create Backstage dynamic plugins and install them smoothly in Red Hat Developer Hub. One of the most significant pain points in Backstage is the installation of plugins. If you want to run Backstage on Kubernetes, for example, you have to rebuild the project and create a new image […]

The post Backstage Dynamic Plugins with Red Hat Developer Hub appeared first on Piotr's TechBlog.

This article will teach you how to create Backstage dynamic plugins and install them smoothly in Red Hat Developer Hub. One of the most significant pain points in Backstage is the installation of plugins. If you want to run Backstage on Kubernetes, for example, you have to rebuild the project and create a new image containing the added plugin. Red Hat Developer Hub solves this problem by using dynamic plugins. In an earlier article about Developer Hub, I demonstrated how to activate selected plugins from the list of built-in extensions. These extensions are available inside the image and only require activation. However, creating your own plugin and adding it to Developer Hub requires a different approach. This article will focus on just such a case.

For comparison, please refer to the article where I demonstrate how to prepare a Backstage instance for running on Kubernetes step-by-step. Today, for the sake of clarity, we will be working in a Developer Hub instance running on OpenShift using an operator. However, we can also easily install Developer Hub on vanilla Kubernetes using a Helm chart.

Source Code

Feel free to use my source code if you’d like to try it out yourself. To do that, you must clone my sample GitHub repository. We will also use another repository with a sample Backstage plugin. This time, I won’t create a plugin myself, but I will use an existing one. The plugin provides a collection of scaffolder actions for interacting with Kubernetes on Backstage, including apply and delete. Once you clone both of those repositories, you should only follow my instructions.

Prerequisites

You must have an OpenShift cluster with the Red Hat Developer Hub installed and configured with a Kubernetes plugin. I will briefly explain it in the next section without getting more into the details. You can find more information in the already mentioned article about Red Hat Developer Hub.

You must also have Node.js, NPM, and Yarn installed and configured on your laptop; they are used to compile and build the plugin. The npm package @janus-idp/cli, used for developing and exporting Backstage plugins as dynamic plugins, requires Podman when working in image mode.

Motivation

Red Hat Developer Hub comes with a curated set of plugins preinstalled on its container image. It is easy to enable such a plugin just by changing the configuration available in Kubernetes ConfigMap. The situation becomes complicated when we attempt to install and configure a third-party plugin. In this case, I would like to extend Backstage with a set of actions that enable the creation and management of Kubernetes resources directly, rather than applying them via Argo CD. To do that, we must install the Backstage Scaffolder Actions for Kubernetes plugin. To use this plugin in Red Hat Developer Hub without rebuilding its image, you must export plugins as derived dynamic plugin packages. This is our goal.

Convert to a dynamic Backstage plugin

The backstage-k8s-scaffolder-actions is a backend plugin. It meets all the requirements to be converted to a dynamic plugin form. It has a valid package.json file in its root directory, containing all required metadata and dependencies. That plugin is compatible with the new Backstage backend system, which means that it was created using createBackendPlugin() or createBackendModule(). Let’s clone its repository first:

$ git clone https://github.com/kirederik/backstage-k8s-scaffolder-actions.git
$ cd backstage-k8s-scaffolder-actions
ShellSession

Then we must install the @janus-idp/cli npm package with the following command:

yarn add @janus-idp/cli
ShellSession

After that, you should run both those commands inside the plugin directory:

$ yarn install
$ yarn build
ShellSession

If the commands were successful, you can proceed with the plugin conversion procedure. The plugin defines some shared dependencies that must be explicitly specified with the --shared-package flag.

Here’s the command used to convert our plugin to a dynamic form supported by Red Hat Developer Hub:

npx @janus-idp/cli@latest package export-dynamic-plugin \
  --shared-package '!@backstage/cli-common' \
  --shared-package '!@backstage/cli-node' \
  --shared-package '!@backstage/config-loader' \
  --shared-package '!@backstage/config' \
  --shared-package '!@backstage/errors' \
  --shared-package '!@backstage/types'
ShellSession

Package and publish Backstage dynamic plugins

After exporting a third-party plugin, you can package the derived package into one of the following supported formats:

  • Open Container Initiative (OCI) image (recommended)
  • TGZ file
  • JavaScript package

Since the OCI image option is recommended, we will proceed accordingly. However, first you must ensure that podman is running on your laptop and is logged in to your container registry.

podman login quay.io
ShellSession

Then you can run the following @janus-idp/cli command with npx. It must specify the target image repository address, including image name and tag. The target image address is quay.io/pminkows/backstage-k8s-scaffolder-actions:v0.5. It is tagged with the latest version of the plugin.

npx @janus-idp/cli@latest package package-dynamic-plugins \
  --tag quay.io/pminkows/backstage-k8s-scaffolder-actions:v0.5
ShellSession

Here’s the command output. Ultimately, it provides instructions on how to install and enable the plugin within the Red Hat Developer Hub configuration. Copy that statement for future use.


The previous command packages the plugin and builds its image.

Finally, let’s push the image with our plugin to the target registry:

podman push quay.io/pminkows/backstage-k8s-scaffolder-actions:v0.5
ShellSession

Install and enable the Backstage dynamic plugins in Developer Hub

My instance of Developer Hub is running in the backstage namespace. The operator manages it.


Here’s the Backstage CR object responsible for creating a Developer Hub instance. The dynamicPluginsConfigMapName property specifies the name of the ConfigMap that stores the plugins’ configuration.

apiVersion: rhdh.redhat.com/v1alpha3
kind: Backstage
metadata:
  name: developer-hub
  namespace: backstage
spec:
  application:
    appConfig:
      configMaps:
        - name: app-config-rhdh
      mountPath: /opt/app-root/src
    dynamicPluginsConfigMapName: dynamic-plugins-rhdh
    extraEnvs:
      secrets:
        - name: app-secrets-rhdh
    extraFiles:
      mountPath: /opt/app-root/src
    replicas: 1
    route:
      enabled: true
  database:
    enableLocalDb: true
YAML

Then, we must modify the dynamic-plugins-rhdh ConfigMap to register our plugin in Red Hat Developer Hub. You must paste the previously copied two lines of code generated by the npx @janus-idp/cli@latest package package-dynamic-plugins command.

kind: ConfigMap
apiVersion: v1
metadata:
  name: dynamic-plugins-rhdh
  namespace: backstage
data:
  dynamic-plugins.yaml: |-
    plugins:
      # ... other plugins

      - package: oci://quay.io/pminkows/backstage-k8s-scaffolder-actions:v0.5!devangelista-backstage-scaffolder-kubernetes
        disabled: false
YAML

That’s all! After the change is applied to the ConfigMap, the operator should restart the pod with Developer Hub. It can take some time, since all plugins must be enabled during pod startup.

$ oc get pod
NAME                                      READY   STATUS    RESTARTS   AGE
backstage-developer-hub-896c5f9d9-vvddb   1/1     Running   0          4m21s
backstage-psql-developer-hub-0            1/1     Running   0          8d
ShellSession

You can verify the logs with the oc logs command. Developer Hub prints a list of available actions provided by the installed plugins. You should see three actions delivered by the Backstage Scaffolder Actions for Kubernetes plugin, starting with the kube: prefix.


Prepare the Backstage template

Finally, we will test the new plugin by calling the kube:apply action from our Backstage template. The template uses kube:apply to create a Secret with a given name in the specified namespace. This template is available in the backstage-templates repository.

apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
  description: Create a Secret in Kubernetes
  name: create-secret
  title: Create a Secret
spec:
  lifecycle: experimental
  owner: user
  type: example
  parameters:
    - properties:
        name:
          description: The namespace name
          title: Name
          type: string
          ui:autofocus: true
      required:
        - name
      title: Namespace Name
    - properties:
        secretName:
          description: The secret name
          title: Secret Name
          type: string
          ui:autofocus: true
      required:
        - secretName
      title: Secret Name
    - title: Cluster Name
      properties:
        cluster:
          type: string
          enum:
            - ocp
          ui:autocomplete:
            options:
              - ocp
  steps:
    - action: kube:apply
      id: k-apply
      name: Create a Resource
      input:
        namespaced: true
        clusterName: ${{ parameters.cluster }}
        manifest: |
          kind: Secret
          apiVersion: v1
          metadata:
            name: ${{ parameters.secretName }}
            namespace: ${{ parameters.name }}
          data:
            username: YWRtaW4=
https://github.com/piomin/backstage-templates/blob/master/templates.yaml

You should import the repository with templates into your Developer Hub instance. The app-config-rhdh ConfigMap should contain the full address of the templates list file in the repository, as well as the Kubernetes cluster address and connection credentials.

catalog:
  rules:
    - allow: [Component, System, API, Resource, Location, Template]
  locations:
    - type: url
      target: https://github.com/piomin/backstage-templates/blob/master/templates.yaml
      rules:
        - allow: [Template, Location]

kubernetes:
  clusterLocatorMethods:
    - clusters:
      - authProvider: serviceAccount
        name: ocp
        serviceAccountToken: ${OPENSHIFT_TOKEN}
        skipTLSVerify: true
        url: https://api.${DOMAIN}:6443
      type: config
  customResources:
    - apiVersion: v1beta1
      group: tekton.dev
      plural: pipelineruns
    - apiVersion: v1beta1
      group: tekton.dev
      plural: taskruns
    - apiVersion: v1
      group: route.openshift.io
      plural: routes
  serviceLocatorMethod:
    type: multiTenant
YAML

You can access the Developer Hub instance through the OpenShift Route:

$ oc get route
NAME                      HOST/PORT                                                                 PATH   SERVICES                  PORT           TERMINATION     WILDCARD
backstage-developer-hub   backstage-developer-hub-backstage.apps.piomin.ewyw.p1.openshiftapps.com   /      backstage-developer-hub   http-backend   edge/Redirect   None
ShellSession

Then find and use the following template available in the Developer Hub.

You will have to enter the Secret name and namespace, and choose a target OpenShift cluster. Then accept the action.

You should see a similar screen. The Secret has been successfully created.


Let’s verify if it exists on our OpenShift cluster:
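For example, assuming you entered my-secret as the Secret name and demo as the namespace in the template form (both names are placeholders for whatever you typed), a quick check could look like this:

```shell
# Confirm the Secret created by the kube:apply scaffolder action exists
oc get secret my-secret -n demo

# Decode the username key defined in the template manifest
oc get secret my-secret -n demo -o jsonpath='{.data.username}' | base64 -d
# → admin
```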

Final Thoughts

Red Hat is steadily developing its Backstage-based product, adding new functionality with each version. The ability to easily create and install custom plugins in Developer Hub appears to be a key element in building an attractive platform for developers. This article focused on demonstrating how to convert a standard Backstage plugin into the dynamic form supported by Red Hat Developer Hub.

The Art of Argo CD ApplicationSet Generators with Kubernetes https://piotrminkowski.com/2025/03/20/the-art-of-argo-cd-applicationset-generators-with-kubernetes/ https://piotrminkowski.com/2025/03/20/the-art-of-argo-cd-applicationset-generators-with-kubernetes/#comments Thu, 20 Mar 2025 09:40:46 +0000 https://piotrminkowski.com/?p=15624 This article will teach you how to use the Argo CD ApplicationSet generators to manage your Kubernetes cluster using a GitOps approach. An Argo CD ApplicationSet is a Kubernetes resource that allows us to manage and deploy multiple Argo CD Applications. It dynamically generates multiple Argo CD Applications based on a given template. As a […]

The post The Art of Argo CD ApplicationSet Generators with Kubernetes appeared first on Piotr's TechBlog.

This article will teach you how to use the Argo CD ApplicationSet generators to manage your Kubernetes cluster using a GitOps approach. An Argo CD ApplicationSet is a Kubernetes resource that allows us to manage and deploy multiple Argo CD Applications. It dynamically generates multiple Argo CD Applications based on a given template. As a result, we can deploy applications across multiple Kubernetes clusters, create applications for different environments (e.g., dev, staging, prod), and manage many repositories or branches. Everything can be easily achieved with a minimal source code effort.

Argo CD ApplicationSet supports several different generators. In this article, we will focus on the Git generator type. It generates Argo CD Applications based on directory structure or branch changes in a Git repository. It contains two subtypes: the Git directory generator and the Git file generator. If you are interested in other Argo CD ApplicationSet generators, you can find some articles on my blog. For example, the following post shows how to use the List generator to promote images between environments. You can also find a post about the Cluster Decision Resource generator, which shows how to spread applications dynamically between multiple Kubernetes clusters.

Source Code

Feel free to use my source code if you’d like to try it out yourself. To do that, you must clone my sample GitHub repository. You must go to the appset-helm-demo directory, which contains the whole configuration required for that exercise. Then you should only follow my instructions.

Argo CD Installation

Argo CD is the only tool we need to install on our Kubernetes cluster for this exercise. We can use the official Helm chart to install it on Kubernetes. First, let’s add the following Helm repository:

helm repo add argo https://argoproj.github.io/argo-helm
ShellSession

After that, we can install ArgoCD in the current Kubernetes cluster in the argocd namespace using the following command:

helm install my-argo-cd argo/argo-cd -n argocd
ShellSession

I use OpenShift in this exercise. With the OpenShift Console, I can easily install Argo CD on the cluster using the OpenShift GitOps operator.

Once it is installed, we can easily access the Argo CD dashboard.

We can sign in there using OpenShift credentials.

Motivation

Our goal in this exercise is to deploy and run some applications (a simple Java app and a Postgres database) on Kubernetes with minimal source code effort. These two applications merely demonstrate how to create a standard that can be easily applied to any application type deployed on our cluster. In this standard, a directory structure determines how and where our applications are deployed on Kubernetes. My example configuration is stored in a single Git repository. However, we can easily extend it to multiple repositories, where Argo CD switches between the central repository and other Git repositories containing the configuration for concrete applications.

Here’s the directory structure and files for deploying our two applications. Both the custom app and the Postgres database are deployed in three environments: test, uat, and prod. We use Helm charts to deploy them. Each environment directory contains a Helm values file with installation parameters. The configuration distinguishes two different types of installation: apps and components. Each app is installed using the same Helm chart dedicated to a standard deployment. Each component is installed using a custom Helm chart provided by that component. For example, for Postgres, we will use the following Bitnami chart.

.
├── apps
│   ├── aaa-1
│   │   └── basic
│   │       ├── prod
│   │       │   └── values.yaml
│   │       ├── test
│   │       │   └── values.yaml
│   │       ├── uat
│   │       │   └── values.yaml
│   │       └── values.yaml
│   ├── aaa-2
│   └── aaa-3
└── components
    └── aaa-1
        └── postgresql
            ├── prod
            │   ├── config.yaml
            │   └── values.yaml
            ├── test
            │   ├── config.yaml
            │   └── values.yaml
            └── uat
                ├── config.yaml
                └── values.yaml
ShellSession

Before deploying the applications, we should prepare namespaces with quotas, Argo CD projects, and ApplicationSet generators for managing application deployments. Here’s the structure of the global configuration repository. It also uses a Helm chart to apply that part of the manifests to the Kubernetes cluster. Each directory inside the projects directory determines a project name. A project, in turn, contains several Kubernetes namespaces and may contain several different Kubernetes Deployments.

.
└── projects
    ├── aaa-1
    │   └── values.yaml
    ├── aaa-2
    │   └── values.yaml
    └── aaa-3
        └── values.yaml
ShellSession

Prepare Global Cluster Configuration

Helm Template for Namespaces and Quotas

Here’s the Helm template for creating namespaces and quotas. We will create a project namespace for each environment (stage).

{{- range .Values.stages }}
---
apiVersion: v1
kind: Namespace
metadata:
  name: {{ $.Values.projectName }}-{{ .name }}
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: default-quota
  namespace: {{ $.Values.projectName }}-{{ .name }}
spec:
  hard:
    {{- if .config }}
    {{- with .config.quotas }}
    pods: {{ .pods | default "10" }}
    requests.cpu: {{ .cpuRequest | default "2" }}
    requests.memory: {{ .memoryRequest | default "2Gi" }}
    limits.cpu: {{ .cpuLimit | default "8" }}
    limits.memory: {{ .memoryLimit | default "8Gi" }}
    {{- end }}
    {{- else }}
    pods: "10"
    requests.cpu: "2"
    requests.memory: "2Gi"
    limits.cpu: "8"
    limits.memory: "8Gi"
    {{- end }}
{{- end }}
chart/templates/namespace.yaml
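For illustration, here is roughly what this template renders to for a hypothetical project named aaa-1 with a single test stage and no custom quotas (so the defaults from the else branch apply):

```yaml
# Sketch of the rendered output for projectName=aaa-1
# and one stage named "test" without a config section
---
apiVersion: v1
kind: Namespace
metadata:
  name: aaa-1-test
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: default-quota
  namespace: aaa-1-test
spec:
  hard:
    pods: "10"
    requests.cpu: "2"
    requests.memory: "2Gi"
    limits.cpu: "8"
    limits.memory: "8Gi"
```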

Helm Template for the Argo CD AppProject

The Helm chart will also create a dedicated Argo CD AppProject object for each project.

apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: {{ .Values.projectName }}
  namespace: {{ .Values.argoNamespace | default "argocd" }}
spec:
  clusterResourceWhitelist:
    - group: '*'
      kind: '*'
  destinations:
    - namespace: '*'
      server: '*'
  sourceRepos:
    - '*'
chart/templates/appproject.yaml

Helm Template for Argo CD ApplicationSet

After that, we can proceed to the trickiest part of our exercise. The Helm chart also defines a template for creating the Argo CD ApplicationSets. These ApplicationSets must analyze the repository structure, which contains the configuration of apps and components. We define two ApplicationSets for each project. The first uses the Git directory generator to determine the structure of the apps catalog and deploy the apps in all environments using my custom spring-boot-api-app chart. The chart parameters can be overridden with Helm values placed in each app directory.

The second ApplicationSet uses the Git file generator to determine the structure of the components catalog. It reads the contents of the config.yaml file in each directory. The config.yaml file sets the repository, name, and version of the Helm chart that must be used to install the component on Kubernetes.

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: '{{ .Values.projectName }}-apps-config'
  namespace: {{ .Values.argoNamespace | default "argocd" }}
spec:
  goTemplate: true
  generators:
    - git:
        repoURL: https://github.com/piomin/argocd-showcase.git
        revision: HEAD
        directories:
          {{- range .Values.stages }}
          - path: appset-helm-demo/apps/{{ $.Values.projectName }}/*/{{ .name }}
          {{- end }}
  template:
    metadata:
      name: '{{`{{ index .path.segments 3 }}`}}-{{`{{ index .path.segments 4 }}`}}'
    spec:
      destination:
        namespace: '{{`{{ index .path.segments 2 }}`}}-{{`{{ index .path.segments 4 }}`}}'
        server: 'https://kubernetes.default.svc'
      project: '{{ .Values.projectName }}'
      sources:
        - chart: spring-boot-api-app
          repoURL: 'https://piomin.github.io/helm-charts/'
          targetRevision: 0.3.8
          helm:
            valueFiles:
              - $values/appset-helm-demo/apps/{{ .Values.projectName }}/{{`{{ index .path.segments 3 }}`}}/{{`{{ index .path.segments 4 }}`}}/values.yaml
            parameters:
              - name: appName
                value: '{{ .Values.projectName }}'
        - repoURL: 'https://github.com/piomin/argocd-showcase.git'
          targetRevision: HEAD
          ref: values
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
---
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: '{{ .Values.projectName }}-components-config'
  namespace: {{ .Values.argoNamespace | default "argocd" }}
spec:
  goTemplate: true
  generators:
    - git:
        repoURL: https://github.com/piomin/argocd-showcase.git
        revision: HEAD
        files:
          {{- range .Values.stages }}
          - path: appset-helm-demo/components/{{ $.Values.projectName }}/*/{{ .name }}/config.yaml
          {{- end }}
  template:
    metadata:
      name: '{{`{{ index .path.segments 3 }}`}}-{{`{{ index .path.segments 4 }}`}}'
    spec:
      destination:
        namespace: '{{`{{ index .path.segments 2 }}`}}-{{`{{ index .path.segments 4 }}`}}'
        server: 'https://kubernetes.default.svc'
      project: '{{ .Values.projectName }}'
      sources:
        - chart: '{{`{{ .chart.name }}`}}'
          repoURL: '{{`{{ .chart.repository }}`}}'
          targetRevision: '{{`{{ .chart.version }}`}}'
          helm:
            valueFiles:
              - $values/appset-helm-demo/components/{{ .Values.projectName }}/{{`{{ index .path.segments 3 }}`}}/{{`{{ index .path.segments 4 }}`}}/values.yaml
            parameters:
              - name: appName
                value: '{{ .Values.projectName }}'
        - repoURL: 'https://github.com/piomin/argocd-showcase.git'
          targetRevision: HEAD
          ref: values
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
chart/templates/applicationsets.yaml

There are several essential elements in this configuration that we should pay attention to. Both Helm and the ApplicationSet use templating engines based on {{ ... }} placeholders. So, to avoid conflicts, we should separate the Argo CD ApplicationSet templating elements from the Helm templating elements. The part of the template responsible for generating the Argo CD Application name is a good example of that approach: '{{`{{ index .path.segments 3 }}`}}-{{`{{ index .path.segments 4 }}`}}'. Here we use the ApplicationSet Git generator expression index .path.segments 3, which returns the segment at index 3 (the fourth part, since indexing is zero-based) of the directory path. These elements are wrapped in the ` char so Helm doesn’t try to interpret them.
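To make the escaping concrete, consider the stages a single placeholder passes through. The directory path below (my-app) is a hypothetical example, not taken from the repository:

```yaml
# 1. In the chart source, the ApplicationSet placeholder is wrapped
#    in backticks inside a Helm {{ }} block:
name: '{{`{{ index .path.segments 3 }}`}}-{{`{{ index .path.segments 4 }}`}}'

# 2. After 'helm template' renders the chart, only the inner
#    placeholder remains for the ApplicationSet controller:
name: '{{ index .path.segments 3 }}-{{ index .path.segments 4 }}'

# 3. For a matched path appset-helm-demo/apps/aaa-1/my-app/test
#    (segments indexed 0..4), the Git generator resolves it to:
name: 'my-app-test'
```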

Helm Chart Structure

Our ApplicationSets use the “Multiple Sources for Application” feature to read parameters from Helm values files and inject them into the Helm chart from a remote repository. Thanks to that, our configuration repositories for apps and components contain only values.yaml files in the standardized directory structure. The only chart we store in the sample repository has been described above and is responsible for creating the configuration required to run app Deployments on the cluster.

.
└── chart
    ├── Chart.yaml
    ├── templates
    │   ├── additional.yaml
    │   ├── applicationsets.yaml
    │   ├── appproject.yaml
    │   └── namespaces.yaml
    └── values.yaml
ShellSession

By default, each project defines three environments (stages): test, uat, prod.

stages:
  - name: test
    additionalObjects: {}
  - name: uat
    additionalObjects: {}
  - name: prod
    additionalObjects: {}
chart/values.yaml

We can override the default behavior for a specific project in Helm values. Each project directory contains a values.yaml file. Here are the Helm parameters for the aaa-3 project that override the CPU request quota from 2 to 4 CPUs, but only for the test environment.

stages:
  - name: test
    config:
      quotas:
        cpuRequest: 4
    additionalObjects: {}
  - name: uat
    additionalObjects: {}
  - name: prod
    additionalObjects: {}
projects/aaa-3/values.yaml

Run the Synchronization Process

Generate Global Structure on the Cluster

To start the process, we must create an ApplicationSet that reads the structure of the projects directory. Each subdirectory in the projects directory indicates the name of a project. Our ApplicationSet uses a Git directory generator to create one Argo CD Application per project. Its name consists of the subdirectory name and the config suffix. Each generated Application uses the previously described Helm chart to create all the namespaces, quotas, and other resources requested by the project. It also leverages the "Multiple Sources for Application" feature to let us override the default Helm chart settings. It reads the project name from the directory name and passes it as a parameter to the generated Argo CD Application.

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: global-config
  namespace: openshift-gitops
spec:
  goTemplate: true
  generators:
    - git:
        repoURL: https://github.com/piomin/argocd-showcase.git
        revision: HEAD
        directories:
          - path: appset-helm-demo/projects/*
  template:
    metadata:
      name: '{{.path.basename}}-config'
    spec:
      destination:
        namespace: '{{.path.basename}}'
        server: 'https://kubernetes.default.svc'
      project: default
      sources:
        - path: appset-helm-demo/chart
          repoURL: 'https://github.com/piomin/argocd-showcase.git'
          targetRevision: HEAD
          helm:
            valueFiles:
              - $values/appset-helm-demo/projects/{{.path.basename}}/values.yaml
            parameters:
              - name: projectName
                value: '{{.path.basename}}'
              - name: argoNamespace
                value: openshift-gitops
        - repoURL: 'https://github.com/piomin/argocd-showcase.git'
          targetRevision: HEAD
          ref: values
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
YAML

Once we create the global-config ApplicationSet object, the magic happens. Here's the list of Argo CD Applications generated from the directories in our Git configuration repository.

argo-cd-applicationset-all-apps

First, there are three Argo CD Applications with the projects' configuration. That's because we defined three subdirectories in the projects directory: aaa-1, aaa-2, and aaa-3.

The configuration applied by those Argo CD Applications is pretty similar since they are using the same Helm chart. We can look at the list of resources managed by the aaa-3-config Application. There are three namespaces (aaa-3-test, aaa-3-uat, aaa-3-prod) with resource quotas, a single Argo CD AppProject, and two ApplicationSet objects responsible for generating Argo CD Applications for apps and components directories.

argo-cd-applicationset-global-config

In this configuration, we can verify that the requests.cpu value of the ResourceQuota object has been overridden from 2 to 4 CPUs.
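For reference, the rendered ResourceQuota for the test namespace might then look roughly like this (the object name is hypothetical; requests.cpu is the standard Kubernetes quota key):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: aaa-3-quota        # hypothetical name generated by the chart
  namespace: aaa-3-test
spec:
  hard:
    requests.cpu: "4"      # overridden from the default of 2 for this stage
```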

Let’s analyze what happened. Here’s a list of Argo CD ApplicationSets. The global-config ApplicationSet generated Argo CD Application per each detected project inside the projects directory. Then, each of these Applications applied two ApplicationSet objects to cluster using the Helm template.

$ kubectl get applicationset
NAME                      AGE
aaa-1-components-config   29m
aaa-1-apps-config         29m
aaa-2-components-config   29m
aaa-2-apps-config         29m
aaa-3-components-config   29m
aaa-3-apps-config         29m
global-config             29m
ShellSession

There’s also a list of created namespaces:

$ kubectl get ns
NAME                                               STATUS   AGE
aaa-1-prod                                         Active   34m
aaa-1-test                                         Active   34m
aaa-1-uat                                          Active   34m
aaa-2-prod                                         Active   34m
aaa-2-test                                         Active   34m
aaa-2-uat                                          Active   34m
aaa-3-prod                                         Active   34m
aaa-3-test                                         Active   34m
aaa-3-uat                                          Active   34m
ShellSession

Generate and Apply Deployments

Our sample configuration contains only two Deployments. We defined the basic subdirectory in the apps directory and the postgres subdirectory in the components directory inside the aaa-1 project. For simplicity, the aaa-2 and aaa-3 projects don't contain any Deployments. However, the more subdirectories with a values.yaml file we create there, the more applications will be deployed on the cluster. Here's a typical values.yaml file for a simple app deployed with a standard Helm chart. It defines the image repository, name, and tag. It also sets the Deployment name and environment.

image:
  repository: piomin/basic
  tag: 1.0.0
app:
  name: basic
  environment: prod
YAML

For the postgres component we must set more parameters in Helm values. Here’s the final list:

global:
  compatibility:
    openshift:
      adaptSecurityContext: force

image:
  tag: 1-54
  registry: registry.redhat.io
  repository: rhel9/postgresql-15

primary:
  containerSecurityContext:
    readOnlyRootFilesystem: false
  persistence:
    mountPath: /var/lib/pgsql
  extraEnvVars:
    - name: POSTGRESQL_ADMIN_PASSWORD
      value: postgresql123

postgresqlDataDir: /var/lib/pgsql/data
YAML

The following Argo CD Application has been generated by the aaa-1-apps-config ApplicationSet. It detected the basic subdirectory in the apps directory. The basic subdirectory contains three subdirectories (test, uat, and prod), each with a values.yaml file. As a result, we have one Argo CD Application per environment, responsible for deploying the basic app in the target namespace.

argo-cd-applicationset-basic-apps

Here’s a list of resources managed by the basic-prod Application. It uses my custom Helm chart and applies Deployment and Service objects to the cluster.

The following Argo CD Application has been generated by the aaa-1-components-config ApplicationSet. It detected the postgres subdirectory in the components directory. The postgres subdirectory contains three subdirectories (test, uat, and prod), each with values.yaml and config.yaml files. The ApplicationSet Files generator reads the chart repository, name, and version from the config.yaml file.

Here’s the config.yaml file with the Bitnami Postgres chart settings. We could place here any other chart we want to install something else on the cluster.

chart:
  repository: https://charts.bitnami.com/bitnami
  name: postgresql
  version: 15.5.38
components/aaa-1/postgresql/prod/config.yaml

Here’s the list of resources installed by the Bitnami Helm chart used by the generated Argo CD Applications.

argo-cd-applicationset-postgres

Final Thoughts

This article demonstrates that Argo CD ApplicationSet and Helm templates can be used together to create advanced configuration structures. It shows how to use the ApplicationSet Git Directory and Files generators to analyze the structure of directories and files in the Git config repository. With that approach, we can standardize the configuration structure across the whole organization and propagate it consistently to all the applications deployed in our Kubernetes clusters. Everything can be managed at the cluster admin level with a single global Argo CD ApplicationSet that accesses many different repositories with configuration.

The post The Art of Argo CD ApplicationSet Generators with Kubernetes appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2025/03/20/the-art-of-argo-cd-applicationset-generators-with-kubernetes/feed/ 2 15624
Continuous Promotion on Kubernetes with GitOps https://piotrminkowski.com/2025/01/14/continuous-promotion-on-kubernetes-with-gitops/ https://piotrminkowski.com/2025/01/14/continuous-promotion-on-kubernetes-with-gitops/#respond Tue, 14 Jan 2025 12:36:44 +0000 https://piotrminkowski.com/?p=15464 This article will teach you how to continuously promote application releases between environments on Kubernetes using the GitOps approach. Promotion between environments is one of the most challenging aspects in the continuous delivery process realized according to the GitOps principles. That’s because typically we manage that process independently per each environment by providing changes in […]

The post Continuous Promotion on Kubernetes with GitOps appeared first on Piotr's TechBlog.

]]>
This article will teach you how to continuously promote application releases between environments on Kubernetes using the GitOps approach. Promotion between environments is one of the most challenging aspects of a continuous delivery process implemented according to GitOps principles. That's because we typically manage that process independently for each environment by committing changes to the Git configuration repository. If we use Argo CD, it comes down to creating three Application CRDs that refer to different paths or files inside a repository. Each Application is responsible for synchronizing changes to e.g. a specified namespace in the Kubernetes cluster. In that case, a promotion is driven by a commit in the part of the repository responsible for managing a given environment. Here's a diagram that illustrates the described scenario.
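For illustration, the classic setup described above could be sketched as three hand-maintained Applications, one per environment, differing only in the tracked path and target namespace (the repository URL and paths below are hypothetical):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo-test
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/config-repo.git  # hypothetical repo
    targetRevision: HEAD
    path: envs/test       # committing here promotes a change to test
  destination:
    server: https://kubernetes.default.svc
    namespace: demo-test
# demo-uat and demo-prod are analogous, pointing at envs/uat and envs/prod.
```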

kubernetes-promote-arch

The solution to these challenges is Kargo, an open-source tool implementing continuous promotion within CI/CD pipelines. It provides a structured mechanism for promoting changes in complex environments involving Kubernetes and GitOps. I've been following Kargo for several months. It's an interesting tool that provides stage-to-stage promotion using GitOps principles with Argo CD. It reached its first stable version in October 2024. Let's take a closer look at it.

If you are interested in promotion between environments on Kubernetes using the GitOps approach, you can read about another tool that tackles that challenge – Devtron. Here’s the link to my article that explains its concept.

Understand the Concept

Before we start with Kargo, we need to understand the concept around that tool. Let’s analyze several basic terms defined and implemented by Kargo.

A project is a collection of related Kargo resources that describe one or more delivery pipelines. It's the basic unit of organization and multi-tenancy in Kargo. Every Kargo project has its own cluster-scoped Kubernetes resource of type Project. We should put all the resources related to a given project into the same Kubernetes namespace.

A stage represents an environment in Kargo. Stages are the most important concept in Kargo. We can link them together in a directed acyclic graph to describe a delivery pipeline. Typically, a delivery pipeline starts with a test or dev stage and ends with one or more prod stages.

A Freight object represents resources that Kargo promotes from one stage to another. It can reference one or more versioned artifacts, such as container images, Kubernetes manifests loaded from Git repositories, or Helm charts from chart repositories. 

A warehouse is a source of freight. It can refer to container image repositories, Git, or Helm chart repositories.

In that context, we should treat a promotion as a request to move a piece of freight into a specified stage.
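To make these terms concrete, the project used later in this article could be declared with a minimal Project resource; Kargo then expects the project's Warehouses, Stages, and Freight in the namespace of the same name:

```yaml
# Minimal, illustrative Kargo Project declaration.
apiVersion: kargo.akuity.io/v1alpha1
kind: Project
metadata:
  name: demo   # the Warehouse and Stage objects shown later live in the "demo" namespace
```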

Source Code

If you want to try out this exercise, go ahead and take a look at my source code. To do that, just clone my GitHub repository. It contains the sample Spring Boot application in the basic directory. The application exposes a single REST endpoint that returns the app Maven version number. Go to that directory. Then, you can follow my further instructions.

Kargo Installation

A few things must be ready before installing Kargo. We must have a Kubernetes cluster and the helm CLI installed on our laptop. I use Minikube. Kargo integrates with Cert-Manager, Argo CD, and Argo Rollouts. We can install all those tools using official Helm charts. Let’s begin with Cert-Manager. First, we must add the jetstack Helm repository:

helm repo add jetstack https://charts.jetstack.io
ShellSession

Here’s the helm command that installs it in the cert-manager namespace.

helm install cert-manager --namespace cert-manager jetstack/cert-manager \
  --create-namespace \
  --set crds.enabled=true \
  --set crds.keep=true
ShellSession

To install Argo CD, we must first add the following Helm repository:

helm repo add argo https://argoproj.github.io/argo-helm
ShellSession

Then, we can install Argo CD in the argocd namespace.

helm install argo-cd argo/argo-cd --namespace argocd --create-namespace
ShellSession

The Argo Rollouts chart is located in the same Helm repository as the Argo CD chart. We will also install it in the argocd namespace:

helm install my-argo-rollouts argo/argo-rollouts --namespace argocd
ShellSession

Finally, we can proceed to the Kargo installation. We will install it in the kargo namespace. The installation command sets two Helm parameters. We should set the Bcrypt password hash and a key used to sign JWT tokens for the admin account:

helm install kargo \
  oci://ghcr.io/akuity/kargo-charts/kargo \
  --namespace kargo \
  --create-namespace \
  --set api.adminAccount.passwordHash='$2y$10$xu2U.Ux5nV5wKmerGcrDlO261YeiTlRrcp2ngDGPxqXzDyiPQvDXC' \
  --set api.adminAccount.tokenSigningKey=piomin \
  --wait
ShellSession

We can generate and print the password hash using e.g. htpasswd. Here’s the sample command:

htpasswd -nbB admin 123456
ShellSession

After installation is finished, we can verify it by displaying a list of pods running in the kargo namespace.

$ kubectl get po -n kargo
NAME                                          READY   STATUS    RESTARTS   AGE
kargo-api-dbb4d5cb7-zvnc6                     1/1     Running   0          44s
kargo-controller-c4964bbb7-4ngnv              1/1     Running   0          44s
kargo-management-controller-dc5569759-596ch   1/1     Running   0          44s
kargo-webhooks-server-6df6dd58c-g5jlp         1/1     Running   0          44s
ShellSession

Kargo provides a UI dashboard that allows us to display and manage continuous promotion configuration. Let’s expose it locally on the 8443 port using a port-forward feature:

kubectl port-forward svc/kargo-api -n kargo 8443:443
ShellSession

Once we sign in to the dashboard using the admin password set during the installation, we can create a new kargo-demo project:

Sample Application

Our sample application is simple. It exposes a single GET /basic/ping endpoint that returns the version number read from Maven pom.xml.

@RestController
@RequestMapping("/basic")
public class BasicController {

    @Autowired
    Optional<BuildProperties> buildProperties;

    @GetMapping("/ping")
    public String ping() {
        return "I'm basic:" + buildProperties.orElseThrow().getVersion();
    }
}
Java

We will build an application image using the Jib Maven Plugin. It is already configured in Maven pom.xml. I set my Docker Hub as the target registry, but you can replace it with your own account.

<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <version>3.4.4</version>
  <configuration>
    <container>
      <user>1001</user>
    </container>
    <to>
      <image>piomin/basic:${project.version}</image>
    </to>
  </configuration>
</plugin>
XML

The following Maven command builds the application and its image from a source code. Before the build, we should increase the version number in the project.version field in pom.xml. We begin with the 1.0.0 version, which should be pushed to your registry before proceeding.

mvn clean package -DskipTests jib:build
ShellSession

Here’s the result of my initial build.

Let’s switch to the Docker Registry dashboard after pushing the 1.0.0 version.

Configure Kargo for Promotion on Kubernetes

First, we will create the Kargo Warehouse object. It refers to the piomin/basic repository containing the image with our sample app. The Warehouse object is responsible for discovering new image tags pushed into the registry. We also use my Helm chart to deploy the image to Kubernetes. However, we will only use the latest version of that chart. Otherwise, we should also place that chart inside the basic Warehouse object to enable new chart version discovery.

apiVersion: kargo.akuity.io/v1alpha1
kind: Warehouse
metadata:
 name: basic
 namespace: demo
spec:
 subscriptions:
 - image:
     discoveryLimit: 5
     repoURL: piomin/basic
YAML

Then, we will create the Argo CD ApplicationSet to generate an Application per environment. There are three environments: test, uat, and prod. Each Argo CD Application must be annotated with kargo.akuity.io/authorized-stage, containing the project and stage name. The Application uses multiple sources. The argocd-showcase repository contains Helm values files with parameters for each stage. The piomin.github.io/helm-charts repository provides the spring-boot-api-app Helm chart that refers to those values.

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
 name: demo
 namespace: argocd
spec:
 generators:
 - list:
     elements:
     - stage: test
     - stage: uat
     - stage: prod
 template:
   metadata:
     name: demo-{{stage}}
     annotations:
       kargo.akuity.io/authorized-stage: demo:{{stage}}
   spec:
     project: default
     sources:
       - chart: spring-boot-api-app
         repoURL: 'https://piomin.github.io/helm-charts/'
         targetRevision: 0.3.8
         helm:
           valueFiles:
             - $values/values/values-{{stage}}.yaml
       - repoURL: 'https://github.com/piomin/argocd-showcase.git'
         targetRevision: HEAD
         ref: values
     destination:
       server: https://kubernetes.default.svc
       namespace: demo-{{stage}}
     syncPolicy:
       syncOptions:
       - CreateNamespace=true
YAML

Now, we can proceed to the most complex element of our exercise – stage creation. The Stage object refers to the previously created Warehouse object to request freight to promote. Then, it defines the steps to perform during the promotion process. Kargo’s promotion steps define the workflow of a promotion process. They do the things needed to promote a piece of freight into the next stage. We can use several built-in steps that cover the most common operations like cloning Git repo, updating Helm values, or pushing changes to the remote repository. Our stage definition contains five steps. After cloning the repository with Helm values, we must update the image.tag parameter in the values-test.yaml file with the tag value read from the basic Warehouse. Then, Kargo commits and pushes changes to the configuration repository and triggers Argo CD application synchronization.

apiVersion: kargo.akuity.io/v1alpha1
kind: Stage
metadata:
 name: test
 namespace: demo
spec:
 requestedFreight:
 - origin:
     kind: Warehouse
     name: basic
   sources:
     direct: true
 promotionTemplate:
   spec:
     vars:
     - name: gitRepo
       value: https://github.com/piomin/argocd-showcase.git
     - name: imageRepo
       value: piomin/basic
     steps:
       - uses: git-clone
         config:
           repoURL: ${{ vars.gitRepo }}
           checkout:
           - branch: master
             path: ./out
       - uses: helm-update-image
         as: update-image
         config:
           path: ./out/values/values-${{ ctx.stage }}.yaml
           images:
           - image: ${{ vars.imageRepo }}
             key: image.tag
             value: Tag
       - uses: git-commit
         as: commit
         config:
           path: ./out
           messageFromSteps:
           - update-image
       - uses: git-push
         config:
           path: ./out
       - uses: argocd-update
         config:
           apps:
           - name: demo-${{ ctx.stage }}
             sources:
             - repoURL: ${{ vars.gitRepo }}
               desiredRevision: ${{ outputs.commit.commit }}
YAML

Here’s the values-test.yaml file in the Argo CD configuration repository.

image:
  repository: piomin/basic
  tag: 1.0.0
app:
  name: basic
  environment: test
values-test.yaml

Here’s the Stage definition of the uat environment. It is pretty similar to the definition of the test environment. It just defines the previous source stage to test.

apiVersion: kargo.akuity.io/v1alpha1
kind: Stage
metadata:
  name: uat
  namespace: demo
spec:
 requestedFreight:
 - origin:
     kind: Warehouse
     name: basic
   sources:
     stages:
       - test
 promotionTemplate:
   spec:
     vars:
     - name: gitRepo
       value: https://github.com/piomin/argocd-showcase.git
     - name: imageRepo
       value: piomin/basic
     steps:
       - uses: git-clone
         config:
           repoURL: ${{ vars.gitRepo }}
           checkout:
           - branch: master
             path: ./out
       - uses: helm-update-image
         as: update-image
         config:
           path: ./out/values/values-${{ ctx.stage }}.yaml
           images:
           - image: ${{ vars.imageRepo }}
             key: image.tag
             value: Tag
       - uses: git-commit
         as: commit
         config:
           path: ./out
           messageFromSteps:
           - update-image
       - uses: git-push
         config:
           path: ./out
       - uses: argocd-update
         config:
           apps:
           - name: demo-${{ ctx.stage }}
             sources:
             - repoURL: ${{ vars.gitRepo }}
               desiredRevision: ${{ outputs.commit.commit }}
YAML

Perform Promotion Process

Once a new image tag is published to the registry, it becomes visible in the Kargo Dashboard. We must click the “Promote into Stage” button to promote a selected version to the target stage.

Then we should specify the source image tag. The choice is obvious, since we only have the image tagged with 1.0.0. After approving the selection by clicking the "Yes" button, Kargo starts the promotion process on Kubernetes.

kubernetes-promote-initial-deploy

After a while, we should have a new image promoted to the test stage.

Let’s repeat the promotion process of the 1.0.0 version for the other two stages. Each stage should be at a healthy status. That status is read directly from the corresponding Argo CD Application.

kubernetes-promote-all-initial

Let’s switch to the Argo CD dashboard. There are three applications.

We can make a test call of the sample application HTTP endpoint. Currently, all the environments run the same 1.0.0 version of the app. Let’s enable port forwarding for the basic service in the demo-prod namespace.

kubectl port-forward svc/basic -n demo-prod 8080:8080
ShellSession

The endpoint returns the application name and version as a response.

$ curl http://localhost:8080/basic/ping
I'm basic:1.0.0
ShellSession

Then, we will build a few more versions of the basic application, from 1.0.1 to 1.0.5.

Once we push each version to the registry, we can refresh the list of images. With the default configuration, Kargo should add the latest version to the list. After pushing the 1.0.3 version, I promoted it to the test stage. Then I refreshed the list after pushing the 1.0.4 tag. Now, the 1.0.3 tag can be promoted to a higher environment. In the illustration below, I'm promoting it to the uat stage.

kubernetes-promote-accept

After that, we can promote the 1.0.4 version to the test stage and refresh the list of images once again to see the newly pushed 1.0.5 tag.

Here’s another promotion. This time, I moved the 1.0.3 version to the prod stage.

After clicking on the image tag tile, we will see its details. For example, the 1.0.3 tag has been verified on the test and uat stages. There is also an approval section. However, we haven't approved any freight yet. To do that, we need to switch to the kargo CLI.

The kargo CLI binary for a particular OS is available on the project's GitHub releases page. We must download it and copy it to a directory on the PATH. Then, we can sign in to the Kargo server running on Kubernetes using the admin credentials.

kargo login https://localhost:8443 --admin \
  --password 123456 \
  --insecure-skip-tls-verify
ShellSession

We can approve a specific freight. Let’s display a list of Freight objects.

$ kubectl get freight -n demo
NAME                                       ALIAS            ORIGIN (KIND)   ORIGIN (NAME)   AGE
0501f8d8018a953821ea437078c1ec34e6db5a6b   ideal-horse      Warehouse       basic           8m7s
683be41b1cef57ed755fc7a0f8e8d7776f90c63a   wiggly-tuatara   Warehouse       basic           11m
6a1219425ddeceabfe94f0605d7a5f6d9d20043e   ulterior-zebra   Warehouse       basic           13m
f737178af6492ea648f85fc7d082e34b7a085927   eager-snail      Warehouse       basic           7h49m
ShellSession

The kargo approve command takes the Freight ID as an input parameter.

kargo approve --freight 6a1219425ddeceabfe94f0605d7a5f6d9d20043e \
  --stage uat \
  --project demo
ShellSession

Now, the image tag details window should display the approved stage name.

kubernetes-promote-approved

Let’s enable port forwarding for the basic service in the demo-uat namespace.

kubectl port-forward svc/basic -n demo-uat 8080:8080
ShellSession

Then we call the /basic/ping endpoint to check out the current version.

$ curl http://localhost:8080/basic/ping
I'm basic:1.0.3
ShellSession

Final Thoughts

This article explains the idea behind continuous app promotion between environments on Kubernetes with Kargo and Argo CD. Kargo is a relatively new project in the Kubernetes ecosystem that smoothly addresses the challenges related to GitOps promotion. It seems promising, and I will closely monitor the further development of this project.

The post Continuous Promotion on Kubernetes with GitOps appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2025/01/14/continuous-promotion-on-kubernetes-with-gitops/feed/ 0 15464
Spring Boot on Kubernetes with Eclipse JKube https://piotrminkowski.com/2024/10/03/spring-boot-on-kubernetes-with-eclipse-jkube/ https://piotrminkowski.com/2024/10/03/spring-boot-on-kubernetes-with-eclipse-jkube/#comments Thu, 03 Oct 2024 16:51:39 +0000 https://piotrminkowski.com/?p=15398 This article will teach you how to use the Eclipse JKube project to build images and generate Kubernetes manifests for the Spring Boot application. Eclipse JKube is a collection of plugins and libraries that we can use to build container images using Docker, Jib, or source-2-image (S2I) build strategies. It also generates and deploys Kubernetes […]

The post Spring Boot on Kubernetes with Eclipse JKube appeared first on Piotr's TechBlog.

]]>
This article will teach you how to use the Eclipse JKube project to build images and generate Kubernetes manifests for the Spring Boot application. Eclipse JKube is a collection of plugins and libraries that we can use to build container images using Docker, Jib, or source-2-image (S2I) build strategies. It also generates and deploys Kubernetes and OpenShift manifests at compile time. We can include it as the Maven or Gradle plugin to use it during our build process. On the other hand, Spring Boot doesn’t provide any built-in tools to simplify deployment to Kubernetes. It only provides the build-image goal within the Spring Boot Maven and Gradle plugins dedicated to building container images with Cloud Native Buildpacks. Let’s check out how Eclipse JKube can simplify our interaction with Kubernetes. By the way, it also provides tools for watching, debugging, and logging. 
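As an illustrative sketch (the version below is an assumption; check the Eclipse JKube releases for the current one), adding the Kubernetes Maven plugin to pom.xml is enough to get goals such as k8s:build (image), k8s:resource (manifests), and k8s:apply (deploy):

```xml
<plugin>
  <groupId>org.eclipse.jkube</groupId>
  <artifactId>kubernetes-maven-plugin</artifactId>
  <!-- Version is illustrative; use the latest JKube release -->
  <version>1.17.0</version>
</plugin>
```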

You can find other interesting articles on my blog if you are interested in the tools for generating Kubernetes manifests. Here’s the article that shows how to use the Dekorate library to generate Kubernetes manifests for the Spring Boot app.

Source Code

If you would like to try this exercise yourself, you can always take a look at my source code. First, you need to clone the following GitHub repository. It contains several sample Java applications for a Kubernetes showcase. You must go to the "inner-dev-loop" directory to proceed with the exercise. Then, you can follow my further instructions.

Prerequisites

Before we start the development, we must install some tools on our laptops. Of course, we should have Maven and at least Java 21 installed. We must also have access to the container engine (like Docker or Podman) and a Kubernetes cluster. I have everything configured on my local machine using Podman Desktop and Minikube. Finally, we need to install the Helm CLI. It can be used to deploy the Postgres database on Kubernetes using the popular Bitnami Helm chart. In summary, we need to have:

  • Maven
  • OpenJDK 21+
  • Podman or Docker
  • Kubernetes
  • Helm CLI

Once we have everything in place, we can proceed to the next steps.

Create Spring Boot Application

In this exercise, we create a typical Spring Boot application that connects to the relational database and exposes REST endpoints for the basic CRUD operations. Both the application and database will run on Kubernetes. We install the Postgres database using the Bitnami Helm chart. To build and deploy the application in the Kubernetes cluster, we will use Maven and Eclipse JKube features. First, let’s take a look at the source code of our application. Here’s the list of included dependencies. It’s worth noting that Spring Boot Actuator is responsible for generating Kubernetes liveness and readiness health checks. JKube will be able to detect it and generate the required elements in the Deployment manifest.

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>org.springdoc</groupId>
    <artifactId>springdoc-openapi-starter-webmvc-ui</artifactId>
    <version>2.6.0</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-devtools</artifactId>
    <scope>runtime</scope>
    <optional>true</optional>
</dependency>
XML

Here’s our Person entity class:

@Entity
public class Person {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id; // Long to match the repository's ID type
    private String firstName;
    private String lastName;
    private int age;
    @Enumerated(EnumType.STRING)
    private Gender gender;
    private Integer externalId;

    // getters and setters
}
Java

We use the well-known Spring Data repository pattern to implement the data access layer. Here’s our PersonRepository interface. It declares an additional derived query method that searches for persons by age.

public interface PersonRepository extends CrudRepository<Person, Long> {
    List<Person> findByAgeGreaterThan(int age);
}
Java

Finally, we can implement the REST controller using the previously created PersonRepository to interact with the database.

@RestController
@RequestMapping("/persons")
public class PersonController {

    private static final Logger LOG = LoggerFactory
       .getLogger(PersonController.class);
    private final PersonRepository repository;

    public PersonController(PersonRepository repository) {
        this.repository = repository;
    }

    @GetMapping
    public List<Person> getAll() {
        LOG.info("Get all persons");
        return (List<Person>) repository.findAll();
    }

    @GetMapping("/{id}")
    public Person getById(@PathVariable("id") Long id) {
        LOG.info("Get person by id={}", id);
        return repository.findById(id).orElseThrow();
    }

    @GetMapping("/age/{age}")
    public List<Person> getByAgeGreaterThan(@PathVariable("age") int age) {
        LOG.info("Get person by age={}", age);
        return repository.findByAgeGreaterThan(age);
    }

    @DeleteMapping("/{id}")
    public void deleteById(@PathVariable("id") Long id) {
        LOG.info("Delete person by id={}", id);
        repository.deleteById(id);
    }

    @PostMapping
    public Person addNew(@RequestBody Person person) {
        LOG.info("Add new person: {}", person);
        return repository.save(person);
    }
    
}
Java
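Once the application is running, we can exercise these endpoints with curl. The snippet below is only a sketch: it assumes the service has been forwarded to localhost:8080 (e.g. with kubectl port-forward), and the MALE value assumes the Gender enum (not shown above) defines it:

```shell
# add a new person (POST /persons); the JSON field values are illustrative
curl -s -X POST http://localhost:8080/persons \
  -H 'Content-Type: application/json' \
  -d '{"firstName":"John","lastName":"Smith","age":33,"gender":"MALE"}'

# list all persons (GET /persons)
curl -s http://localhost:8080/persons

# search for persons older than 30 (GET /persons/age/{age})
curl -s http://localhost:8080/persons/age/30
```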

Here’s the full list of configuration properties. The database name and connection credentials are configured through environment variables: DATABASE_NAME, DATABASE_USER, and DATABASE_PASS. We also enable the exposure of the Kubernetes liveness and readiness health checks and include the database component status in the readiness probe.

spring:
  application:
    name: inner-dev-loop
  datasource:
    url: jdbc:postgresql://person-db-postgresql:5432/${DATABASE_NAME}
    username: ${DATABASE_USER}
    password: ${DATABASE_PASS}
  jpa:
    hibernate:
      ddl-auto: create
    properties:
      hibernate:
        show_sql: true
        format_sql: true

management:
  info.java.enabled: true
  endpoints:
    web:
      exposure:
        include: "*"
  endpoint.health:
    show-details: always
    group:
      readiness:
        include: db
    probes:
      enabled: true
YAML

Install Postgres on Kubernetes

We begin our interaction with Kubernetes by installing the database. We use the Bitnami Helm chart for that. In the first step, we must add the Bitnami repository:

helm repo add bitnami https://charts.bitnami.com/bitnami 
ShellSession

Then, we can install the Postgres chart under the person-db name. During the installation, we create the spring user and a database with the same name.

helm install person-db bitnami/postgresql \
   --set auth.username=spring \
   --set auth.database=spring
ShellSession

Postgres is accessible inside the cluster under the person-db-postgresql name.

$ kubectl get svc
NAME                      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
person-db-postgresql      ClusterIP   10.96.115.19   <none>        5432/TCP   23s
person-db-postgresql-hl   ClusterIP   None           <none>        5432/TCP   23s
ShellSession

The Helm chart generates a Kubernetes Secret with the same name, person-db-postgresql. The password for the spring user is automatically generated during installation. We can retrieve that password from the password field.

$ kubectl get secret person-db-postgresql -o yaml
apiVersion: v1
data:
  password: UkRaalNYU3o3cA==
  postgres-password: a1pBMFFuOFl3cQ==
kind: Secret
metadata:
  annotations:
    meta.helm.sh/release-name: person-db
    meta.helm.sh/release-namespace: demo
  creationTimestamp: "2024-10-03T14:39:19Z"
  labels:
    app.kubernetes.io/instance: person-db
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: postgresql
    app.kubernetes.io/version: 17.0.0
    helm.sh/chart: postgresql-16.0.0
  name: person-db-postgresql
  namespace: demo
  resourceVersion: "61646"
  uid: 00b0cf7e-8521-4f53-9c69-cfd3c942004c
type: Opaque
ShellSession
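The password field is Base64-encoded, like every value in a Kubernetes Secret. It can be fetched and decoded in one step with kubectl get secret person-db-postgresql -o jsonpath='{.data.password}' | base64 -d. As a plain illustration, decoding the value shown in the manifest above:

```shell
# decode the Base64-encoded password from the Secret
echo 'UkRaalNYU3o3cA==' | base64 -d
# prints: RDZjSXSz7p
```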

JKube with Spring Boot in Action

With JKube, we can build a Spring Boot app image and deploy it on Kubernetes with a single command, without creating any YAML manifests or a Dockerfile. To use Eclipse JKube, we must include the org.eclipse.jkube:kubernetes-maven-plugin plugin in the Maven pom.xml. The plugin configuration contains values for the two environment variables, DATABASE_USER and DATABASE_NAME, required by our Spring Boot application. We also set the memory and CPU requests for the Deployment, which is obviously a good practice.

<plugin>
    <groupId>org.eclipse.jkube</groupId>
    <artifactId>kubernetes-maven-plugin</artifactId>
    <version>1.17.0</version>
    <configuration>
        <resources>
            <controller>
                <env>
                    <DATABASE_USER>spring</DATABASE_USER>
                    <DATABASE_NAME>spring</DATABASE_NAME>
                </env>
                <containerResources>
                    <requests>
                        <memory>256Mi</memory>
                        <cpu>200m</cpu>
                    </requests>
                </containerResources>
            </controller>
        </resources>
    </configuration>
</plugin>
XML

We can use resource fragments to generate a more advanced YAML manifest with properties not covered by the plugin’s XML fields. Such a fragment of the YAML manifest must be placed in the src/main/jkube directory. In our case, the database password must be injected from the person-db-postgresql Secret generated by the Bitnami Helm chart. Here’s the fragment of the Deployment YAML in the deployment.yml file:

spec:
  template:
    spec:
      containers:
        - env:
          - name: DATABASE_PASS
            valueFrom:
              secretKeyRef:
                key: password
                name: person-db-postgresql
src/main/jkube/deployment.yml

If we want to build the image and generate Kubernetes manifests without applying them to the cluster, we can use the k8s:build and k8s:resource goals during the Maven build.

mvn clean package -DskipTests k8s:build k8s:resource
ShellSession

Let’s take a look at the logs from the k8s:build phase. JKube derives the image group from the last part of the Maven group ID and replaces the version containing the -SNAPSHOT suffix with the latest tag.

spring-boot-jkube-build

Here are the logs from the k8s:resource phase. As you can see, JKube reads the Spring Boot management.endpoint.health.probes.enabled configuration property and includes the /actuator/health/liveness and /actuator/health/readiness endpoints as the probes.

spring-boot-jkube-resource

Here’s the Deployment object generated by the JKube plugin for our Spring Boot application.

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    jkube.eclipse.org/scm-url: https://github.com/spring-projects/spring-boot/inner-dev-loop
    jkube.eclipse.org/scm-tag: HEAD
    jkube.eclipse.org/git-commit: 92b2e11f7ddb134323133aee0daa778135500113
    jkube.eclipse.org/git-url: https://github.com/piomin/kubernetes-quickstart.git
    jkube.eclipse.org/git-branch: master
  labels:
    app: inner-dev-loop
    provider: jkube
    version: 1.0-SNAPSHOT
    group: pl.piomin
    app.kubernetes.io/part-of: pl.piomin
    app.kubernetes.io/managed-by: jkube
    app.kubernetes.io/name: inner-dev-loop
    app.kubernetes.io/version: 1.0-SNAPSHOT
  name: inner-dev-loop
spec:
  replicas: 1
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: inner-dev-loop
      provider: jkube
      group: pl.piomin
      app.kubernetes.io/name: inner-dev-loop
      app.kubernetes.io/part-of: pl.piomin
      app.kubernetes.io/managed-by: jkube
  template:
    metadata:
      annotations:
        jkube.eclipse.org/scm-url: https://github.com/spring-projects/spring-boot/inner-dev-loop
        jkube.eclipse.org/scm-tag: HEAD
        jkube.eclipse.org/git-commit: 92b2e11f7ddb134323133aee0daa778135500113
        jkube.eclipse.org/git-url: https://github.com/piomin/kubernetes-quickstart.git
        jkube.eclipse.org/git-branch: master
      labels:
        app: inner-dev-loop
        provider: jkube
        version: 1.0-SNAPSHOT
        group: pl.piomin
        app.kubernetes.io/part-of: pl.piomin
        app.kubernetes.io/managed-by: jkube
        app.kubernetes.io/name: inner-dev-loop
        app.kubernetes.io/version: 1.0-SNAPSHOT
    spec:
      containers:
      - env:
        - name: DATABASE_PASS
          valueFrom:
            secretKeyRef:
              key: password
              name: person-db-postgresql
        - name: KUBERNETES_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: DATABASE_NAME
          value: spring
        - name: DATABASE_USER
          value: spring
        - name: HOSTNAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        image: piomin/inner-dev-loop:latest
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /actuator/health/liveness
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 180
          successThreshold: 1
        name: spring-boot
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        - containerPort: 9779
          name: prometheus
          protocol: TCP
        - containerPort: 8778
          name: jolokia
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /actuator/health/readiness
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 10
          successThreshold: 1
        securityContext:
          privileged: false
YAML

In order to deploy the application to Kubernetes, we need to add the k8s:apply goal to the previously executed command.

mvn clean package -DskipTests k8s:build k8s:resource k8s:apply
ShellSession

After that, JKube applies the generated YAML manifests to the cluster.

We can verify a list of running applications by displaying a list of pods:

$ kubectl get pod
NAME                              READY   STATUS    RESTARTS   AGE
inner-dev-loop-5cbcf7dfc6-wfdr6   1/1     Running   0          17s
person-db-postgresql-0            1/1     Running   0          106m
ShellSession
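To quickly verify the running app from the local machine, we can forward its port and call the Actuator probe endpoints configured earlier. This is a sketch; run the port-forward in a separate terminal (or in the background, as below):

```shell
# forward the app port from the cluster to localhost
kubectl port-forward deploy/inner-dev-loop 8080:8080 &

# call the probe endpoints exposed by Spring Boot Actuator
curl -s http://localhost:8080/actuator/health/liveness
curl -s http://localhost:8080/actuator/health/readiness
```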

It is also possible to display the application logs by executing the following command:

mvn k8s:log
ShellSession

Here’s the output after running the mvn k8s:log command:

spring-boot-jkube-log

We can also do other things, like undeploying the app from the Kubernetes cluster with the k8s:undeploy goal.

Final Thoughts

Eclipse JKube simplifies Spring Boot deployment on a Kubernetes cluster. Besides the presented features, it also provides mechanisms for the inner development loop with the k8s:watch and k8s:remote-dev goals.

The post Spring Boot on Kubernetes with Eclipse JKube appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2024/10/03/spring-boot-on-kubernetes-with-eclipse-jkube/feed/ 4 15398
Azure DevOps with OpenShift https://piotrminkowski.com/2024/09/12/azure-devops-with-openshift/ https://piotrminkowski.com/2024/09/12/azure-devops-with-openshift/#comments Thu, 12 Sep 2024 10:38:24 +0000 https://piotrminkowski.com/?p=15375 This article will teach you how to integrate Azure DevOps with the OpenShift cluster to build and deploy your app there. You will learn how to run Azure Pipelines self-hosted agents on OpenShift and use the oc client from your pipelines. If you are interested in Azure DevOps you can read my previous article about […]

The post Azure DevOps with OpenShift appeared first on Piotr's TechBlog.

]]>
This article will teach you how to integrate Azure DevOps with an OpenShift cluster to build and deploy your app there. You will learn how to run Azure Pipelines self-hosted agents on OpenShift and use the oc client from your pipelines. If you are interested in Azure DevOps, you can read my previous article about that platform and Terraform used together to prepare the environment and run the Spring Boot app on the Azure cloud.

Before we begin, let me clarify some things and explain my decisions. If you were searching for information about Azure DevOps and OpenShift integration, you probably came across several articles about the Red Hat Azure DevOps extension for OpenShift. I won’t use that extension. In my opinion, it is not actively developed right now, and therefore it imposes some limitations that may complicate our integration process. On the other hand, it doesn’t offer many essential features, so we can do just as well without it.

You will also find articles that show how to prepare a self-hosted agent image with the Red Hat dotnet-runtime as a base image (e.g. this one). I won’t use that approach either. Instead, I’m going to leverage the image built on top of UBI9 provided by the tool called Blue Agent (formerly Azure Pipelines Agent). It is a self-hosted Azure Pipelines agent for Kubernetes that is easy to run, secure, and auto-scaled. We will have to modify that image slightly, but more on that later.

Prerequisites

In order to proceed with the exercise, we need an active subscription to the Azure cloud and an instance of Azure DevOps. We also have to run an OpenShift cluster that is accessible to our pipelines running on Azure DevOps. I’m running that cluster on the Azure cloud as well, using the Azure Red Hat OpenShift managed service (ARO). The details of Azure DevOps creation or OpenShift installation are out of the scope of this article.

Source Code

If you would like to try this exercise by yourself, you may always take a look at my source code. Today you will have to clone two sample Git repositories. The first one contains a sample Spring Boot app used in our exercise. We will try to build that app with Azure Pipelines and deploy it on OpenShift. The second repository is a fork of the official Blue Agent repository. It contains a new version of the Dockerfile for our sample self-hosted agent image based on Red Hat UBI9. Once you clone both of these repositories, you just need to follow my instructions.

Azure DevOps Self-Hosted Agent on OpenShift with Blue Agent

Build the Agent Image

In this section, we will build the self-hosted agent image based on UBI9. Then, we will run it on the OpenShift cluster. We need to open the Dockerfile located in the repository at the src/docker/Dockerfile-ubi9 path. We don’t need to change much inside that file. It already installs several clients, e.g. for AWS or Azure interaction. We will add the line for installing the oc client, which allows us to interact with OpenShift.

RUN curl -s https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/stable/openshift-client-linux.tar.gz -o - | tar zxvf - -C /usr/bin/
Dockerfile

After that, we need to build a new image. You can completely omit this step and pull the final image published on my Docker account and available under the piomin/blue-agent:ubi9 tag.

$ docker build -t piomin/blue-agent:ubi9 -f Dockerfile-ubi9 \
  --build-arg JQ_VERSION=1.6 \
  --build-arg AZURE_CLI_VERSION=2.63.0 \
  --build-arg AWS_CLI_VERSION=2.17.42 \
  --build-arg GCLOUD_CLI_VERSION=490.0.0 \
  --build-arg POWERSHELL_VERSION=7.2.23 \
  --build-arg TINI_VERSION=0.19.0 \
  --build-arg BUILDKIT_VERSION=0.15.2  \
  --build-arg AZP_AGENT_VERSION=3.243.1 \
  --build-arg ROOTLESSKIT_VERSION=2.3.1 \
  --build-arg GO_VERSION=1.22.7 \
  --build-arg YQ_VERSION=4.44.3 .
ShellSession
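Before deploying the agent, it is worth verifying that the oc client actually landed in the image. The check below assumes the piomin/blue-agent:ubi9 tag built (or pulled) above:

```shell
# run the oc client from inside the freshly built agent image
docker run --rm piomin/blue-agent:ubi9 oc version --client
```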

We can use a Helm chart to install Blue Agent on Kubernetes. This project is still under active development, and I could not fully customize the chart for an OpenShift installation using its parameters alone. So, I just set some of them inside the values.yaml file:

pipelines:
  organizationURL: https://dev.azure.com/pminkows
  personalAccessToken: <AZURE_DEVOPS_PERSONAL_ACCESS_TOKEN>
  poolName: Default

image:
  repository: piomin/blue-agent
  version: 2
  flavor: ubi9
YAML

Deploy Agent on OpenShift

We can generate the YAML manifests for the defined parameters, without installing them, with the following command:

$ helm template -f blue-agent-values.yaml --dry-run .
ShellSession

In order to run it on OpenShift, we need to customize the Deployment object. First of all, I had to grant the privileged SCC (Security Context Constraint) to the container and remove some fields from the securityContext section. Here’s our Deployment object. It refers to the objects previously generated by the Helm chart: agent-blue-agent ServiceAccount and the Secret with the same name.

kind: Deployment
apiVersion: apps/v1
metadata:
  name: agent-blue-agent    
  labels:
    app.kubernetes.io/component: agent
    app.kubernetes.io/instance: agent
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: blue-agent
    app.kubernetes.io/part-of: blue-agent
    app.kubernetes.io/version: 3.243.1
    helm.sh/chart: blue-agent-7.0.3
spec:
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/instance: agent
      app.kubernetes.io/name: blue-agent
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: agent
        app.kubernetes.io/name: blue-agent
      annotations:
        cluster-autoscaler.kubernetes.io/safe-to-evict: 'false'
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      restartPolicy: Always
      serviceAccountName: agent-blue-agent
      schedulerName: default-scheduler
      terminationGracePeriodSeconds: 3600
      securityContext: {}
      containers:
        - resources:
            limits:
              cpu: '2'
              ephemeral-storage: 8Gi
              memory: 4Gi
            requests:
              cpu: '1'
              ephemeral-storage: 2Gi
              memory: 2Gi
          terminationMessagePath: /dev/termination-log
          lifecycle:
            preStop:
              exec:
                command:
                  - bash
                  - '-c'
                  - ''
                  - 'rm -rf ${AZP_WORK};'
                  - 'rm -rf ${TMPDIR};'
          name: azp-agent
          env:
            - name: AGENT_DIAGLOGPATH
              value: /app-root/azp-logs
            - name: VSO_AGENT_IGNORE
              value: AZP_TOKEN
            - name: AGENT_ALLOW_RUNASROOT
              value: '1'
            - name: AZP_AGENT_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
            - name: AZP_URL
              valueFrom:
                secretKeyRef:
                  name: agent-blue-agent
                  key: organizationURL
            - name: AZP_POOL
              value: Default
            - name: AZP_TOKEN
              valueFrom:
                secretKeyRef:
                  name: agent-blue-agent
                  key: personalAccessToken
            - name: flavor_ubi9
            - name: version_7.0.3
          securityContext:
            privileged: true
          imagePullPolicy: Always
          volumeMounts:
            - name: azp-logs
              mountPath: /app-root/azp-logs
            - name: azp-work
              mountPath: /app-root/azp-work
            - name: local-tmp
              mountPath: /app-root/.local/tmp
          terminationMessagePolicy: File
          image: 'piomin/blue-agent:ubi9'
      serviceAccount: agent-blue-agent
      volumes:
        - name: azp-logs
          emptyDir:
            sizeLimit: 1Gi
        - name: azp-work
          ephemeral:
            volumeClaimTemplate:
              spec:
                accessModes:
                  - ReadWriteOnce
                resources:
                  requests:
                    storage: 10Gi
                storageClassName: managed-csi
                volumeMode: Filesystem
        - name: local-tmp
          ephemeral:
            volumeClaimTemplate:
              spec:
                accessModes:
                  - ReadWriteOnce
                resources:
                  requests:
                    storage: 1Gi
                storageClassName: managed-csi
                volumeMode: Filesystem
      dnsPolicy: ClusterFirst
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 50%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
YAML

After removing the --dry-run option from the helm template command we can install the solution on OpenShift. It’s worth noting that there were some problems with the Deployment object in the Helm chart, so we will apply it directly later.

azure-devops-openshift-blue-agent

Before applying the Deployment, we have to add the privileged SCC to the ServiceAccount used by the agent. By the way, granting the privileged SCC is not the best approach to managing containers on OpenShift. However, the oc client installed in the image creates a configuration directory during the login procedure, which requires write access that the default security settings do not provide.

$ oc adm policy add-scc-to-user privileged -z agent-blue-agent
ShellSession

Once we deploy the agent on OpenShift, we should see three running pods. They try to connect to the Default pool defined in Azure DevOps for our self-hosted agents.

$ oc get po
NAME                                READY   STATUS    RESTARTS   AGE
agent-blue-agent-5458b9b76d-2gk2z   1/1     Running   0          13h
agent-blue-agent-5458b9b76d-nwcxh   1/1     Running   0          13h
agent-blue-agent-5458b9b76d-tcpnp   1/1     Running   0          13h
ShellSession

Connect Self-hosted Agent to Azure DevOps

Let’s switch to the Azure DevOps instance and go to our project. After that, we need to go to Organization Settings -> Agent pools and create a new agent pool. The pool name depends on the name configured for the Blue Agent deployed on OpenShift. In our case, this name is Default.

Once we create a pool, we should see all three agent instances in the following list.

azure-devops-openshift-agents

We can switch to OpenShift once again. Then, let’s take a look at the logs printed out by one of the agents. As you can see, it has started successfully and listens for incoming jobs.
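The logs can be tailed with, for example, the commands below (assuming the agent-blue-agent Deployment name and labels generated by the Helm chart above):

```shell
# logs from one pod behind the Deployment
oc logs deploy/agent-blue-agent --tail=50

# logs from all agent pods selected by label
oc logs -l app.kubernetes.io/name=blue-agent --tail=20
```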

If the agents are running on OpenShift and successfully connect with Azure DevOps, we have finished the first part of our exercise. Now, we can create a pipeline for our sample Spring Boot app.

Create Azure DevOps Pipeline for OpenShift

Azure Pipeline Definition

Firstly, we need to go to the sample-spring-boot-web repository. The pipeline is configured in the azure-pipelines.yml file in the repository root directory. Let’s take a look at it. It’s very simple. I’m just using the Azure DevOps Command-Line task to interact with OpenShift through the oc client. Of course, it has to define the target agent pool used for running its jobs (1). In the first step, we need to log in to the OpenShift cluster (2). We can use the internal address of the Kubernetes Service, since the agent is running inside the cluster. All the actions should be performed inside our sample myapp project (3). We need to create BuildConfig and Deployment objects using the templates from the repository (4). The pipeline uses its BuildId parameter to tag the output image. Finally, it starts the build configured by the previously applied BuildConfig object (5).

trigger:
- master

# (1)
pool:
  name: Default

steps:
# (2)
- task: CmdLine@2
  inputs:
    script: 'oc login https://172.30.0.1:443 -u kubeadmin -p $(OCP_PASSWORD) --insecure-skip-tls-verify=true'
  displayName: Login to OpenShift
# (3)
- task: CmdLine@2
  inputs:
    script: 'oc project myapp'
  displayName: Switch to project
# (4)
- task: CmdLine@2
  inputs:
    script: 'oc process -f ocp/openshift.yaml -o yaml -p IMAGE_TAG=v1.0-$(Build.BuildId) -p NAMESPACE=myapp | oc apply -f -'
  displayName: Create build
# (5)
- task: CmdLine@2
  inputs:
    script: 'oc start-build sample-spring-boot-web-bc -w'
    failOnStderr: true
  displayName: Start build
  timeoutInMinutes: 5
- task: CmdLine@2
  inputs:
    script: 'oc status'
  displayName: Check status
YAML

Integrate Azure Pipelines with OpenShift

We use the OpenShift Template object for defining the YAML manifest with Deployment and BuildConfig. The BuildConfig object typically manages the image build process on OpenShift. In order to build the image directly from the source code, it uses the Source-to-Image (S2I) tool. We need to set at least three parameters to configure the process properly. The first of them is the address of the output image (1). We can use the internal OpenShift repository available at image-registry.openshift-image-registry.svc:5000 or any other external registry like Quay or Docker Hub. We should also set the name of the builder image (2). Our app requires at least Java 21. Of course, the process requires the source code repository as the input (3). At the same time, we define the Deployment object. It uses the image previously built by the BuildConfig object (4). The whole template takes two input parameters: IMAGE_TAG and NAMESPACE (5).

kind: Template
apiVersion: template.openshift.io/v1
metadata:
  name: sample-spring-boot-web-tmpl
objects:
  - kind: BuildConfig
    apiVersion: build.openshift.io/v1
    metadata:
      name: sample-spring-boot-web-bc
      labels:
        build: sample-spring-boot-web-bc
    spec:
      # (1)
      output:
        to:
          kind: DockerImage
          name: 'image-registry.openshift-image-registry.svc:5000/${NAMESPACE}/sample-spring-boot-web:${IMAGE_TAG}'
      # (2)
      strategy:
        type: Source
        sourceStrategy:
          from:
            kind: ImageStreamTag
            namespace: openshift
            name: 'openjdk-21:stable'
      # (3)
      source:
        type: Git
        git:
          uri: 'https://github.com/piomin/sample-spring-boot-web.git'
  - kind: Deployment
    apiVersion: apps/v1
    metadata:
      name: sample-spring-boot-web
      labels:
        app: sample-spring-boot-web
        app.kubernetes.io/component: sample-spring-boot-web
        app.kubernetes.io/instance: sample-spring-boot-web
    spec:
      replicas: 1
      selector:
        matchLabels:
          deployment: sample-spring-boot-web
      template:
        metadata:
          labels:
            deployment: sample-spring-boot-web
        spec:
          containers:
            - name: sample-spring-boot-web
              # (4)
              image: 'image-registry.openshift-image-registry.svc:5000/${NAMESPACE}/sample-spring-boot-web:${IMAGE_TAG}'
              ports:
                - containerPort: 8080
                  protocol: TCP
                - containerPort: 8443
                  protocol: TCP
# (5)
parameters:
  - name: IMAGE_TAG
    displayName: Image tag
    description: The output image tag
    value: v1.0
    required: true
  - name: NAMESPACE
    displayName: Namespace
    description: The OpenShift Namespace where the ImageStream resides
    value: openshift
YAML

Currently, OpenShift doesn’t provide an OpenJDK 21 image by default, so we need to import it into our cluster manually before running the pipeline.

$ oc import-image openjdk-21:stable \
  --from=registry.access.redhat.com/ubi9/openjdk-21:1.20-2.1725851045 \
  --confirm
ShellSession

Now, we can create a pipeline in Azure DevOps. In order to do it, we need to go to the Pipelines section, and then click the New pipeline button. After that, we just need to pass the address of our repository with the pipeline definition.

Our pipeline requires the OCP_PASSWORD input parameter with the OpenShift admin user password. We can set it as the pipeline secret variable. In order to do that, we need to edit the pipeline and then click the Variables button.
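Alternatively, the secret variable can be created from the command line with the Azure CLI and its azure-devops extension (az extension add --name azure-devops). The snippet below is a sketch: the project and pipeline names are placeholders, and the password is read from a local environment variable:

```shell
# create the OCP_PASSWORD secret variable on the pipeline
# (organization URL matches the one used earlier; project/pipeline names are hypothetical)
az pipelines variable create \
  --organization https://dev.azure.com/pminkows \
  --project myproject \
  --pipeline-name sample-spring-boot-web \
  --name OCP_PASSWORD \
  --secret true \
  --value "$OCP_PASSWORD"
```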

Run the Pipeline

Finally, we can run our pipeline. If everything finishes successfully, the status of the job is Success. It takes around one minute to execute all the steps defined in the pipeline.

We can see detailed logs for each step.

azure-devops-openshift-pipeline-logs

Each time we run a pipeline, a new build starts on OpenShift. Note that the pipeline updates the BuildConfig object with the new version of the output image.

azure-devops-openshift-builds

We can take a look at the detailed logs of each build. Within such a build, OpenShift starts a new pod that performs the whole process. It uses the S2I approach to build the image from the source code and push that image to the internal OpenShift registry.

Finally, let’s take a look at the Deployment. As you can see, it was rolled out with the latest version of the image and works fine.

$ oc describe deploy sample-spring-boot-web -n myapp
Name:                   sample-spring-boot-web
Namespace:              myapp
CreationTimestamp:      Wed, 11 Sep 2024 16:09:11 +0200
Labels:                 app=sample-spring-boot-web
                        app.kubernetes.io/component=sample-spring-boot-web
                        app.kubernetes.io/instance=sample-spring-boot-web
Annotations:            deployment.kubernetes.io/revision: 3
Selector:               deployment=sample-spring-boot-web
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  deployment=sample-spring-boot-web
  Containers:
   sample-spring-boot-web:
    Image:        image-registry.openshift-image-registry.svc:5000/myapp/sample-spring-boot-web:v1.0-134
    Ports:        8080/TCP, 8443/TCP
    Host Ports:   0/TCP, 0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  sample-spring-boot-web-78559697c9 (0/0 replicas created), sample-spring-boot-web-7985d6b844 (0/0 replicas created)
NewReplicaSet:   sample-spring-boot-web-5584447f5d (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  19m   deployment-controller  Scaled up replica set sample-spring-boot-web-5584447f5d to 1
  Normal  ScalingReplicaSet  17m   deployment-controller  Scaled down replica set sample-spring-boot-web-7985d6b844 to 0 from 1
ShellSession

By the way, Azure Pipelines jobs are “load-balanced” between the agents. Here’s the name of the agent used for the first pipeline run.

Here’s the name of the agent used for the second pipeline run.

There are three instances of the agent running on OpenShift, so Azure DevOps can process a maximum of three pipeline runs in parallel. By the way, we could enable autoscaling for Blue Agent based on KEDA.

azure-devops-openshift-jobs

Final Thoughts

In this article, I showed the simplest and most OpenShift-native approach to building CI/CD pipelines on Azure DevOps. We just need to use the oc client on the agent image and the OpenShift BuildConfig object to orchestrate the whole process of building and deploying the Spring Boot app on the cluster. Of course, we could implement the same process in several different ways. For example, we could leverage the plugins for Kubernetes and completely omit the tools provided by OpenShift. On the other hand, it is possible to use Argo CD for the delivery phase and Azure Pipelines for the integration phase.

The post Azure DevOps with OpenShift appeared first on Piotr's TechBlog.

Multi-node Kubernetes Cluster with Minikube https://piotrminkowski.com/2024/07/09/multi-node-kubernetes-cluster-with-minikube/ Tue, 09 Jul 2024 10:20:59 +0000

The post Multi-node Kubernetes Cluster with Minikube appeared first on Piotr's TechBlog.

This article will teach you how to run and manage a multi-node Kubernetes cluster locally with Minikube. We will run this cluster on Docker. After that, we will enable some useful add-ons, install Kubernetes-native tools for monitoring and observability, and run a sample app that requires storage. You can compare this article with a similar post about the Azure Kubernetes Service.

Prerequisites

Before you begin, you need to install Docker on your local machine. Then, you need to download and install Minikube. On macOS, you can do it using Homebrew, as shown below:

$ brew install minikube
ShellSession

Once we have successfully installed Minikube, we can use its CLI. Let’s verify the version used in this article:

$ minikube version
minikube version: v1.33.1
commit: 5883c09216182566a63dff4c326a6fc9ed2982ff
ShellSession

Source Code

If you would like to try it yourself, take a look at my source code. In order to do that, you need to clone my GitHub repository. This time, we won’t work much with the source code. However, the repository contains the sample Spring Boot app that uses storage exposed on the Kubernetes cluster. Once you clone the repository, go to the volumes/files-app directory and then follow my instructions.

Create a Multi-node Kubernetes Cluster with Minikube

In order to create a multi-node Kubernetes cluster with Minikube, we need to use the --nodes or -n parameter in the minikube start command. Additionally, we can increase the default value of memory and CPUs reserved for the cluster with the --memory and --cpus parameters. Here’s the required command to execute:

$ minikube start --memory='12g' --cpus='4' -n 3
ShellSession

By the way, if you increase the resources assigned to the Minikube instance, you should also take care of resource reservations for Docker.

Once we run the minikube start command, the cluster creation begins. You should see a similar output if everything goes well.

minikube-kubernetes-create

Now, we can use Minikube with the kubectl tool:

$ kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:52879
CoreDNS is running at https://127.0.0.1:52879/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
ShellSession

We can display a list of running nodes:

$ kubectl get nodes
NAME           STATUS   ROLES           AGE     VERSION
minikube       Ready    control-plane   4h10m   v1.30.0
minikube-m02   Ready    <none>          4h9m    v1.30.0
minikube-m03   Ready    <none>          4h9m    v1.30.0
ShellSession

Sample Spring Boot App

Our Spring Boot app is simple. It exposes some REST endpoints for file-based operations on the target directory attached as a mounted volume. In order to expose REST endpoints, we need to include the Spring Boot Web starter. We will build the image using the Jib Maven plugin.

<properties>
  <spring-boot.version>3.3.1</spring-boot.version>
</properties>

<dependencies>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
  </dependency>
</dependencies>

<build>
  <plugins>
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
    </plugin>
  </plugins>
</build>

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-dependencies</artifactId>
      <version>${spring-boot.version}</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
    
<build>
  <plugins>
    <plugin>
      <groupId>com.google.cloud.tools</groupId>
      <artifactId>jib-maven-plugin</artifactId>
      <version>3.4.3</version>
    </plugin>
  </plugins>
</build>
XML

Let’s take a look at the main @RestController in our app. It exposes endpoints for listing all the files inside the target directory (GET /files/all), another one for creating a new file (POST /files/{name}), and also for adding a new string line to the existing file (POST /files/{name}/line).

package pl.piomin.services.files.controller;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.web.bind.annotation.*;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.List;

import static java.nio.file.Files.list;
import static java.nio.file.Files.writeString;

@RestController
@RequestMapping("/files")
public class FilesController {

    private static final Logger LOG = LoggerFactory.getLogger(FilesController.class);

    @Value("${MOUNT_PATH:/mount/data}")
    String root;

    @GetMapping("/all")
    public List<String> files() throws IOException {
        return list(Path.of(root)).map(Path::toString).toList();
    }

    @PostMapping("/{name}")
    public String createFile(@PathVariable("name") String name) throws IOException {
        return Files.createFile(Path.of(root + "/" + name)).toString();
    }

    @PostMapping("/{name}/line")
    public void addLine(@PathVariable("name") String name,
                        @RequestBody String line) {
        try {
            writeString(Path.of(root + "/" + name), line, StandardOpenOption.APPEND);
        } catch (IOException e) {
            LOG.error("Error while writing to file", e);
        }
    }
}
Java
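As a quick, framework-free sanity check of the java.nio.file calls used in the controller above (Files.createFile, Files.list, and Files.writeString with APPEND), here is a minimal standalone sketch. It is not part of the sample app; it simply uses a temporary directory in place of the mounted volume:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class FilesDemo {

    public static void main(String[] args) throws IOException {
        // A temporary directory stands in for the /mount/data volume
        Path root = Files.createTempDirectory("mount-data");

        // Equivalent of POST /files/{name}: create an empty file
        Path file = Files.createFile(root.resolve("test1.txt"));

        // Equivalent of POST /files/{name}/line: append to the file;
        // note that APPEND alone requires the file to exist already
        Files.writeString(file, "hello1\n", StandardOpenOption.APPEND);
        Files.writeString(file, "hello2\n", StandardOpenOption.APPEND);

        // Equivalent of GET /files/all: list the directory content
        try (var paths = Files.list(root)) {
            paths.forEach(p -> System.out.println(p.getFileName())); // prints test1.txt
        }
        System.out.println(Files.readAllLines(file)); // prints [hello1, hello2]
    }
}
```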

Usually, I deploy apps on Kubernetes with Skaffold. But this time, there are some issues with the integration between a multi-node Minikube cluster and Skaffold. You can find a detailed description of those issues here. Therefore, we build the image directly with the Jib Maven plugin and then just run the app with the kubectl CLI.

Install Addons and Tools

Minikube comes with a set of predefined add-ons for Kubernetes. We can enable each of them with a single minikube addons enable <ADDON_NAME> command. Although several add-ons are available, we still need to install some useful Kubernetes-native tools, like Prometheus, separately, for example using a Helm chart. In order to list all available add-ons, we should execute the following command:

$ minikube addons list
ShellSession

Install Addon for Storage

The default storage provisioner in Minikube doesn’t support multi-node clusters. It also doesn’t implement the CSI interface and cannot handle volume snapshots. Fortunately, Minikube offers the csi-hostpath-driver addon, which deploys the CSI Hostpath Driver. Since this addon is disabled by default, we need to enable it.

$ minikube addons enable csi-hostpath-driver
ShellSession

Then, we can set csi-hostpath-driver as the default storage class for dynamic volume claims.

$ kubectl patch storageclass csi-hostpath-sc -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
$ kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
ShellSession

Install Monitoring Stack with Helm

The monitoring stack is not available as an add-on. However, we can easily install it using a Helm chart. We will use the official community chart for that, kube-prometheus-stack. First, let’s add the required repository:

$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
ShellSession

Then, we can install the Prometheus monitoring stack in the monitoring namespace by executing the following command:

$ helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  -n monitoring --create-namespace
ShellSession

Once you install Prometheus on your Minikube cluster, you can take advantage of the default metrics it exposes. For example, the Lens IDE automatically integrates with Prometheus and displays graphs with a cluster overview.

minikube-kubernetes-cluster-metrics

We can also see the visualization of resource usage for all running pods, deployments, or stateful sets.

minikube-kubernetes-pod-metrics

Install Postgres with Helm

We will also install the Postgres database for multi-node cluster testing purposes. Once again, there is a Helm chart that simplifies Postgres installation on Kubernetes. It is published in the Bitnami repository. Firstly, let’s add the required repository:

$ helm repo add bitnami https://charts.bitnami.com/bitnami
ShellSession

Then, we can install Postgres in the db namespace. We increase the default number of instances to 3.

$ helm install postgresql bitnami/postgresql \
  --set readReplicas.replicaCount=3 \
  -n db --create-namespace
ShellSession

The chart creates the StatefulSet object with 3 replicas.

$ kubectl get statefulset -n db
NAME         READY   AGE
postgresql   3/3     55m
ShellSession

We can display a list of running pods. As you see, Kubernetes scheduled 2 pods on the minikube-m02 node, and a single pod on the minikube node.

$ kubectl get po -n db -o wide
NAME           READY   STATUS    RESTARTS   AGE   IP            NODE 
postgresql-0   1/1     Running   0          56m   10.244.1.9    minikube-m02
postgresql-1   1/1     Running   0          23m   10.244.1.10   minikube-m02
postgresql-2   1/1     Running   0          23m   10.244.0.4    minikube
ShellSession

Under the hood, there are 3 persistent volumes created. They use the default csi-hostpath-sc storage class and the RWO access mode.

$ kubectl get pvc -n db
NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      VOLUMEATTRIBUTESCLASS   AGE
data-postgresql-0   Bound    pvc-e9b55ce8-978a-44ae-8fab-d5d6f911f1f9   8Gi        RWO            csi-hostpath-sc   <unset>                 65m
data-postgresql-1   Bound    pvc-d93af9ad-a034-4fbb-8377-f39005cddc99   8Gi        RWO            csi-hostpath-sc   <unset>                 32m
data-postgresql-2   Bound    pvc-b683f1dc-4cd9-466c-9c99-eb0d356229c3   8Gi        RWO            csi-hostpath-sc   <unset>                 32m
ShellSession

Build and Deploy Sample Spring Boot App on Minikube

In the first step, we build the app image using the Jib Maven plugin. I’m pushing the image to my own Docker registry account under the piomin name, so you should change it to your own registry account.

$ cd volumes/files-app
$ mvn clean compile jib:build -Dimage=piomin/files-app:latest
ShellSession

The image is successfully pushed to the remote registry and is available under the piomin/files-app:latest tag.

Let’s create a new namespace on Minikube. We will run our app in the demo namespace.

$ kubectl create ns demo
ShellSession

Then, let’s create the PersistentVolumeClaim. Since we will run multiple app pods distributed across all the Kubernetes nodes, and the same volume is shared between all the instances, we need the ReadWriteMany mode.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
  namespace: demo
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
YAML

Let’s apply the manifest and verify that the claim has been created and bound:

$ kubectl get pvc -n demo
NAME   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      VOLUMEATTRIBUTESCLASS   AGE
data   Bound    pvc-08fe242a-6599-4282-b03c-ee38e092431e   1Gi        RWX            csi-hostpath-sc
ShellSession

After that, we can deploy our app. In order to spread the pods across all the cluster nodes, we need to define a podAntiAffinity rule (1), which allows only a single app pod to run on each node. The deployment also mounts the data volume into all the app pods (2) (3).

apiVersion: apps/v1
kind: Deployment
metadata:
  name: files-app
  namespace: demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: files-app
  template:
    metadata:
      labels:
        app: files-app
    spec:
      # (1)
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - files-app
              topologyKey: "kubernetes.io/hostname"
      containers:
      - name: files-app
        image: piomin/files-app:latest
        imagePullPolicy: Always
        resources:
          requests:
            memory: 200Mi
            cpu: 100m
        ports:
        - containerPort: 8080
        env:
          - name: MOUNT_PATH
            value: /mount/data
        # (2)
        volumeMounts:
          - name: data
            mountPath: /mount/data
      # (3)
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: data
YAML

Let’s verify the list of the running app pods.

$ kubectl get po -n demo
NAME                         READY   STATUS    RESTARTS   AGE
files-app-84897d9b57-5qqdr   0/1     Pending   0          36m
files-app-84897d9b57-7gwgp   1/1     Running   0          36m
files-app-84897d9b57-bjs84   0/1     Pending   0          36m
ShellSession

Although we created the RWX volume, only a single pod is running. As you can see, the CSI Hostpath Driver doesn’t fully support the read-write-many mode on Minikube.

In order to solve that problem, we can enable the Storage Provisioner Gluster addon in Minikube.

$ minikube addons enable storage-provisioner-gluster
ShellSession

After enabling it, several new pods are running in the storage-gluster namespace.

$ kubectl -n storage-gluster get pods
NAME                                       READY   STATUS    RESTARTS   AGE
glusterfile-provisioner-79cf7f87d5-87p57   1/1     Running   0          5m25s
glusterfs-d8pfp                            1/1     Running   0          5m25s
glusterfs-mp2qx                            1/1     Running   0          5m25s
glusterfs-rlnxz                            1/1     Running   0          5m25s
heketi-778d755cd-jcpqb                     1/1     Running   0          5m25s
ShellSession

Also, there is a new default StorageClass named glusterfile.

$ kubectl get sc
NAME                    PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
csi-hostpath-sc         hostpath.csi.k8s.io        Delete          Immediate           false                  20h
glusterfile (default)   gluster.org/glusterfile    Delete          Immediate           false                  19s
standard                k8s.io/minikube-hostpath   Delete          Immediate           false                  21h
ShellSession
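Since glusterfile is now the default StorageClass, we can delete and recreate the data claim against it. Here’s a sketch based on the earlier manifest; the explicit storageClassName line is optional when relying on the cluster default:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
  namespace: demo
spec:
  storageClassName: glusterfile   # optional: glusterfile is already the default
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```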

Once we redeploy our app and recreate the PVC using the new default storage class, we can expose our sample Spring Boot app as a Kubernetes service:

apiVersion: v1
kind: Service
metadata:
  name: files-app
spec:
  selector:
    app: files-app
  ports:
  - port: 8080
    protocol: TCP
    name: http
  type: ClusterIP
YAML

Then, let’s enable port forwarding for that service to access it over localhost:8080:

$ kubectl port-forward svc/files-app 8080 -n demo
ShellSession

Finally, we can run some tests to list and create some files on the target volume:

$ curl http://localhost:8080/files/all
[]

$ curl http://localhost:8080/files/test1.txt -X POST
/mount/data/test1.txt

$ curl http://localhost:8080/files/test2.txt -X POST
/mount/data/test2.txt

$ curl http://localhost:8080/files/all
["/mount/data/test1.txt","/mount/data/test2.txt"]

$ curl http://localhost:8080/files/test1.txt/line -X POST -d "hello1"
$ curl http://localhost:8080/files/test1.txt/line -X POST -d "hello2"
ShellSession

Finally, let’s verify the content of a particular file inside the volume.
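For example, we can exec into one of the app pods and print the file. This is just an illustrative command: the pod name below is taken from an earlier listing and will differ in your cluster.

```shell
$ kubectl exec files-app-84897d9b57-7gwgp -n demo -- cat /mount/data/test1.txt
```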

Final Thoughts

In this article, I wanted to share my experience working with the multi-node Kubernetes cluster simulation on Minikube. It was a very quick introduction. I hope it helps 🙂
