Java Archives - Piotr's TechBlog https://piotrminkowski.com/tag/java/ (Java, Spring, Kotlin, microservices, Kubernetes, containers)

Create Apps with Claude Code on Ollama https://piotrminkowski.com/2026/02/17/create-apps-with-claude-code-on-ollama/ (Tue, 17 Feb 2026)

This article explains how to run Claude Code on Ollama and use local or cloud models served by Ollama to create Java apps. Read this article if you are experimenting with AI code generation and currently use paid APIs for this purpose. Relatively recently, Ollama made a built-in integration with developer tools such as Codex and Claude Code available. This is a really useful feature. Using the example of the integration with Claude Code and several different models, running both locally and in the cloud, you will see how it works.

You can find other articles about AI and Java on my blog. For example, if you are interested in how to use Ollama to serve models for Spring AI applications, you can read the following article.

Source Code

Feel free to use my source code if you’d like to try it out yourself. To do that, you must clone my sample GitHub repository. Then, simply follow my instructions. This repository contains several branches, each with an application generated from the same prompt using a different model. Currently, the branch with the fewest comments in the code review has been merged into master. This is the version of the code generated using the glm-5 model. However, this may change in the future, and the master branch may be modified. Therefore, it is best to refer to the individual branches or pull requests shown below.

Below is the current list of branches. The dev branch contains the initial version of the repository with the CLAUDE.md file, which specifies the basic requirements for the generated code.

$ git branch
    dev
    glm-5
    gpt-oss
  * master
    minimax
    qwen3-coder
ShellSession

Here are the instructions for the AI from the CLAUDE.md file. They include a description of the technologies I plan to use in my application and a few practices I intend to apply. For example, I don’t want to use Lombok, a popular Java library that automates the generation of boilerplate such as getters, setters, and constructors. It seems that in the age of AI this approach no longer makes sense, but for some reason AI models really like this library 🙂 Also, each time I make a code change, I want the model to increment the version number, update the README.md file, etc.

# Project Instructions

- Always use the latest versions of dependencies.
- Always write Java code as the Spring Boot application.
- Always use Maven for dependency management.
- Always create test cases for the generated code both positive and negative.
- Always generate the CircleCI pipeline in the .circleci directory to verify the code.
- Minimize the amount of code generated.
- The Maven artifact name must be the same as the parent directory name.
- Use semantic versioning for the Maven project. Each time you generate a new version, bump the PATCH section of the version number.
- Use `pl.piomin.services` as the group ID for the Maven project and base Java package.
- Do not use the Lombok library.
- Generate the Docker Compose file to run all components used by the application.
- Update README.md each time you generate a new version.
Markdown
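The PATCH-bump rule above is plain semantic-versioning arithmetic. A minimal sketch of what it amounts to (my illustration, not code from the repository):

```java
// Illustration of the PATCH-bump rule from CLAUDE.md (my sketch):
// "1.0.3" -> "1.0.4", leaving MAJOR and MINOR untouched.
class SemVer {
    static String bumpPatch(String version) {
        int dot = version.lastIndexOf('.');
        int patch = Integer.parseInt(version.substring(dot + 1));
        return version.substring(0, dot + 1) + (patch + 1);
    }

    public static void main(String[] args) {
        System.out.println(bumpPatch("1.0.3")); // -> 1.0.4
    }
}
```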

Run Claude on Ollama

First, install Ollama on your computer. You can download the installer for your OS here. If you have used Ollama before, please update to the latest version.

$ ollama --version
  ollama version is 0.16.1
ShellSession

Next, install Claude Code.

curl -fsSL https://claude.ai/install.sh | bash
ShellSession

Before you start, it is worth increasing the maximum context window value allowed by Ollama. By default, it is set to 4k, and on the Ollama website itself, you will find a recommendation of 64k for Claude Code. I set the maximum value to 256k for testing different models.
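As for how to raise it: recent Ollama versions read the default context length from the OLLAMA_CONTEXT_LENGTH environment variable. This is a sketch based on my understanding; the way to set the variable (e.g., launchctl on macOS, a systemd override on Linux) may vary by OS and Ollama version:

```shell
# Set the default context window to 256k tokens (assumption: your Ollama
# version honors OLLAMA_CONTEXT_LENGTH), then start the server so the
# new value takes effect.
export OLLAMA_CONTEXT_LENGTH=262144
ollama serve
```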

For example, the gpt-oss model supports a 128k context window size.

Let’s pull and run the gpt-oss model with Ollama:

ollama run gpt-oss
ShellSession

After downloading and launching, you can verify the model parameters with the ollama ps command. If it shows 100% GPU and a context window size of ~131k, everything is set up as intended.

Ensure you are in the root repository directory, then run Claude Code with the command ollama launch claude. Next, choose the gpt-oss model visible in the list under “More”.

ollama-claude-code-gpt-oss

That’s it! Finally, we can start playing with AI.

ollama-claude-code-run

Generate a Java App with Claude Code

My application will be very simple. I just need something to quickly test the solution. Of course, all guidelines defined in the CLAUDE.md file should be followed. So, here is my prompt. Nothing more, nothing less 🙂

Generate an application that exposes REST API and connects to a PostgreSQL database.
The application should have a Person entity with id, and typical fields related to each person.
All REST endpoints should be protected with JWT and OAuth2.
The codebase should use Skaffold to deploy on Kubernetes.
Plaintext

After a few minutes, I have the entire code generated. Below is a summary from the AI of what has been done. If you want to check it out for yourself, take a look at this branch in my repository.

ollama-claude-code-generated

For completeness, let’s take a look at the generated code. There is nothing spectacular here, because it is just a regular Spring Boot application that exposes a few REST endpoints for CRUD operations. However, it doesn’t look bad. Here’s the Spring Boot @Service implementation responsible for using PersonRepository to interact with the database.

@Service
public class PersonService {
    private final PersonRepository repository;

    public PersonService(PersonRepository repository) {
        this.repository = repository;
    }

    public List<Person> findAll() {
        return repository.findAll();
    }

    public Optional<Person> findById(Long id) {
        return repository.findById(id);
    }

    @Transactional
    public Person create(Person person) {
        return repository.save(person);
    }

    @Transactional
    public Optional<Person> update(Long id, Person person) {
        return repository.findById(id).map(existing -> {
            existing.setFirstName(person.getFirstName());
            existing.setLastName(person.getLastName());
            existing.setEmail(person.getEmail());
            existing.setAge(person.getAge());
            return repository.save(existing);
        });
    }

    @Transactional
    public void delete(Long id) {
        repository.deleteById(id);
    }
}
Java
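The service copies four fields of the Person entity. Since the CLAUDE.md instructions ban Lombok, a minimal plain-Java sketch of such an entity looks like this (JPA annotations such as @Entity and @Id omitted here to keep the snippet self-contained; the generated entity presumably includes them):

```java
// Plain-Java sketch of the Person entity implied by the service code
// (field names taken from the setters used in update()).
class Person {
    private Long id;
    private String firstName;
    private String lastName;
    private String email;
    private Integer age;

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }
    public String getLastName() { return lastName; }
    public void setLastName(String lastName) { this.lastName = lastName; }
    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }
    public Integer getAge() { return age; }
    public void setAge(Integer age) { this.age = age; }
}
```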

Here’s the generated @RestController with the REST endpoints implementation:

@RestController
@RequestMapping("/api/people")
public class PersonController {
    private final PersonService service;

    public PersonController(PersonService service) {
        this.service = service;
    }

    @GetMapping
    public List<Person> getAll() {
        return service.findAll();
    }

    @GetMapping("/{id}")
    public ResponseEntity<Person> getById(@PathVariable Long id) {
        Optional<Person> person = service.findById(id);
        return person.map(ResponseEntity::ok).orElseGet(() -> ResponseEntity.notFound().build());
    }

    @PostMapping
    public ResponseEntity<Person> create(@RequestBody Person person) {
        Person saved = service.create(person);
        return ResponseEntity.status(201).body(saved);
    }

    @PutMapping("/{id}")
    public ResponseEntity<Person> update(@PathVariable Long id, @RequestBody Person person) {
        Optional<Person> updated = service.update(id, person);
        return updated.map(ResponseEntity::ok).orElseGet(() -> ResponseEntity.notFound().build());
    }

    @DeleteMapping("/{id}")
    public ResponseEntity<Void> delete(@PathVariable Long id) {
        service.delete(id);
        return ResponseEntity.noContent().build();
    }
}
Java
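The controller’s not-found handling hinges on Optional.map: a present value maps to the updated entity (HTTP 200), while an empty Optional falls through to notFound() (HTTP 404). A tiny self-contained illustration of that pattern, with a HashMap standing in for the real PersonRepository (my sketch, not code from the generated app):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Optional.map over a repository lookup: present -> update and return,
// absent -> empty Optional (which the controller turns into a 404).
class UpdatePattern {
    static final Map<Long, String> store = new HashMap<>();

    static Optional<String> update(Long id, String newName) {
        return Optional.ofNullable(store.get(id))
                .map(existing -> {
                    store.put(id, newName); // "save" the change
                    return newName;
                });
    }

    public static void main(String[] args) {
        store.put(1L, "Anna");
        System.out.println(update(1L, "Eva").isPresent());  // true  -> 200 OK
        System.out.println(update(99L, "Eva").isPresent()); // false -> 404
    }
}
```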

Below is a summary in a pull request with the generated code.

ollama-claude-code-pr

Using Ollama Cloud Models

Recently, Ollama has made it possible to run models not only locally, but also in the cloud. By default, all models tagged with cloud are run this way. Cloud models are automatically offloaded to Ollama’s cloud service while offering the same capabilities as local models. This is most useful for larger models that wouldn’t fit on a personal computer. You can, for example, try to experiment with the qwen3-coder model locally. Unfortunately, it didn’t perform very well on my laptop.

Then, I can run the same or even a larger model in the cloud and automatically connect Claude Code to that model using the following command:

ollama launch claude --model qwen3-coder:480b-cloud
ShellSession

Now you can repeat exactly the same exercise as before or take a look at my branch containing the code generated using this model.

You can also try some other cloud models like minimax-m2.5 or glm-5.

Conclusion

If you’re developing locally and don’t want to burn money on APIs, use Claude Code with Ollama, and e.g., the gpt-oss or glm-5 models. It’s a pretty powerful and free option. If you have a powerful personal computer, the locally launched model should be able to generate the code efficiently. Otherwise, you can use the option of launching the model in the cloud offered by Ollama free of charge up to a certain usage limit (it is difficult to say exactly what that limit is). The gpt-oss model worked really well on my laptop (MacBook Pro M3), and it took about 7-8 minutes to generate the application. You can also look for a model that suits you better.

Istio Spring Boot Library Released https://piotrminkowski.com/2026/01/06/istio-spring-boot-library-released/ (Tue, 06 Jan 2026)

This article explains how to use my Spring Boot Istio library to generate and create Istio resources on a Kubernetes cluster during application startup. The library is primarily intended for development purposes: it aims to make it easier for developers to quickly launch their applications within the Istio mesh. Of course, you can also use it in production. Its purpose is to automatically generate resources from annotations in the Java application code.

You can also find many other articles on my blog about Istio. For example, this article is about Quarkus and tracing with Istio.

Source Code

Feel free to use my source code if you’d like to try it out yourself. To do that, you must clone my sample GitHub repository. Two sample applications for this exercise are available in the spring-boot-istio directory.

Prerequisites

We will start with the most straightforward Istio installation on a local Kubernetes cluster. This could be Minikube, which you can run with the following command. You can set slightly lower resource limits than I did.

minikube start --memory='8gb' --cpus='6'
ShellSession

To complete the exercise below, you need to install istioctl in addition to kubectl. Here you will find the available distributions for the latest versions of kubectl and istioctl. I install them on my laptop using Homebrew.

$ brew install kubectl
$ brew install istioctl
ShellSession

To install Istio with default parameters, run the following command:

istioctl install
ShellSession

After a moment, Istio should be running in the istio-system namespace.

It is also worth installing Kiali to verify the Istio resources we have created. Kiali is an observability and management tool for Istio that provides a web-based dashboard for service mesh monitoring. It visualizes service-to-service traffic, validates Istio configuration, and integrates with tools like Prometheus, Grafana, and Jaeger.

kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.28/samples/addons/kiali.yaml
ShellSession

Once Kiali is successfully installed on Kubernetes, we can expose its web dashboard locally with the following istioctl command:

istioctl dashboard kiali
ShellSession

To test access to the Istio mesh from outside the cluster, you need to expose the ingress gateway. To do this, run the minikube tunnel command.

Use Spring Boot Istio Library

To test our library’s functionality, we will create a simple Spring Boot application that exposes a single REST endpoint.

@RestController
@RequestMapping("/callme")
public class CallmeController {

    private static final Logger LOGGER = LoggerFactory
        .getLogger(CallmeController.class);

    @Autowired
    Optional<BuildProperties> buildProperties;
    @Value("${VERSION}")
    private String version;

    @GetMapping("/ping")
    public String ping() {
        LOGGER.info("Ping: name={}, version={}", buildProperties.isPresent() ?
            buildProperties.get().getName() : "callme-service", version);
        return "I'm callme-service " + version;
    }
    
}
Java

Then, in addition to the standard Spring Web starter, add the istio-spring-boot-starter dependency.

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
  <groupId>com.github.piomin</groupId>
  <artifactId>istio-spring-boot-starter</artifactId>
  <version>1.2.1</version>
</dependency>
XML

Finally, we must add the @EnableIstio annotation to our application’s main class. We can also enable Istio Gateway to expose the REST endpoint outside the cluster. An Istio Gateway is a component that controls how external traffic enters or leaves a service mesh.

@SpringBootApplication
@EnableIstio(enableGateway = true)
public class CallmeApplication {

	public static void main(String[] args) {
		SpringApplication.run(CallmeApplication.class, args);
	}
	
}
Java

Let’s deploy our application on the Kubernetes cluster. To do this, we must first create a role with the necessary permissions to manage Istio resources in the cluster. The role must be assigned to the ServiceAccount used by the application.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: callme-service-with-starter
rules:
  - apiGroups: ["networking.istio.io"]
    resources: ["virtualservices", "destinationrules", "gateways"]
    verbs: ["create", "get", "list", "watch", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: callme-service-with-starter
subjects:
  - kind: ServiceAccount
    name: callme-service-with-starter
    namespace: spring
roleRef:
  kind: Role
  name: callme-service-with-starter
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: callme-service-with-starter
YAML

Here are the Deployment and Service resources.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: callme-service-with-starter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: callme-service-with-starter
  template:
    metadata:
      labels:
        app: callme-service-with-starter
    spec:
      containers:
        - name: callme-service-with-starter
          image: piomin/callme-service-with-starter
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          env:
            - name: VERSION
              value: "v1"
      serviceAccountName: callme-service-with-starter
---
apiVersion: v1
kind: Service
metadata:
  name: callme-service-with-starter
  labels:
    app: callme-service-with-starter
spec:
  type: ClusterIP
  ports:
  - port: 8080
    name: http
  selector:
    app: callme-service-with-starter
YAML

The application repository is configured to run it with Skaffold. Of course, you can apply YAML manifests to the cluster with kubectl apply. To do this, simply navigate to the callme-service-with-starter/k8s directory and apply the deployment.yaml file. As part of the exercise, we will run our applications in the spring namespace.

skaffold dev -n spring
ShellSession

The sample Spring Boot application creates two Istio objects at startup: a VirtualService and a Gateway. We can verify them in the Kiali dashboard.

spring-boot-istio-kiali
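For reference, here is my rough approximation of what such a generated Gateway could look like, inferred from the hostname convention and the VirtualService shown later in this article; the library’s actual output may differ in details:

```yaml
apiVersion: networking.istio.io/v1
kind: Gateway
metadata:
  name: callme-service-with-starter
spec:
  selector:
    istio: ingressgateway   # bind to the default Istio ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - callme-service-with-starter.ext   # deployment name + .ext suffix
```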

The default hostname generated for the gateway includes the deployment name and the .ext suffix. We can change the suffix using the domain field in the @EnableIstio annotation. Assuming you have run the minikube tunnel command, you can call the service by setting the Host header in the following way:

$ curl http://localhost/callme/ping -H "Host:callme-service-with-starter.ext"
I'm callme-service v1
ShellSession

Additional Capabilities with Spring Boot Istio

The library’s behavior can be customized by modifying the @EnableIstio annotation. For example, you can enable the fault injection mechanism using the fault field. Both abort and delay are possible. You can make this change without redeploying the app. The library updates the existing VirtualService.

@SpringBootApplication
@EnableIstio(enableGateway = true, fault = @Fault(percentage = 50))
public class CallmeApplication {

	public static void main(String[] args) {
		SpringApplication.run(CallmeApplication.class, args);
	}
	
}
Java

Now, you can call the same endpoint through the gateway. There is a 50% chance that you will receive this response.

spring-boot-istio-curl

Now let’s analyze a slightly more complex scenario. Let’s assume that we are running two different versions of the same application on Kubernetes, and we use Istio to manage the versioning. Traffic is forwarded to a particular version based on the X-Version header in the incoming request. If the header value is v1, the request is sent to the application Pod with the label version=v1, and similarly for version v2. Here’s the annotation for the v1 application main class.

@SpringBootApplication
@EnableIstio(enableGateway = true, version = "v1",
	matches = { 
	  @Match(type = MatchType.HEADERS, key = "X-Version", value = "v1") })
public class CallmeApplication {

	public static void main(String[] args) {
		SpringApplication.run(CallmeApplication.class, args);
	}
	
}
Java

The Deployment manifest for the callme-service-with-starter-v1 should define two labels: app and version.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: callme-service-with-starter-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: callme-service-with-starter
      version: v1
  template:
    metadata:
      labels:
        app: callme-service-with-starter
        version: v1
    spec:
      containers:
        - name: callme-service-with-starter
          image: piomin/callme-service-with-starter
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          env:
            - name: VERSION
              value: "v1"
      serviceAccountName: callme-service-with-starter
YAML

Unlike the skaffold dev command, the skaffold run command simply launches the application on the cluster and terminates. Let’s first release version v1, and then move on to version v2.

skaffold run -n spring
ShellSession

Then, we can deploy the v2 version of our sample Spring Boot application. In this exercise, it is just an “artificial” version, since we deploy the same source code, but with different labels and environment variables injected in the Deployment manifest.

@SpringBootApplication
@EnableIstio(enableGateway = true, version = "v2",
	matches = { 
	  @Match(type = MatchType.HEADERS, key = "X-Version", value = "v2") })
public class CallmeApplication {

	public static void main(String[] args) {
		SpringApplication.run(CallmeApplication.class, args);
	}
	
}
Java

Here’s the callme-service-with-starter-v2 Deployment manifest.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: callme-service-with-starter-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: callme-service-with-starter
      version: v2
  template:
    metadata:
      labels:
        app: callme-service-with-starter
        version: v2
    spec:
      containers:
        - name: callme-service-with-starter
          image: piomin/callme-service-with-starter
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          env:
            - name: VERSION
              value: "v2"
      serviceAccountName: callme-service-with-starter
YAML

The Service, on the other hand, remains unchanged. It still refers to Pods labeled with app=callme-service-with-starter. However, this time it covers both application instances, v1 and v2.

apiVersion: v1
kind: Service
metadata:
  name: callme-service-with-starter
  labels:
    app: callme-service-with-starter
spec:
  type: ClusterIP
  ports:
  - port: 8080
    name: http
  selector:
    app: callme-service-with-starter
YAML

Version v2 should be run in the same way as before, using the skaffold run command. Three Istio objects are generated during application startup: Gateway, VirtualService, and DestinationRule.

spring-boot-istio-versioning

The generated DestinationRule contains two subsets, for the v1 and v2 versions.

spring-boot-istio-subsets
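My approximation of what that generated rule could contain, based on the version labels used in the Deployments above (the actual object created by the library may differ in details):

```yaml
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: callme-service-with-starter
spec:
  host: callme-service-with-starter
  subsets:
  - name: v1
    labels:
      version: v1   # matches Pods from the -v1 Deployment
  - name: v2
    labels:
      version: v2   # matches Pods from the -v2 Deployment
```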

The automatically generated VirtualService looks as follows.

kind: VirtualService
apiVersion: networking.istio.io/v1
metadata:
  name: callme-service-with-starter-route
spec:
  hosts:
  - callme-service-with-starter
  - callme-service-with-starter.ext
  gateways:
  - callme-service-with-starter
  http:
  - match:
    - headers:
        X-Version:
          prefix: v1
    route:
    - destination:
        host: callme-service-with-starter
        subset: v1
    timeout: 6s
    retries:
      attempts: 3
      perTryTimeout: 2s
      retryOn: 5xx
  - match:
    - headers:
        X-Version:
          prefix: v2
    route:
    - destination:
        host: callme-service-with-starter
        subset: v2
      weight: 100
    timeout: 6s
    retries:
      attempts: 3
      perTryTimeout: 2s
      retryOn: 5xx
YAML

To test the versioning mechanism generated using the Spring Boot Istio library, set the X-Version header for each call.

$ curl http://localhost/callme/ping -H "Host:callme-service-with-starter.ext" -H "X-Version:v1"
I'm callme-service v1

$ curl http://localhost/callme/ping -H "Host:callme-service-with-starter.ext" -H "X-Version:v2"
I'm callme-service v2
ShellSession

Conclusion

I am still working on this library, and new features will be added in the near future. I hope it will be helpful for those of you who want to get started with Istio without getting into the details of its configuration.

Startup CPU Boost in Kubernetes with In-Place Pod Resize https://piotrminkowski.com/2025/12/22/startup-cpu-boost-in-kubernetes-with-in-place-pod-resize/ (Mon, 22 Dec 2025)

This article explains how to use the In-Place Pod Resize feature in Kubernetes, combined with Kube Startup CPU Boost, to speed up Java application startup. The In-Place Update of Pod Resources feature was initially introduced in Kubernetes 1.27 as an alpha release. With Kubernetes 1.35, the feature has reached GA stability. One potential use case for this feature is to set a high CPU limit only during application startup, which Java needs to launch quickly. I have already described such a scenario in my previous article. The example implemented there used the Kyverno tool. However, it was based on an alpha version of the in-place pod resize feature, so it requires a minor tweak to the Kyverno policy to align with the GA release.

Another potential solution in this context is the Vertical Pod Autoscaler. In its latest version, it supports in-place pod resize. The Vertical Pod Autoscaler (VPA) in Kubernetes automatically adjusts CPU and memory requests/limits for pods based on their actual usage, ensuring containers receive the appropriate resources. Unlike the Horizontal Pod Autoscaler (HPA), which scales the number of replicas, the VPA adjusts resources and may restart pods to apply changes. For now, the VPA does not support the startup-boost use case, but once that feature is implemented, the situation will change.

On the other hand, Kube Startup CPU Boost is a dedicated feature for scenarios with high CPU requirements during app startup. It is a controller that increases CPU resource requests and limits during Kubernetes workload startup. Once the workload is up and running, the resources are set back to their original values. Let’s see how this solution works in practice!

Source Code

Feel free to use my source code if you’d like to try it out yourself. To do that, you must clone my sample GitHub repository. The sample application is based on Spring Boot and exposes several REST endpoints. However, in this exercise, we will use a ready-made image published on my Quay: quay.io/pminkows/sample-kotlin-spring:1.5.1.1.

Install Kube Startup CPU Boost

The Kubernetes cluster you are using must enable the In-Place Pod Resize feature. The activation method may vary by Kubernetes distribution. For the Minikube I am using in today’s example, it looks like this:

minikube start --memory='8gb' --cpus='6' --feature-gates=InPlacePodVerticalScaling=true
ShellSession

After that, we can proceed with installing the Kube Startup CPU Boost controller. There are several ways to achieve it. The easiest way to do this is with a Helm chart. Let’s add the following Helm repository:

helm repo add kube-startup-cpu-boost https://google.github.io/kube-startup-cpu-boost
ShellSession

Then, we can install the kube-startup-cpu-boost chart in the dedicated kube-startup-cpu-boost-system namespace using the following command:

helm install -n kube-startup-cpu-boost-system kube-startup-cpu-boost \
  kube-startup-cpu-boost/kube-startup-cpu-boost --create-namespace
ShellSession

If the installation was successful, you should see the following pod running in the kube-startup-cpu-boost-system namespace.

$ kubectl get pod -n kube-startup-cpu-boost-system
NAME                                                         READY   STATUS    RESTARTS   AGE
kube-startup-cpu-boost-controller-manager-75f95d5fb6-692s6   1/1     Running   0          36s
ShellSession

Install Monitoring Stack (optional)

Then, we can install Prometheus monitoring. This is an optional step that lets us verify pod resource usage in graphical form. First, let’s add the following Helm repository:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
ShellSession

After that, we can install the latest version of the kube-prometheus-stack chart in the monitoring namespace.

helm install my-kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  -n monitoring --create-namespace
ShellSession

Let’s verify the installation succeeded by listing the pods running in the monitoring namespace.

$ kubectl get pods -n monitoring
NAME                                                         READY   STATUS    RESTARTS   AGE
alertmanager-my-kube-prometheus-stack-alertmanager-0         2/2     Running   0          38s
my-kube-prometheus-stack-grafana-f8bb6b8b8-mzt4l             3/3     Running   0          48s
my-kube-prometheus-stack-kube-state-metrics-99f4574c-bf5ln   1/1     Running   0          48s
my-kube-prometheus-stack-operator-6d58dd9d6c-6srtg           1/1     Running   0          48s
my-kube-prometheus-stack-prometheus-node-exporter-tdwmr      1/1     Running   0          48s
prometheus-my-kube-prometheus-stack-prometheus-0             2/2     Running   0          38s
ShellSession

Finally, we can expose the Prometheus console over localhost using the port forwarding feature:

kubectl port-forward svc/my-kube-prometheus-stack-prometheus 9090:9090 -n monitoring
ShellSession

Configure Kube Startup CPU Boost

The Kube Startup CPU Boost configuration is pretty intuitive. We need to create a StartupCPUBoost resource. It can manage multiple applications based on a given selector. In our case, it is a single sample-kotlin-spring Deployment determined by the app.kubernetes.io/name label (1). The next step is to define the resource management policy (2). The Kube Startup CPU Boost increases both request and limit by 50%. Resources should only be increased for the duration of the startup (3). Therefore, once the readiness probe succeeds, the resource level will return to its initial state. Of course, everything happens in-place without restarting the container.

apiVersion: autoscaling.x-k8s.io/v1alpha1
kind: StartupCPUBoost
metadata:
  name: sample-kotlin-spring
  namespace: demo
selector:
  matchExpressions: # (1)
  - key: app.kubernetes.io/name
    operator: In
    values: ["sample-kotlin-spring"]
spec:
  resourcePolicy: # (2)
    containerPolicies:
    - containerName: sample-kotlin-spring
      percentageIncrease:
        value: 50
  durationPolicy: # (3)
    podCondition:
      type: Ready
      status: "True"
YAML
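The percentageIncrease policy is simple arithmetic: the controller raises the value by the given percentage, so the 200m CPU request used later in this exercise becomes 300m. A trivial sketch of that math (my illustration, not the controller’s actual code):

```java
// Arithmetic behind "percentageIncrease: value: 50"; CPU values are
// expressed in millicores.
class BoostMath {
    static long boosted(long millicores, long percentage) {
        return millicores + millicores * percentage / 100;
    }

    public static void main(String[] args) {
        // The sample app requests 200m, so a 50% boost yields 300m.
        System.out.println(boosted(200, 50)); // -> 300
    }
}
```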

Next, we will deploy our sample application. Here’s the Deployment manifest of our Spring Boot app. The name of the app container is sample-kotlin-spring, which matches the containerName defined in the StartupCPUBoost resource (1). Then, we set the CPU limit to 500 millicores (2). There’s also a new resizePolicy field, which tells Kubernetes whether a change to CPU or memory can be applied in-place or requires a Pod restart (3). The NotRequired value means that changing the resource limit or request will not trigger a pod restart. The Deployment object also contains a readiness probe that calls the GET /actuator/health/readiness endpoint exposed by the Spring Boot Actuator (4).

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-kotlin-spring
  namespace: demo
  labels:
    app: sample-kotlin-spring
    app.kubernetes.io/name: sample-kotlin-spring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-kotlin-spring
  template:
    metadata:
      labels:
        app: sample-kotlin-spring
        app.kubernetes.io/name: sample-kotlin-spring
    spec:
      containers:
      - name: sample-kotlin-spring # (1)
        image: quay.io/pminkows/sample-kotlin-spring:1.5.1.1
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 500m # (2)
            memory: "1Gi"
          requests:
            cpu: 200m
            memory: "256Mi"
        resizePolicy: # (3)
        - resourceName: "cpu"
          restartPolicy: "NotRequired"
        readinessProbe: # (4)
          httpGet:
            path: /actuator/health/readiness
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 15
          periodSeconds: 5
          successThreshold: 1
          failureThreshold: 3
YAML

Here are the pod requests and limits configured by Kube Startup CPU Boost. As you can see, the request is set to 300m, while the limit is completely removed.

in-place-pod-resize-boost

Once the application startup process completes, Kube Startup CPU Boost restores the initial request and limit.

Now we can switch to the Prometheus console to see the history of CPU request values for our pod. As you can see, the request was temporarily increased during the pod startup.

The chart below illustrates CPU usage when the application is launched and then during normal operation.

in-place-pod-resize-metric

We can also define the fixed resources for a target container. The CPU requests and limits of the selected container will be set to the given values (1). If you do not want the operator to remove the CPU limit during boost time, set the REMOVE_LIMITS environment variable to false in the kube-startup-cpu-boost-controller-manager Deployment.

apiVersion: autoscaling.x-k8s.io/v1alpha1
kind: StartupCPUBoost
metadata:
  name: sample-kotlin-spring
  namespace: demo
spec:
  selector:
    matchExpressions:
    - key: app.kubernetes.io/name
      operator: In
      values: ["sample-kotlin-spring"]
  resourcePolicy:
    containerPolicies: # (1)
    - containerName: sample-kotlin-spring
      fixedResources:
        requests: "500m"
        limits: "2"
  durationPolicy:
    podCondition:
      type: Ready
      status: "True"
YAML

Conclusion

There are many ways to address an application's CPU demand during startup. The simplest is not to set a CPU limit on the Deployment at all. In fact, many people argue that setting a CPU limit doesn't make sense anyway, though for various reasons. Even then, the CPU request issue remains, but given the short startup window and usage that briefly exceeds the declared value, it usually isn't material.

Other solutions are related strictly to Java features. If we compile the application natively with GraalVM or use the CRaC feature, we significantly speed up startup and reduce CPU requirements.

Finally, several solutions rely on in-place resizing. If you use Kyverno, consider its mutate policy, which can modify resources in response to an application startup event. The Kube Startup CPU Boost tool described in this article operates similarly but is designed exclusively for this use case. In the near future, Vertical Pod Autoscaler will also offer a CPU boost via in-place resize.

The post Startup CPU Boost in Kubernetes with In-Place Pod Resize appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2025/12/22/startup-cpu-boost-in-kubernetes-with-in-place-pod-resize/feed/ 0 15917
A Book: Hands-On Java with Kubernetes https://piotrminkowski.com/2025/12/08/a-book-hands-on-java-with-kubernetes/ https://piotrminkowski.com/2025/12/08/a-book-hands-on-java-with-kubernetes/#respond Mon, 08 Dec 2025 16:05:58 +0000 https://piotrminkowski.com/?p=15892 My book about Java and Kubernetes has finally been published! The book “Hands-On Java with Kubernetes” is the result of several months of work and, in fact, a summary of my experiences over the last few years of research and development. In this post, I want to share my thoughts on this book, explain why […]

The post A Book: Hands-On Java with Kubernetes appeared first on Piotr's TechBlog.

]]>
My book about Java and Kubernetes has finally been published! The book “Hands-On Java with Kubernetes” is the result of several months of work and, in fact, a summary of my experiences over the last few years of research and development. In this post, I want to share my thoughts on this book, explain why I chose to write and publish it, and briefly outline its content and concept. To purchase the latest version, go to this link.

Here is a brief overview of all my published books.

Motivation

I won’t hide that this post is mainly directed at my blog subscribers and people who enjoy reading it and value my writing style. As you know, all posts and content on my blog, along with sample application repositories on GitHub, are always accessible to you for free. Over the past eight years, I have worked to publish high-quality content on my blog, and I plan to keep doing so. It is a part of my life, a significant time commitment, but also a lot of fun and a hobby.

I want to explain why I decided to write this book, why now, and why in this way. But first, a bit of background. I wrote my previous book, which was also my first, over seven years ago. It focused on topics I was mainly involved with at the time, specifically Spring Boot and Spring Cloud. Since then, a lot of time has passed, and much has changed – not only in the technology itself but also a little in my personal life. Today, I am more involved in Kubernetes and container topics than, for example, Spring Cloud. For years, I have been helping various organizations transition from traditional application architectures to cloud-native models based on Kubernetes. Of course, Java remains my main area of expertise. Besides Spring Boot, I also really like the Quarkus framework. You can read a lot about both in my book on Kubernetes.

Based on my experience over the past few years, involving development teams is a key factor in the success of the Kubernetes platform within an organization. Ultimately, it is the applications developed by these teams that are deployed there. For developers to be willing to use Kubernetes, it must be easy for them to do so. That is why I persuade organizations to remove barriers to using Kubernetes and to design it in a way that makes it easier for development teams. On my blog and in this book, I aim to demonstrate how to quickly and simply launch applications on Kubernetes using frameworks such as Spring Boot and Quarkus.

It’s an unusual time to publish a book. AI agents are producing more and more technical content online. More often than not, instead of grabbing a book, people turn to an AI chatbot for a quick answer, though not always the best one. Still, a book that thoroughly introduces a topic and offers a step-by-step guide remains highly valuable.

Content of the Book

This book demonstrates that Java is an excellent choice for building applications that run on Kubernetes. In the first chapter, I’ll show you how to quickly build your application, create its image, and run it on Kubernetes without writing a single line of YAML or Dockerfile. This chapter also covers the minimum Kubernetes architecture you must understand to manage applications effectively in this environment. The second chapter, on the other hand, demonstrates how to effectively organize your local development environment to work with a Kubernetes cluster. You’ll see several options for running a distribution of your cluster locally and learn about the essential set of tools you should have. The third chapter outlines best practices for building applications on the Kubernetes platform. Most of the presented requirements are supported by simple examples and explanations of the benefits of meeting them. The fourth chapter presents the most valuable tools for the inner development loop with Kubernetes. After reading the first four chapters, you will understand the main Kubernetes components related to application management, enabling you to navigate the platform efficiently. You’ll also learn to leverage Spring Boot and Quarkus features to adapt your application to Kubernetes requirements.

In the following chapters, I will focus on the benefits of migrating applications to Kubernetes. The first area to cover is security. Chapter five discusses mechanisms and tools for securing applications running in a cluster. Chapter six describes Spring and Quarkus projects that enable native integration with the Kubernetes API from within applications. In chapter seven, you’ll learn about the service mesh tool and the benefits of using it to manage HTTP traffic between microservices. Chapter eight addresses the performance and scalability of Java applications in a Kubernetes environment. The next chapter demonstrates how to design a CI/CD process that runs entirely within the cluster, leveraging Kubernetes-native tools for pipeline building and the GitOps approach. This book also covers AI. In the final chapter, you’ll learn how to run a simple Java application that integrates with an AI model deployed on Kubernetes.

Publication

I decided to publish my book on Leanpub. Leanpub is a platform for writing, publishing, and selling books, especially popular among technical content authors. I previously published a book with Packt, but honestly, I was left on my own during the writing process. Leanpub is similar in that respect, but it offers several key advantages over publishers like Packt. First, it allows you to update content collaboratively with readers and keep it current. Even though my book is finished, I don’t rule out adding more chapters, such as one on AI on Kubernetes. I also look forward to your feedback and plan to continuously improve the content and the examples in the repository. Overall, this has been another exciting experience related to publishing technical content.

And when you buy the book there, you can be sure that most of the royalties go to me as the author, unlike with traditional publishers, which keep most of the revenue as promoters. So, I’m looking forward to improving my book with you!

Conclusion

My book aims to bring together all the most interesting elements surrounding Java application development on Kubernetes. It is intended not only for developers but also for architects and DevOps teams who want to move to the Kubernetes platform.

The post A Book: Hands-On Java with Kubernetes appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2025/12/08/a-book-hands-on-java-with-kubernetes/feed/ 0 15892
Arconia for Spring Boot Dev Services and Observability https://piotrminkowski.com/2025/11/21/arconia-for-spring-boot-dev-services-and-observability/ https://piotrminkowski.com/2025/11/21/arconia-for-spring-boot-dev-services-and-observability/#respond Fri, 21 Nov 2025 09:32:46 +0000 https://piotrminkowski.com/?p=15824 This article explains how to use the Arconia framework to enhance the developer experience with Spring Boot. This project is a recent initiative under active development. However, it caught my attention because of one feature I love in Quarkus and found missing in Spring Boot. I am referring to a solution called Dev Services, which […]

The post Arconia for Spring Boot Dev Services and Observability appeared first on Piotr's TechBlog.

]]>
This article explains how to use the Arconia framework to enhance the developer experience with Spring Boot. This project is a recent initiative under active development. However, it caught my attention because of one feature I love in Quarkus and found missing in Spring Boot. I am referring to a solution called Dev Services, which those of you who know Quarkus will probably recognize. Dev Services supports the automatic provisioning of unconfigured services in development and test mode. Similar to Quarkus, Arconia is based on Testcontainers and also uses the Spring Boot Testcontainers support.

To learn how Spring Boot supports Testcontainers, read my article on the subject. If you’re interested in Quarkus Dev Services, consider this post, which focuses on automated testing support in Quarkus.

Prerequisites

To perform the exercise described in this article, you must have the following on your laptop:

  • Docker / Podman
  • Java 21+
  • Maven 3.9+

Source Code

Feel free to use my source code if you’d like to try it out yourself. To do that, you must clone my sample GitHub repository. Then you should only follow my instructions.

Create Spring Boot Application

For this exercise, we will build a simple application that connects to a Postgres database and returns employee information through a REST interface. In addition to the core logic, we will also implement integration tests to verify endpoint functionality with a live database. Below is a list of required dependencies.

<dependencies>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
  </dependency>
  <dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <scope>runtime</scope>
  </dependency>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <scope>test</scope>
  </dependency>
</dependencies>
XML

Here’s the Employee domain object, which is stored in the employee table:

@Entity
public class Employee {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Integer id;
    private int organizationId;
    private int departmentId;
    private String name;
    private int age;
    private String position;
    
    // GETTERS and SETTERS
    
}
Java

Here’s the Spring Data repository interface responsible for interacting with the Postgres database:

public interface EmployeeRepository extends CrudRepository<Employee, Integer> {

    List<Employee> findByDepartmentId(int departmentId);
    List<Employee> findByOrganizationId(int organizationId);

}
Java
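Spring Data derives the queries from the method names alone. Conceptually, findByDepartmentId behaves like filtering the employee table on that column, which we can illustrate with plain Java (an analogy only – the real implementation issues SQL against Postgres):

```java
import java.util.List;

public class DerivedQueryDemo {

    // Simplified stand-in for the JPA entity, for illustration only
    record Employee(int id, int organizationId, int departmentId, String name) {}

    // Conceptual equivalent of the derived query findByDepartmentId(departmentId):
    // SELECT * FROM employee WHERE department_id = ?
    static List<Employee> findByDepartmentId(List<Employee> table, int departmentId) {
        return table.stream()
                .filter(e -> e.departmentId() == departmentId)
                .toList();
    }

    public static void main(String[] args) {
        List<Employee> table = List.of(
                new Employee(1, 1, 1, "John Doe"),
                new Employee(2, 1, 2, "Jane Doe"));
        System.out.println(findByDepartmentId(table, 1).size()); // prints 1
    }
}
```

findByOrganizationId works the same way, just matching on the organizationId column instead.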

This is the EmployeeController code with several REST endpoints that allow us to add and find employees in the database:

@RestController
@RequestMapping("/employees")
public class EmployeeController {

    private static final Logger LOGGER = LoggerFactory
       .getLogger(EmployeeController.class);

    @Autowired
    EmployeeRepository repository;

    @PostMapping
    public Employee add(@RequestBody Employee employee) {
        LOGGER.info("Employee add...: {}", employee);
        return repository.save(employee);
    }

    @GetMapping("/{id}")
    public Employee findById(@PathVariable("id") Integer id) {
        LOGGER.info("Employee find: id={}", id);
        return repository.findById(id).get();
    }

    @GetMapping
    public List<Employee> findAll() {
        LOGGER.info("Employee find");
        return (List<Employee>) repository.findAll();
    }

    @GetMapping("/department/{departmentId}")
    public List<Employee> findByDepartment(@PathVariable("departmentId") int departmentId) {
        LOGGER.info("Employee find: departmentId={}", departmentId);
        return repository.findByDepartmentId(departmentId);
    }

    @GetMapping("/organization/{organizationId}")
    public List<Employee> findByOrganization(@PathVariable("organizationId") int organizationId) {
        LOGGER.info("Employee find: organizationId={}", organizationId);
        return repository.findByOrganizationId(organizationId);
    }

    @GetMapping("/department-with-delay/{departmentId}")
    public List<Employee> findByDepartmentWithDelay(@PathVariable("departmentId") int departmentId) throws InterruptedException {
        LOGGER.info("Employee find with delay: departmentId={}", departmentId);
        Thread.sleep(2000);
        return repository.findByDepartmentId(departmentId);
    }

}
Java

With the following configuration in application.yml, we initialize the database schema on application or test startup:

spring:
  application:
    name: sample-spring-web-with-db
  jpa:
    hibernate:
      ddl-auto: create
    properties:
      hibernate:
        show_sql: true
        format_sql: true
YAML

Finally, here’s the @SpringBootTest that calls and verifies previously implemented REST endpoints:

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
public class EmployeeControllerTests {

    @Autowired
    TestRestTemplate restTemplate;

    @Test
    @Order(1)
    public void testAdd() {
        Employee employee = new Employee();
        employee.setName("John Doe");
        employee.setAge(30);
        employee.setPosition("Manager");
        employee.setDepartmentId(1);
        employee.setOrganizationId(1);
        employee = restTemplate.postForObject("/employees", employee, Employee.class);
        Assertions.assertNotNull(employee);
        Assertions.assertNotNull(employee.getId());
    }

    @Test
    @Order(2)
    public void testFindById() {
        Employee employee = restTemplate.getForObject("/employees/1", Employee.class);
        Assertions.assertNotNull(employee);
        Assertions.assertEquals(1, employee.getId());
    }

    @Test
    @Order(3)
    public void testFindAll() {
        Employee[] employees = restTemplate.getForObject("/employees", Employee[].class);
        Assertions.assertNotNull(employees);
        Assertions.assertEquals(1, employees.length);
    }

    @Test
    @Order(4)
    public void testFindByDepartment() {
        List<Employee> employees = restTemplate.getForObject("/employees/department/1", List.class);
        Assertions.assertNotNull(employees);
        Assertions.assertEquals(1, employees.size());
    }

}
Java

Spring Boot Dev Services with Arconia

The Arconia framework offers multiple modules to support development services for the most popular databases and event brokers. To add support for the Postgres database, include the following dependency in your Maven pom.xml:

<dependency>
  <groupId>io.arconia</groupId>
  <artifactId>arconia-dev-services-postgresql</artifactId>
  <scope>runtime</scope>
  <optional>true</optional>
</dependency>
XML

We will add other Arconia modules later, so let’s include the BOM (Bill of Materials) with the latest version:

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>io.arconia</groupId>
      <artifactId>arconia-bom</artifactId>
      <version>0.18.2</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
XML

And that’s all we needed to do. Now you can run the application in developer mode using the standard Maven command or through the arconia CLI. The CLI is just an add-on here, so for now, let’s stick with the standard mvn command.

mvn spring-boot:run
ShellSession

You can also run automated tests with the mvn test command or through the IDE’s graphical interface.
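For comparison, without dev services you would normally have to point Spring Boot at an existing database yourself, with datasource properties along these lines (the values here are illustrative):

```yaml
spring:
  datasource:
    url: jdbc:postgresql://localhost:5432/example
    username: example
    password: example
```

With the Arconia dev services dependency on the classpath, none of this is needed – a Postgres container is provisioned via Testcontainers and the connection settings are wired into the application automatically.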

Spring Boot Observability with Arconia

Dev services are just one of the features offered by Arconia. In this article, I will present a simple scenario of integrating with the Grafana observability stack using OpenTelemetry. This time, we will include two dependencies. The first is a special Spring Boot starter provided by Arconia, which automatically configures OpenTelemetry, Micrometer, and Spring Boot Actuator for your app. The second dependency includes the dev services for the Grafana LGTM observability platform, which contains Loki, Grafana, Prometheus, Tempo, and the OpenTelemetry Collector.

<dependency>
  <groupId>io.arconia</groupId>
  <artifactId>arconia-opentelemetry-spring-boot-starter</artifactId>
</dependency>
<dependency>
  <groupId>io.arconia</groupId>
  <artifactId>arconia-dev-services-lgtm</artifactId>
  <scope>runtime</scope>
  <optional>true</optional>
</dependency>
XML

In addition to the standard Arconia Observability settings, we will enable all built-in resource contributors. Below is the required configuration to add to the application settings in the application.yml file.

arconia:
  otel:
    resource:
      contributors:
        build:
          enabled: true
        host:
          enabled: true
        java:
          enabled: true
        os:
          enabled: true
        process:
          enabled: true
YAML

That’s it. Let’s start our application in development mode once again.

mvn spring-boot:run
ShellSession

This time, Arconia starts one more container than before. I can access the Grafana dashboard at http://localhost:33383 (the mapped port is assigned randomly on each run).

arconia-spring-boot-launch

Let’s display all the containers running locally:

$ docker ps
CONTAINER ID  IMAGE                                   COMMAND               CREATED        STATUS        PORTS                                                                                                                                                 NAMES
a6a097fb9ebe  docker.io/testcontainers/ryuk:0.12.0    /bin/ryuk             2 minutes ago  Up 2 minutes  0.0.0.0:42583->8080/tcp                                                                                                                               testcontainers-ryuk-dfdea2da-0bbd-43fa-9f50-3e9d966d877f
917d74a5a0ad  docker.io/library/postgres:18.0-alpine  postgres -c fsync...  2 minutes ago  Up 2 minutes  0.0.0.0:38409->5432/tcp                                                                                                                               pensive_mcnulty
090a9434d1fd  docker.io/grafana/otel-lgtm:0.11.16     /otel-lgtm/run-al...  2 minutes ago  Up 2 minutes  0.0.0.0:33383->3000/tcp, 0.0.0.0:39501->3100/tcp, 0.0.0.0:32867->3200/tcp, 0.0.0.0:40389->4317/tcp, 0.0.0.0:46739->4318/tcp, 0.0.0.0:36915->9090/tcp  vigorous_euler
ShellSession

And now for the best part. Right after launch, our application is fully integrated with the Grafana stack. For example, logs are sent to the Loki instance, from which we can view them in the Grafana UI.

arconia-spring-boot-loki

We can also display a dashboard with Spring Boot metrics.

arconia-spring-boot-metrics

Right after launch, I sent several test POST and GET requests to the application endpoints. The traces for these requests are available in Grafana Tempo.

We can also verify JVM statistics in a dedicated dashboard.

What more could you want? 🙂

Conclusion

Arconia is an exciting and promising project, which I will be watching closely in the future. It is a relatively new initiative that is still undergoing intensive development. Arconia already offers several practical solutions that significantly simplify working with Spring Boot applications. I have shown you how this framework works in a simple scenario: running our application and integrating it with a Postgres database and the Grafana observability stack using the Micrometer framework.

The post Arconia for Spring Boot Dev Services and Observability appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2025/11/21/arconia-for-spring-boot-dev-services-and-observability/feed/ 0 15824
Quarkus with Buildpacks and OpenShift Builds https://piotrminkowski.com/2025/11/19/quarkus-with-buildpacks-and-openshift-builds/ https://piotrminkowski.com/2025/11/19/quarkus-with-buildpacks-and-openshift-builds/#respond Wed, 19 Nov 2025 08:50:04 +0000 https://piotrminkowski.com/?p=15806 In this article, you will learn how to build Quarkus application images using Cloud Native Buildpacks and OpenShift Builds. Some time ago, I published a blog post about building with OpenShift Builds based on the Shipwright project. At that time, Cloud Native Buildpacks were not supported at the OpenShift Builds level. It was only supported […]

The post Quarkus with Buildpacks and OpenShift Builds appeared first on Piotr's TechBlog.

]]>
In this article, you will learn how to build Quarkus application images using Cloud Native Buildpacks and OpenShift Builds. Some time ago, I published a blog post about building with OpenShift Builds based on the Shipwright project. At that time, Cloud Native Buildpacks were not supported at the OpenShift Builds level. It was only supported in the community project. I demonstrated how to add the appropriate build strategy yourself and use it to build an image for a Spring Boot application. However, OpenShift Builds, since version 1.6, support building with Cloud Native Buildpacks. Currently, Quarkus, Go, Node.js, and Python are supported. In this article, we will focus on Quarkus and also examine the built-in support for Buildpacks within Quarkus itself.

Source Code

Feel free to use my source code if you’d like to try it out yourself. To do that, you must clone my sample GitHub repository. Then you should only follow my instructions.

Quarkus Buildpacks Extension

Recently, support for Cloud Native Buildpacks in Quarkus has been significantly enhanced. Here you can access the repository containing the source code for the Paketo Quarkus buildpack. To implement this solution, add one dependency to your application.

<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-container-image-buildpack</artifactId>
</dependency>
XML

Next, run the build command with Maven and activate the quarkus.container-image.build parameter. Also, set the appropriate Java version needed for your application. For the sample Quarkus application in this article, the Java version is 21.

mvn clean package \
  -Dquarkus.container-image.build=true \
  -Dquarkus.buildpack.builder-env.BP_JVM_VERSION=21
ShellSession
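Instead of passing these flags on the command line each time, the same settings can be kept in the application configuration. A sketch using the property names shown above in src/main/resources/application.properties:

```properties
quarkus.container-image.build=true
quarkus.buildpack.builder-env.BP_JVM_VERSION=21
```

With that in place, a plain mvn clean package triggers the Buildpacks image build.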

To build, you need Docker or Podman running. Here’s the output from the command run earlier.

As you can see, Quarkus uses, among others, the previously mentioned Paketo Quarkus buildpack.

The new image is now available for use. Don’t be surprised by the CREATED column below: Buildpacks set a fixed, constant timestamp on generated images to keep builds reproducible.

$ docker images sample-quarkus/person-service:1.0.0-SNAPSHOT
REPOSITORY                      TAG              IMAGE ID       CREATED        SIZE
sample-quarkus/person-service   1.0.0-SNAPSHOT   e0b58781e040   45 years ago   160MB
ShellSession

Quarkus with OpenShift Builds Shipwright

Install the Openshift Build Operator

Now, we will move the image building process to the OpenShift cluster. OpenShift offers built-in support for creating container images directly within the cluster through OpenShift Builds, using the BuildConfig solution. For more details, please refer to my previous article. However, in this article, we explore a new technology for building container images called OpenShift Builds with Shipwright. To enable this solution on OpenShift, you need to install the following operator.

After installing this operator, you will see a new item in the “Build” menu called “Shipwright”. Switch to it, then select the “ClusterBuildStrategies” tab. There are two strategies on the list designed for Cloud Native Buildpacks. We are interested in the buildpacks strategy.

Create and Run Build with Shipwright

Finally, we can create the Shipwright Build object. It contains three sections. In the first step, we define the address of the container image repository where we will push our output image. For simplicity, we will use the internal registry provided by the OpenShift cluster itself. In the source section, we specify the repository address where the application source code is located. In the last section, we need to set the build strategy. We choose the previously mentioned buildpacks strategy for Cloud Native Buildpacks. Two parameters need to be set for the buildpacks strategy: run-image and cnb-builder-image. The cnb-builder-image indicates the name of the builder image containing the buildpacks. The run-image refers to the base image used to run the application. We will also activate the buildpacks Maven profile during the build to set the Quarkus property that switches from fast-jar to uber-jar packaging.

apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: buildpack-quarkus-build
spec:
  env:
    - name: BP_JVM_VERSION
      value: '21'
  output:
    image: 'image-registry.openshift-image-registry.svc:5000/builds/sample-quarkus-microservice:1.0'
  paramValues:
    - name: run-image
      value: 'paketobuildpacks/run-java-21-ubi9-base:latest'
    - name: cnb-builder-image
      value: 'paketobuildpacks/builder-jammy-java-tiny:latest'
    - name: env-vars
      values:
        - value: BP_MAVEN_ADDITIONAL_BUILD_ARGUMENTS=-Pbuildpacks
  retention:
    atBuildDeletion: true
  source:
    git:
      url: 'https://github.com/piomin/sample-quarkus-microservice.git'
    type: Git
  strategy:
    kind: ClusterBuildStrategy
    name: buildpacks
YAML

Here’s the Maven buildpacks profile that sets a single Quarkus property quarkus.package.jar.type. We must change it to uber-jar, because the paketobuildpacks/builder-jammy-java-tiny builder expects a single jar instead of the multi-folder layout used by the default fast-jar format. Of course, I would prefer to use the paketocommunity/builder-ubi-base builder, which can recognize the fast-jar format. However, at this time, it does not function correctly with OpenShift Builds.

<profiles>
  <profile>
    <id>buildpacks</id>
    <activation>
      <property>
        <name>buildpacks</name>
      </property>
    </activation>
    <properties>
      <quarkus.package.jar.type>uber-jar</quarkus.package.jar.type>
    </properties>
  </profile>
</profiles>
XML

To start the build, you can use the OpenShift console or execute the following command:

shp build run buildpack-quarkus-build --follow
ShellSession
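If you prefer not to use the shp CLI, you can achieve the same result by creating a BuildRun object that references the Build (a sketch based on the v1beta1 API):

```yaml
apiVersion: shipwright.io/v1beta1
kind: BuildRun
metadata:
  generateName: buildpack-quarkus-build-
spec:
  build:
    name: buildpack-quarkus-build
```

Since the manifest uses generateName, submit it with oc create -f rather than oc apply -f, so each submission starts a new build run.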

We can switch to the OpenShift Console. As you can see, our build is running.

The history of such builds is available on OpenShift. You can also review the build logs.

Finally, you should see your image in the list of OpenShift internal image streams.

$ oc get imagestream
NAME                          IMAGE REPOSITORY                                                                                                    TAGS                UPDATED
sample-quarkus-microservice   default-route-openshift-image-registry.apps.pminkows.95az.p1.openshiftapps.com/builds/sample-quarkus-microservice   1.2,0.0.1,1.1,1.0   13 hours ago
ShellSession

Conclusion

OpenShift Build Shipwright lets you perform the entire application image build process on the OpenShift cluster in a standardized manner. Cloud Native Buildpacks is a popular mechanism for building images without writing a Dockerfile yourself. In this case, support for Buildpacks on the OpenShift side is an interesting alternative to the Source-to-Image approach.

The post Quarkus with Buildpacks and OpenShift Builds appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2025/11/19/quarkus-with-buildpacks-and-openshift-builds/feed/ 0 15806
Getting Started with Backstage https://piotrminkowski.com/2024/06/13/getting-started-with-backstage/ https://piotrminkowski.com/2024/06/13/getting-started-with-backstage/#comments Thu, 13 Jun 2024 13:47:07 +0000 https://piotrminkowski.com/?p=15266 This article will teach you how to use Backstage in your app development and create software templates to generate a typical Spring Boot app. Backstage is an open-source framework for building developer portals. It allows us to automate the creation of the infrastructure, CI/CD, and operational knowledge needed to run an application or product. It […]

The post Getting Started with Backstage appeared first on Piotr's TechBlog.

]]>
This article will teach you how to use Backstage in your app development and create software templates to generate a typical Spring Boot app. Backstage is an open-source framework for building developer portals. It allows us to automate the creation of the infrastructure, CI/CD, and operational knowledge needed to run an application or product. It offers a centralized software catalog and unifies all infrastructure tooling, services, and documentation within a single, intuitive UI. With Backstage, we can create any new software component, such as a new microservice, with just a few clicks. Developers can choose between several standard templates. Platform engineers create such templates to meet the organization’s best practices.

From a technical point of view, Backstage is a web frontend designed to run on Node.js. It is mostly written in TypeScript using the React framework. It has an extensible nature. Each time we need to integrate Backstage with some third-party software, we need to install a dedicated plugin. Plugins are essentially individually packaged React components. Today, you will learn how to install and configure plugins to integrate with GitHub, CircleCI, and Sonarqube.

This is the first article about Backstage on my blog. You can expect more in the future. However, before proceeding with this article, it is worth reading the following post. It explains how I create my repositories on GitHub and what tools I'm using to check code quality and stay up to date.

Source Code

If you would like to try it by yourself, you may always take a look at my source code. This time, there are two sample Git repositories. The first of them contains software templates written using the Backstage technology called Scaffolder. On the other hand, the second repository contains the source code of a simple Spring Boot app generated from the Backstage template. Once you clone both of those repos, you should just follow my further instructions.

Writing Software Templates in Backstage

We can create our own software templates using YAML notation or find existing examples on the web. Such software templates are very similar to Kubernetes object definitions. They have the apiVersion and kind (Template) fields, as well as the metadata and spec sections. Each template must define a list of input variables and then a list of actions that the scaffolding service executes.

The Structure of the Repository with Software Templates

Let’s take a look at the structure of the repository containing our Spring Boot template. As you see, there is the template.yaml file with the software template YAML manifest and the skeleton directory with our app source code. Besides Java files, there is the Maven pom.xml, Renovate and CircleCI configuration manifests. The Scaffolder template input parameters determine the name of Java classes or packages. In the Skaffolder template, we set the default base package name and a domain object name. The catalog-info.yaml file contains a definition of the object required to register the app in the software catalog. As you can see, we are also parametrizing the names of the files with the domain object names.

.
├── skeleton
│   ├── .circleci
│   │   └── config.yml
│   ├── README.md
│   ├── catalog-info.yaml
│   ├── pom.xml
│   ├── renovate.json
│   └── src
│       ├── main
│       │   ├── java
│       │   │   └── ${{values.javaPackage}}
│       │   │       ├── Application.java
│       │   │       ├── controller
│       │   │       │   └── ${{values.domainName}}Controller.java
│       │   │       └── domain
│       │   │           └── ${{values.domainName}}.java
│       │   └── resources
│       │       └── application.yml
│       └── test
│           └── java
│               └── ${{values.javaPackage}}
│                   └── ${{values.domainName}}ControllerTests.java
└── template.yaml

13 directories, 11 files
ShellSession

Building Templates with Scaffolder

Now, let’s take a look at the most important element in our repository – the template manifest. It defines several input parameters with default values. We can set the name of our app (appName), choose a default branch name inside the Git repository (repoBranchName), Maven group ID (groupId), the name of the default Java package (javaPackage), or the base REST API controller path (apiPath). All these parameters are then used during code generation. If you need more customization in the templates, you should add other parameters in the manifest.

The steps section in the manifest defines the actions required to create a new app in Backstage. We are doing three things here. In the first step, we generate the Spring Boot app source by filling in the templates inside the skeleton directory with the parameters defined in the Scaffolder manifest. Then, we publish the generated code to a newly created GitHub repository. The name of the repository is the same as the app name (the appName parameter). The owner of the GitHub repository is determined by the value of the orgName parameter. Finally, we register the new component in the Backstage catalog by calling the catalog:register action.

apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
  name: spring-boot-basic-template
  title: Create a Spring Boot app
  description: Create a Spring Boot app
  tags:
    - spring-boot
    - java
    - maven
    - circleci
    - renovate
    - sonarqube
spec:
  owner: piomin
  system: piomin
  type: service

  parameters:
    - title: Provide information about the new component
      required:
        - orgName
        - appName
        - domainName
        - repoBranchName
        - groupId
        - javaPackage
        - apiPath
        - description
      properties:
        orgName:
          title: Organization name
          type: string
          default: piomin
        appName:
          title: App name
          type: string
          default: sample-spring-boot-app
        domainName:
          title: Name of the domain object
          type: string
          default: Person
        repoBranchName:
          title: Name of the branch in the Git repository
          type: string
          default: master
        groupId:
          title: Maven Group ID
          type: string
          default: pl.piomin.services
        javaPackage:
          title: Java package directory
          type: string
          default: pl/piomin/services
        apiPath:
          title: REST API path
          type: string
          default: /api/v1
        description:
          title: Description
          type: string
          default: Sample Spring Boot App
          
  steps:
    - id: sourceCodeTemplate
      name: Generating the Source Code Component
      action: fetch:template
      input:
        url: ./skeleton
        values:
          orgName: ${{ parameters.orgName }}
          appName: ${{ parameters.appName }}
          domainName: ${{ parameters.domainName }}
          groupId: ${{ parameters.groupId }}
          javaPackage: ${{ parameters.javaPackage }}
          apiPath: ${{ parameters.apiPath }}

    - id: publish
      name: Publishing to the Source Code Repository
      action: publish:github
      input:
        allowedHosts: ['github.com']
        description: ${{ parameters.description }}
        repoUrl: github.com?owner=${{ parameters.orgName }}&repo=${{ parameters.appName }}
        defaultBranch: ${{ parameters.repoBranchName }}
        repoVisibility: public

    - id: register
      name: Registering the Catalog Info Component
      action: catalog:register
      input:
        repoContentsUrl: ${{ steps.publish.output.repoContentsUrl }}
        catalogInfoPath: /catalog-info.yaml

  output:
    links:
      - title: Open the Source Code Repository
        url: ${{ steps.publish.output.remoteUrl }}
      - title: Open the Catalog Info Component
        icon: catalog
        entityRef: ${{ steps.register.output.entityRef }}
YAML

Generating Spring Boot Source Code

Here’s the template of the Maven pom.xml. Our app uses the current latest version of the Spring Boot framework and Java 21 for compilation. The Maven groupId and artifactId are taken from the Scaffolder template parameters. The pom.xml file also contains data required to integrate with a specific project on Sonarcloud (sonar.projectKey and sonar.organization).

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>${{ values.groupId }}</groupId>
    <artifactId>${{ values.appName }}</artifactId>
    <version>1.0-SNAPSHOT</version>

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>3.3.0</version>
    </parent>

    <properties>
        <sonar.projectKey>${{ values.orgName }}_${{ values.appName }}</sonar.projectKey>
        <sonar.organization>${{ values.orgName }}</sonar.organization>
        <sonar.host.url>https://sonarcloud.io</sonar.host.url>
        <java.version>21</java.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-actuator</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springdoc</groupId>
            <artifactId>springdoc-openapi-starter-webmvc-ui</artifactId>
            <version>2.5.0</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-devtools</artifactId>
            <optional>true</optional>
        </dependency>
        <dependency>
            <groupId>org.instancio</groupId>
            <artifactId>instancio-junit</artifactId>
            <version>4.7.0</version>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
                <executions>
                    <execution>
                        <goals>
                            <goal>build-info</goal>
                        </goals>
                        <configuration>
                            <additionalProperties>
                                <java.target>${java.version}</java.target>
                                <time>${maven.build.timestamp}</time>
                            </additionalProperties>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>pl.project13.maven</groupId>
                <artifactId>git-commit-id-plugin</artifactId>
                <configuration>
                    <failOnNoGitDirectory>false</failOnNoGitDirectory>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.jacoco</groupId>
                <artifactId>jacoco-maven-plugin</artifactId>
                <version>0.8.12</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>prepare-agent</goal>
                        </goals>
                    </execution>
                    <execution>
                        <id>report</id>
                        <phase>test</phase>
                        <goals>
                            <goal>report</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>

</project>
XML

There are several Java classes generated during bootstrap. Here's the @RestController class template. It uses three parameters defined in the Scaffolder template: groupId, domainName, and apiPath. It imports the domain object class and exposes REST methods for CRUD operations. As you can see, the implementation is very simple. It just uses an in-memory Java List to store the domain objects. However, it perfectly shows the idea behind Scaffolder templates.

package ${{ values.groupId }}.controller;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.*;
import ${{ values.groupId }}.domain.${{ values.domainName }};

import java.util.ArrayList;
import java.util.List;

@RestController
@RequestMapping("${{ values.apiPath }}")
public class ${{ values.domainName }}Controller {

    private final Logger LOG = LoggerFactory.getLogger(${{ values.domainName }}Controller.class);
    private final List<${{ values.domainName }}> objs = new ArrayList<>();

    @GetMapping
    public List<${{ values.domainName }}> findAll() {
        return objs;
    }

    @GetMapping("/{id}")
    public ${{ values.domainName }} findById(@PathVariable("id") Long id) {
        ${{ values.domainName }} obj = objs.stream().filter(it -> it.getId().equals(id))
                .findFirst()
                .orElseThrow();
        LOG.info("Found: {}", obj.getId());
        return obj;
    }

    @PostMapping
    public ${{ values.domainName }} add(@RequestBody ${{ values.domainName }} obj) {
        obj.setId((long) (objs.size() + 1));
        LOG.info("Added: {}", obj);
        objs.add(obj);
        return obj;
    }

    @DeleteMapping("/{id}")
    public void delete(@PathVariable("id") Long id) {
        ${{ values.domainName }} obj = objs.stream().filter(it -> it.getId().equals(id)).findFirst().orElseThrow();
        objs.remove(obj);
        LOG.info("Removed: {}", id);
    }

    @PutMapping
    public void update(@RequestBody ${{ values.domainName }} obj) {
        ${{ values.domainName }} objTmp = objs.stream()
                .filter(it -> it.getId().equals(obj.getId()))
                .findFirst()
                .orElseThrow();
        objs.set(objs.indexOf(objTmp), obj);
        LOG.info("Updated: {}", obj.getId());
    }

}
Java
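The domain object template itself is not listed above. Rendered with the default domainName value (Person), the generated domain class would be a plain POJO along these lines (a hedged sketch inferred from the getters and setters the controller and tests rely on, not the exact skeleton file):

```java
public class Person {

    private Long id;
    private String name;

    // The controller calls getId()/setId() when storing and looking up objects
    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    @Override
    public String toString() {
        return "Person{id=" + id + ", name='" + name + "'}";
    }
}
```

The real skeleton file is named ${{values.domainName}}.java and parametrized in the same way as the controller, so its fields may differ from this sketch.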

Then, we can generate a test class to verify the @RestController endpoints. The app starts on a random port during the JUnit tests. In the first test, we add a new object to the store. Then we verify that the GET /{id} endpoint works fine. Finally, we remove the object from the store by calling the DELETE /{id} endpoint.

package ${{ values.groupId }};

import org.instancio.Instancio;
import org.junit.jupiter.api.MethodOrderer;
import org.junit.jupiter.api.Order;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.TestMethodOrder;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.web.client.TestRestTemplate;
import ${{ values.groupId }}.domain.${{ values.domainName }};

import static org.junit.jupiter.api.Assertions.*;

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
public class ${{ values.domainName }}ControllerTests {

    private static final String API_PATH = "${{values.apiPath}}";

    @Autowired
    private TestRestTemplate restTemplate;

    @Test
    @Order(1)
    void add() {
        ${{ values.domainName }} obj = restTemplate.postForObject(API_PATH, Instancio.create(${{ values.domainName }}.class), ${{ values.domainName }}.class);
        assertNotNull(obj);
        assertEquals(1, obj.getId());
    }

    @Test
    @Order(2)
    void findAll() {
        ${{ values.domainName }}[] objs = restTemplate.getForObject(API_PATH, ${{ values.domainName }}[].class);
        assertTrue(objs.length > 0);
    }

    @Test
    @Order(2)
    void findById() {
        ${{ values.domainName }} obj = restTemplate.getForObject(API_PATH + "/{id}", ${{ values.domainName }}.class, 1L);
        assertNotNull(obj);
        assertEquals(1, obj.getId());
    }

    @Test
    @Order(3)
    void delete() {
        restTemplate.delete(API_PATH + "/{id}", 1L);
        ${{ values.domainName }} obj = restTemplate.getForObject(API_PATH + "/{id}", ${{ values.domainName }}.class, 1L);
        assertNull(obj.getId());
    }

}
Java

Integrate with CircleCI and Renovate

Once I create a new repository on GitHub, I want to automatically integrate it with CircleCI builds. I also want to update Maven dependency versions automatically with Renovate to keep the project up to date. Since my GitHub account is connected to the CircleCI account and the Renovate app is installed there, I just need to provide two configuration manifests inside the generated repository. Here's the CircleCI config.yml file. It runs the Maven build with JUnit tests and performs the Sonarqube scan in the sonarcloud.io portal.

version: 2.1

jobs:
  analyze:
    docker:
      - image: 'cimg/openjdk:21.0.2'
    steps:
      - checkout
      - run:
          name: Analyze on SonarCloud
          command: mvn verify sonar:sonar

executors:
  jdk:
    docker:
      - image: 'cimg/openjdk:21.0.2'

orbs:
  maven: circleci/maven@1.4.1

workflows:
  maven_test:
    jobs:
      - maven/test:
          executor: jdk
      - analyze:
          context: SonarCloud
YAML

Here’s the renovate.json manifest. Renovate will create a PR in GitHub each time it detects a new version of Maven dependency.

{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": [
    "config:base",":dependencyDashboard"
  ],
  "packageRules": [
    {
      "matchUpdateTypes": ["minor", "patch", "pin", "digest"],
      "automerge": true
    }
  ],
  "prCreation": "not-pending"
}
JSON

Register a new Component in the Software Catalog

Once we generate the whole code and publish it to the GitHub repository, we need to register a new component in the Backstage catalog. In order to achieve this, our repository needs to contain the Component manifest as shown below. Once again, we need to fill it with the parameter values during the bootstrap phase. In the annotations section, it contains references to the CircleCI and Sonarqube projects and the generated GitHub repository. Here's the catalog-info.yaml template:

apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: ${{ values.appName }}
  title: ${{ values.appName }}
  annotations:
    circleci.com/project-slug: github/${{ values.orgName }}/${{ values.appName }}
    github.com/project-slug: ${{ values.orgName }}/${{ values.appName }}
    sonarqube.org/project-key: ${{ values.orgName }}_${{ values.appName }}
  tags:
    - spring-boot
    - java
    - maven
    - circleci
    - renovate
    - sonarqube
spec:
  type: service
  owner: piotr.minkowski@gmail.com
  lifecycle: experimental
YAML

Running Backstage

Once we have the whole template ready, we can proceed to run Backstage on our local machine. As I mentioned before, Backstage is a Node.js app, so we need to have several tools installed to be able to run it. By the way, the list of prerequisites is pretty large. You can find it under the following link. First of all, I had to downgrade Node.js from the latest version 22 to version 18. We also need to install yarn and npx. If you have the following versions of those tools, you shouldn't have any problems running Backstage according to the further instructions.

$ node --version
v18.20.3

$ npx --version
10.7.0

$ yarn --version
1.22.22
ShellSession

Running a Standalone Server Locally

We are going to run Backstage locally in the development mode as a standalone server. In order to achieve this, we first need to run the following command. It will create a new directory with a Backstage app inside. 

$ npx @backstage/create-app@latest
ShellSession

This may take some time. But if you see a similar result, it means that your instance is ready. However, before we start it, we need to install some plugins and include some configuration settings. In our case, the name of the instance is backstage1.

Firstly, we should go to the backstage1 directory and take a look at the project structure. The most important elements for us are: the app-config.yaml file with configuration and the packages directory with the source code of the backend and frontend (app) modules.

├── README.md
├── app-config.local.yaml
├── app-config.production.yaml
├── app-config.yaml
├── backstage.json
├── catalog-info.yaml
├── dist-types
├── examples
├── lerna.json
├── node_modules
├── package.json
├── packages
│   ├── app
│   └── backend
├── playwright.config.ts
├── plugins
├── tsconfig.json
└── yarn.lock
ShellSession

The app is not configured according to our needs yet. However, we can run it with the following command just to try it out:

$ yarn dev
ShellSession

We can visit the UI available at http://localhost:3000:

Provide Configuration and Install Plugins

First, we will analyze the app-config.yaml file and add some configuration settings there. I will focus only on the aspects important to our exercise. The default configuration comes with built-in GitHub integration enabled. We just need to generate a personal access token in GitHub and provide it as the GITHUB_TOKEN environment variable (1). Then, we need to integrate with Sonarqube, also through an access token (2). Our portal will also display a list of CircleCI builds. Therefore, we need to include the CircleCI token as well (3). Finally, we should include the URL address of our custom Scaffolder template in the catalog section (4). It is located in our sample GitHub repository: https://github.com/piomin/backstage-templates/blob/master/templates/spring-boot-basic/template.yaml.

app:
  title: Scaffolded Backstage App
  baseUrl: http://localhost:3000

organization:
  name: piomin

backend:
  baseUrl: http://localhost:7007
  listen:
    port: 7007
  csp:
    connect-src: ["'self'", 'http:', 'https:']
  cors:
    origin: http://localhost:3000
    methods: [GET, HEAD, PATCH, POST, PUT, DELETE]
    credentials: true
  database:
    client: better-sqlite3
    connection: ':memory:'

# (1)
integrations:
  github:
    - host: github.com
      token: ${GITHUB_TOKEN}

# (2)
sonarqube:
  baseUrl: https://sonarcloud.io
  apiKey: ${SONARCLOUD_TOKEN}

# (3)
proxy:
  '/circleci/api':
    target: https://circleci.com/api/v1.1
    headers:
      Circle-Token: ${CIRCLECI_TOKEN}
      
auth:
  providers:
    guest: {}

catalog:
  import:
    entityFilename: catalog-info.yaml
    pullRequestBranchName: backstage-integration
  rules:
    - allow: [Component, System, API, Resource, Location]
  locations:
    - type: file
      target: ../../examples/entities.yaml
    - type: file
      target: ../../examples/template/template.yaml
      rules:
        - allow: [Template]
    # (4)
    - type: url
      target: https://github.com/piomin/backstage-templates/blob/master/templates/spring-boot-basic/template.yaml
      rules:
        - allow: [ Template ]
    - type: file
      target: ../../examples/org.yaml
      rules:
        - allow: [User, Group]
YAML

So, before starting Backstage we need to export both GitHub and Sonarqube tokens.

$ export GITHUB_TOKEN=<YOUR_GITHUB_TOKEN>
$ export SONARCLOUD_TOKEN=<YOUR_SONARCLOUD_TOKEN>
$ export CIRCLECI_TOKEN=<YOUR_CIRCLECI_TOKEN>
ShellSession

For those of you who didn’t generate Sonarcloud tokens before:

And a similar operation for CircleCI:

Unfortunately, that is not all. Now, we need to install several required plugins. To be honest with you, plugin installation in Backstage is quite troublesome. Usually, we not only need to install such a plugin with yarn but also make some changes in the packages directory. Let's begin!

Enable GitHub Integration

Although integration with GitHub is enabled by default in the configuration settings, we still need to install a plugin to be able to perform some actions related to repositories. Firstly, from your Backstage instance root directory, you need to execute the following command:

$ yarn --cwd packages/backend add @backstage/plugin-scaffolder-backend-module-github
ShellSession

Then, go to the packages/backend/src/index.ts file and add a single highlighted line there. It imports the @backstage/plugin-scaffolder-backend-module-github module into the backend.

import { createBackend } from '@backstage/backend-defaults';

const backend = createBackend();

backend.add(import('@backstage/plugin-app-backend/alpha'));
backend.add(import('@backstage/plugin-proxy-backend/alpha'));
backend.add(import('@backstage/plugin-scaffolder-backend/alpha'));
backend.add(import('@backstage/plugin-techdocs-backend/alpha'));
backend.add(import('@backstage/plugin-auth-backend'));
backend.add(import('@backstage/plugin-auth-backend-module-guest-provider'));
backend.add(import('@backstage/plugin-catalog-backend/alpha'));
backend.add(
  import('@backstage/plugin-catalog-backend-module-scaffolder-entity-model'),
);
backend.add(import('@backstage/plugin-permission-backend/alpha'));
backend.add(
  import('@backstage/plugin-permission-backend-module-allow-all-policy'),
);
backend.add(import('@backstage/plugin-search-backend/alpha'));
backend.add(import('@backstage/plugin-search-backend-module-catalog/alpha'));
backend.add(import('@backstage/plugin-search-backend-module-techdocs/alpha'));

backend.add(import('@backstage/plugin-scaffolder-backend-module-github'));
backend.add(import('@backstage-community/plugin-sonarqube-backend'));

backend.start();
TypeScript

After that, it will be possible to call the publish:github action defined in our template, which is responsible for creating a new GitHub repository with the Spring Boot app source code.

Enable Sonarqube Integration

In the next step, we need to install and configure the Sonarqube plugin. This time, we need to install both the backend and frontend modules. Let's begin with the frontend part. Firstly, we have to execute the following yarn command from the project root directory:

$ yarn --cwd packages/app add @backstage-community/plugin-sonarqube
ShellSession

Then, we need to edit the packages/app/src/components/catalog/EntityPage.tsx file to import the EntitySonarQubeCard object from the @backstage-community/plugin-sonarqube plugin. After that, we can include the EntitySonarQubeCard component on the frontend page. For example, it can be placed as a part of the overview content.

import { EntitySonarQubeCard } from '@backstage-community/plugin-sonarqube';

// ... other imports
// ... other contents

const overviewContent = (
  <Grid container spacing={3} alignItems="stretch">
    {entityWarningContent}
    <Grid item md={6}>
      <EntityAboutCard variant="gridItem" />
    </Grid>
    <Grid item md={6} xs={12}>
      <EntityCatalogGraphCard variant="gridItem" height={400} />
    </Grid>
    <Grid item md={6}>
      <EntitySonarQubeCard variant="gridItem" />
    </Grid>
    <Grid item md={4} xs={12}>
      <EntityLinksCard />
    </Grid>
    <Grid item md={8} xs={12}>
      <EntityHasSubcomponentsCard variant="gridItem" />
    </Grid>
  </Grid>
);
TypeScript

Then, we can proceed with the backend plugin. Once again, we are installing with the yarn command:

$ yarn --cwd packages/backend add @backstage-community/plugin-sonarqube-backend
ShellSession

Finally, just as with the GitHub plugin, go to the packages/backend/src/index.ts file and add a single highlighted line there to import the @backstage-community/plugin-sonarqube-backend plugin into the backend module.

import { createBackend } from '@backstage/backend-defaults';

const backend = createBackend();

backend.add(import('@backstage/plugin-app-backend/alpha'));
backend.add(import('@backstage/plugin-proxy-backend/alpha'));
backend.add(import('@backstage/plugin-scaffolder-backend/alpha'));
backend.add(import('@backstage/plugin-techdocs-backend/alpha'));
backend.add(import('@backstage/plugin-auth-backend'));
backend.add(import('@backstage/plugin-auth-backend-module-guest-provider'));
backend.add(import('@backstage/plugin-catalog-backend/alpha'));
backend.add(
  import('@backstage/plugin-catalog-backend-module-scaffolder-entity-model'),
);
backend.add(import('@backstage/plugin-permission-backend/alpha'));
backend.add(
  import('@backstage/plugin-permission-backend-module-allow-all-policy'),
);
backend.add(import('@backstage/plugin-search-backend/alpha'));
backend.add(import('@backstage/plugin-search-backend-module-catalog/alpha'));
backend.add(import('@backstage/plugin-search-backend-module-techdocs/alpha'));

backend.add(import('@backstage/plugin-scaffolder-backend-module-github'));
backend.add(import('@backstage-community/plugin-sonarqube-backend'));

backend.start();
TypeScript

Note that previously, we added the sonarqube section with the access token to the app-config.yaml file and included the annotation with the SonarCloud project key in the Backstage Component manifest. Thanks to that, we don't need to do anything more in this part.

Enable CircleCI Integration

In order to install the CircleCI plugin, we need to execute the following yarn command:

$ yarn add --cwd packages/app @circleci/backstage-plugin
ShellSession

Then, we have to edit the packages/app/src/components/catalog/EntityPage.tsx file. As before, we need to add the import section and choose a place on the frontend page to display the content.

// ... other imports

import {
  EntityCircleCIContent,
  isCircleCIAvailable,
} from '@circleci/backstage-plugin';

// ... other contents

const cicdContent = (
  <EntitySwitch>
    <EntitySwitch.Case if={isCircleCIAvailable}>
      <EntityCircleCIContent />
    </EntitySwitch.Case>
    <EntitySwitch.Case>
      <EmptyState
        title="No CI/CD available for this entity"
        missing="info"
        description="You need to add an annotation to your component if you want to enable CI/CD for it. You can read more about annotations in Backstage by clicking the button below."
        action={
          <Button
            variant="contained"
            color="primary"
            href="https://backstage.io/docs/features/software-catalog/well-known-annotations"
          >
            Read more
          </Button>
        }
      />
    </EntitySwitch.Case>
  </EntitySwitch>
);
TypeScript

Final Run

That’s all we need to configure before running the Backstage instance. Once again, we need to start the instance with the yarn dev command. After running the app, we should go to the “Create…” section in the left menu pane. You should see our custom template under the name “Create a Spring Boot app”. Click the “CHOOSE” button to create a new component from that template.

backstage-templates

Then, we will see the form with several input parameters. I will just change the app name to the sample-spring-boot-app-backstage and leave the default values everywhere else.

backstage-app-create

Then, let’s just click the “CREATE” button on the next page.

After that, Backstage will generate all the required things from our sample template.

backstage-process

We can go to the app page in the Backstage catalog. As you can see, it contains the "Code Quality" section with the latest Sonarqube report for our newly generated app.

backstage-overview

We can also switch to the "CI/CD" tab to see the history of the app builds in CircleCI.

backstage-cicd

If you want to visit the example repository generated from the sample template discussed today, you can find it here.

Final Thoughts

Backstage is an example of a no-code IDP (Internal Developer Portal). An IDP is an important part of the relatively new trend in software development called "Platform Engineering". In this article, I showed you how to create "Golden Path Templates" using the technology called Scaffolder. Then, you could see how to run Backstage on a local machine and how to create an app source from a template. We installed some useful plugins to integrate our portal with GitHub, CircleCI, and Sonarqube. Plugin installation may cause some problems, especially for people without experience in Node.js and React. Hope it helps!

The post Getting Started with Backstage appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2024/06/13/getting-started-with-backstage/feed/ 6 15266
Interesting Facts About Java Streams and Collections https://piotrminkowski.com/2024/04/25/interesting-facts-about-java-streams-and-collections/ https://piotrminkowski.com/2024/04/25/interesting-facts-about-java-streams-and-collections/#comments Thu, 25 Apr 2024 11:25:07 +0000 https://piotrminkowski.com/?p=15228 This article will show some interesting features of Java Streams and Collections you may not heard about. We will look at both the latest API enhancements as well as the older ones that have existed for years. That’s my private list of features I used recently or I just came across while reading articles about […]

The post Interesting Facts About Java Streams and Collections appeared first on Piotr's TechBlog.

This article will show some interesting features of Java Streams and Collections that you may not have heard about. We will look at both the latest API enhancements and older ones that have existed for years. This is my private list of features I have used recently or came across while reading articles about Java. If you are interested in Java, you can find similar articles on my blog. In one of them, you can find a list of less-known but useful Java libraries.

Source Code

If you would like to try it yourself, you may always take a look at my source code. In order to do that, you need to clone my GitHub repository. Once you clone the repository, switch to the jdk22 branch. Then just follow my instructions.

Mutable or Immutable

The approach to collection immutability in Java can be annoying for some of you. How do you know if a Java Collection is mutable or immutable? Java does not provide dedicated interfaces and implementations for mutable or immutable collections like, e.g., Kotlin does. Of course, you can switch to the Eclipse Collections library, which provides a clear differentiation between readable, mutable, and immutable types. However, if you are sticking with the standard Java Collections, let’s analyze the situation using the java.util.List interface as an example. Some years ago, Java 16 introduced the new Stream.toList() method to convert between streams and collections. You are probably using it quite often 🙂 It is worth mentioning that this method returns an unmodifiable List and allows nulls.

var l = Stream.of(null, "Green", "Yellow").toList();
assertEquals(3, l.size());
assertThrows(UnsupportedOperationException.class, () -> l.add("Red"));
assertThrows(UnsupportedOperationException.class, () -> l.set(0, "Red"));
Java

So, the Stream.toList() method is not just a replacement for the older approach based on Collectors.toList(). The older Collectors.toList() method returns a modifiable List and also allows nulls.

var l = Stream.of(null, "Green", "Yellow").collect(Collectors.toList());
l.add("Red");
assertEquals(4, l.size());
Java

Finally, let’s see how to achieve the remaining option: an unmodifiable List that does not allow nulls. In order to achieve it, we have to use the Collectors.toUnmodifiableList() method as shown below.

assertThrows(NullPointerException.class, () ->
        Stream.of(null, "Green", "Yellow")
                .collect(Collectors.toUnmodifiableList()));
Java
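If we want full control over the mutability of the result, two further standard options are worth knowing: Collectors.toCollection() lets us pick the concrete (mutable) implementation ourselves, while List.copyOf() produces an unmodifiable, null-rejecting snapshot of an existing collection. Here is a short, self-contained sketch of both:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class MutabilityDemo {

    // Collect into a concrete mutable type chosen by us, not by the collector
    public static List<String> mutableCopy(Stream<String> source) {
        return source.collect(Collectors.toCollection(ArrayList::new));
    }

    public static void main(String[] args) {
        List<String> colors = mutableCopy(Stream.of("Green", "Yellow"));
        colors.add("Red"); // allowed: the target ArrayList is modifiable

        // List.copyOf returns an unmodifiable snapshot and rejects nulls
        List<String> frozen = List.copyOf(colors);
        try {
            frozen.add("Blue");
        } catch (UnsupportedOperationException e) {
            System.out.println("frozen list is unmodifiable");
        }
    }
}
```

With this approach, the mutability contract is visible right at the collection site instead of being implied by which collector happened to be used.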

Grouping and Aggregations with Java Streams

Java Streams introduces several useful methods that allow us to group and aggregate collections using different criteria. We can find those methods in the java.util.stream.Collectors class. Let’s create the Employee record for testing purposes.

public record Employee(String firstName, 
                       String lastName, 
                       String position, 
                       int salary) {}
Java

Now, let’s assume we have a stream of employees and we want to calculate the sum of salaries grouped by position. In order to achieve this, we should combine two methods from the Collectors class: groupingBy and summingInt. If you would like to compute the average salary by position, you can just replace the summingInt method with the averagingInt method.

Stream<Employee> s1 = Stream.of(
    new Employee("AAA", "BBB", "developer", 10000),
    new Employee("AAB", "BBC", "architect", 15000),
    new Employee("AAC", "BBD", "developer", 13000),
    new Employee("AAD", "BBE", "tester", 7000),
    new Employee("AAE", "BBF", "tester", 9000)
);

var m = s1.collect(Collectors.groupingBy(Employee::position, 
   Collectors.summingInt(Employee::salary)));
assertEquals(3, m.size());
assertEquals(m.get("developer"), 23000);
assertEquals(m.get("architect"), 15000);
assertEquals(m.get("tester"), 16000);
Java

For simpler grouping, we can use the partitioningBy method. It always returns a map with two entries: one where the predicate is true and one where it is false. So, for the same stream as in the previous example, we can use partitioningBy to divide employees into those with salaries higher than 10000, and those with salaries lower than or equal to 10000.

var m = s1.collect(Collectors.partitioningBy(emp -> emp.salary() > 10000));
assertEquals(2, m.size());
assertEquals(m.get(true).size(), 2);
assertEquals(m.get(false).size(), 3);
Java

Let’s take a look at another example. This time, we will count the number of times each element occurs in the collection. Once again, we can use the groupingBy method, but this time in conjunction with the Collectors.counting() method.

var m = Stream.of(2, 3, 4, 2, 3, 5, 1, 3, 4, 4)
   .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));
assertEquals(5, m.size());
assertEquals(m.get(4), 3L);
Java
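Another aggregation worth mentioning here is Collectors.teeing() (available since Java 12), which forwards every element to two downstream collectors and then merges their results. The sketch below reuses the Employee record from the examples above to compute the minimum and maximum salary in a single pass:

```java
import java.util.Comparator;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class TeeingDemo {

    record Employee(String firstName, String lastName, String position, int salary) {}

    // Feed each employee to minBy and maxBy at once, then merge both results
    public static String salaryRange(Stream<Employee> employees) {
        return employees.collect(Collectors.teeing(
                Collectors.minBy(Comparator.comparingInt(Employee::salary)),
                Collectors.maxBy(Comparator.comparingInt(Employee::salary)),
                (min, max) -> min.map(Employee::salary).orElse(0)
                        + "-" + max.map(Employee::salary).orElse(0)));
    }

    public static void main(String[] args) {
        String range = salaryRange(Stream.of(
                new Employee("AAA", "BBB", "developer", 10000),
                new Employee("AAB", "BBC", "architect", 15000),
                new Employee("AAD", "BBE", "tester", 7000)));
        System.out.println(range); // 7000-15000
    }
}
```

Without teeing, we would need to either traverse the stream twice or write a custom collector to get both aggregates at once.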

The Map.merge() Method

In the previous examples, we used methods provided by Java Streams to perform grouping and aggregations. Now, the question is whether we can do the same thing with the standard Java collections without converting them to streams. The answer is yes: we can easily do it with the Map.merge() method. It is probably the most versatile operation among all the Java key-value methods. The Map.merge() method either puts a new value under the given key (if absent) or updates the existing key with a given value. Let’s rewrite the previous examples to switch from Java streams to collections. Here’s the implementation for counting the number of times each element occurs in the collection.

var map = new HashMap<Integer, Integer>();
var nums = List.of(2, 3, 4, 2, 3, 5, 1, 3, 4, 4);
nums.forEach(num -> map.merge(num, 1, Integer::sum));
assertEquals(5, map.size());
assertEquals(map.get(4), 3);
Java

Then, we can implement the operation of calculating the sum of salary grouped by the position. So, we are grouping by the emp.position() and calculating the total salary by summing the previous value with the value taken from the current Employee in the list. The results are the same as in the examples from the previous section.

var s1 = List.of(
   new Employee("AAA", "BBB", "developer", 10000),
   new Employee("AAB", "BBC", "architect", 15000),
   new Employee("AAC", "BBD", "developer", 13000),
   new Employee("AAD", "BBE", "tester", 7000),
   new Employee("AAE", "BBF", "tester", 9000)
);
var map = new HashMap<String, Integer>();
s1.forEach(emp -> map.merge(emp.position(), emp.salary(), Integer::sum));
assertEquals(3, map.size());
assertEquals(map.get("developer"), 23000);
Java
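One more Map.merge() detail worth knowing: when the remapping function returns null, the entry is removed from the map entirely. This makes merge handy for reference-counting style logic, as in the hypothetical sketch below:

```java
import java.util.HashMap;
import java.util.Map;

public class MergeDemo {

    // Decrement a counter and drop the key when it reaches zero.
    // Note: merge() would insert the value 0 for an absent key,
    // so this helper assumes the key is already present in the map.
    public static void release(Map<String, Integer> counters, String key) {
        counters.merge(key, 0, (old, ignored) -> old == 1 ? null : old - 1);
    }

    public static void main(String[] args) {
        var counters = new HashMap<String, Integer>();
        counters.merge("conn", 1, Integer::sum); // absent -> puts 1
        counters.merge("conn", 1, Integer::sum); // present -> remaps to 2

        release(counters, "conn"); // 2 -> 1
        release(counters, "conn"); // remapping returns null -> entry removed
        System.out.println(counters.containsKey("conn")); // false
    }
}
```

The null-means-remove contract saves an explicit containsKey/remove dance that would otherwise be needed.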

Use EnumSet for Java Enum

If you store enums inside Java collections, you should use EnumSet instead of, e.g., the more popular HashSet. The EnumSet and EnumMap collections are specialized versions of Set and Map built for enums. These implementations guarantee lower memory consumption and much better performance. They also provide some methods dedicated to simplifying work with Java enums. In order to compare processing time between EnumSet and a standard Set, we can prepare a simple test. In this test, I’m creating a subset of a Java enum inside an EnumSet and then checking whether all the values exist in the target EnumSet.

var x = EnumSet.of(
    EmployeePosition.SRE,
    EmployeePosition.ARCHITECT,
    EmployeePosition.DEVELOPER);
long beg = System.nanoTime();
for (int i = 0; i < 100_000_000; i++) {
   var es = EnumSet.allOf(EmployeePosition.class);
   es.containsAll(x);
}
long end = System.nanoTime();
System.out.println(x.getClass() + ": " + (end - beg)/1e9);
Java

Here’s a similar test without EnumSet:

var x = Set.of(
    EmployeePosition.SRE,
    EmployeePosition.ARCHITECT,
    EmployeePosition.DEVELOPER);
long beg = System.nanoTime();
for (int i = 0; i < 100_000_000; i++) {
   var hs = Set.of(EmployeePosition.values());
   hs.containsAll(x);
}
long end = System.nanoTime();
System.out.println(x.getClass() + ": " + (end - beg)/1e9);
Java

The difference in processing time between the two variants shown above is pretty significant.

class java.util.ImmutableCollections$SetN: 8.577672411
class java.util.RegularEnumSet: 0.184956851
ShellSession
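The same advice applies to EnumMap. Since the article references an EmployeePosition enum without showing its definition, the one below is an assumption made for the sake of the example; the sketch sums salaries per position using an enum-optimized map:

```java
import java.util.EnumMap;
import java.util.Map;

public class EnumMapDemo {

    // Hypothetical enum; the original article does not show its definition
    enum EmployeePosition { DEVELOPER, ARCHITECT, TESTER, SRE }

    // EnumMap stores values in a plain array indexed by the enum ordinal,
    // so lookups avoid hashing entirely
    public static Map<EmployeePosition, Integer> salaryTotals() {
        var totals = new EnumMap<EmployeePosition, Integer>(EmployeePosition.class);
        totals.merge(EmployeePosition.DEVELOPER, 10000, Integer::sum);
        totals.merge(EmployeePosition.DEVELOPER, 13000, Integer::sum);
        totals.merge(EmployeePosition.TESTER, 7000, Integer::sum);
        return totals;
    }

    public static void main(String[] args) {
        // Iteration follows the natural (declaration) order of the enum
        System.out.println(salaryTotals()); // {DEVELOPER=23000, TESTER=7000}
    }
}
```

As a bonus, EnumMap iterates in the declaration order of the enum constants, which makes its output deterministic, unlike HashMap.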

Java 22 Stream Gatherers

The latest JDK 22 release introduces a new addition to Java streams called gatherers. Gatherers enrich the Java Stream API with capabilities for custom intermediate operations. Thanks to that we can transform data streams in ways that were previously complex or not directly supported by the existing API. First of all, this is a preview feature, so we need to explicitly enable it in the compiler configuration. Here’s the modification in the Maven pom.xml:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <version>3.13.0</version>
  <configuration>
    <release>22</release>
    <compilerArgs>
      <arg>--enable-preview</arg>
    </compilerArgs>
  </configuration>
</plugin>
XML

The aim of this article is not to provide a detailed explanation of Stream Gatherers. So, if you are looking for an intro, it’s worth reading the following article. However, just to give you an example of how we can leverage gatherers, I will provide a simple implementation of a circuit breaker with the windowSliding() gatherer. In order to show you what exactly happens inside, I’m logging the intermediate elements with the peek() method. Our implementation will open the circuit breaker if there are more than 10 errors in a specific period.

var errors = Stream.of(2, 0, 1, 3, 4, 2, 3, 0, 3, 1, 0, 0, 1)
                .gather(Gatherers.windowSliding(4))
                .peek(a -> System.out.println(a))
                .map(x -> x.stream().mapToInt(Integer::intValue).sum() > 10)
                .toList();
System.out.println(errors);
System.out.println(errors);
Java

The size of our sliding window is 4. Therefore, each time the windowSliding() gatherer creates a list with 4 subsequent elements. For each intermediate list, I’m summing the values and checking if the total number of errors is greater than 10. Here’s the output. As you can see, the circuit breaker is opened only for the [3, 4, 2, 3] fragment of the source stream.

[2, 0, 1, 3]
[0, 1, 3, 4]
[1, 3, 4, 2]
[3, 4, 2, 3]
[4, 2, 3, 0]
[2, 3, 0, 3]
[3, 0, 3, 1]
[0, 3, 1, 0]
[3, 1, 0, 0]
[1, 0, 0, 1]
[false, false, false, true, false, false, false, false, false, false]
ShellSession

Use the Stream.reduce() Method

Finally, the last feature in our article. I believe the Stream.reduce() method is not very well-known or widely used. However, it is very interesting. For example, we can use it to sum all the numbers in a Java List. The first parameter of the reduce method is an initial value, while the second is an accumulator function.

var listOfNumbers = List.of(1, 2, 3, 4, 5);
var sum = listOfNumbers.stream().reduce(0, Integer::sum);
assertEquals(15, sum);
Java
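The reduce method also has a three-argument variant taking an identity, an accumulator, and a combiner. The combiner is what lets the reduction work on parallel streams, and it also allows the result type to differ from the element type. Here is a sketch that folds Employee records, like those from the earlier examples, into a single int:

```java
import java.util.List;

public class ReduceDemo {

    record Employee(String firstName, String lastName, String position, int salary) {}

    // identity = 0, accumulator folds an Employee into the running sum,
    // combiner merges partial sums produced by parallel substreams
    public static int totalSalary(List<Employee> employees) {
        return employees.parallelStream()
                .reduce(0, (sum, emp) -> sum + emp.salary(), Integer::sum);
    }

    public static void main(String[] args) {
        int total = totalSalary(List.of(
                new Employee("AAA", "BBB", "developer", 10000),
                new Employee("AAB", "BBC", "architect", 15000),
                new Employee("AAD", "BBE", "tester", 7000)));
        System.out.println(total); // 32000
    }
}
```

Note that the accumulator here maps Employee elements to int, which the two-argument reduce cannot do on its own.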

Final Thoughts

Of course, this is just a small list of interesting facts about Java streams and collections. If you have other favorite features, you can share them in the article comments.


Java Flight Recorder on Kubernetes https://piotrminkowski.com/2024/02/13/java-flight-recorder-on-kubernetes/ https://piotrminkowski.com/2024/02/13/java-flight-recorder-on-kubernetes/#respond Tue, 13 Feb 2024 07:44:13 +0000 https://piotrminkowski.com/?p=14957 In this article, you will learn how to continuously monitor apps on Kubernetes with Java Flight Recorder and Cryostat. Java Flight Recorder (JFR) is a tool for collecting diagnostic and profiling data generated by the Java app. It is designed for use even in heavily loaded production environments since it causes almost no performance overhead. […]

The post Java Flight Recorder on Kubernetes appeared first on Piotr's TechBlog.

In this article, you will learn how to continuously monitor apps on Kubernetes with Java Flight Recorder and Cryostat. Java Flight Recorder (JFR) is a tool for collecting diagnostic and profiling data generated by the Java app. It is designed for use even in heavily loaded production environments since it causes almost no performance overhead. We can say that Java Flight Recorder acts similarly to an airplane’s black box. Even if the JVM crashes, we can analyze the diagnostic data collected just before the failure. This fact makes JFR especially usable in an environment with many running apps – like Kubernetes.

Assuming that we are running many Java apps on Kubernetes, we should be interested in a tool that helps to automatically gather the data generated by Java Flight Recorder. Here comes Cryostat. It allows us to securely manage JFR recordings for containerized Java workloads. With the built-in discovery mechanism, it can detect all the apps that expose JFR data. Depending on the use case, we can store and analyze recordings directly on the Kubernetes cluster with the Cryostat Dashboard, or export the recorded data to perform a more in-depth analysis.

If you are interested in more topics related to Java apps on Kubernetes, you can take a look at some other posts on my blog. The following article describes a list of best practices for running Java apps on Kubernetes. You can also read, for example, how to resize the CPU limit to speed up Java startup on Kubernetes here.

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. Then you need to go to the callme-service directory. After that, you should just follow my instructions. Let’s begin.

Install Cryostat on Kubernetes

In the first step, we install Cryostat on Kubernetes using its operator. In order to use and manage operators on Kubernetes, we should have the Operator Lifecycle Manager (OLM) installed on the cluster. The operator-sdk binary provides a command to easily install and uninstall OLM:

$ operator-sdk olm install

Alternatively, you can use Helm chart for Cryostat installation on Kubernetes. Firstly, let’s add the following repository:
$ helm repo add openshift https://charts.openshift.io/

Then, install the chart with the following command:
$ helm install my-cryostat openshift/cryostat --version 0.4.0

Once OLM is running on our cluster, we can proceed to the Cryostat installation. We can find the required YAML manifest with the Subscription declaration in the Operator Hub. Let’s just apply the manifest to the target cluster with the following command:

$ kubectl create -f https://operatorhub.io/install/cryostat-operator.yaml

By default, this operator will be installed in the operators namespace and will be usable from all namespaces in the cluster. After installation, we can verify if the operator works fine by executing the following command:

$ kubectl get csv -n operators

In order to simplify the Cryostat installation process, we can use OpenShift. With OpenShift we don’t need to install OLM, since it is already there. We just need to find the “Red Hat build of Cryostat” operator in the Operator Hub and install it using OpenShift Console. By default, the operator is available in the openshift-operators namespace.

Then, let’s create a namespace dedicated to running Cryostat and our sample app. The name of the namespace is demo-jfr.

$ kubectl create ns demo-jfr

Cryostat recommends using cert-manager for traffic encryption. In our exercise, we disable that integration for simplicity. However, in a production environment, you should install cert-manager unless you already use another solution for encrypting traffic. In order to run Cryostat in the selected namespace, we need to create the Cryostat object. The parameter spec.enableCertManager should be set to false.

apiVersion: operator.cryostat.io/v1beta1
kind: Cryostat
metadata:
  name: cryostat-sample
  namespace: demo-jfr
spec:
  enableCertManager: false
  eventTemplates: []
  minimal: false
  reportOptions:
    replicas: 0
  storageOptions:
    pvc:
      annotations: {}
      labels: {}
      spec: {}
  trustedCertSecrets: []

If everything goes fine, you should see the following pod in the demo-jfr namespace:

$ kubectl get po -n demo-jfr
NAME                               READY   STATUS    RESTARTS   AGE
cryostat-sample-5c57c9b8b8-smzx9   3/3     Running   0          60s

Here’s a list of Kubernetes Services. The Cryostat Dashboard is exposed by the cryostat-sample Service under the 8181 port.

$ kubectl get svc -n demo-jfr
NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
cryostat-sample           ClusterIP   172.31.56.83    <none>        8181/TCP,9091/TCP   70m
cryostat-sample-grafana   ClusterIP   172.31.155.26   <none>        3000/TCP            70m

We can access the Cryostat dashboard using the Kubernetes Ingress or OpenShift Route. Currently, there are no apps to monitor.

Create Sample Java App

We build a sample Java app using the Spring Boot framework. Our app exposes a single REST endpoint. As you can see, the endpoint implementation is very simple. The pingWithRandomDelay() method adds a random delay between 0 and 3 seconds and returns a string. However, there is one interesting thing inside that method. We are creating the ProcessingEvent object (1). Then, we call its begin method just before putting the thread to sleep (2). After the method is resumed, we call the commit method on the ProcessingEvent object (3). In this inconspicuous way, we are generating our first custom JFR event. This event aims to monitor the processing time of our method.

@RestController
@RequestMapping("/callme")
public class CallmeController {

   private static final Logger LOGGER = LoggerFactory.getLogger(CallmeController.class);

   private Random random = new Random();
   private AtomicInteger index = new AtomicInteger();

   @Autowired
   Optional<BuildProperties> buildProperties;
   @Value("${VERSION}")
   private String version;

   @GetMapping("/ping-with-random-delay")
   public String pingWithRandomDelay() throws InterruptedException {
      int r = random.nextInt(3000);
      int i = index.incrementAndGet();
      ProcessingEvent event = new ProcessingEvent(i); // (1)
      event.begin(); // (2)
      LOGGER.info("Ping with random delay: id={}, name={}, version={}, delay={}", i,
             buildProperties.isPresent() ? buildProperties.get().getName() : "callme-service", version, r);
      Thread.sleep(r);
      event.commit(); // (3)
      return "I'm callme-service " + version;
   }

}

Let’s switch to the ProcessingEvent implementation. Our custom event needs to extend the jdk.jfr.Event abstract class. It contains a single parameter, id. We can use some additional labels to improve the event presentation in JFR graphical tools. The event will be visible under the name set in the @Name annotation and the category set in the @Category annotation. We also need to annotate the parameter with @Label to make it visible as part of the event.

@Name("ProcessingEvent")
@Category("Custom Events")
@Label("Processing Time")
public class ProcessingEvent extends Event {
    @Label("Event ID")
    private Integer id;

    public ProcessingEvent(Integer id) {
        this.id = id;
    }

    public Integer getId() {
        return id;
    }

    public void setId(Integer id) {
        this.id = id;
    }
}

Of course, our app will generate a lot of standard JFR events useful for profiling and monitoring. But we can also monitor our custom event.
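Custom events can also be verified locally before deploying anything to Kubernetes. The jdk.jfr API lets us start a recording programmatically, emit an event, and dump the result to a file. The sketch below is a simplified, self-contained variant of the ProcessingEvent shown above, without the Spring dependencies:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import jdk.jfr.Event;
import jdk.jfr.Label;
import jdk.jfr.Name;
import jdk.jfr.Recording;

public class JfrDemo {

    @Name("ProcessingEvent")
    @Label("Processing Time")
    static class ProcessingEvent extends Event {
        @Label("Event ID")
        int id;
    }

    // Start an in-process recording, emit one custom event, dump it to a file
    public static Path recordOnce() throws Exception {
        try (Recording recording = new Recording()) { // close() stops the recording
            recording.start();

            ProcessingEvent event = new ProcessingEvent();
            event.id = 1;
            event.begin();
            Thread.sleep(10); // the work whose duration we want to capture
            event.commit();

            Path dump = Files.createTempFile("demo", ".jfr");
            recording.dump(dump);
            return dump;
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("Recorded " + Files.size(recordOnce()) + " bytes");
    }
}
```

The resulting *.jfr file can be opened in JDK Mission Control just like a recording exported from Cryostat, so the event shape can be checked before any cluster is involved.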

Build App Image and Deploy on Kubernetes

Once we finish the implementation, we may build the container image of our Spring Boot app. Spring Boot comes with a feature for building container images based on the Cloud Native Buildpacks. In the Maven pom.xml you will find a dedicated profile under the build-image id. Once you activate such a profile, it will build the image using the Paketo builder-jammy-base image.

<profile>
  <id>build-image</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-maven-plugin</artifactId>
        <configuration>
          <image>
            <builder>paketobuildpacks/builder-jammy-base:latest</builder>
            <name>piomin/${project.artifactId}:${project.version}</name>
          </image>
        </configuration>
        <executions>
          <execution>
            <goals>
              <goal>build-image</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</profile>

Before running the build we should start Docker on the local machine. After that, we should execute the following Maven command:

$ mvn clean package -Pbuild-image -DskipTests

With the build-image profile activated, Spring Boot Maven Plugin builds the image of our app. You should have a similar result as shown below. In my case, the image tag is piomin/callme-service:1.2.1.

By default, Paketo Java Buildpacks use the BellSoft Liberica JDK. With the Paketo BellSoft Liberica Buildpack, we can easily enable Java Flight Recorder for the container using the BPL_JFR_ENABLED environment variable. In order to expose data for Cryostat, we also need to enable the JMX port. In theory, we could use the BPL_JMX_ENABLED and BPL_JMX_PORT environment variables for that. However, that option adds some extra configuration to the java command parameters that breaks the Cryostat discovery. This issue has already been described here. Therefore, we will use the JAVA_TOOL_OPTIONS environment variable to set the required JVM parameters directly on the running command.

Instead of exposing the JMX port for discovery, we can include the Cryostat agent in the app dependencies. In that case, we should set the address of the Cryostat API in the Kubernetes Deployment manifest. However, I prefer an approach that doesn’t require any changes on the app side.

Now, let’s get back to Cryostat app discovery. Cryostat is able to automatically detect pods with a JMX port exposed. It requires a specific configuration of the Kubernetes Service. We need to set the name of the port to jfr-jmx. In theory, we can expose JMX on any port we want, but for me, anything other than 9091 caused discovery problems in Cryostat. In the Deployment definition, we have to set the BPL_JFR_ENABLED env to true, and JAVA_TOOL_OPTIONS to -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.port=9091.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: callme-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: callme-service
  template:
    metadata:
      labels:
        app: callme-service
    spec:
      containers:
        - name: callme-service
          image: piomin/callme-service:1.2.1
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
            - containerPort: 9091
          env:
            - name: VERSION
              value: "v1"
            - name: BPL_JFR_ENABLED
              value: "true"
            - name: JAVA_TOOL_OPTIONS
              value: "-Dcom.sun.management.jmxremote.port=9091 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false"
---
apiVersion: v1
kind: Service
metadata:
  name: callme-service
  labels:
    app: callme-service
spec:
  type: ClusterIP
  ports:
  - port: 8080
    name: http
  - port: 9091
    name: jfr-jmx
  selector:
    app: callme-service

Let’s apply our deployment manifest to the demo-jfr namespace:

$ kubectl apply -f k8s/deployment-jfr.yaml -n demo-jfr

Here’s a list of pods of our callme-service app:

$ kubectl get po -n demo-jfr -l app=callme-service -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP            NODE
callme-service-6bc5745885-kvqfr   1/1     Running   0          31m   10.134.0.29   worker-cluster-lvsqq-1

Using Cryostat with JFR

View Default Dashboards

Cryostat automatically detects all the pods related to the Kubernetes Service that expose the JMX port. Once we switch to the Cryostat Dashboard, we will see the name of our pod in the “Target” dropdown. The default dashboard shows diagrams illustrating CPU load, heap memory usage, and a number of running Java threads.

java-flight-recorder-kubernetes-dashboard

Then, we can go to the “Recordings” section. It shows a list of active recordings made by Java Flight Recorder for our app running on Kubernetes. By default, Cryostat creates and starts a single recording for each detected target.

We can expand the selected recording to see a detailed view. It provides a summary panel divided into several categories, such as heap, memory leak, or exceptions. It highlights warnings in yellow and problems in red.

java-flight-recorder-kubernetes-panel

We can display a detailed description of each case. We just need to click on the selected field with a problem name. The detailed description will appear in the context menu.

java-flight-recorder-kubernetes-description

Create and Use a Custom Event Template

We can create a custom recording strategy by defining a new event template. Firstly, we need to go to the “Events” section, and then to the “Event Templates” tab. There are three built-in templates. We can use any of them as a base for our custom template. After deciding which one to use, we can download it to our laptop. The default file extension is *.jfc.

java-flight-recorder-kubernetes-event-templates

In order to edit the *.jfc files we need a special tool called JDK Mission Control. Each vendor provides such a tool for their distribution of JDK. In our case, it is BellSoft Liberica. Once we download and install Liberica Mission Control on the laptop we should go to Window -> Flight Recording Template Manager.

java-flight-recorder-kubernetes-mission-control

With the Flight Recording Template Manager, we can import and edit an exported event template. I chose higher monitoring levels for “Garbage Collection”, “Allocation Profiling”, “Compiler”, and “Thread Dump”.

java-flight-recorder-kubernetes-template-manager

Once a new template is ready, we should save it under the selected name. For me, it is the “Continuous Detailed” name. After that, we need to export the template to the file.

Then, we need to switch to the Cryostat Dashboard. We have to import the newly created template exported to the *.jfc file.

Once you import the template, you should see a new strategy in the “Event Templates” section.

We can create a recording based on our custom “Continuous_Detailed” template. After some time, Cryostat should gather data generated by Java Flight Recorder for the app running on Kubernetes. However, this time we want to perform some advanced analysis using Liberica Mission Control rather than just the Cryostat Dashboard. Therefore, we will export the recording to a *.jfr file. Such a file may then be imported into the JDK Mission Control tool.

Use the JDK Mission Control Tool

Let’s open the exported *.jfr file with Liberica Mission Control. Once we do it, we can analyze all the important aspects related to the performance of our Java app. We can display a table with memory allocation per the object type.

We can display a list of running Java threads.

Finally, we go to the “Event Browser” section. In the “Custom Events” category we should find our custom event under the name determined by the @Label annotation on the ProcessingEvent class. We can see the history of all generated JFR events together with the duration, start time, and the name of the processing thread.

Final Thoughts

Cryostat helps you manage Java Flight Recorder on Kubernetes at scale. It provides a graphical dashboard that allows monitoring of all the Java workloads that expose JFR data over JMX. Importantly, even after an app crash, we can export the archived monitoring report and analyze it using advanced tools like JDK Mission Control.


Kubernetes Testing with CircleCI, Kind, and Skaffold https://piotrminkowski.com/2023/11/28/kubernetes-testing-with-circleci-kind-and-skaffold/ https://piotrminkowski.com/2023/11/28/kubernetes-testing-with-circleci-kind-and-skaffold/#respond Tue, 28 Nov 2023 13:04:18 +0000 https://piotrminkowski.com/?p=14706 In this article, you will learn how to use tools like Kind or Skaffold to build integration tests on CircleCI for apps running on Kubernetes. Our main goal in this exercise is to build the app image and verify the Deployment on Kubernetes in the CircleCI pipeline. Skaffold and Jib Maven plugin build the image […]

The post Kubernetes Testing with CircleCI, Kind, and Skaffold appeared first on Piotr's TechBlog.

In this article, you will learn how to use tools like Kind or Skaffold to build integration tests on CircleCI for apps running on Kubernetes. Our main goal in this exercise is to build the app image and verify the Deployment on Kubernetes in the CircleCI pipeline. Skaffold and Jib Maven plugin build the image from the source and deploy it on Kind using YAML manifests. Finally, we will run some load tests on the deployed app using the Grafana k6 tool and its integration with CircleCI.

If you want to build and run tests against Kubernetes, you can read my article about integration tests with JUnit. On the other hand, if you are looking for other tools for testing in a Kubernetes-native environment, you can refer to the article about Testkube.

Introduction

Before we start, let’s do a brief introduction. There are three simple Spring Boot apps that communicate with each other. The first-service app calls the endpoint exposed by the caller-service app, and then the caller-service app calls the endpoint exposed by the callme-service app. The diagram visible below illustrates that architecture.

kubernetes-circleci-arch

So in short, our goal is to deploy all the sample apps on Kind during the CircleCI build and then test the communication by calling the endpoint exposed by the first-service through the Kubernetes Service.

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. It contains three apps: first-service, caller-service, and callme-service. The main Skaffold config manifest is available in the project root directory. Required Kubernetes YAML manifests are always placed inside the k8s directory. Once you take a look at the source code, you should just follow my instructions. Let’s begin.

Our sample Spring Boot apps are very simple. They are exposing a single “ping” endpoint over HTTP and call “ping” endpoints exposed by other apps. Here’s the @RestController in the first-service app:

@RestController
@RequestMapping("/first")
public class FirstController {

   private static final Logger LOGGER = LoggerFactory
      .getLogger(FirstController.class);

   @Autowired
   Optional<BuildProperties> buildProperties;
   @Autowired
   RestTemplate restTemplate;
   @Value("${VERSION}")
   private String version;

   @GetMapping("/ping")
   public String ping() {
      LOGGER.info("Ping: name={}, version={}", buildProperties.isPresent() 
         ? buildProperties.get().getName() : "first-service", version);
      String response = restTemplate.getForObject(
         "http://caller-service:8080/caller/ping", String.class);
      LOGGER.info("Calling: response={}", response);
      return "I'm first-service " + version + ". Calling... " + response;
   }

}

Here’s the @RestController inside the caller-service app. The endpoint is called by the first-service app through the RestTemplate bean.

@RestController
@RequestMapping("/caller")
public class CallerController {

   private static final Logger LOGGER = LoggerFactory
      .getLogger(CallerController.class);

   @Autowired
   Optional<BuildProperties> buildProperties;
   @Autowired
   RestTemplate restTemplate;
   @Value("${VERSION}")
   private String version;

   @GetMapping("/ping")
   public String ping() {
      LOGGER.info("Ping: name={}, version={}", buildProperties.isPresent()
         ? buildProperties.get().getName() : "caller-service", version);
      String response = restTemplate.getForObject(
         "http://callme-service:8080/callme/ping", String.class);
      LOGGER.info("Calling: response={}", response);
      return "I'm caller-service " + version + ". Calling... " + response;
   }

}

Finally, here’s the @RestController inside the callme-service app. It also exposes a single GET /callme/ping endpoint called by the caller-service app:

@RestController
@RequestMapping("/callme")
public class CallmeController {

   private static final Logger LOGGER = LoggerFactory
      .getLogger(CallmeController.class);

   @Autowired
   Optional<BuildProperties> buildProperties;
   @Value("${VERSION}")
   private String version;

   @GetMapping("/ping")
   public String ping() {
      LOGGER.info("Ping: name={}, version={}", buildProperties.isPresent() 
         ? buildProperties.get().getName() : "callme-service", version);
      return "I'm callme-service " + version;
   }

}
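
All three controllers inject a RestTemplate bean, which Spring Boot does not auto-configure out of the box. Each app therefore has to register one itself. A minimal sketch of such a configuration class (the class name here is assumed, not taken from the repository):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

// Hypothetical configuration class: Spring Boot does not provide a
// RestTemplate bean by default, so we declare one for @Autowired injection.
@Configuration
public class ClientConfig {

   @Bean
   RestTemplate restTemplate() {
      return new RestTemplate();
   }
}
```

With this bean in place, the in-cluster URLs like http://caller-service:8080/caller/ping resolve through the Kubernetes Service DNS names.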

Build and Deploy Images with Skaffold and Jib

Firstly, let's take a look at the main Maven pom.xml in the project root directory. We use the latest version of Spring Boot and the latest LTS version of Java for compilation. All three app modules inherit settings from the parent pom.xml. In order to build the image with Maven, we include the jib-maven-plugin. Since its default base image still uses Java 17, we need to override this behavior with the <from>.<image> tag and declare eclipse-temurin:21-jdk-ubi9-minimal as the base image. Note that jib-maven-plugin is activated only if we enable the jib Maven profile during the build.

<modelVersion>4.0.0</modelVersion>

<parent>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-parent</artifactId>
  <version>3.2.0</version>
  <relativePath />
</parent>

<groupId>pl.piomin.services</groupId>
<artifactId>sample-istio-services</artifactId>
<version>1.1.0</version>
<packaging>pom</packaging>

<properties>
  <java.version>21</java.version>
</properties>

<modules>
  <module>caller-service</module>
  <module>callme-service</module>
  <module>first-service</module>
</modules>

<profiles>
  <profile>
    <id>jib</id>
    <build>
      <plugins>
        <plugin>
          <groupId>com.google.cloud.tools</groupId>
          <artifactId>jib-maven-plugin</artifactId>
          <version>3.4.0</version>
          <configuration>
            <from>
              <image>eclipse-temurin:21-jdk-ubi9-minimal</image>
            </from>
          </configuration>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>

Now, let's take a look at the main skaffold.yaml file. Skaffold builds the images using its Jib support and deploys all three apps on Kubernetes using the manifests available in the k8s/deployment.yaml file inside each app module. Skaffold disables JUnit tests for Maven and activates the jib profile. It is also able to deploy Istio objects after activating the istio Skaffold profile. However, we won't use it today.

apiVersion: skaffold/v4beta5
kind: Config
metadata:
  name: simple-istio-services
build:
  artifacts:
    - image: piomin/first-service
      jib:
        project: first-service
        args:
          - -Pjib
          - -DskipTests
    - image: piomin/caller-service
      jib:
        project: caller-service
        args:
          - -Pjib
          - -DskipTests
    - image: piomin/callme-service
      jib:
        project: callme-service
        args:
          - -Pjib
          - -DskipTests
  tagPolicy:
    gitCommit: {}
manifests:
  rawYaml:
    - '*/k8s/deployment.yaml'
deploy:
  kubectl: {}
profiles:
  - name: istio
    manifests:
      rawYaml:
        - k8s/istio-*.yaml
        - '*/k8s/deployment-versions.yaml'
        - '*/k8s/istio-*.yaml'

Here's the typical Deployment manifest for our apps. Each app listens on port 8080.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: first-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: first-service
  template:
    metadata:
      labels:
        app: first-service
    spec:
      containers:
        - name: first-service
          image: piomin/first-service
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          env:
            - name: VERSION
              value: "v1"

For testing purposes, we need to expose the first-service outside of the Kind cluster. In order to do that, we will use a Kubernetes Service of the NodePort type. Our app will be available on port 30000.

apiVersion: v1
kind: Service
metadata:
  name: first-service
  labels:
    app: first-service
spec:
  type: NodePort
  ports:
  - port: 8080
    name: http
    nodePort: 30000
  selector:
    app: first-service

Note that all the other Kubernetes services (caller-service and callme-service) are exposed only internally using the default ClusterIP type.

How It Works

In this section, we will discuss how to run the whole process locally. Of course, our goal is to configure it as a CircleCI pipeline. In order to expose the Kubernetes Service outside Kind, we need to define the extraPortMappings section in the cluster configuration manifest. As you probably remember, we are exposing our app on port 30000. The following file is available in the repository under the k8s/kind-cluster-test.yaml path:

apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 30000
        hostPort: 30000
        listenAddress: "0.0.0.0"
        protocol: tcp

Assuming we already installed kind CLI on our machine, we need to execute the following command to create a new cluster:

$ kind create cluster --name c1 --config k8s/kind-cluster-test.yaml

You should have the same result as visible on my screen:

We have a single-node Kind cluster ready. There is a single c1-control-plane container running on Docker. As you can see, it exposes port 30000 outside of the cluster:

The Kubernetes context is automatically switched to kind-c1. So now, we just need to run the following command from the repository root directory to build and deploy the apps:

$ skaffold run

If you see a similar output in the skaffold run logs, it means that everything works fine.

kubernetes-circleci-skaffold

We can verify the list of Kubernetes services. The first-service is exposed on port 30000, as expected.

$ kubectl get svc
NAME             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
caller-service   ClusterIP   10.96.47.193   <none>        8080/TCP         2m24s
callme-service   ClusterIP   10.96.98.53    <none>        8080/TCP         2m24s
first-service    NodePort    10.96.241.11   <none>        8080:30000/TCP   2m24s
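
Before running the load test, we can make a quick smoke check of the exposed endpoint. The sketch below (class name assumed, not part of the repository) uses the JDK's built-in HttpClient and expects the Kind cluster from the previous steps to be running:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PingCheck {

   // The NodePort Service maps host port 30000 to the first-service pod,
   // so the chained ping endpoint is reachable from localhost.
   static URI pingUri() {
      return URI.create("http://localhost:30000/first/ping");
   }

   public static void main(String[] args) throws Exception {
      HttpClient client = HttpClient.newHttpClient();
      HttpRequest request = HttpRequest.newBuilder(pingUri()).GET().build();
      HttpResponse<String> response =
            client.send(request, HttpResponse.BodyHandlers.ofString());
      // Per the controllers shown earlier, the body should be the chained
      // message produced by all three services, e.g.:
      // I'm first-service v1. Calling... I'm caller-service v1. Calling... I'm callme-service v1
      System.out.println(response.statusCode() + " " + response.body());
   }
}
```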

Assuming you have already installed the Grafana k6 tool locally, you may run load tests using the following command:

$ k6 run first-service/src/test/resources/k6/load-test.js

That’s all. Now, let’s define the same actions with the CircleCI workflow.

Test Kubernetes Deployment with the CircleCI Workflow

The CircleCI config.yml file should be placed in the .circleci directory. We are doing two things in our pipeline. In the first step, we execute Maven unit tests without the Kubernetes cluster. That's why we need a standard Docker executor with OpenJDK 21 and the Maven orb. In order to run Kind during the CircleCI build, we need access to the Docker daemon. Therefore, we use the ubuntu-2204 machine image.

version: 2.1

orbs:
  maven: circleci/maven@1.4.1

executors:
  jdk:
    docker:
      - image: 'cimg/openjdk:21.0'
  machine_executor_amd64:
    machine:
      image: ubuntu-2204:2023.10.1
    environment:
      architecture: "amd64"
      platform: "linux/amd64"

After that, we can proceed to the job declaration. The name of our job is deploy-k8s. It uses the already-defined machine executor. Let’s discuss the required steps after running a standard checkout command:

  1. We need to install the kubectl CLI and copy it to the /usr/local/bin directory. Skaffold uses kubectl to interact with the Kubernetes cluster.
  2. After that, we have to install the skaffold CLI
  3. Our job also requires the kind CLI to be able to create or delete Kind clusters on Docker…
  4. … and the Grafana k6 CLI to run load tests against the app deployed on the cluster
  5. There is a good chance that this step won't be required once CircleCI releases a new version of the ubuntu-2204 machine image (probably 2024.1.1, according to the release strategy). For now, ubuntu-2204 provides OpenJDK 17, so we need to install OpenJDK 21 to successfully build the app from the source code
  6. After installing all the required tools, we can create a new Kubernetes cluster with the kind create cluster command.
  7. Once a cluster is ready, we can deploy our apps using the skaffold run command.
  8. Once the apps are running on the cluster, we can proceed to the tests phase. We are running the test defined inside the first-service/src/test/resources/k6/load-test.js file.
  9. After doing all the required steps, it is important to remove the Kind cluster.

jobs:
  deploy-k8s:
    executor: machine_executor_amd64
    steps:
      - checkout
      - run: # (1)
          name: Install Kubectl
          command: |
            curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
            chmod +x kubectl
            sudo mv ./kubectl /usr/local/bin/kubectl
      - run: # (2)
          name: Install Skaffold
          command: |
            curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64
            chmod +x skaffold
            sudo mv skaffold /usr/local/bin
      - run: # (3)
          name: Install Kind
          command: |
            [ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64
            chmod +x ./kind
            sudo mv ./kind /usr/local/bin/kind
      - run: # (4)
          name: Install Grafana K6
          command: |
            sudo gpg -k
            sudo gpg --no-default-keyring --keyring /usr/share/keyrings/k6-archive-keyring.gpg --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys C5AD17C747E3415A3642D57D77C6C491D6AC1D69
            echo "deb [signed-by=/usr/share/keyrings/k6-archive-keyring.gpg] https://dl.k6.io/deb stable main" | sudo tee /etc/apt/sources.list.d/k6.list
            sudo apt-get update
            sudo apt-get install k6
      - run: # (5)
          name: Install OpenJDK 21
          command: |
            java -version
            sudo apt-get update && sudo apt-get install openjdk-21-jdk
            sudo update-alternatives --set java /usr/lib/jvm/java-21-openjdk-amd64/bin/java
            sudo update-alternatives --set javac /usr/lib/jvm/java-21-openjdk-amd64/bin/javac
            java -version
            export JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64
      - run: # (6)
          name: Create Kind Cluster
          command: |
            kind create cluster --name c1 --config k8s/kind-cluster-test.yaml
      - run: # (7)
          name: Deploy to K8s
          command: |
            export JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64
            skaffold run
      - run: # (8)
          name: Run K6 Test
          command: |
            kubectl get svc
            k6 run first-service/src/test/resources/k6/load-test.js
      - run: # (9)
          name: Delete Kind Cluster
          command: |
            kind delete cluster --name c1

Here's the definition of our load test. It has to be written in JavaScript. It defines thresholds such as the maximum percentage of failed requests and the maximum response time for 95% of requests. As you can see, we are testing the http://localhost:30000/first/ping endpoint:

import { sleep } from 'k6';
import http from 'k6/http';

export const options = {
  duration: '60s',
  vus: 10,
  thresholds: {
    http_req_failed: ['rate<0.25'],
    http_req_duration: ['p(95)<1000'],
  },
};

export default function () {
  http.get('http://localhost:30000/first/ping');
  sleep(2);
}
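
To make the thresholds concrete: http_req_failed: ['rate<0.25'] passes when fewer than 25% of requests fail, and http_req_duration: ['p(95)<1000'] passes when the 95th-percentile request duration stays under 1000 ms. A small Java sketch of the underlying math (a simple nearest-rank percentile, which approximates what k6 reports):

```java
import java.util.Arrays;

public class ThresholdCheck {

   // Nearest-rank percentile: sort the samples and pick the value at
   // rank ceil(p/100 * n). For p(95) this is the duration that 95% of
   // requests stayed at or below.
   static double percentile(double[] samples, double pct) {
      double[] sorted = samples.clone();
      Arrays.sort(sorted);
      int rank = (int) Math.ceil(pct / 100.0 * sorted.length);
      return sorted[Math.max(0, rank - 1)];
   }

   // http_req_failed rate: failed requests divided by total requests.
   static double failureRate(int failed, int total) {
      return (double) failed / total;
   }

   public static void main(String[] args) {
      double[] durationsMs = {120, 80, 95, 250, 110, 90, 85, 100, 105, 98};
      System.out.println(percentile(durationsMs, 95) < 1000); // prints true
      System.out.println(failureRate(2, 100) < 0.25);         // prints true
   }
}
```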

Finally, here's the last part of the CircleCI config file. It defines the pipeline workflow. In the first step, we run tests with Maven. After that, we proceed to the deploy-k8s job.

workflows:
  build-and-deploy:
    jobs:
      - maven/test:
          name: test
          executor: jdk
      - deploy-k8s:
          requires:
            - test

Once we push a change to the sample Git repository, we trigger a new CircleCI build. You can verify it yourself on my CircleCI project page.

As you can see, all the pipeline steps finished successfully.

kubernetes-circleci-build

We can display logs for every single step. Here are the logs from the k6 load test step.

There were some errors during the warm-up. However, the test shows that our scenario works on the Kubernetes cluster.

Final Thoughts

CircleCI is one of the most popular CI/CD platforms. Personally, I use it for running builds and tests for all my demo repositories on GitHub. For the sample projects dedicated to Kubernetes, I want to verify steps such as building images with Jib, Kubernetes deployment scripts, or the Skaffold configuration. This article shows how to easily perform such tests with CircleCI and a Kubernetes cluster running on Kind. Hope it helps 🙂

The post Kubernetes Testing with CircleCI, Kind, and Skaffold appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2023/11/28/kubernetes-testing-with-circleci-kind-and-skaffold/feed/ 0 14706