Quarkus Archives - Piotr's TechBlog
https://piotrminkowski.com/category/quarkus/

A Book: Hands-On Java with Kubernetes
https://piotrminkowski.com/2025/12/08/a-book-hands-on-java-with-kubernetes/
Mon, 08 Dec 2025

My book about Java and Kubernetes has finally been published! The book “Hands-On Java with Kubernetes” is the result of several months of work and, in fact, a summary of my experiences over the last few years of research and development. In this post, I want to share my thoughts on this book, explain why I chose to write and publish it, and briefly outline its content and concept. To purchase the latest version, go to this link.

Here is a brief overview of all my published books.

Motivation

I won’t hide that this post is mainly directed at my blog subscribers and people who enjoy reading it and value my writing style. As you know, all posts and content on my blog, along with sample application repositories on GitHub, are always accessible to you for free. Over the past eight years, I have worked to publish high-quality content on my blog, and I plan to keep doing so. It is a part of my life, a significant time commitment, but also a lot of fun and a hobby.

I want to explain why I decided to write this book, why now, and why in this way. But first, a bit of background. I wrote my previous book over seven years ago. It focused on topics I was mainly involved with at the time, specifically Spring Boot and Spring Cloud. Since then, a lot of time has passed, and much has changed – not only in the technology itself but also a little in my personal life. Today, I am more involved in Kubernetes and container topics than, for example, Spring Cloud. For years, I have been helping various organizations transition from traditional application architectures to cloud-native models based on Kubernetes. Of course, Java remains my main area of expertise. Besides Spring Boot, I also really like the Quarkus framework. You can read a lot about both in my book on Kubernetes.

Based on my experience over the past few years, involving development teams is a key factor in the success of the Kubernetes platform within an organization. Ultimately, it is the applications developed by these teams that are deployed there. For developers to be willing to use Kubernetes, it must be easy for them to do so. That is why I persuade organizations to remove barriers to using Kubernetes and to design it in a way that makes it easier for development teams. On my blog and in this book, I aim to demonstrate how to quickly and simply launch applications on Kubernetes using frameworks such as Spring Boot and Quarkus.

It’s an unusual time to publish a book. AI agents are producing more and more technical content online. More often than not, instead of grabbing a book, people turn to an AI chatbot for a quick answer, though not always the best one. Still, a book that thoroughly introduces a topic and offers a step-by-step guide remains highly valuable.

Content of the Book

This book demonstrates that Java is an excellent choice for building applications that run on Kubernetes. In the first chapter, I’ll show you how to quickly build your application, create its image, and run it on Kubernetes without writing a single line of YAML or Dockerfile. This chapter also covers the minimum Kubernetes architecture you must understand to manage applications effectively in this environment. The second chapter, on the other hand, demonstrates how to effectively organize your local development environment to work with a Kubernetes cluster. You’ll see several options for running a distribution of your cluster locally and learn about the essential set of tools you should have. The third chapter outlines best practices for building applications on the Kubernetes platform. Most of the presented requirements are supported by simple examples and explanations of the benefits of meeting them. The fourth chapter presents the most valuable tools for the inner development loop with Kubernetes. After reading the first four chapters, you will understand the main Kubernetes components related to application management, enabling you to navigate the platform efficiently. You’ll also learn to leverage Spring Boot and Quarkus features to adapt your application to Kubernetes requirements.
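As a flavor of that first chapter's approach, here is a minimal sketch (my own illustration, not an excerpt from the book): assuming the quarkus-kubernetes and quarkus-container-image-jib extensions are added to the project, a couple of properties let Quarkus generate the manifests, build the image, and deploy it without hand-written YAML or a Dockerfile.

```properties
# application.properties — illustrative sketch, assumes the quarkus-kubernetes
# and quarkus-container-image-jib extensions are on the classpath
quarkus.container-image.build = true
# generate Kubernetes manifests and apply them to the current cluster context
quarkus.kubernetes.deploy = true
```

With these properties set, running mvn clean package builds the container image and deploys the application to the cluster configured in your current kubeconfig context.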

In the following chapters, I will focus on the benefits of migrating applications to Kubernetes. The first area to cover is security. Chapter five discusses mechanisms and tools for securing applications running in a cluster. Chapter six describes Spring and Quarkus projects that enable native integration with the Kubernetes API from within applications. In chapter seven, you’ll learn about the service mesh tool and the benefits of using it to manage HTTP traffic between microservices. Chapter eight addresses the performance and scalability of Java applications in a Kubernetes environment. The next chapter demonstrates how to design a CI/CD process that runs entirely within the cluster, leveraging Kubernetes-native tools for pipeline building and the GitOps approach. This book also covers AI. In the final chapter, you’ll learn how to run a simple Java application that integrates with an AI model deployed on Kubernetes.

Publication

I decided to publish my book on Leanpub. Leanpub is a platform for writing, publishing, and selling books, especially popular among technical content authors. I previously published a book with Packt, but honestly, I was alone during the writing process. Leanpub is similar but offers several key advantages over publishers like Packt. First, it allows you to update content collaboratively with readers and keep it current. Even though my book is finished, I don’t rule out adding more chapters, such as on AI on Kubernetes. I also look forward to your feedback and plan to improve the content and examples in the repository continuously. Overall, this has been another exciting experience related to publishing technical content.

And when you buy such a book, you can be sure that most of the royalties go to me as the author, unlike with other publishers, where most of the royalties go to them as promoters. So, I’m looking forward to improving my book with you!

Conclusion

My book aims to bring together all the most interesting elements surrounding Java application development on Kubernetes. It is intended not only for developers but also for architects and DevOps teams who want to move to the Kubernetes platform.

MCP with Quarkus LangChain4j
https://piotrminkowski.com/2025/11/24/mcp-with-quarkus-langchain4j/
Mon, 24 Nov 2025

This article shows how to use Quarkus LangChain4j support for MCP (Model Context Protocol) on both the server and client sides. You will learn how to serve tools and prompts on the server side and discover them in the Quarkus MCP client-side application. The Model Context Protocol is a standard for managing contextual interactions with AI models. It provides a standardized way to connect AI models to external data sources and tools. It can help with building complex workflows on top of LLMs.

This article is the next part of a series describing some of the Quarkus AI project’s most notable features. Before reading it, I recommend checking out the previous parts of the tutorial.

You can also compare Quarkus’ support for MCP with the similar support in Spring AI, which I described in another article on my blog.

Source Code

Feel free to use my source code if you’d like to try it out yourself. To do that, clone my sample GitHub repository and follow my instructions.

Architecture for the Quarkus MCP Scenario

Let’s start with a diagram of our application architecture. Two Quarkus applications act as MCP servers. They connect to an in-memory database and use Quarkus LangChain4j MCP Server support to expose @Tool methods to the MCP client-side app. The client-side app communicates with the OpenAI model. It includes the tools exposed by the server-side apps in the user query to the AI model. The person-mcp-server app provides @Tool methods for searching persons in the database table. The account-mcp-server does the same for the persons’ accounts.

(Diagram: architecture of the Quarkus MCP scenario)

Build an MCP Server with Quarkus

Both MCP server applications are similar. They connect to the H2 database via the Quarkus Panache ORM extension. Both provide MCP API via Server-Sent Events (SSE) transport. Here’s a list of required Maven dependencies:

<dependencies>
  <dependency>
    <groupId>io.quarkiverse.mcp</groupId>
    <artifactId>quarkus-mcp-server-sse</artifactId>
  </dependency>
  <dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-hibernate-orm-panache</artifactId>
  </dependency>
  <dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-jdbc-h2</artifactId>
  </dependency>
  <dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-junit5</artifactId>
    <scope>test</scope>
  </dependency>
</dependencies>
XML

Let’s start with the person-mcp-server application. Here’s the @Entity class for interacting with the person table. It uses Panache support to avoid the need for getter and setter declarations.

@Entity
public class Person extends PanacheEntityBase {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    public Long id;
    public String firstName;
    public String lastName;
    public int age;
    public String nationality;
    @Enumerated(EnumType.STRING)
    public Gender gender;

}
Java

The PersonRepository class contains a single method for searching persons by their nationality:

@ApplicationScoped
public class PersonRepository implements PanacheRepository<Person> {

    public List<Person> findByNationality(String nationality) {
        return find("nationality", nationality).list();
    }

}
Java

Next, prepare the “tools service” that searches for a single person by ID or a list of people of a given nationality in the database. Each method must be annotated with @Tool and include a description in the description field. Quarkus LangChain4j does not allow a Java List to be returned, so we need to wrap it using a dedicated Persons object.

@ApplicationScoped
public class PersonTools {

    PersonRepository personRepository;

    public PersonTools(PersonRepository personRepository) {
        this.personRepository = personRepository;
    }

    @Tool(description = "Find person by ID")
    public Person getPersonById(
            @ToolArg(description = "Person ID") Long id) {
        return personRepository.findById(id);
    }

    @Tool(description = "Find all persons by nationality")
    public Persons getPersonsByNationality(
            @ToolArg(description = "Nationality") String nationality) {
        return new Persons(personRepository.findByNationality(nationality));
    }
}
Java

Here’s our List<Person> wrapper:

public class Persons {

    private List<Person> persons;

    public Persons(List<Person> persons) {
        this.persons = persons;
    }

    public List<Person> getPersons() {
        return persons;
    }

    public void setPersons(List<Person> persons) {
        this.persons = persons;
    }
}
Java
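As a side note, on a recent Java version the wrapper above can be written more concisely as a record (my suggestion, not the repository’s actual code) — records generate the constructor and accessor automatically, and Jackson can serialize them:

```java
import java.util.List;

public class Main {

    // Simplified stand-in for the Panache entity, for demonstration only
    record Person(Long id, String firstName) { }

    // Record-based alternative to the mutable Persons wrapper class
    record Persons(List<Person> persons) { }

    public static void main(String[] args) {
        Persons p = new Persons(List.of(new Person(1L, "Anna")));
        // Records expose an accessor named after the component
        System.out.println(p.persons().size());
    }
}
```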

The implementation of the account-mcp-server application is essentially very similar. Here’s the @Entity class for interacting with the account table. It uses Panache support to avoid the need for getter and setter declarations.

@Entity
public class Account extends PanacheEntityBase {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    public Long id;
    public String number;
    public int balance;
    public Long personId;

}
Java

The AccountRepository class contains a single method for searching accounts by person ID:

@ApplicationScoped
public class AccountRepository implements PanacheRepository<Account> {

    public List<Account> findByPersonId(Long personId) {
        return find("personId", personId).list();
    }

}
Java

Once again, the list inside the “tools service” must be wrapped by the dedicated object. The single method annotated with @Tool returns a list of accounts assigned to a given person.

@ApplicationScoped
public class AccountTools {

    AccountRepository accountRepository;

    public AccountTools(AccountRepository accountRepository) {
        this.accountRepository = accountRepository;
    }

    @Tool(description = "Find all accounts by person ID")
    public Accounts getAccountsByPersonId(
            @ToolArg(description = "Person ID") Long personId) {
        return new Accounts(accountRepository.findByPersonId(personId));
    }

}
Java

The person-mcp-server starts on port 8082, while the account-mcp-server listens on port 8081. To change the default HTTP port, use the quarkus.http.port property in your application.properties file.
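For reference, the port override for the person-mcp-server looks like this in its application.properties:

```properties
# person-mcp-server: override the default 8080 HTTP port
quarkus.http.port = 8082
```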

Build an MCP Client with Quarkus

Our application interacts with the OpenAI chat model, so we must include the Quarkus LangChain4j OpenAI extension. In turn, to integrate the client-side application with MCP-compliant servers, we need to include the quarkus-langchain4j-mcp extension.

<dependencies>
  <dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-rest-jackson</artifactId>
  </dependency>
  <dependency>
    <groupId>io.quarkiverse.langchain4j</groupId>
    <artifactId>quarkus-langchain4j-openai</artifactId>
    <version>${quarkus.langchain4j.version}</version>
  </dependency>
  <dependency>
    <groupId>io.quarkiverse.langchain4j</groupId>
    <artifactId>quarkus-langchain4j-mcp</artifactId>
    <version>${quarkus.langchain4j.version}</version>
  </dependency>
</dependencies>
XML

The sample-client Quarkus app interacts with both the person-mcp-server and the account-mcp-server. Therefore, it defines two AI services. As in a standard Quarkus AI application, those services must be annotated with @RegisterAiService. Then we define methods with prompt templates, using the @UserMessage or @SystemMessage annotations. If a given method is to use one of the MCP servers, it must be annotated with @McpToolBox. The name inside the annotation corresponds to the MCP server name set in the configuration properties. The PersonService AI service visible below uses the person-service MCP server.

@ApplicationScoped
@RegisterAiService
public interface PersonService {

    @SystemMessage("""
        You are a helpful assistant that generates realistic person data.
        Always respond with valid JSON format.
        """)
    @UserMessage("""
        Find persons with {nationality} nationality.
        Output **only valid JSON**, no explanations, no markdown, no ```json blocks.
        """)
    @McpToolBox("person-service")
    Persons findByNationality(String nationality);

    @SystemMessage("""
        You are a helpful assistant that generates realistic person data.
        Always respond with valid JSON format.
        """)
    @UserMessage("How many persons come from {nationality} ?")
    @McpToolBox("person-service")
    int countByNationality(String nationality);

}
Java

The AI service shown above corresponds to this configuration. You need to specify the MCP server’s name and address. As you remember, person-mcp-server listens on port 8082. The client application uses the name person-service here, and the standard endpoint to MCP SSE for Quarkus is /mcp/sse. To explore the solution itself, it is also worth enabling logging of MCP requests and responses.

quarkus.langchain4j.mcp.person-service.transport-type = http
quarkus.langchain4j.mcp.person-service.url = http://localhost:8082/mcp/sse
quarkus.langchain4j.mcp.person-service.log-requests = true
quarkus.langchain4j.mcp.person-service.log-responses = true
Plaintext

Here is a similar implementation for the AccountService AI service. It interacts with the MCP server configured under the account-service name.

@ApplicationScoped
@RegisterAiService
public interface AccountService {

    @SystemMessage("""
        You are a helpful assistant that generates realistic data.
        Return a single number.
        """)
    @UserMessage("How many accounts has person with {personId} ID ?")
    @McpToolBox("account-service")
    int countByPersonId(int personId);

    @UserMessage("""
        How many accounts has person with {personId} ID ?
        Return person name, nationality and a total balance on his/her accounts.
        """)
    @McpToolBox("account-service")
    String balanceByPersonId(int personId);

}
Java

Here’s the corresponding configuration for that service. No surprises.

quarkus.langchain4j.mcp.account-service.transport-type = http
quarkus.langchain4j.mcp.account-service.url = http://localhost:8081/mcp/sse
quarkus.langchain4j.mcp.account-service.log-requests = true
quarkus.langchain4j.mcp.account-service.log-responses = true
Plaintext

Finally, we must provide some configuration to integrate our Quarkus application with the OpenAI chat model. It assumes that the OpenAI token is available as the OPEN_AI_TOKEN environment variable.

quarkus.langchain4j.chat-model.provider = openai
quarkus.langchain4j.log-requests = true
quarkus.langchain4j.log-responses = true
quarkus.langchain4j.openai.api-key = ${OPEN_AI_TOKEN}
quarkus.langchain4j.openai.timeout = 20s
Plaintext

We can test individual AI services by calling endpoints provided by the client-side application. There are two endpoints, POST /accounts/count-by-person-id/{personId} and POST /accounts/balance-by-person-id/{personId}, which use LLM prompts to calculate the number of accounts and the total balance of all accounts belonging to a given person.

@Path("/accounts")
public class AccountResource {

    private final AccountService accountService;

    public AccountResource(AccountService accountService) {
        this.accountService = accountService;
    }

    @POST
    @Path("/count-by-person-id/{personId}")
    public int countByPersonId(int personId) {
        return accountService.countByPersonId(personId);
    }

    @POST
    @Path("/balance-by-person-id/{personId}")
    public String balanceByPersonId(int personId) {
        return accountService.balanceByPersonId(personId);
    }

}
Java

MCP for Prompts

MCP servers can also provide other functionalities beyond just tools. Let’s go back to the person-mcp-server app for a moment. To share a prompt message, you can create a class that defines methods returning the PromptMessage object. Then, we must annotate such methods with @Prompt, and their arguments with @PromptArg.

@ApplicationScoped
public class PersonPrompts {

    final String findByNationalityPrompt = """
        Find persons with {nationality} nationality.
        Output **only valid JSON**, no explanations, no markdown, no ```json blocks.
        """;

    @Prompt(description = "Find by nationality.")
    PromptMessage findByNationalityPrompt(@PromptArg(description = "The nationality") String nationality) {
        return PromptMessage.withUserRole(new TextContent(findByNationalityPrompt));
    }

}
Java

Once we start the application, we can use Quarkus Dev UI to verify a list of provided tools and prompts.

Client-side integration with MCP prompts is a bit more complex than with tools. We must inject the McpClient into a resource controller to load a given prompt programmatically using its name.

@Path("/persons")
public class PersonResource {

    @McpClientName("person-service")
    McpClient mcpClient;
    
    // OTHER METHODS...
    
    @POST
    @Path("/nationality-with-prompt/{nationality}")
    public List<Person> findByNationalityWithPrompt(String nationality) {
        Persons p = personService.findByNationalityWithPrompt(loadPrompt(nationality), nationality);
        return p.getPersons();
    }
    
    private String loadPrompt(String nationality) {
        McpGetPromptResult prompt = mcpClient.getPrompt("findByNationalityPrompt", Map.of("nationality", nationality));
        return ((TextContent) prompt.messages().getFirst().content().toContent()).text();
    }
}
Java

In this case, the Quarkus AI service should not define @UserMessage on the entire method, but on a method argument. The prompt message is then loaded from the MCP server and filled with the nationality parameter value before being sent to the AI model.

@ApplicationScoped
@RegisterAiService
public interface PersonService {

    // OTHER METHODS...
    
    @SystemMessage("""
        You are a helpful assistant that generates realistic person data.
        Always respond with valid JSON format.
        """)
    @McpToolBox("person-service")
    Persons findByNationalityWithPrompt(@UserMessage String userMessage, String nationality);

}
Java

Testing MCP Tools with Quarkus

Quarkus provides a dedicated module for testing MCP tools. We can use it after including the following dependency in the Maven pom.xml:

<dependency>
  <groupId>io.quarkiverse.mcp</groupId>
  <artifactId>quarkus-mcp-server-test</artifactId>
  <scope>test</scope>
</dependency>
XML

The following test verifies the MCP tool methods provided by the person-mcp-server application. The McpAssured class allows us to use SSE, streamable, and WebSocket test clients. To create a new client for SSE, invoke the newConnectedSseClient() static method. After that, we can use one of several available variants of the toolsCall(...) method to verify the response returned by a given @Tool.

@QuarkusTest
public class PersonToolsTest {

    ObjectMapper mapper = new ObjectMapper();

    @Test
    public void testGetPersonsByNationality() {
        McpAssured.McpSseTestClient client = McpAssured.newConnectedSseClient();
        client.when()
                .toolsCall("getPersonsByNationality", Map.of("nationality", "Denmark"),
                        r -> {
                            try {
                                Persons p = mapper.readValue(r.content().getFirst().asText().text(), Persons.class);
                                assertFalse(p.getPersons().isEmpty());
                            } catch (JsonProcessingException e) {
                                throw new RuntimeException(e);
                            }
                        })
                .thenAssertResults();
    }

    @Test
    public void testGetPersonById() {
        McpAssured.McpSseTestClient client = McpAssured.newConnectedSseClient();
        client.when()
                .toolsCall("getPersonById", Map.of("id", 10),
                        r -> {
                            try {
                                Person p = mapper.readValue(r.content().getFirst().asText().text(), Person.class);
                                assertNotNull(p);
                                assertNotNull(p.id);
                            } catch (JsonProcessingException e) {
                                throw new RuntimeException(e);
                            }
                        })
                .thenAssertResults();
    }
}
Java

Running Quarkus Applications

Finally, let’s run all our Quarkus applications. Go to the mcp/account-mcp-server directory and run the application in development mode:

$ cd mcp/account-mcp-server
$ mvn quarkus:dev
ShellSession

Then do the same for the person-mcp-server application.

$ cd mcp/person-mcp-server
$ mvn quarkus:dev
ShellSession

Before running the last sample-client application, export the OpenAI API token as the OPEN_AI_TOKEN environment variable.

$ cd mcp/sample-client
$ export OPEN_AI_TOKEN=<YOUR_OPENAI_TOKEN>
$ mvn quarkus:dev
ShellSession

We can verify a list of tools or prompts exposed by each MCP server application by visiting its Quarkus Dev UI console. It provides a dedicated “MCP Server tile.”

(Screenshot: the MCP Server tile in the Quarkus Dev UI)

Here’s a list of tools provided by the person-mcp-server app via Quarkus Dev UI.

Then, we can switch to the sample-client Dev UI console. We can verify and test all interactions with the MCP servers from our client-side app.

(Screenshot: MCP client view in the sample-client Dev UI)

Once all the sample applications are running, we can test the MCP communication by calling the HTTP endpoints exposed by the sample-client app. Both person-mcp-server and account-mcp-server load some test data on startup using the import.sql file. Here are the test API calls for all the REST endpoints.

$ curl -X POST http://localhost:8080/persons/nationality/Denmark
$ curl -X POST http://localhost:8080/persons/count-by-nationality/Denmark
$ curl -X POST http://localhost:8080/persons/nationality-with-prompt/Denmark
$ curl -X POST http://localhost:8080/accounts/count-by-person-id/2
$ curl -X POST http://localhost:8080/accounts/balance-by-person-id/2
ShellSession

Conclusion

With Quarkus, creating applications that use MCP is not difficult. If you understand the idea of tool calling in AI, the MCP-based approach will feel familiar. This article showed how to connect your application to several MCP servers, implement tests that verify the tools and prompts exposed over MCP, and explore them in the Quarkus Dev UI.

Quarkus with Buildpacks and OpenShift Builds
https://piotrminkowski.com/2025/11/19/quarkus-with-buildpacks-and-openshift-builds/
Wed, 19 Nov 2025

In this article, you will learn how to build Quarkus application images using Cloud Native Buildpacks and OpenShift Builds. Some time ago, I published a blog post about building with OpenShift Builds based on the Shipwright project. At that time, Cloud Native Buildpacks were not supported at the OpenShift Builds level. It was only supported in the community project. I demonstrated how to add the appropriate build strategy yourself and use it to build an image for a Spring Boot application. However, OpenShift Builds, since version 1.6, support building with Cloud Native Buildpacks. Currently, Quarkus, Go, Node.js, and Python are supported. In this article, we will focus on Quarkus and also examine the built-in support for Buildpacks within Quarkus itself.

Source Code

Feel free to use my source code if you’d like to try it out yourself. To do that, clone my sample GitHub repository and follow my instructions.

Quarkus Buildpacks Extension

Recently, support for Cloud Native Buildpacks in Quarkus has been significantly enhanced. Here you can access the repository containing the source code for the Paketo Quarkus buildpack. To implement this solution, add one dependency to your application.

<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-container-image-buildpack</artifactId>
</dependency>
XML

Next, run the build command with Maven and activate the quarkus.container-image.build parameter. Also, set the appropriate Java version needed for your application. For the sample Quarkus application in this article, the Java version is 21.

mvn clean package \
  -Dquarkus.container-image.build=true \
  -Dquarkus.buildpack.builder-env.BP_JVM_VERSION=21
ShellSession

To build, you need Docker or Podman running. In the build output, you can see that Quarkus uses, among others, the Paketo Quarkus buildpack mentioned earlier.

The new image is now available for use. Note that Cloud Native Buildpacks set a fixed creation timestamp to keep builds reproducible, which is why Docker reports the image as created “45 years ago”.

$ docker images sample-quarkus/person-service:1.0.0-SNAPSHOT
REPOSITORY                      TAG              IMAGE ID       CREATED        SIZE
sample-quarkus/person-service   1.0.0-SNAPSHOT   e0b58781e040   45 years ago   160MB
ShellSession

Quarkus with OpenShift Builds Shipwright

Install the OpenShift Builds Operator

Now, we will move the image building process to the OpenShift cluster. OpenShift offers built-in support for creating container images directly within the cluster through OpenShift Builds, using the BuildConfig solution. For more details, please refer to my previous article. However, in this article, we explore a new technology for building container images called OpenShift Builds with Shipwright. To enable this solution on OpenShift, you need to install the following operator.
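If you prefer a declarative installation over the OperatorHub UI, the operator can be installed with an OLM Subscription object. The sketch below is my assumption, not taken from the article: the package name, channel, and catalog source are guesses, so verify them against your cluster’s OperatorHub catalog before applying.

```yaml
# Sketch only — package, channel, and source names are assumptions;
# check them in your cluster's OperatorHub catalog.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-builds-operator
  namespace: openshift-operators
spec:
  channel: latest
  name: openshift-builds-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```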

After installing this operator, you will see a new item in the “Build” menu called “Shipwright”. Switch to it, then select the “ClusterBuildStrategies” tab. There are two strategies on the list designed for Cloud Native Buildpacks. We are interested in the buildpacks strategy.

Create and Run Build with Shipwright

Finally, we can create the Shipwright Build object. It contains three sections. In the first step, we define the address of the container image repository where we will push our output image. For simplicity, we will use the internal registry provided by the OpenShift cluster itself. In the source section, we specify the repository address where the application source code is located. In the last section, we need to set the build strategy. We chose the previously mentioned buildpacks strategy for Cloud Native Buildpacks. Some parameters need to be set for the buildpacks strategy: run-image and cnb-builder-image. The cnb-builder-image indicates the name of the builder image containing the buildpacks. The run-image refers to a base image used to run the application. We will also activate the buildpacks Maven profile during the build to set the Quarkus property that switches from fast-jar to uber-jar packaging.

apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: buildpack-quarkus-build
spec:
  env:
    - name: BP_JVM_VERSION
      value: '21'
  output:
    image: 'image-registry.openshift-image-registry.svc:5000/builds/sample-quarkus-microservice:1.0'
  paramValues:
    - name: run-image
      value: 'paketobuildpacks/run-java-21-ubi9-base:latest'
    - name: cnb-builder-image
      value: 'paketobuildpacks/builder-jammy-java-tiny:latest'
    - name: env-vars
      values:
        - value: BP_MAVEN_ADDITIONAL_BUILD_ARGUMENTS=-Pbuildpacks
  retention:
    atBuildDeletion: true
  source:
    git:
      url: 'https://github.com/piomin/sample-quarkus-microservice.git'
    type: Git
  strategy:
    kind: ClusterBuildStrategy
    name: buildpacks
YAML

Here’s the Maven buildpacks profile that sets a single Quarkus property quarkus.package.jar.type. We must change it to uber-jar, because the paketobuildpacks/builder-jammy-java-tiny builder expects a single jar instead of the multi-folder layout used by the default fast-jar format. Of course, I would prefer to use the paketocommunity/builder-ubi-base builder, which can recognize the fast-jar format. However, at this time, it does not function correctly with OpenShift Builds.

<profiles>
  <profile>
    <id>buildpacks</id>
    <activation>
      <property>
        <name>buildpacks</name>
      </property>
    </activation>
    <properties>
      <quarkus.package.jar.type>uber-jar</quarkus.package.jar.type>
    </properties>
  </profile>
</profiles>
XML
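For context, the property above switches between two very different build outputs. Roughly, the layouts look like this (exact paths may vary slightly between Quarkus versions):

```text
# fast-jar (default): a multi-directory layout under target/quarkus-app/
target/quarkus-app/quarkus-run.jar   # thin launcher jar
target/quarkus-app/lib/              # dependency jars
target/quarkus-app/app/              # application classes

# uber-jar: one self-contained, executable archive
target/<artifact-name>-runner.jar
```

This is why a builder that expects a single executable jar, like builder-jammy-java-tiny, requires the uber-jar format.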

To start the build, you can use the OpenShift console or execute the following command:

shp build run buildpack-quarkus-build --follow
ShellSession
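If you prefer a declarative approach over the shp CLI, a build can also be started by creating a BuildRun object that references the Build by name. Here is a sketch based on the shipwright.io/v1beta1 API (it's worth verifying the field names against the operator version you installed):

```yaml
apiVersion: shipwright.io/v1beta1
kind: BuildRun
metadata:
  generateName: buildpack-quarkus-build-
spec:
  build:
    name: buildpack-quarkus-build
```

Applying it, for example with oc create -f, starts a new run of the buildpack-quarkus-build build.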

We can switch to the OpenShift Console. As you can see, our build is running.

The history of such builds is available on OpenShift. You can also review the build logs.

Finally, you should see your image in the list of OpenShift internal image streams.

$ oc get imagestream
NAME                          IMAGE REPOSITORY                                                                                                    TAGS                UPDATED
sample-quarkus-microservice   default-route-openshift-image-registry.apps.pminkows.95az.p1.openshiftapps.com/builds/sample-quarkus-microservice   1.2,0.0.1,1.1,1.0   13 hours ago
ShellSession

Conclusion

OpenShift Builds with Shipwright lets you perform the entire application image build process on the OpenShift cluster in a standardized manner. Cloud Native Buildpacks is a popular mechanism for building images without writing a Dockerfile yourself. In this case, support for Buildpacks on the OpenShift side is an interesting alternative to the Source-to-Image approach.

The post Quarkus with Buildpacks and OpenShift Builds appeared first on Piotr's TechBlog.

AI Tool Calling with Quarkus LangChain4j https://piotrminkowski.com/2025/06/23/ai-tool-calling-with-quarkus-langchain4j/ https://piotrminkowski.com/2025/06/23/ai-tool-calling-with-quarkus-langchain4j/#comments Mon, 23 Jun 2025 05:19:14 +0000 https://piotrminkowski.com/?p=15757 This article will show you how to use Quarkus LangChain4j AI support with the most popular chat models for the “tool calling” feature. Tool calling (sometimes referred to as function calling) is a typical pattern in AI applications that enables a model to interact with APIs or tools, extending its capabilities. The most popular AI […]

The post AI Tool Calling with Quarkus LangChain4j appeared first on Piotr's TechBlog.

This article will show you how to use Quarkus LangChain4j AI support with the most popular chat models for the “tool calling” feature. Tool calling (sometimes referred to as function calling) is a typical pattern in AI applications that enables a model to interact with APIs or tools, extending its capabilities. The most popular AI models are trained to know when to call a function. The Quarkus LangChain4j extension offers built-in support for tool calling. In this article, you will learn how to define tool methods that get data from third-party APIs and an internal database.

This article is the second part of a series describing some of the Quarkus AI project’s most notable features. Before reading on, I recommend checking out my introduction to Quarkus LangChain4j, which is available here. The first part describes such features as prompts, structured output, and chat memory. There is also a similar tutorial series about Spring AI. You can compare Quarkus support for tool calling described here with a similar Spring AI support described in the following post.

Source Code

Feel free to use my source code if you’d like to try it out yourself. To do that, clone my sample GitHub repository and simply follow my instructions.

Tool Calling Motivation

For ease of comparison, this article will implement an identical scenario to an analogous application written in Spring AI. You can find a GitHub sample repository with the Spring AI app here. As you know, the “tool calling” feature helps us solve a common AI model challenge related to internal or live data sources. If we want to augment a model with such data, our applications must allow it to interact with a set of APIs or tools. In our case, the internal database (H2) contains information about the structure of our stock wallet. The sample Quarkus application asks an AI model about the total value of the wallet based on daily stock prices or the highest value for the last few days. The model must retrieve the structure of our stock wallet and the latest stock prices.

Use Tool Calling with Quarkus LangChain4j

Create ShareTools

Let’s begin with the ShareTools implementation, which is responsible for getting a list of the wallet’s shares from a database. It defines a single method annotated with @Tool. The most crucial element here is to provide a clear description of the method within the @Tool annotation. It allows the AI model to understand the function’s responsibilities. The method returns the number of shares for each company in our portfolio. It is retrieved from the database through the Quarkus Panache ORM repository.

@ApplicationScoped
public class ShareTools {

    private ShareRepository shareRepository;

    public ShareTools(ShareRepository shareRepository) {
        this.shareRepository = shareRepository;
    }

    @Tool("Return number of shares for each company in my wallet")
    public List<Share> getNumberOfShares() {
        return shareRepository.findAll().list();
    }
}
Java

The sample application launches an embedded, in-memory database and inserts test data into the stock table. Our wallet contains the most popular companies on the U.S. stock market, including Amazon, Meta, and Microsoft. Here’s a dataset inserted on application startup.

insert into share(id, company, quantity) values (1, 'AAPL', 100);
insert into share(id, company, quantity) values (2, 'AMZN', 300);
insert into share(id, company, quantity) values (3, 'META', 300);
insert into share(id, company, quantity) values (4, 'MSFT', 400);
SQL

Create StockTools

The StockTools class is responsible for interacting with the TwelveData stock API. It defines two methods. The getLatestStockPrices method returns only the latest close price for a specified company. It is a tool-calling version of the method provided within the pl.piomin.services.functions.stock.StockService function. The second method is more complicated. It must return historical daily close prices for a defined number of days. Each price must be correlated with a quotation date.

@ApplicationScoped
public class StockTools {

    private Logger log;
    private StockDataClient stockDataClient;

    public StockTools(@RestClient StockDataClient stockDataClient, Logger log) {
        this.stockDataClient = stockDataClient;
        this.log = log;
    }
    
    @ConfigProperty(name = "STOCK_API_KEY", defaultValue = "none")
    String apiKey;

    @Tool("Return latest stock prices for a given company")
    public StockResponse getLatestStockPrices(String company) {
        log.infof("Get stock prices for: %s", company);
        StockData data = stockDataClient.getStockData(company, apiKey, "1min", 1);
        DailyStockData latestData = data.getValues().get(0);
        log.infof("Get stock prices (%s) -> %s", company, latestData.getClose());
        return new StockResponse(Float.parseFloat(latestData.getClose()));
    }

    @Tool("Return historical daily stock prices for a given company")
    public List<DailyShareQuote> getHistoricalStockPrices(String company, int days) {
        log.infof("Get historical stock prices: %s for %d days", company, days);
        StockData data = stockDataClient.getStockData(company, apiKey, "1min", days);
        return data.getValues().stream()
                .map(d -> new DailyShareQuote(company, Float.parseFloat(d.getClose()), d.getDatetime()))
                .toList();
    }

}
Java

Here’s the DailyShareQuote Java record returned in the response list.

public record DailyShareQuote(String company, float price, String datetime) {
}
Java

Here’s a @RestClient responsible for calling the TwelveData stock API.

@RegisterRestClient(configKey = "stock-api")
public interface StockDataClient {

    @GET
    @Path("/time_series")
    StockData getStockData(@RestQuery String symbol,
                           @RestQuery String apikey,
                           @RestQuery String interval,
                           @RestQuery int outputsize);
}
Java
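For reference, the time_series endpoint returns JSON roughly in the following shape, which the client maps onto the StockData and DailyStockData DTOs (field values here are illustrative; consult the TwelveData API docs for the authoritative schema):

```json
{
  "meta": { "symbol": "AAPL", "interval": "1min" },
  "values": [
    {
      "datetime": "2025-06-20 15:59:00",
      "open": "150.10",
      "high": "150.40",
      "low": "150.00",
      "close": "150.25",
      "volume": "1000"
    }
  ],
  "status": "ok"
}
```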

For the demo, you can easily enable complete logging of both communication with the AI model through LangChain4j and with the stock API via @RestClient.

quarkus.langchain4j.log-requests = true
quarkus.langchain4j.log-responses = true
quarkus.rest-client.stock-api.url = https://api.twelvedata.com
quarkus.rest-client.logging.scope = request-response
quarkus.rest-client.stock-api.scope = all
%dev.quarkus.log.category."org.jboss.resteasy.reactive.client.logging".level = DEBUG
Plaintext

Quarkus LangChain4j Tool Calling Flow

You can easily register @Tools on your Quarkus AI service with the tools argument inside the @RegisterAiService annotation. The calculateWalletValueWithTools() method calculates the value of our stock wallet in dollars. It uses the latest daily stock prices for each company’s shares from the wallet. Since this method directly returns the response received from the AI model, it is essential to perform additional validation of the content received. For this purpose, a so-called guardrail should be implemented and set in place. We can easily achieve it with the @OutputGuardrails annotation. The calculateHighestWalletValue method calculates the value of our stock wallet in dollars for each day in the specified period determined by the days variable. Then it must return the day with the highest stock wallet value.

@RegisterAiService(tools = {StockTools.class, ShareTools.class})
public interface WalletAiService {

    @UserMessage("""
    What’s the current value in dollars of my wallet based on the latest stock daily prices ?
    
    Return subtotal value in dollars for each company in my wallet.
    In the end, return the total value in dollars wrapped by ***.
    """)
    @OutputGuardrails(WalletGuardrail.class)
    String calculateWalletValueWithTools();

    @UserMessage("""
    On which day during last {days} days my wallet had the highest value in dollars based on the historical daily stock prices ?
    """)
    String calculateHighestWalletValue(int days);
}
Java

Here’s the implementation of the guardrail that validates the response returned by the calculateWalletValueWithTools method. It verifies if the total value in dollars is wrapped by *** and starts with the $ sign.

@ApplicationScoped
public class WalletGuardrail implements OutputGuardrail {

    Pattern pattern = Pattern.compile("\\*\\*\\*(.*?)\\*\\*\\*");

    private Logger log;
    
    public WalletGuardrail(Logger log) {
        this.log = log;
    }

    @Override
    public OutputGuardrailResult validate(AiMessage responseFromLLM) {
        try {
            Matcher matcher = pattern.matcher(responseFromLLM.text());
            if (matcher.find()) {
                String amount = matcher.group(1);
                log.infof("Extracted amount: %s", amount);
                if (amount.startsWith("$")) {
                    return success();
                }
            }
        } catch (Exception e) {
            return reprompt("Invalid text format", e, "Make sure you return a valid requested text");
        }
        return failure("Total amount not found");
    }
}
Java
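The extraction logic above can be exercised outside Quarkus. Here is a minimal plain-Java sketch (class and method names are mine, not part of the repository) showing how the pattern isolates the total wrapped by *** and checks for the leading $ sign:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Standalone illustration of the guardrail's extraction logic.
public class WalletGuardrailSketch {

    private static final Pattern PATTERN = Pattern.compile("\\*\\*\\*(.*?)\\*\\*\\*");

    // Returns the amount wrapped by *** if it starts with '$', otherwise null.
    public static String extractAmount(String llmResponse) {
        Matcher matcher = PATTERN.matcher(llmResponse);
        if (matcher.find()) {
            String amount = matcher.group(1);
            if (amount.startsWith("$")) {
                return amount;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        // A response in the expected format passes validation...
        System.out.println(extractAmount("Total wallet value: ***$246,400.00***"));
        // ...while a missing '$' sign or missing *** wrapper fails.
        System.out.println(extractAmount("Total wallet value: 246400"));
    }
}
```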

Here’s the REST endpoints implementation. It uses the WalletAiService bean to interact with the AI model. It exposes two endpoints: GET /wallet/with-tools and GET /wallet/highest-day/{days}.

@Path("/wallet")
@Produces(MediaType.TEXT_PLAIN)
public class WalletController {

    private final WalletAiService walletAiService;

    public WalletController(WalletAiService walletAiService) {
        this.walletAiService = walletAiService;
    }

    @GET
    @Path("/with-tools")
    public String calculateWalletValueWithTools() {
        return walletAiService.calculateWalletValueWithTools();
    }

    @GET
    @Path("/highest-day/{days}")
    public String calculateHighestWalletValue(int days) {
        return walletAiService.calculateHighestWalletValue(days);
    }

}
Java

The following diagram illustrates the flow for the second use case, which returns the day with the highest stock wallet value. First, the model must connect to the database and retrieve the stock wallet structure, which contains the number of shares for each company. Then, it must call the stock API for every company found in the wallet. So, finally, the getHistoricalStockPrices tool method should be called four times with different values of the company parameter and the days value determined by the HTTP endpoint path variable. Once all the data is collected, the AI model calculates the highest wallet value and returns it together with the quotation date.

[Image: quarkus-tool-calling-arch]

Automated Testing

Most of my repositories are automatically updated to the latest versions of libraries. After updating the library version, automated tests are run to verify that everything works as expected. To verify the correctness of today’s scenario, we will mock stock API calls while integrating with the actual OpenAI service. To mock API calls, you can use the quarkus-junit5-mockito extension.

<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-junit5</artifactId>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-junit5-mockito</artifactId>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>io.rest-assured</groupId>
  <artifactId>rest-assured</artifactId>
  <scope>test</scope>
</dependency>
XML

The following JUnit test verifies two endpoints exposed by WalletController. As you may remember, there is also an output guardrail set on the AI service called by the GET /wallet/with-tools endpoint.

@QuarkusTest
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
class WalletControllerTest {

    @InjectMock
    @RestClient
    StockDataClient stockDataClient;

    @BeforeEach
    void setUp() {
        // Mock the stock data responses
        StockData aaplStockData = createMockStockData("AAPL", "150.25");
        StockData amznStockData = createMockStockData("AMZN", "120.50");
        StockData metaStockData = createMockStockData("META", "250.75");
        StockData msftStockData = createMockStockData("MSFT", "300.00");

        // Mock the stock data client responses
        when(stockDataClient.getStockData(eq("AAPL"), anyString(), anyString(), anyInt()))
            .thenReturn(aaplStockData);
        when(stockDataClient.getStockData(eq("AMZN"), anyString(), anyString(), anyInt()))
            .thenReturn(amznStockData);
        when(stockDataClient.getStockData(eq("META"), anyString(), anyString(), anyInt()))
            .thenReturn(metaStockData);
        when(stockDataClient.getStockData(eq("MSFT"), anyString(), anyString(), anyInt()))
            .thenReturn(msftStockData);
    }

    private StockData createMockStockData(String symbol, String price) {
        DailyStockData dailyData = new DailyStockData();
        dailyData.setDatetime("2023-01-01");
        dailyData.setOpen(price);
        dailyData.setHigh(price);
        dailyData.setLow(price);
        dailyData.setClose(price);
        dailyData.setVolume("1000");

        StockData stockData = new StockData();
        stockData.setValues(List.of(dailyData));
        return stockData;
    }

    @Test
    @Order(1)
    void testCalculateWalletValueWithTools() {
        given()
          .when().get("/wallet/with-tools")
          .then().statusCode(200)
                 .contentType(ContentType.TEXT)
                 .body(notNullValue())
                 .body(not(emptyString()));
    }

    @Test
    @Order(2)
    void testCalculateHighestWalletValue() {
        given()
          .pathParam("days", 7)
          .when().get("/wallet/highest-day/{days}")
          .then().statusCode(200)
                 .contentType(ContentType.TEXT)
                 .body(notNullValue())
                 .body(not(emptyString()));
    }
}
Java
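Given the seeded share quantities (AAPL 100, AMZN 300, META 300, MSFT 400) and the mocked prices above, the wallet total the model should arrive at is fully deterministic. Here is a plain-Java sketch (mine, not part of the test suite) computing that expected value:

```java
import java.util.Map;

// Computes the wallet value implied by the seeded quantities and mocked prices.
public class ExpectedWalletValue {

    public static double totalValue(Map<String, Integer> quantities, Map<String, Double> prices) {
        return quantities.entrySet().stream()
                .mapToDouble(e -> e.getValue() * prices.get(e.getKey()))
                .sum();
    }

    public static void main(String[] args) {
        Map<String, Integer> quantities = Map.of("AAPL", 100, "AMZN", 300, "META", 300, "MSFT", 400);
        Map<String, Double> prices = Map.of("AAPL", 150.25, "AMZN", 120.50, "META", 250.75, "MSFT", 300.00);
        // 100*150.25 + 300*120.50 + 300*250.75 + 400*300.00 = 246400.0
        System.out.println(totalValue(quantities, prices));
    }
}
```

An assertion like this could even be added to the JUnit test to check the number embedded in the model's answer, though the LLM's formatting of the amount may vary.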

Tests can be automatically run, for example, by the CircleCI pipeline on each dependency update via the pull request.

Run the Application to Verify Tool Calling

Before starting the application, we must set environment variables with the AI model and stock API tokens.

$ export OPEN_AI_TOKEN=<YOUR_OPEN_AI_TOKEN>
$ export STOCK_API_KEY=<YOUR_STOCK_API_KEY>
ShellSession

Then, run the application in development mode with the following command:

mvn quarkus:dev
ShellSession

Once the application is started, you can call the first endpoint. The GET /wallet/with-tools endpoint calculates the total current value of the stock wallet stored in the database, based on the latest daily stock prices.

curl http://localhost:8080/wallet/with-tools
ShellSession

You can see either the response from the chat AI model or the exception thrown after an unsuccessful validation using a guardrail. If LLM response validation fails, the REST endpoint returns the HTTP 500 code.

[Image: quarkus-tool-calling-guardrail]

Here’s the successfully validated LLM response.

[Image: quarkus-tool-calling-success]

The sample Quarkus application logs the whole communication with the AI model. Here, you can see a first request containing a list of registered functions (tools) along with their descriptions.

[Image: quarkus-tool-calling-logs]

Then we can call the GET /wallet/highest-day/{days} endpoint to return the day with the highest wallet value. Let’s calculate it for the last 7 days.

curl http://localhost:8080/wallet/highest-day/7
ShellSession

Here’s the response.

Finally, you can perform a similar test as before, but with the Mistral AI model. Before running the application, set your API token for Mistral AI and switch the default model provider to mistralai.

$ export MISTRAL_AI_TOKEN=<YOUR_MISTRAL_AI_TOKEN>
$ export AI_MODEL_PROVIDER=mistralai
ShellSession

Then, run the sample Quarkus application with the following command and repeat the same “tool calling” tests as before.

mvn quarkus:dev -Pmistral-ai
ShellSession

Final Thoughts

Quarkus LangChain4j provides a seamless way to run tools in AI-powered conversations. You can register a tool by adding it as a part of the @RegisterAiService annotation. Also, you can easily add a guardrail on the selected AI service method. Tools are a vital part of agentic AI and the MCP concept, so it is essential to understand them properly. You can expect more articles on Quarkus LangChain4j soon, including on MCP.

Getting Started with Quarkus LangChain4j and Chat Model https://piotrminkowski.com/2025/06/18/getting-started-with-quarkus-langchain4j-and-chat-model/ https://piotrminkowski.com/2025/06/18/getting-started-with-quarkus-langchain4j-and-chat-model/#respond Wed, 18 Jun 2025 16:36:08 +0000 https://piotrminkowski.com/?p=15736 This article will teach you how to use the Quarkus LangChain4j project to build applications based on different chat models. The Quarkus AI Chat Model offers a portable and straightforward interface, enabling seamless interaction with these models. Our sample Quarkus application will switch between three popular chat models provided by OpenAI, Mistral AI, and Ollama. […]

The post Getting Started with Quarkus LangChain4j and Chat Model appeared first on Piotr's TechBlog.

This article will teach you how to use the Quarkus LangChain4j project to build applications based on different chat models. The Quarkus AI Chat Model offers a portable and straightforward interface, enabling seamless interaction with these models. Our sample Quarkus application will switch between three popular chat models provided by OpenAI, Mistral AI, and Ollama. This article is the first in a series explaining AI concepts with Quarkus LangChain4j. Look for more on my blog in this area soon. The idea of this tutorial is very similar to the series on Spring AI. Therefore, you will be able to easily compare the two approaches, as the sample application will do the same thing as an analogous Spring Boot application.

If you like Quarkus, then you can find quite a few articles about it on my blog. Just go to the Quarkus category and find the topic you are interested in.

SourceCode

Feel free to use my source code if you’d like to try it out yourself. To do that, clone my sample GitHub repository and simply follow my instructions.

Motivation

Whenever I create a new article or example related to AI, I like to define the problem I’m trying to solve. The problem this example solves is very trivial. I publish numerous small demo apps to explain complex technology concepts. These apps typically require data to display a demo output. Usually, I add demo data by myself or use a library like Datafaker to do it for me. This time, we can leverage the AI Chat Models API for that. Let’s begin!

I have also covered today’s Quarkus-related topic earlier for Spring Boot. For a comparison of the features offered by both frameworks for simple interaction with the AI chat model, you can read this article on Spring AI.

Dependencies

The sample application uses the current latest version of the Quarkus framework.

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>io.quarkus.platform</groupId>
      <artifactId>quarkus-bom</artifactId>
      <version>${quarkus.platform.version}</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
XML

You can easily switch between multiple AI model implementations by activating a dedicated Maven profile. By default, the open-ai profile is active. It includes the quarkus-langchain4j-openai module in the Maven dependencies. You can also activate the mistral-ai or ollama profile. In that case, the quarkus-langchain4j-mistral-ai or quarkus-langchain4j-ollama module will be included instead of the LangChain4j OpenAI extension.

<profiles>
  <profile>
    <id>open-ai</id>
    <activation>
      <activeByDefault>true</activeByDefault>
    </activation>
    <dependencies>
      <dependency>
        <groupId>io.quarkiverse.langchain4j</groupId>
        <artifactId>quarkus-langchain4j-openai</artifactId>
        <version>${quarkus-langchain4j.version}</version>
      </dependency>
    </dependencies>
  </profile>
  <profile>
    <id>mistral-ai</id>
    <dependencies>
      <dependency>
        <groupId>io.quarkiverse.langchain4j</groupId>
        <artifactId>quarkus-langchain4j-mistral-ai</artifactId>
        <version>${quarkus-langchain4j.version}</version>
      </dependency>
    </dependencies>
  </profile>
  <profile>
    <id>ollama</id>
    <dependencies>
      <dependency>
        <groupId>io.quarkiverse.langchain4j</groupId>
        <artifactId>quarkus-langchain4j-ollama</artifactId>
        <version>${quarkus-langchain4j.version}</version>
      </dependency>
    </dependencies>
  </profile>
</profiles>
XML

The sample Quarkus application is simple. It exposes some REST endpoints and communicates with a selected AI model to return an AI-generated response via each endpoint. So, you need to include only core Quarkus modules like quarkus-rest-jackson or quarkus-arc. To implement JUnit tests with REST API, it also includes the quarkus-junit5 and rest-assured modules in the test scope.

<dependencies>
  <!-- Core Quarkus dependencies -->
  <dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-rest-jackson</artifactId>
  </dependency>
  <dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-arc</artifactId>
  </dependency>

  <!-- Test dependencies -->
  <dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-junit5</artifactId>
    <scope>test</scope>
  </dependency>
  <dependency>
    <groupId>io.rest-assured</groupId>
    <artifactId>rest-assured</artifactId>
    <scope>test</scope>
  </dependency>
</dependencies>
XML

Quarkus LangChain4j Chat Models Integration

Quarkus provides an innovative approach to interacting with AI chat models. First, you need to annotate the interface defining your AI-oriented methods with the @RegisterAiService annotation. Then you must add a proper description and input prompt inside the @SystemMessage and @UserMessage annotations. Here is the sample PersonAiService interface, which defines two methods. The generatePersonList method asks the AI model to generate a list of 10 unique persons in a form consistent with the declared response object structure. The getPersonById method must read the previously generated list from chat memory and return a person’s data with a specified id field.

@RegisterAiService
@ApplicationScoped
public interface PersonAiService {

    @SystemMessage("""
        You are a helpful assistant that generates realistic person data.
        Always respond with valid JSON format.
        """)
    @UserMessage("""
        Generate exactly 10 unique persons

        Requirements:
        - Each person must have a unique integer ID (like 1, 2, 3, etc.)
        - Use realistic first and last names per each nationality
        - Ages should be between 18 and 80
        - Return ONLY the JSON array, no additional text
        """)
    PersonResponse generatePersonList(@MemoryId int userId);

    @SystemMessage("""
        You are a helpful assistant that can recall generated person data from chat memory.
        """)
    @UserMessage("""
        In the previously generated list of persons for user {userId}, find and return the person with id {id}.
        
        Return ONLY the JSON object, no additional text.
        """)
    Person getPersonById(@MemoryId int userId, int id);

}
Java

There are a few more things to note about the code snippet above. The beans created by @RegisterAiService are @RequestScoped by default, which means the chat memory is discarded at the end of each request. In our case, the list of people is generated per user ID, which acts as the key by which we look up the chat memory. To guarantee that the getPersonById method finds the list of persons generated for a given @MemoryId in a later request, the PersonAiService interface must be annotated with @ApplicationScoped. The InMemoryChatMemoryStore implementation is enabled by default, so you don’t need to declare any additional beans to use it.
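Conceptually, the in-memory store behaves like a map from memory id to a conversation history: each @MemoryId value gets its own independent thread of messages. Here is a simplified plain-Java illustration of that idea (this is not the actual InMemoryChatMemoryStore implementation):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simplified model of a chat memory store keyed by a memory id.
public class SimpleChatMemoryStore {

    private final Map<Integer, List<String>> messagesById = new HashMap<>();

    // Appends a message to the history kept for the given memory id.
    public void add(int memoryId, String message) {
        messagesById.computeIfAbsent(memoryId, id -> new ArrayList<>()).add(message);
    }

    // Returns the full history for the given memory id (empty if none).
    public List<String> history(int memoryId) {
        return messagesById.getOrDefault(memoryId, List.of());
    }

    public static void main(String[] args) {
        SimpleChatMemoryStore store = new SimpleChatMemoryStore();
        store.add(1, "Generate exactly 10 unique persons");
        store.add(1, "{\"persons\": [...]}");
        store.add(2, "Generate exactly 10 unique persons");
        // Each user id keeps an independent conversation history.
        System.out.println(store.history(1).size() + " messages for user 1, "
                + store.history(2).size() + " for user 2");
    }
}
```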

Quarkus LangChain4j can automatically map the LLM’s JSON response to the output POJO. However, at the time of writing, it cannot map the response directly to a collection type. Therefore, you must wrap the output list in an additional class, as shown below.

public class PersonResponse {

    private List<Person> persons;

    public List<Person> getPersons() {
        return persons;
    }

    public void setPersons(List<Person> persons) {
        this.persons = persons;
    }
}
Java

Here’s the Person class:

public class Person {

    private Integer id;
    private String firstName;
    private String lastName;
    private int age;
    private String nationality;
    private Gender gender;
    
    // GETTERS and SETTERS

}
Java
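Given that structure, a single person object returned by the model would look roughly like this (the field values are illustrative, and the gender value depends on the constants defined in the Gender enum):

```json
{
  "id": 1,
  "firstName": "Emma",
  "lastName": "Johnson",
  "age": 34,
  "nationality": "American",
  "gender": "FEMALE"
}
```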

Finally, the last part of our implementation is REST endpoints. Here’s the REST controller that injects and uses PersonAiService to interact with the AI chat model. It exposes two endpoints: GET /api/{userId}/persons and GET /api/{userId}/persons/{id}. You can generate several lists of persons by specifying the userId path parameter.

@Path("/api")
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public class PersonController {

    private static final Logger LOG = Logger.getLogger(PersonController.class);

    PersonAiService personAiService;

    public PersonController(PersonAiService personAiService) {
        this.personAiService = personAiService;
    }

    @GET
    @Path("/{userId}/persons")
    public PersonResponse generatePersons(@PathParam("userId") int userId) {
        return personAiService.generatePersonList(userId);
    }

    @GET
    @Path("/{userId}/persons/{id}")
    public Person getPersonById(@PathParam("userId") int userId, @PathParam("id") int id) {
        return personAiService.getPersonById(userId, id);
    }

}
Java

Use Different AI Models with Quarkus LangChain4j

Configuration Properties

Here is a configuration defined within the application.properties file. Before proceeding, you must generate the OpenAI and Mistral AI API tokens and export them as environment variables. Additionally, you can enable logging of requests and responses in AI model communication. It is also worth increasing the default timeout for a single request from 10 seconds to a higher value, such as 20 seconds.

quarkus.langchain4j.chat-model.provider = ${AI_MODEL_PROVIDER:openai}
quarkus.langchain4j.log-requests = true
quarkus.langchain4j.log-responses = true

# OpenAI Configuration
quarkus.langchain4j.openai.api-key = ${OPEN_AI_TOKEN}
quarkus.langchain4j.openai.timeout = 20s

# Mistral AI Configuration
quarkus.langchain4j.mistralai.api-key = ${MISTRAL_AI_TOKEN}
quarkus.langchain4j.mistralai.timeout = 20s

# Ollama Configuration
quarkus.langchain4j.ollama.base-url = ${OLLAMA_BASE_URL:http://localhost:11434}
Plaintext

To run a sample Quarkus application and connect it with OpenAI, you must set the OPEN_AI_TOKEN environment variable. Since the open-ai Maven profile is activated by default, you don’t need to set anything else while running an app.

$ export OPEN_AI_TOKEN=<your_openai_token>
$ mvn quarkus:dev
ShellSession

Then, you can call the GET /api/{userId}/persons endpoint with different userId path variable values. Here are sample API requests and responses.

[Image: quarkus-langchain4j-calls]

After that, you can call the GET /api/{userId}/persons/{id} endpoint to return a specified person found in the chat memory.

Switch Between AI Models

Then, you can repeat the same exercise with the Mistral AI model. You must set the AI_MODEL_PROVIDER variable to mistralai, export its API token as the MISTRAL_AI_TOKEN environment variable, and enable the mistral-ai Maven profile while running the app.

$ export AI_MODEL_PROVIDER=mistralai
$ export MISTRAL_AI_TOKEN=<your_mistralai_token>
$ mvn quarkus:dev -Pmistral-ai
ShellSession

The app should start successfully.

[Image: quarkus-langchain4j-logs]

Once it happens, you can repeat the same sequence of requests as before for OpenAI.

$ curl http://localhost:8080/api/1/persons
$ curl http://localhost:8080/api/2/persons
$ curl http://localhost:8080/api/1/persons/1
$ curl http://localhost:8080/api/2/persons/1
ShellSession

You can check the request sent to the AI model in the application logs.

Here’s a log showing an AI chat model response:

Finally, you can run a test with ollama. By default, the LangChain4j extension for Ollama uses the llama3.2 model. You can change it by setting the quarkus.langchain4j.ollama.chat-model.model-id property in the application.properties file. Assuming that you use the llama3.3 model, here’s your configuration:

quarkus.langchain4j.ollama.base-url = ${OLLAMA_BASE_URL:http://localhost:11434}
quarkus.langchain4j.ollama.chat-model.model-id = llama3.3
quarkus.langchain4j.ollama.timeout = 60s
Plaintext

Before proceeding, you must run the llama3.3 model on your laptop. Of course, you can choose another, smaller model, because llama3.3 is 42 GB.

ollama run llama3.3
ShellSession

Pulling the model can take a lot of time, but once it finishes, the model is ready to use.

Once a model is running, you can set the AI_MODEL_PROVIDER environment variable to ollama and activate the ollama profile for the app:

$ export AI_MODEL_PROVIDER=ollama
$ mvn quarkus:dev -Pollama
ShellSession

This time, our application is connected to the llama3.3 model started with ollama:

quarkus-langchain4j-ollama

With the Quarkus LangChain4j Ollama extension, you can take advantage of dev services support. This means that you don’t need to install and run Ollama on your laptop or run a model with the ollama CLI. Quarkus will run Ollama as a Docker container and automatically run a selected AI model on it. In that case, you don’t need to set the quarkus.langchain4j.ollama.base-url property. Before switching to that option, let’s use a smaller AI model by setting the quarkus.langchain4j.ollama.chat-model.model-id = mistral property. Then start the app in the same way as before.
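Under these assumptions, the Ollama-related configuration for the dev services variant could look as follows (a sketch; note that the base-url entry becomes redundant):

```properties
# Dev services variant (a sketch): Quarkus starts Ollama in a container itself,
# so no base-url is needed and a smaller model is selected.
# quarkus.langchain4j.ollama.base-url = ${OLLAMA_BASE_URL:http://localhost:11434}
quarkus.langchain4j.ollama.chat-model.model-id = mistral
quarkus.langchain4j.ollama.timeout = 60s
```
Plaintext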

Final Thoughts

I must admit that the Quarkus LangChain4j extension is enjoyable to use. With a few simple annotations, you can configure your application to talk to the AI model of your choice correctly. In this article, I presented a straightforward example of integrating Quarkus with an AI chat model. However, we quickly reviewed features such as prompts, structured output, and chat memory. You can expect more articles in the Quarkus series with AI soon.

The post Getting Started with Quarkus LangChain4j and Chat Model appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2025/06/18/getting-started-with-quarkus-langchain4j-and-chat-model/feed/ 0 15736
Consul with Quarkus and SmallRye Stork https://piotrminkowski.com/2024/11/18/consul-with-quarkus-and-smallrye-stork/ https://piotrminkowski.com/2024/11/18/consul-with-quarkus-and-smallrye-stork/#respond Mon, 18 Nov 2024 12:34:11 +0000 https://piotrminkowski.com/?p=15444 This article will teach you to use HashiCorp Consul as a discovery and configuration server for your Quarkus microservices. I wrote a similar article some years ago. However, there have been several significant improvements in the Quarkus ecosystem since that time. What I have in mind is mainly the Quarkus Stork project. This extension focuses […]

The post Consul with Quarkus and SmallRye Stork appeared first on Piotr's TechBlog.

]]>
This article will teach you how to use HashiCorp Consul as a discovery and configuration server for your Quarkus microservices. I wrote a similar article some years ago. However, there have been several significant improvements in the Quarkus ecosystem since that time. What I have in mind is mainly the Quarkus Stork project. This extension focuses on service discovery and load balancing for cloud-native applications. It can seamlessly integrate with Consul or Kubernetes discovery and provide various load balancer types over the Quarkus REST client. Our sample applications will also load configuration properties from the Consul Key-Value store and use the SmallRye Mutiny Consul client to register the app in the discovery server.

If you are looking for other interesting articles about Quarkus, you will find them in my blog. For example, you will read more about testing strategies with Quarkus and Pact here.

Source Code

If you would like to try it by yourself, you may always take a look at my source code. To do that, you must clone my sample GitHub repository. Then you should only follow my instructions 🙂

Architecture

Before proceeding to the implementation, let’s take a look at the diagram of our system architecture. There are three microservices: employee-service, department-service, and organization-service. They communicate with each other through a REST API. They use the Consul Key-Value store as a distributed configuration backend. Every service instance registers itself in Consul. A load balancer is included in the application. It reads the list of registered instances of a target service from Consul using the Quarkus Stork extension. Then it chooses an instance using the provided algorithm.

Running Consul Instance

We will run a single-node Consul instance as a Docker container. By default, Consul exposes HTTP API and a UI console on the 8500 port. Let’s expose that port outside the container.

docker run -d --name=consul \
   -e CONSUL_BIND_INTERFACE=eth0 \
   -p 8500:8500 \
   consul
ShellSession

Dependencies

Let’s analyze a list of the most important Maven dependencies using the department-service application as an example. Our application exposes REST endpoints and connects to the in-memory H2 database. We use the Quarkus REST client and the SmallRye Stork service discovery library to implement communication between the microservices. On the other hand, io.quarkiverse.config:quarkus-config-consul is responsible for reading configuration properties from the Consul Key-Value store. With the smallrye-mutiny-vertx-consul-client library, the application is able to interact directly with the Consul HTTP API. This may not be necessary in the future, once the Stork project implements the registration and deregistration mechanism, but currently it is not ready. Finally, we will use Testcontainers to run Consul and test our apps against it with the Quarkus JUnit support.

	<dependencies>
		<dependency>
			<groupId>io.quarkus</groupId>
			<artifactId>quarkus-rest-jackson</artifactId>
		</dependency>
		<dependency>
			<groupId>io.quarkus</groupId>
			<artifactId>quarkus-rest-client-jackson</artifactId>
		</dependency>
		<dependency>
			<groupId>io.quarkus</groupId>
			<artifactId>quarkus-hibernate-orm-panache</artifactId>
		</dependency>
		<dependency>
			<groupId>io.quarkus</groupId>
			<artifactId>quarkus-jdbc-h2</artifactId>
		</dependency>
		<dependency>
			<groupId>com.h2database</groupId>
			<artifactId>h2</artifactId>
			<scope>runtime</scope>
		</dependency>
		<dependency>
			<groupId>io.quarkus</groupId>
			<artifactId>quarkus-smallrye-stork</artifactId>
		</dependency>
		<dependency>
			<groupId>io.smallrye.reactive</groupId>
			<artifactId>smallrye-mutiny-vertx-consul-client</artifactId>
		</dependency>
		<dependency>
			<groupId>io.smallrye.stork</groupId>
			<artifactId>stork-service-discovery-consul</artifactId>
		</dependency>
		<dependency>
			<groupId>io.smallrye.stork</groupId>
			<artifactId>stork-service-registration-consul</artifactId>
		</dependency>
		<dependency>
			<groupId>io.quarkus</groupId>
			<artifactId>quarkus-scheduler</artifactId>
		</dependency>
		<dependency>
			<groupId>io.quarkiverse.config</groupId>
			<artifactId>quarkus-config-consul</artifactId>
			<version>${quarkus-consul.version}</version>
		</dependency>
		<dependency>
			<groupId>io.rest-assured</groupId>
			<artifactId>rest-assured</artifactId>
			<scope>test</scope>
		</dependency>
		<dependency>
			<groupId>io.quarkus</groupId>
			<artifactId>quarkus-junit5</artifactId>
			<scope>test</scope>
		</dependency>
		<dependency>
			<groupId>org.testcontainers</groupId>
			<artifactId>consul</artifactId>
			<version>1.20.3</version>
			<scope>test</scope>
		</dependency>
		<dependency>
			<groupId>org.testcontainers</groupId>
			<artifactId>junit-jupiter</artifactId>
			<version>1.20.3</version>
			<scope>test</scope>
		</dependency>
	</dependencies>
XML

Discovery and Load Balancing with Quarkus Stork for Consul

Let’s begin with the Quarkus Stork part. In the previous section, we included libraries required to provide service discovery and load balancing with Stork: quarkus-smallrye-stork and stork-service-discovery-consul. Now, we can proceed to the implementation. Here’s the EmployeeClient interface from the department-service responsible for calling the GET /employees/department/{departmentId} endpoint exposed by the employee-service. Instead of setting the target URL inside the @RegisterRestClient annotation we should refer to the name of the service registered in Consul.

@Path("/employees")
@RegisterRestClient(baseUri = "stork://employee-service")
public interface EmployeeClient {

    @GET
    @Path("/department/{departmentId}")
    @Produces(MediaType.APPLICATION_JSON)
    List<Employee> findByDepartment(@PathParam("departmentId") Long departmentId);

}
Java

That service name should also be used in the configuration properties. The following property indicates that Stork will use Consul as a discovery server for the employee-service name.

quarkus.stork.employee-service.service-discovery.type = consul
Plaintext
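By default, Stork looks for a Consul agent on localhost:8500. If your agent runs elsewhere, its location (and the refresh interval for the instance list) can be customized with additional properties. Here is a sketch based on the SmallRye Stork Consul service discovery options:

```properties
quarkus.stork.employee-service.service-discovery.type = consul
# Optional: where the Consul agent runs and how often to refresh the instance list
quarkus.stork.employee-service.service-discovery.consul-host = localhost
quarkus.stork.employee-service.service-discovery.consul-port = 8500
quarkus.stork.employee-service.service-discovery.refresh-period = 5s
```
Plaintext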

Once we create a REST client with the additional annotations, we must inject it into the DepartmentResource class using the @RestClient annotation. Afterward, we can use that client to interact with the employee-service while calling the GET /departments/organization/{organizationId}/with-employees from the department-service.

@Path("/departments")
@Produces(MediaType.APPLICATION_JSON)
public class DepartmentResource {

    private Logger logger;
    private DepartmentRepository repository;
    private EmployeeClient employeeClient;

    public DepartmentResource(Logger logger,
                              DepartmentRepository repository,
                              @RestClient EmployeeClient employeeClient) {
        this.logger = logger;
        this.repository = repository;
        this.employeeClient = employeeClient;
    }

    // ... other methods for REST endpoints 

    @Path("/organization/{organizationId}")
    @GET
    public List<Department> findByOrganization(@PathParam("organizationId") Long organizationId) {
        logger.infof("Department find: organizationId=%d", organizationId);
        return repository.findByOrganization(organizationId);
    }

    @Path("/organization/{organizationId}/with-employees")
    @GET
    public List<Department> findByOrganizationWithEmployees(@PathParam("organizationId") Long organizationId) {
        logger.infof("Department find with employees: organizationId=%d", organizationId);
        List<Department> departments = repository.findByOrganization(organizationId);
        departments.forEach(d -> d.setEmployees(employeeClient.findByDepartment(d.getId())));
        return departments;
    }

}
Java

Let’s take a look at the implementation of the GET /employees/department/{departmentId} in the employee-service called by the EmployeeClient in the department-service.

@Path("/employees")
@Produces(MediaType.APPLICATION_JSON)
public class EmployeeResource {

    private Logger logger;
    private EmployeeRepository repository;

    public EmployeeResource(Logger logger,
                            EmployeeRepository repository) {
        this.logger = logger;
        this.repository = repository;
    }

    @Path("/department/{departmentId}")
    @GET
    public List<Employee> findByDepartment(@PathParam("departmentId") Long departmentId) {
        logger.infof("Employee find: departmentId=%s", departmentId);
        return repository.findByDepartment(departmentId);
    }

    @Path("/organization/{organizationId}")
    @GET
    public List<Employee> findByOrganization(@PathParam("organizationId") Long organizationId) {
        logger.infof("Employee find: organizationId=%s", organizationId);
        return repository.findByOrganization(organizationId);
    }
    
    // ... other methods for REST endpoints

}
Java

Similarly, in the organization-service, we define two REST clients for interacting with the employee-service and department-service.

@Path("/departments")
@RegisterRestClient(baseUri = "stork://department-service")
public interface DepartmentClient {

    @GET
    @Path("/organization/{organizationId}")
    @Produces(MediaType.APPLICATION_JSON)
    List<Department> findByOrganization(@PathParam("organizationId") Long organizationId);

    @GET
    @Path("/organization/{organizationId}/with-employees")
    @Produces(MediaType.APPLICATION_JSON)
    List<Department> findByOrganizationWithEmployees(@PathParam("organizationId") Long organizationId);

}

@Path("/employees")
@RegisterRestClient(baseUri = "stork://employee-service")
public interface EmployeeClient {

    @GET
    @Path("/organization/{organizationId}")
    @Produces(MediaType.APPLICATION_JSON)
    List<Employee> findByOrganization(@PathParam("organizationId") Long organizationId);

}
Java

This requires including the following two configuration properties, which set the service discovery type for the target services.

quarkus.stork.employee-service.service-discovery.type = consul
quarkus.stork.department-service.service-discovery.type = consul
Plaintext

The OrganizationResource class injects and uses both previously created clients.

@Path("/organizations")
@Produces(MediaType.APPLICATION_JSON)
public class OrganizationResource {

    private Logger logger;
    private OrganizationRepository repository;
    private DepartmentClient departmentClient;
    private EmployeeClient employeeClient;

    public OrganizationResource(Logger logger,
                                OrganizationRepository repository,
                                @RestClient DepartmentClient departmentClient,
                                @RestClient EmployeeClient employeeClient) {
        this.logger = logger;
        this.repository = repository;
        this.departmentClient = departmentClient;
        this.employeeClient = employeeClient;
    }

    // ... other methods for REST endpoints

    @Path("/{id}/with-departments")
    @GET
    public Organization findByIdWithDepartments(@PathParam("id") Long id) {
        logger.infof("Organization find with departments: id=%d", id);
        Organization organization = repository.findById(id);
        organization.setDepartments(departmentClient.findByOrganization(organization.getId()));
        return organization;
    }

    @Path("/{id}/with-departments-and-employees")
    @GET
    public Organization findByIdWithDepartmentsAndEmployees(@PathParam("id") Long id) {
        logger.infof("Organization find with departments and employees: id=%d", id);
        Organization organization = repository.findById(id);
        organization.setDepartments(departmentClient.findByOrganizationWithEmployees(organization.getId()));
        return organization;
    }

    @Path("/{id}/with-employees")
    @GET
    public Organization findByIdWithEmployees(@PathParam("id") Long id) {
        logger.infof("Organization find with employees: id=%d", id);
        Organization organization = repository.findById(id);
        organization.setEmployees(employeeClient.findByOrganization(organization.getId()));
        return organization;
    }

}
Java

Registration in Consul with Quarkus

After including Stork, the Quarkus REST client automatically splits traffic between all the instances of the application registered in the discovery server. However, each application must register itself in the discovery server, and Quarkus Stork won’t do that for us. Theoretically, the stork-service-registration-consul module should register the application instance on startup, but as far as I know, this feature is still under active development. For now, we will include the mentioned library and use the following property to enable the registrar feature.

quarkus.stork.employee-service.service-registrar.type = consul
Plaintext

Our sample applications will interact directly with the Consul server using the SmallRye Mutiny reactive client. Let’s define the ConsulClient bean. It is registered only if the quarkus.stork.employee-service.service-registrar.type property with the consul value exists.

@ApplicationScoped
public class EmployeeBeanProducer {

    @ConfigProperty(name = "consul.host", defaultValue = "localhost")  String host;
    @ConfigProperty(name = "consul.port", defaultValue = "8500") int port;

    @Produces
    @LookupIfProperty(name = "quarkus.stork.employee-service.service-registrar.type", 
                      stringValue = "consul")
    public ConsulClient consulClient(Vertx vertx) {
        return ConsulClient.create(vertx, new ConsulClientOptions()
                .setHost(host)
                .setPort(port));
    }

}
Java

The bean responsible for catching the startup and shutdown events is annotated with @ApplicationScoped. It defines two methods: onStart and onStop. It also injects the ConsulClient bean. Quarkus dynamically generates the HTTP listen port number on startup and saves it in the quarkus.http.port property. Therefore, the startup task needs to wait a moment to ensure that the application is already running; we will run it 3 seconds after receiving the startup event. Every instance of the application needs a unique id in Consul, so we retrieve the running port number and use it as the id suffix. The name of the service is taken from the quarkus.application.name property. The application instance should also save its id in order to be able to deregister itself on shutdown.

@ApplicationScoped
public class EmployeeLifecycle {

    @ConfigProperty(name = "quarkus.application.name")
    private String appName;
    private int port;

    private Logger logger;
    private Instance<ConsulClient> consulClient;
    private ScheduledExecutorService executor;

    public EmployeeLifecycle(Logger logger,
                             Instance<ConsulClient> consulClient,
                             ScheduledExecutorService executor) {
        this.logger = logger;
        this.consulClient = consulClient;
        this.executor = executor;
    }

    void onStart(@Observes StartupEvent ev) {
        if (consulClient.isResolvable()) {
            executor.schedule(() -> {
                port = ConfigProvider.getConfig().getValue("quarkus.http.port", Integer.class);
                consulClient.get().registerService(new ServiceOptions()
                                .setPort(port)
                                .setAddress("localhost")
                                .setName(appName)
                                .setId(appName + "-" + port),
                        result -> logger.infof("Service %s-%d registered", appName, port));
            }, 3000, TimeUnit.MILLISECONDS);
        }
    }

    void onStop(@Observes ShutdownEvent ev) {
        if (consulClient.isResolvable()) {
            consulClient.get().deregisterService(appName + "-" + port,
                    result -> logger.infof("Service %s-%d deregistered", appName, port));
        }
    }
}
Java

Read Configuration Properties from Consul

The io.quarkiverse.config:quarkus-config-consul dependency is already included. Once the quarkus.consul-config.enabled property is set to true, the Quarkus application tries to read properties from the Consul Key-Value store. The quarkus.consul-config.properties-value-keys property indicates the location of the properties file stored in Consul. Here are the properties that exist in the classpath application.properties file. For example, the default config location for the department-service is config/department-service.

quarkus.application.name = department-service
quarkus.application.version = 1.1
quarkus.consul-config.enabled = true
quarkus.consul-config.properties-value-keys = config/${quarkus.application.name}
Plaintext

Let’s switch to the Consul UI. It is available under the same 8500 port as the API. In the “Key/Value” section we create configuration for all three sample applications.

These are the configuration properties for the department-service. They are intended for the development mode. We enable a dynamically generated port number to run several instances on the same workstation. Our application uses an in-memory H2 database. It loads the import.sql script on startup to initialize a demo data store. We also enable Quarkus Stork service discovery for the employee-service REST client and registration in Consul.

quarkus.http.port = 0
quarkus.datasource.db-kind = h2
quarkus.hibernate-orm.database.generation = drop-and-create
quarkus.hibernate-orm.sql-load-script = src/main/resources/import.sql
quarkus.stork.employee-service.service-discovery.type = consul
quarkus.stork.department-service.service-registrar.type = consul
Plaintext

Here are the configuration properties for the employee-service.

quarkus-stork-consul-config

Finally, let’s take a look at the organization-service configuration in Consul.
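The organization-service configuration is shown only in the screenshot. By analogy with the department-service entry above, it could look roughly like this (an assumption, not a copy of the actual Key-Value content):

```properties
quarkus.http.port = 0
quarkus.datasource.db-kind = h2
quarkus.hibernate-orm.database.generation = drop-and-create
quarkus.hibernate-orm.sql-load-script = src/main/resources/import.sql
quarkus.stork.employee-service.service-discovery.type = consul
quarkus.stork.department-service.service-discovery.type = consul
quarkus.stork.organization-service.service-registrar.type = consul
```
Plaintext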

Run Applications in the Development Mode

Let’s run our three sample Quarkus applications in the development mode. Both employee-service and department-service should have two instances running. We don’t have to worry about port conflicts, since the ports are automatically generated on startup.

$ cd employee-service
$ mvn quarkus:dev
$ mvn quarkus:dev

$ cd department-service
$ mvn quarkus:dev
$ mvn quarkus:dev

$ cd organization-service
$ mvn quarkus:dev
ShellSession

Once we start all the instances we can switch to the Consul UI. You should see exactly the same services in your web console.

quarkus-stork-consul-services

There are two instances of the employee-service and department-service. We can check out the list of registered instances for a selected application.

quarkus-stork-consul-service

This step is optional. To simplify tests, I also included an API gateway that integrates with Consul discovery. It listens on the static 8080 port and forwards requests to the downstream services, which listen on dynamic ports. Since Quarkus does not provide a module dedicated to the API gateway, I used Spring Cloud Gateway with Spring Cloud Consul for that. Therefore, you need to use the following command to run the application:

$ cd gateway-service
$ mvn spring-boot:run
ShellSession

Afterward, we can make some API tests with or without the gateway. With the gateway-service, we use the 8080 port with the /api base context path. Let’s call the following three endpoints. The first one is exposed by the department-service, while the other two by the organization-service.

$ curl http://localhost:8080/api/departments/organization/1/with-employees
$ curl http://localhost:8080/api/organizations/1/with-departments
$ curl http://localhost:8080/api/organizations/1/with-departments-and-employees
ShellSession

Each Quarkus service listens on a dynamic port and registers itself in Consul using that port number. Here are the department-service logs from startup and during test communication.

After including the quarkus-micrometer-registry-prometheus module, each application instance exposes metrics under the GET /q/metrics endpoint. There are several metrics related to service discovery published by the Quarkus Stork extension.
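The metrics module mentioned above is a standard Quarkus extension; it is added with the following dependency:

```xml
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-micrometer-registry-prometheus</artifactId>
</dependency>
```
XML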

$ curl http://localhost:51867/q/metrics | grep stork
# TYPE stork_service_discovery_instances_count counter
# HELP stork_service_discovery_instances_count The number of service instances discovered
stork_service_discovery_instances_count_total{service_name="employee-service"} 12.0
# TYPE stork_service_selection_duration_seconds summary
# HELP stork_service_selection_duration_seconds The duration of the selection operation
stork_service_selection_duration_seconds_count{service_name="employee-service"} 6.0
stork_service_selection_duration_seconds_sum{service_name="employee-service"} 9.93934E-4
# TYPE stork_service_selection_duration_seconds_max gauge
# HELP stork_service_selection_duration_seconds_max The duration of the selection operation
stork_service_selection_duration_seconds_max{service_name="employee-service"} 0.0
# TYPE stork_service_discovery_failures counter
# HELP stork_service_discovery_failures The number of failures during service discovery
stork_service_discovery_failures_total{service_name="employee-service"} 0.0
# TYPE stork_service_discovery_duration_seconds_max gauge
# HELP stork_service_discovery_duration_seconds_max The duration of the discovery operation
stork_service_discovery_duration_seconds_max{service_name="employee-service"} 0.0
# TYPE stork_service_discovery_duration_seconds summary
# HELP stork_service_discovery_duration_seconds The duration of the discovery operation
stork_service_discovery_duration_seconds_count{service_name="employee-service"} 6.0
stork_service_discovery_duration_seconds_sum{service_name="employee-service"} 2.997176541
# TYPE stork_service_selection_failures counter
# HELP stork_service_selection_failures The number of failures during service selection
stork_service_selection_failures_total{service_name="employee-service"} 0.0
ShellSession

Advanced Load Balancing with Quarkus Stork and Consul

Quarkus Stork provides several load-balancing strategies to efficiently distribute requests across multiple instances of an application. It can ensure optimal resource usage, better performance, and high availability. By default, Quarkus Stork uses the round-robin algorithm. To override the default strategy, we first need to include a library providing the selected load-balancing algorithm. For example, let’s choose the least-response-time strategy, which collects the response times of calls made to service instances and picks an instance based on this information.

<dependency>
    <groupId>io.smallrye.stork</groupId>
    <artifactId>stork-load-balancer-least-response-time</artifactId>
</dependency>
XML

Then, we have to change the default strategy in configuration properties for the selected client. Let’s add the following property to the config/department-service in Consul Key-Value store.

quarkus.stork.employee-service.load-balancer.type=least-response-time
Plaintext

After that, we can restart the instance of department-service and retest the communication between services.
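For intuition, the default round-robin strategy simply cycles through the list of discovered instances. Here is a minimal conceptual sketch in plain Java (not Stork’s actual implementation; the class and method names are made up for illustration):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Conceptual round-robin selector over discovered instance addresses.
// Stork's real load balancer operates on ServiceInstance objects; this
// sketch only illustrates the selection order.
class RoundRobinSelector {
    private final List<String> instances;
    private final AtomicInteger counter = new AtomicInteger();

    RoundRobinSelector(List<String> instances) {
        this.instances = instances;
    }

    String selectNext() {
        // floorMod keeps the index non-negative even after integer overflow
        int idx = Math.floorMod(counter.getAndIncrement(), instances.size());
        return instances.get(idx);
    }
}
```
Java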

Testing Integration Between Quarkus and Consul

We have already included the org.testcontainers:consul artifact in the Maven dependencies. Thanks to that, we can create JUnit tests with Quarkus and the Testcontainers Consul module. Since Quarkus doesn’t provide built-in support for testing with a Consul container, we need to create a class that implements the QuarkusTestResourceLifecycleManager interface. It is responsible for starting and stopping the Consul container during JUnit tests. After starting the container, we add the required configuration properties to enable in-memory database creation and service registration in Consul.

public class ConsulResource implements QuarkusTestResourceLifecycleManager {

    private ConsulContainer consulContainer;

    @Override
    public Map<String, String> start() {
        consulContainer = new ConsulContainer("hashicorp/consul:latest")
                .withConsulCommand(
                """
                kv put config/department-service - <<EOF
                department.name=abc
                quarkus.datasource.db-kind=h2
                quarkus.hibernate-orm.database.generation=drop-and-create
                quarkus.stork.department-service.service-registrar.type=consul
                EOF
                """
                );

        consulContainer.start();

        String url = consulContainer.getHost() + ":" + consulContainer.getFirstMappedPort();

        return ImmutableMap.of(
                "quarkus.consul-config.agent.host-port", url,
                "consul.host", consulContainer.getHost(),
                "consul.port", consulContainer.getFirstMappedPort().toString()
        );
    }

    @Override
    public void stop() {
        consulContainer.stop();
    }
}
Java

To start the Consul container during the test, we need to annotate the test class with @QuarkusTestResource(ConsulResource.class). The test loads configuration properties from Consul on startup and registers the service. Then, it verifies that the REST endpoints exposed by the department-service work fine and that the registered service exists in Consul.

@QuarkusTest
@QuarkusTestResource(ConsulResource.class)
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
public class DepartmentResourceConsulTests {

    @ConfigProperty(name = "department.name", defaultValue = "")
    private String name;
    @Inject
    ConsulClient consulClient;

    @Test
    @Order(1)
    void add() {
        Department d = new Department();
        d.setOrganizationId(1L);
        d.setName(name);

        given().body(d).contentType(ContentType.JSON)
                .when().post("/departments").then()
                .statusCode(200)
                .body("id", notNullValue())
                .body("name", is(name));
    }

    @Test
    @Order(2)
    void findAll() {
        when().get("/departments").then()
                .statusCode(200)
                .body("size()", is(4));
    }

    @Test
    @Order(3)
    void checkRegister() throws InterruptedException {
        Thread.sleep(5000);
        Uni<ServiceList> uni = Uni.createFrom().completionStage(() -> consulClient.catalogServices().toCompletionStage());
        List<Service> services = uni.await().atMost(Duration.ofSeconds(3)).getList();
        final long count = services.stream()
                .filter(svc -> svc.getName().equals("department-service")).count();
        assertEquals(1, count);
    }
}
Java

Final Thoughts

This article introduces Quarkus Stork for Consul discovery and client-side load balancing. It shows how to integrate Quarkus with Consul Key-Value store for distributed configuration. It also covers the topics like integration testing with Testcontainers support, metrics, service registration and advanced load-balancing strategies.

The post Consul with Quarkus and SmallRye Stork appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2024/11/18/consul-with-quarkus-and-smallrye-stork/feed/ 0 15444
Pact with Quarkus 3 https://piotrminkowski.com/2024/04/19/pact-with-quarkus-3/ https://piotrminkowski.com/2024/04/19/pact-with-quarkus-3/#respond Fri, 19 Apr 2024 09:42:36 +0000 https://piotrminkowski.com/?p=15216 This article will teach you how to write contract tests with Pact for the app built on top of version 3 of the Quarkus framework. It is an update to the previously described topic in the “Contract Testing with Quarkus and Pact” article. Therefore we will not focus on the details related to the integration […]

The post Pact with Quarkus 3 appeared first on Piotr's TechBlog.

]]>
This article will teach you how to write contract tests with Pact for the app built on top of version 3 of the Quarkus framework. It is an update to the previously described topic in the “Contract Testing with Quarkus and Pact” article. Therefore we will not focus on the details related to the integration between Pact and Quarkus, but rather on the migration from version 2 to 3 of the Quarkus framework. There are some issues worth discussing.

You can find several other articles about Quarkus on my blog. For example, you can read about advanced testing techniques with Quarkus here. There is also an interesting article about Quarkus and the Testcontainers support for local development with Kafka.

Source Code

If you would like to try it by yourself, you may always take a look at my source code. It contains three microservices written in Quarkus. I migrated them from Quarkus 2 to 3, to the latest version of the Pact Quarkus extension, and from Java 17 to 21. In order to proceed with the exercise, you need to clone my GitHub repository. Then you should just follow my instructions.

Let’s do a quick recap before proceeding. We are implementing several contract tests with Pact to verify interactions between our three microservices: employee-service, department-service, and organization-service. We use Pact Broker to store and share contract definitions between the microservices. Here’s the diagram that illustrates the described architecture.

pact-quarkus-3-arch

Update to Java 21

There are no issues with migrating to Java 21 in Quarkus. We need to change the version of Java used in the Maven compilation inside the pom.xml file. However, the situation is more complicated with the CircleCI build. Firstly, we use the ubuntu-2204 machine in the builds to access the Docker daemon. We need Docker to run the container with the Pact Broker. Although CircleCI provides an image for OpenJDK 21, the latest version of ubuntu-2204 still ships Java 17. This situation will probably change in the coming months, but for now, we need to install OpenJDK 21 on that machine. After that, we may run the Pact Broker and JUnit tests using the latest Java LTS version. Here’s the CircleCI config.yaml file:

version: 2.1

jobs:
  analyze:
    executor:
      name: docker/machine
      image: ubuntu-2204:2024.01.2
    steps:
      - checkout
      - run:
          name: Install OpenJDK 21
          command: |
            java -version
            sudo apt-get update && sudo apt-get install openjdk-21-jdk
            sudo update-alternatives --set java /usr/lib/jvm/java-21-openjdk-amd64/bin/java
            sudo update-alternatives --set javac /usr/lib/jvm/java-21-openjdk-amd64/bin/javac
            java -version
            export JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64
      - docker/install-docker-compose
      - maven/with_cache:
          steps:
            - run:
                name: Build Images
                command: mvn package -DskipTests -Dquarkus.container-image.build=true
      - run:
          name: Run Pact Broker
          command: docker-compose up -d
      - maven/with_cache:
          steps:
            - run:
                name: Run Tests
                command: mvn package pact:publish -Dquarkus.container-image.build=false
      - maven/with_cache:
          steps:
            - run:
                name: Sonar Analysis
                command: mvn package sonar:sonar -DskipTests -Dquarkus.container-image.build=false


orbs:
  maven: circleci/maven@1.4.1
  docker: circleci/docker@2.6.0

workflows:
  maven_test:
    jobs:
      - analyze:
          context: SonarCloud
YAML

Here’s the root Maven pom.xml. It declares the Maven plugin responsible for publishing contracts to the Pact broker. Each time the Pact JUnit tests are executed successfully, the plugin tries to publish the JSON pacts to the broker. The ordering of the Maven modules is not random: the organization-service generates and publishes pacts for verifying contracts with department-service and employee-service, so it has to be built first. As you can see, we use the latest version of Quarkus at the time of writing – 3.9.3.

<properties>
  <java.version>21</java.version>
  <surefire-plugin.version>3.2.5</surefire-plugin.version>
  <quarkus.version>3.9.3</quarkus.version>
  <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  <maven.compiler.source>${java.version}</maven.compiler.source>
  <maven.compiler.target>${java.version}</maven.compiler.target>
</properties>

<modules>
  <module>organization-service</module>
  <module>department-service</module>
  <module>employee-service</module>
</modules>

<build>
  <plugins>
    <plugin>
      <groupId>au.com.dius.pact.provider</groupId>
      <artifactId>maven</artifactId>
      <version>4.6.9</version>
      <configuration>
        <pactBrokerUrl>http://localhost:9292</pactBrokerUrl>
      </configuration>
    </plugin>
  </plugins>
</build>
XML

Here’s the part of the docker-compose.yml responsible for running a Pact broker. It requires a Postgres database.

version: "3.7"
services:
  postgres:
    container_name: postgres
    image: postgres
    environment:
      POSTGRES_USER: pact
      POSTGRES_PASSWORD: pact123
      POSTGRES_DB: pact
    ports:
      - "5432"
  pact-broker:
    container_name: pact-broker
    image: pactfoundation/pact-broker
    ports:
      - "9292:9292"
    depends_on:
      - postgres
    links:
      - postgres
    environment:
      PACT_BROKER_DATABASE_USERNAME: pact
      PACT_BROKER_DATABASE_PASSWORD: pact123
      PACT_BROKER_DATABASE_HOST: postgres
      PACT_BROKER_DATABASE_NAME: pact
YAML

Update Quarkus and Pact

Dependencies

Firstly, let’s take a look at the list of dependencies. With the latest versions of Quarkus, we have to pay attention to the REST server and client modules used in our app. For example, if we use the quarkus-resteasy-jackson module to expose REST services, we should also use the quarkus-resteasy-client-jackson module to call the services. On the other hand, if we use quarkus-rest-jackson on the server side, we should also use quarkus-rest-client-jackson on the client side. In order to implement Pact tests in our app, we need to include the quarkus-pact-consumer module on the contract consumer side and quarkus-pact-provider on the contract provider side. Finally, we will use WireMock to replace the Pact built-in mock server.

<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-resteasy-jackson</artifactId>
</dependency>
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-resteasy-client-jackson</artifactId>
</dependency>
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-junit5</artifactId>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-junit5-mockito</artifactId>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>io.rest-assured</groupId>
  <artifactId>rest-assured</artifactId>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>io.quarkiverse.pact</groupId>
  <artifactId>quarkus-pact-consumer</artifactId>
  <version>1.3.0</version>
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>io.quarkiverse.pact</groupId>
  <artifactId>quarkus-pact-provider</artifactId>
  <version>1.3.0</version>
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>com.github.tomakehurst</groupId>
  <artifactId>wiremock-jre8</artifactId>
  <version>3.0.1</version>
  <scope>test</scope>
</dependency>
XML

Tests Implementation with Quarkus 3 and Pact Consumer

In this exercise, I’m simplifying the tests as much as possible. Therefore, we will use the REST client directly to verify the contract on the consumer side. However, if you are looking for more advanced examples, please go to that repository. Coming back to our exercise, let’s take a look at the example of a declarative REST client used in the department-service.

@ApplicationScoped
@Path("/employees")
@RegisterRestClient(configKey = "employee")
public interface EmployeeClient {

    @GET
    @Path("/department/{departmentId}")
    @Produces(MediaType.APPLICATION_JSON)
    List<Employee> findByDepartment(@PathParam("departmentId") Long departmentId);

}
Java
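The configKey = "employee" in the interface above is resolved against the Quarkus REST client configuration. A corresponding entry in application.properties could look like this (the URL is only an illustrative placeholder for the employee-service location, not taken from the repository):

```properties
# Base URL for the REST client registered with configKey = "employee"
quarkus.rest-client.employee.url=http://localhost:8080
```

In the Pact tests below, this configuration is bypassed, since the client is built programmatically against the mock server URL.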

There are some significant changes in the Pact tests on the consumer side. Some of them were forced by the error related to the migration to Quarkus 3 described here. I found a smart workaround for that problem proposed by one of the contributors (1). This workaround replaces the Pact built-in mock server with WireMock. We will start WireMock on a dynamic port (2). We also need to implement a @QuarkusTestResource to start the WireMock server before the tests and shut it down after the tests (3). Then, we can switch to the latest version of the Pact API by returning the V4Pact object (4) from the @Pact method and updating the @PactTestFor annotation accordingly (5). Finally, instead of the Pact MockServer, we use the PactMockServer wrapper dedicated to WireMock (6).

@QuarkusTest
@ExtendWith(PactConsumerTestExt.class)
@ExtendWith(PactMockServerWorkaround.class) // (1)
@MockServerConfig(port = "0") // (2)
@QuarkusTestResource(WireMockQuarkusTestResource.class) // (3)
public class EmployeeClientContractTests extends PactConsumerTestBase {

    @Pact(provider = "employee-service", consumer = "department-service")
    public V4Pact callFindDepartment(PactDslWithProvider builder) { // (4)
        DslPart body = PactDslJsonArray.arrayEachLike()
                .integerType("id")
                .stringType("name")
                .stringType("position")
                .numberType("age")
                .closeObject();
        return builder.given("findByDepartment")
                .uponReceiving("findByDepartment")
                    .path("/employees/department/1")
                    .method("GET")
                .willRespondWith()
                    .status(200)
                    .body(body).toPact(V4Pact.class);
    }

    @Test
    // (5)
    @PactTestFor(providerName = "employee-service", pactVersion = PactSpecVersion.V4)
    public void verifyFindDepartmentPact(final PactMockServer mockServer) { // (6)
        EmployeeClient client = RestClientBuilder.newBuilder()
                .baseUri(URI.create(mockServer.getUrl()))
                .build(EmployeeClient.class);
        List<Employee> employees = client.findByDepartment(1L);
        System.out.println(employees);
        assertNotNull(employees);
        assertTrue(employees.size() > 0);
        assertNotNull(employees.get(0).getId());
    }
}
Java

Here’s our PactMockServer wrapper:

public class PactMockServer {

    private final String url;
    private final int port;

    public PactMockServer(String url, int port) {
        this.url = url;
        this.port = port;
    }

    public String getUrl() {
        return url;
    }

    public int getPort() {
        return port;
    }
}
Java

Implement Mock Server with Wiremock

In the first step, we need to provide an implementation of the QuarkusTestResourceLifecycleManager interface that starts the WireMock server during the tests.

public class WireMockQuarkusTestResource implements 
        QuarkusTestResourceLifecycleManager {
        
    private static final Logger LOGGER = Logger
       .getLogger(WireMockQuarkusTestResource.class);

    private WireMockServer wireMockServer;

    @Override
    public Map<String, String> start() {
        final HashMap<String, String> result = new HashMap<>();

        this.wireMockServer = new WireMockServer(options()
                .dynamicPort()
                .notifier(createNotifier(true)));
        this.wireMockServer.start();

        return result;
    }

    @Override
    public void stop() {
        if (this.wireMockServer != null) {
            this.wireMockServer.stop();
            this.wireMockServer = null;
        }
    }

    @Override
    public void inject(final TestInjector testInjector) {
        testInjector.injectIntoFields(wireMockServer,
          new TestInjector.AnnotatedAndMatchesType(InjectWireMock.class, 
                                                   WireMockServer.class));
    }

    private static Notifier createNotifier(final boolean verbose) {
        final String prefix = "[WireMock] ";
        return new Notifier() {

            @Override
            public void info(final String s) {
                if (verbose) {
                    LOGGER.info(prefix + s);
                }
            }

            @Override
            public void error(final String s) {
                LOGGER.warn(prefix + s);
            }

            @Override
            public void error(final String s, final Throwable throwable) {
                LOGGER.warn(prefix + s, throwable);
            }
        };
    }
}
Java

Let’s create the annotation used for injecting the WireMockServer instance into test classes:

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
public @interface InjectWireMock {
}
Java

I’m not entirely sure it is required, but here’s the test base class extended by our tests.

public class PactConsumerTestBase {

   @InjectWireMock
   protected WireMockServer wiremock;

   @BeforeEach
   void initWiremockBeforeEach() {
      wiremock.resetAll();
      configureFor(new WireMock(this.wiremock));
   }

   protected void forwardToPactServer(final PactMockServer wrapper) {
      wiremock.resetAll();  
      stubFor(any(anyUrl())
         .atPriority(1)
         .willReturn(aResponse().proxiedFrom(wrapper.getUrl()))
      );
   }

}
Java

Here’s the workaround implementation used as the test extension registered with the @ExtendWith annotation:

public class PactMockServerWorkaround implements ParameterResolver {
    
  @Override
  public boolean supportsParameter(ParameterContext parameterContext, 
                                   ExtensionContext extensionContext)
      throws ParameterResolutionException {

     return parameterContext.getParameter().getType() == PactMockServer.class;
  }

  @Override
  @SuppressWarnings("unchecked")
  public Object resolveParameter(ParameterContext parameterContext, 
                                 ExtensionContext extensionContext)
      throws ParameterResolutionException {

      final ExtensionContext.Store store = extensionContext
           .getStore(ExtensionContext.Namespace.create("pact-jvm"));

      if (store.get("providers") == null) {
         return null;
      }

      final List<Pair<ProviderInfo, List<String>>> providers = store
         .get("providers", List.class);
      var pair = providers.get(0);
      final ProviderInfo providerInfo = pair.getFirst();

      var mockServer = store.get("mockServer:" + providerInfo.getProviderName(),
                MockServer.class);

      return new PactMockServer(mockServer.getUrl(), mockServer.getPort());
   }
}
Java

I intentionally do not comment on this workaround in detail. Maybe it could be improved somehow. I wish that everything worked fine right after migrating the Pact extension to Quarkus 3, without any workarounds. However, thanks to the workaround, I was able to run my Pact tests successfully and then update all the required dependencies to the latest versions.

Final Thoughts

This article guides you through the changes required to migrate your microservices and Pact contract tests from Quarkus 2 to 3. For me, it is important to automatically update all the dependencies in my demo projects to keep them up-to-date, as described here. I’m using Renovate to automatically scan and update the Maven pom.xml dependencies. Once it updates the version of a dependency, it runs all the JUnit tests for verification. The process is performed automatically on CircleCI. You can view the build history of the sample repository used in this article.
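Renovate is usually driven by a renovate.json file in the repository root. A minimal configuration could look like the sketch below (the Maven grouping rule is just an illustrative assumption, not copied from the author’s repository):

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "packageRules": [
    {
      "matchManagers": ["maven"],
      "groupName": "Maven dependencies"
    }
  ]
}
```

With such a file in place, Renovate opens pull requests for outdated pom.xml dependencies, and the CI build described above verifies each of them with the Pact tests.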

The post Pact with Quarkus 3 appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2024/04/19/pact-with-quarkus-3/feed/ 0 15216
Serverless on Azure Function with Quarkus https://piotrminkowski.com/2024/01/31/serverless-on-azure-function-with-quarkus/ https://piotrminkowski.com/2024/01/31/serverless-on-azure-function-with-quarkus/#comments Wed, 31 Jan 2024 08:57:11 +0000 https://piotrminkowski.com/?p=14865 This article will teach you how to create and run serverless apps on Azure Function using the Quarkus Funqy extension. You can compare it to the Spring Boot and Spring Cloud support for Azure functions described in my previous article. There are also several other articles about Quarkus on my blog. If you are interested […]

The post Serverless on Azure Function with Quarkus appeared first on Piotr's TechBlog.

]]>
This article will teach you how to create and run serverless apps on Azure Functions using the Quarkus Funqy extension. You can compare it to the Spring Boot and Spring Cloud support for Azure Functions described in my previous article. There are also several other articles about Quarkus on my blog. If you are interested in Kubernetes-native solutions, you can read more about serverless functions on OpenShift here.

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. The Quarkus app used in the article is located in the account-function directory. After you go to that directory you should just follow my further instructions.

Prerequisites

There are some prerequisites before you start the exercise. You need to install JDK 17+ and Maven on your local machine. You also need an Azure account and the az CLI to interact with it. Once you install the az CLI and log in to Azure, you can execute the following command for verification:

$ az account show

If you would like to test Azure Functions locally, you also need to install Azure Functions Core Tools. You can find detailed installation instructions in the Microsoft Docs here. On macOS, there are three commands to run:

$ brew tap azure/functions
$ brew install azure-functions-core-tools@4
$ brew link --overwrite azure-functions-core-tools@4

Create Resources on Azure

Before proceeding with the source code, we must create several required resources in the Azure cloud. In the first step, we will prepare a resource group for all the required objects. The name of the group is quarkus-serverless. The location depends on your preferences; for me, it is eastus.

$ az group create -l eastus -n quarkus-serverless

In the next step, we need to create a storage account. The Azure Functions service requires it, but we will also use that account during local development with Azure Functions Core Tools.

$ az storage account create -n pminkowsserverless \
     -g quarkus-serverless \
     -l eastus \
     --sku Standard_LRS

In order to run serverless apps with the Quarkus Azure extension, we need to create an Azure Function App instance. Of course, we use the previously created resource group and storage account. The name of my Function App instance is pminkows-account-function. We can also set a default OS type (Linux), functions version (4), and runtime stack (Java) for the Function App.

$ az functionapp create -n pminkows-account-function \
     -c eastus \
     --os-type Linux \
     --functions-version 4 \
     -g quarkus-serverless \
     --runtime java \
     --runtime-version 17.0 \
     -s pminkowsserverless

Now, let’s switch to the Azure Portal and find the quarkus-serverless resource group. You should see the same list of resources inside this group as shown below. It means that our environment is ready and we can proceed to the app implementation.

quarkus-azure-function-resources

Building Serverless Apps with Quarkus Funqy HTTP

In this article, we will consider the simplest option for building and running Quarkus apps on Azure Functions. Therefore, we include the Quarkus Funqy HTTP extension. It provides a simple way to expose services as HTTP endpoints, but it shouldn’t be treated as a replacement for REST over HTTP. In case you need the full REST functionality, you can use e.g. the Quarkus RESTEasy module with the Azure Functions Java library. In order to deploy the app on the Azure Functions service, we need to include the quarkus-azure-functions-http extension. Our function will also store data in the H2 in-memory database through the Panache module integration. Here’s the list of required Maven dependencies:

<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-funqy-http</artifactId>
</dependency>
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-azure-functions-http</artifactId>
</dependency>
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-hibernate-orm-panache</artifactId>
</dependency>
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-jdbc-h2</artifactId>
</dependency>

With the quarkus-azure-functions-http extension, we don’t need to include and configure any Maven plugin to deploy the app on Azure. The extension does the whole deployment work for us. By default, Quarkus uses the Azure CLI in the background to authenticate and deploy to Azure. We just need to provide several configuration properties with the quarkus.azure-functions prefix inside the Quarkus application.properties file. In the configuration section, we have to set the name of the Azure Function App instance (pminkows-account-function), the target resource group (quarkus-serverless), the region (eastus), and the service plan (EastUSLinuxDynamicPlan). We will also add several properties responsible for the database connection and for setting the root API context path (/api).

quarkus.azure-functions.app-name = pminkows-account-function
quarkus.azure-functions.app-service-plan-name = EastUSLinuxDynamicPlan
quarkus.azure-functions.resource-group = quarkus-serverless
quarkus.azure-functions.region = eastus
quarkus.azure-functions.runtime.java-version = 17

quarkus.datasource.db-kind = h2
quarkus.datasource.username = sa
quarkus.datasource.password = password
quarkus.datasource.jdbc.url = jdbc:h2:mem:testdb
quarkus.hibernate-orm.database.generation = drop-and-create

quarkus.http.root-path = /api

Here’s our @Entity class. We take advantage of the Quarkus Panache active record pattern.

@Entity
public class Account extends PanacheEntity {
    public String number;
    public int balance;
    public Long customerId;
}

Let’s take a look at the implementation of our Quarkus HTTP functions. By default, with the Quarkus Funqy extension, the URL path used to execute a function is the function name. We just need to annotate the target method with @Funq. In case we want to override the default path, we set the desired name as the annotation’s value field. There are two methods. The addAccount method is responsible for adding new accounts and is exposed under the add-account path. On the other hand, the findByNumber method allows us to find an account by its number. We can access it under the by-number path. This approach allows us to deploy multiple Funqy functions on a single Azure Function.

public class AccountFunctionResource {

    @Inject
    Logger log;

    @Funq("add-account")
    @Transactional
    public Account addAccount(Account account) {
        log.infof("Add: %s", account);
        Account.persist(account);
        return account;
    }

    @Funq("by-number")
    public Account findByNumber(Account account) {
        log.infof("Find: %s", account.number);
        return Account
                .find("number", account.number)
                .singleResult();
    }
}

Running Azure Functions Locally with Quarkus

Before we deploy our functions on Azure, we can run and test them locally. I assume you have already installed the Azure Functions Core Tools according to the “Prerequisites” section. Firstly, we need to build the app with the following Maven command:

$ mvn clean package

Then, we can take advantage of Quarkus Azure Extension and use the following Maven command to run the app in Azure Functions local environment:

$ mvn quarkus:run

Here’s the output after running the command shown above. As you can see, there is just a single Azure function, QuarkusHttp, although we have two methods annotated with @Funq. Quarkus allows us to invoke multiple Funqy functions using a single, wildcarded route: http://localhost:8081/api/{*path}.

quarkus-azure-function-local

All the required Azure Functions configuration files, like host.json, local.settings.json and function.json, are autogenerated by Quarkus during the build. You can find them in the target/azure-functions directory.
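For reference, the generated host.json for an app like this is typically minimal – usually little more than the Functions runtime schema version. The snippet below is a generic example, not copied from the generated output:

```json
{
  "version": "2.0"
}
```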

Here’s the auto-generated function.json with our Azure Function definition:

{
  "scriptFile" : "../account-function-1.0.jar",
  "entryPoint" : "io.quarkus.azure.functions.resteasy.runtime.Function.run",
  "bindings" : [ {
    "type" : "httpTrigger",
    "direction" : "in",
    "name" : "req",
    "route" : "{*path}",
    "methods" : [ "GET", "HEAD", "POST", "PUT", "OPTIONS" ],
    "dataType" : "binary",
    "authLevel" : "ANONYMOUS"
  }, {
    "type" : "http",
    "direction" : "out",
    "name" : "$return"
  } ]
}

Let’s call our local function. In the first step, we will add a new account by calling the addAccount function:

$ curl http://localhost:8081/api/add-account \
    -d "{\"number\":\"124\",\"customerId\":1, \"balance\":1000}" \
    -H "Content-Type: application/json"

Then, we can find the account by its number. For GET requests, the Funqy HTTP binding allows the use of query parameter mapping for function input parameters. The query parameter names are mapped to properties of the bean class.

$ curl http://localhost:8081/api/by-number?number=124

Deploy Quarkus Serverless on Azure Functions

Finally, we can deploy our sample Quarkus serverless app on Azure. As you probably remember, we already have all the required settings in the application.properties file. So now, we just need to run the following Maven command:

$ mvn quarkus:deploy

Here’s the output of the command. As you see, there is still one Azure Function with a wildcard in the path.

Let’s switch to the Azure Portal. Here’s a page with the pminkows-account-function details:

quarkus-azure-function-portal

We can call a similar query several times with different input data to test the service:

$ curl https://pminkows-account-function.azurewebsites.net/api/add-account \
    -d "{\"number\":\"127\",\"customerId\":4, \"balance\":1000}" \
    -H "Content-Type: application/json"

Here’s the invocation history visible in the Azure Monitor for our QuarkusHttp function.

Final Thoughts

In this article, I showed you a simplified scenario of running a Quarkus serverless app on Azure Functions. You don’t need to know much about Azure Functions to run such a service, since Quarkus handles all the required details for you.

The post Serverless on Azure Function with Quarkus appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2024/01/31/serverless-on-azure-function-with-quarkus/feed/ 2 14865
Introduction to gRPC with Quarkus https://piotrminkowski.com/2023/09/15/introduction-to-grpc-with-quarkus/ https://piotrminkowski.com/2023/09/15/introduction-to-grpc-with-quarkus/#respond Fri, 15 Sep 2023 23:11:34 +0000 https://piotrminkowski.com/?p=14508 In this article, you will learn how to implement and consume gRPC services with Quarkus. Quarkus provides built-in support for gRPC through the extension. We will create a simple app, which uses that extension and also interacts with the Postgres database through the Panache Reactive ORM module. You can compare gRPC support available in Quarkus […]

The post Introduction to gRPC with Quarkus appeared first on Piotr's TechBlog.

]]>
In this article, you will learn how to implement and consume gRPC services with Quarkus. Quarkus provides built-in support for gRPC through a dedicated extension. We will create a simple app, which uses that extension and also interacts with a Postgres database through the Panache Reactive ORM module. You can compare the gRPC support available in Quarkus with Spring Boot by reading the following article on my blog. It is a good illustration of what Quarkus may simplify in your development.

Source Code

If you would like to try it by yourself, you can always take a look at my source code. In order to do that, you need to clone my GitHub repository. It contains several different Quarkus apps. For the current article, please refer to the person-grpc-service app. You should go to that directory and then just follow my instructions 🙂

Generate Model Classes and Services for gRPC

In the first step, we will generate model classes and gRPC services using the .proto manifests. The same as for the Spring Boot app, we will create the Protobuf manifest and place it inside the src/main/proto directory. We need to include some additional Protobuf schemas to use the google.protobuf.* package (1). Our gRPC service will provide methods for searching persons using various criteria and a single method for adding a new person (2). Those methods will use primitives from the google.protobuf.* package and model classes defined inside the .proto file as messages. The Person message represents a single model class. It contains four fields: id, name, age and gender (3). The Persons message contains a list of Person objects (4). The gender field inside the Person message is an enum (5).

syntax = "proto3";

package model;

option java_package = "pl.piomin.quarkus.grpc.model";
option java_outer_classname = "PersonProto";

// (1)
import "google/protobuf/empty.proto";
import "google/protobuf/wrappers.proto";

// (2)
service PersonsService {
  rpc FindByName(google.protobuf.StringValue) returns (Persons) {}
  rpc FindByAge(google.protobuf.Int32Value) returns (Persons) {}
  rpc FindById(google.protobuf.Int64Value) returns (Person) {}
  rpc FindAll(google.protobuf.Empty) returns (Persons) {}
  rpc AddPerson(Person) returns (Person) {}
}

// (3)
message Person {
  int64 id = 1;
  string name = 2;
  int32 age = 3;
  Gender gender = 4;
}

// (4)
message Persons {
  repeated Person person = 1;
}

// (5)
enum Gender {
  MALE = 0;
  FEMALE = 1;
}

Once again, I will refer here to my previous article about Spring Boot and gRPC. With Quarkus, we don’t need to include any Maven plugin responsible for generating Java classes. This feature is automatically provided by the Quarkus gRPC module. It saves a lot of time – especially at the beginning with Java gRPC (I know this from my own experience). Of course, if you want to override the default behavior, you can include your own plugin; you can find an example in the official Quarkus docs.

Now, we just need to build the project with the mvn clean package command. It will automatically generate the Java model classes and the gRPC service interfaces from our .proto manifest.

By default, Quarkus generates our gRPC classes in the target/generated-sources/grpc directory. Let’s include it as the source directory using the build-helper-maven-plugin Maven plugin.

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>build-helper-maven-plugin</artifactId>
  <executions>
    <execution>
      <id>add-source</id>
      <phase>generate-sources</phase>
      <goals>
        <goal>add-source</goal>
      </goals>
      <configuration>
        <sources>
          <source>target/generated-sources/grpc</source>
        </sources>
      </configuration>
    </execution>
  </executions>
</plugin>

Dependencies

By default, the quarkus-grpc extension relies on the reactive programming model. Therefore, we will include a reactive database driver and the reactive version of the Panache Hibernate module. It is also worth adding the RESTEasy Reactive module. Thanks to that, we will be able to run e.g. the Quarkus Dev UI, which also provides useful features for gRPC services. Of course, we are going to write some JUnit tests to verify the core functionalities, which is why quarkus-junit5 is included in the Maven pom.xml.

<dependencies>
  <dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-resteasy-reactive-jackson</artifactId>
  </dependency>
  <dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-grpc</artifactId>
  </dependency>
  <dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-hibernate-reactive-panache</artifactId>
  </dependency>
  <dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-reactive-pg-client</artifactId>
  </dependency>
  <dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-junit5</artifactId>
    <scope>test</scope>
  </dependency>
</dependencies>

Using the Quarkus gRPC Extension

Once we have included all the required libraries and generated the classes for the Protobuf integration, we can proceed with the implementation. Let’s begin with the persistence layer. We already have the message classes generated, but we still need to create an entity class. Here’s the PersonEntity class. We will take advantage of PanacheEntity, and thanks to that, we don’t need to define e.g. getters/setters.

@Entity
public class PersonEntity extends PanacheEntity {

    public String name;
    public int age;
    public Gender gender;

}

Thanks to the Quarkus Panache field access rewrite, when your users read person.name, they will actually call the getName() accessor, and similarly, field writes will call the setter. This allows for proper encapsulation at runtime, as all field accesses are replaced by the corresponding getter/setter calls.
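To illustrate what this rewrite means in practice, here is a plain-Java sketch with hypothetical class names. Note that Quarkus performs the actual transformation at the bytecode level; it is only spelled out in source form here:

```java
// What we write with Panache: a public field and no accessor boilerplate.
class PersonSource {
    public String name;
}

// What the compiled entity effectively behaves like after the Panache rewrite:
// the field is encapsulated, and every outside read or write goes through the
// accessors, so any custom logic added to getName()/setName() is always applied.
class PersonRewritten {
    private String name;

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}
```

In your own code, you keep reading and writing person.name directly; the accessor calls are substituted transparently at build time.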

After that, we can create the repository class. It implements the reactive version of the PanacheRepository interface. The important thing is that the reactive PanacheRepository methods return Mutiny Uni objects. Therefore, if we need to add some additional methods to the repository, we should also return the results as Uni objects.

@ApplicationScoped
public class PersonRepository implements PanacheRepository<PersonEntity> {

    public Uni<List<PersonEntity>> findByName(String name){
        return find("name", name).list();
    }

    public Uni<List<PersonEntity>> findByAge(int age){
        return find("age", age).list();
    }
}

Finally, we can proceed to the most important element of our tutorial – the implementation of the gRPC service. The implementation class should be annotated with @GrpcService (1). In order to interact with the database reactively, I also had to annotate it with @WithSession (2). It creates a Mutiny session for each gRPC method inside. We can also register optional interceptors, for example, to log incoming requests and outgoing responses (3). Our service class needs to implement the PersonsService interface generated by the Quarkus gRPC extension (4).

Then let’s go inside the class. In the first step, we inject the repository bean (5). After that, we override all the gRPC methods generated from the .proto manifest (6). All those methods use the PersonRepository bean to interact with the database. Once they obtain a result, it needs to be converted into a Protobuf object (7). When we add a new person to the database, we need to do it within a transaction scope (8).

@GrpcService // (1)
@WithSession // (2)
@RegisterInterceptor(LogInterceptor.class) // (3)
public class PersonsServiceImpl implements PersonsService { // (4)

    private PersonRepository repository; // (5)

    public PersonsServiceImpl(PersonRepository repository) {
        this.repository = repository;
    }

    @Override // (6)
    public Uni<PersonProto.Persons> findByName(StringValue request) {
        return repository.findByName(request.getValue())
                .map(this::mapToPersons); // (7)
    }

    @Override
    public Uni<PersonProto.Persons> findByAge(Int32Value request) {
        return repository.findByAge(request.getValue())
                .map(this::mapToPersons);
    }

    @Override
    public Uni<PersonProto.Person> findById(Int64Value request) {
        return repository.findById(request.getValue())
                .map(this::mapToPerson);
    }

    @Override
    public Uni<PersonProto.Persons> findAll(Empty request) {
        return repository.findAll().list()
                .map(this::mapToPersons);
    }

    @Override
    @WithTransaction // (8)
    public Uni<PersonProto.Person> addPerson(PersonProto.Person request) {
        PersonEntity entity = new PersonEntity();
        entity.age = request.getAge();
        entity.name = request.getName();
        entity.gender = Gender.valueOf(request.getGender().name());
        return repository.persist(entity)
           .map(personEntity -> mapToPerson(entity));
    }

    private PersonProto.Persons mapToPersons(List<PersonEntity> list) {
        PersonProto.Persons.Builder builder = 
           PersonProto.Persons.newBuilder();
        list.forEach(p -> builder.addPerson(mapToPerson(p)));
        return builder.build();
    }

    private PersonProto.Person mapToPerson(PersonEntity entity) {
        PersonProto.Person.Builder builder = 
           PersonProto.Person.newBuilder();
        if (entity != null) {
            return builder.setAge(entity.age)
                    .setName(entity.name)
                    .setId(entity.id)
                    .setGender(PersonProto.Gender
                       .valueOf(entity.gender.name()))
                    .build();
        } else {
            return null;
        }
    }
}

At the end, let’s discuss the optional step. With Quarkus gRPC we can implement a server interceptor by creating an @ApplicationScoped bean implementing ServerInterceptor. In order to apply an interceptor to all exposed services, we should annotate it with @GlobalInterceptor. In our case, the interceptor is registered for a single service with the @RegisterInterceptor annotation. Then we use the SimpleForwardingServerCall class to log outgoing messages, and the SimpleForwardingServerCallListener class to log incoming messages.

@ApplicationScoped
public class LogInterceptor  implements ServerInterceptor {

    Logger log;

    public LogInterceptor(Logger log) {
        this.log = log;
    }

    @Override
    public <ReqT, RespT> ServerCall.Listener<ReqT> interceptCall(ServerCall<ReqT, RespT> call, Metadata headers, ServerCallHandler<ReqT, RespT> next) {

        ServerCall<ReqT, RespT> listener = new ForwardingServerCall.SimpleForwardingServerCall<ReqT, RespT>(call) {
            @Override
            public void sendMessage(RespT message) {
                log.infof("[Sending message] %s",  message.toString().replaceAll("\n", " "));
                super.sendMessage(message);
            }
        };

        return new ForwardingServerCallListener.SimpleForwardingServerCallListener<ReqT>(next.startCall(listener, headers)) {
            @Override
            public void onMessage(ReqT message) {
                log.infof("[Received message] %s", message.toString().replaceAll("\n", " "));
                super.onMessage(message);
            }
        };
    }
}

Quarkus JUnit gRPC Tests

Of course, there must be tests in our app. Thanks to the Quarkus Dev Services and the built-in integration with Testcontainers, we don’t have to take care of starting the database. Just remember to run the Docker daemon on your machine. After annotating the test class with @QuarkusTest, we can inject the gRPC client generated from the .proto manifest with @GrpcClient (1). Then, we can use the PersonsService interface to call our gRPC methods. By default, Quarkus starts gRPC endpoints on port 9000. The Quarkus gRPC client works in reactive mode, so we leverage the CompletableFuture class to obtain and verify results in the tests (2).

@QuarkusTest
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
public class PersonsServiceTests {

    static Long newId;

    @GrpcClient // (1)
    PersonsService client;

    @Test
    @Order(1)
    void shouldAddNew() throws ExecutionException, InterruptedException, TimeoutException {
        CompletableFuture<Long> message = new CompletableFuture<>(); // (2)
        client.addPerson(PersonProto.Person.newBuilder()
                        .setName("Test")
                        .setAge(20)
                        .setGender(PersonProto.Gender.MALE)
                        .build())
                .subscribe().with(res -> message.complete(res.getId()));
        Long id = message.get(1, TimeUnit.SECONDS);
        assertNotNull(id);
        newId = id;
    }

    @Test
    @Order(2)
    void shouldFindAll() throws ExecutionException, InterruptedException, TimeoutException {
        CompletableFuture<List<PersonProto.Person>> message = new CompletableFuture<>();
        client.findAll(Empty.newBuilder().build())
                .subscribe().with(res -> message.complete(res.getPersonList()));
        List<PersonProto.Person> list = message.get(1, TimeUnit.SECONDS);
        assertNotNull(list);
        assertFalse(list.isEmpty());
    }

    @Test
    @Order(2)
    void shouldFindById() throws ExecutionException, InterruptedException, TimeoutException {
        CompletableFuture<PersonProto.Person> message = new CompletableFuture<>();
        client.findById(Int64Value.newBuilder().setValue(newId).build())
                .subscribe().with(message::complete);
        PersonProto.Person p = message.get(1, TimeUnit.SECONDS);
        assertNotNull(p);
        assertEquals("Test", p.getName());
        assertEquals(newId, p.getId());
    }

    @Test
    @Order(2)
    void shouldFindByAge() throws ExecutionException, InterruptedException, TimeoutException {
        CompletableFuture<PersonProto.Persons> message = new CompletableFuture<>();
        client.findByAge(Int32Value.newBuilder().setValue(20).build())
                .subscribe().with(message::complete);
        PersonProto.Persons p = message.get(1, TimeUnit.SECONDS);
        assertNotNull(p);
        assertEquals(1, p.getPersonCount());
    }

    @Test
    @Order(2)
    void shouldFindByName() throws ExecutionException, InterruptedException, TimeoutException {
        CompletableFuture<PersonProto.Persons> message = new CompletableFuture<>();
        client.findByName(StringValue.newBuilder().setValue("Test").build())
                .subscribe().with(message::complete);
        PersonProto.Persons p = message.get(1, TimeUnit.SECONDS);
        assertNotNull(p);
        assertEquals(1, p.getPersonCount());
    }
}
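
The subscribe-and-await pattern used in these tests can be reduced to plain JDK types. The sketch below (the class name and values are illustrative, not part of the sample repository) shows how a callback-style API is bridged to a blocking assertion with a timeout:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class AsyncBridge {

    // Stand-in for a reactive call: an asynchronous callback completes
    // the future, just like subscribe().with(res -> message.complete(...))
    static CompletableFuture<Long> callAsync() {
        CompletableFuture<Long> future = new CompletableFuture<>();
        CompletableFuture.runAsync(() -> future.complete(42L));
        return future;
    }

    public static void main(String[] args) throws Exception {
        // get(timeout) fails the test if no result arrives in time
        Long id = callAsync().get(1, TimeUnit.SECONDS);
        System.out.println(id);
    }
}
```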

Running and Testing Quarkus Locally

Let’s run our Quarkus app locally in the dev mode:

$ mvn quarkus:dev

Quarkus will start the Postgres database and expose the gRPC services on port 9000:

We can go to the Quarkus Dev UI console, available at http://localhost:8080/q/dev-ui. Once you open it, you should see the gRPC tile as shown below:

quarkus-grpc-ui

Click the Services link inside that tile. You will be redirected to a page with the list of available gRPC services. There, we can expand the row with the model.PersonsService name. It allows us to perform a test call of the selected gRPC method. Let’s choose the AddPerson tab. Then we can insert the request in JSON format and send it to the server by clicking the Send button.

quarkus-grpc-ui-send

If you don’t like UI interfaces, you can use the grpcurl CLI instead. By default, the gRPC server is started on port 9000 in PLAINTEXT mode. In order to print the list of available services, we execute the following command:

$ grpcurl --plaintext localhost:9000 list
grpc.health.v1.Health
model.PersonsService

Then, let’s print the list of methods exposed by the model.PersonsService:

$ grpcurl --plaintext localhost:9000 list model.PersonsService
model.PersonsService.AddPerson
model.PersonsService.FindAll
model.PersonsService.FindByAge
model.PersonsService.FindById
model.PersonsService.FindByName

We can also print the details of each method by using the describe keyword in the command:

$ grpcurl --plaintext localhost:9000 describe model.PersonsService.FindById
model.PersonsService.FindById is a method:
rpc FindById ( .google.protobuf.Int64Value ) returns ( .model.Person );

Finally, let’s call the method described by the command above. We are going to find the person previously added via the UI by the value of the id field.

$ grpcurl --plaintext -d '1' localhost:9000 model.PersonsService.FindById
{
  "id": "1",
  "name": "Test",
  "age": 20,
  "gender": "FEMALE"
}
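
The other query methods can be called the same way. For Protobuf wrapper types such as google.protobuf.StringValue, the JSON payload passed with -d is the bare value, so finding our person by name should look roughly like this (a sketch assuming the server from this tutorial is running locally; output omitted, as it depends on the data you inserted):

```shell
# The StringValue payload is passed as a plain JSON string
grpcurl --plaintext -d '"Test"' localhost:9000 model.PersonsService.FindByName
```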

Running Quarkus gRPC on OpenShift

In the last step, we will run our app on OpenShift and interact with the gRPC service through an OpenShift Route. Fortunately, we can leverage the Quarkus extension for OpenShift. We include the quarkus-openshift dependency in an optional Maven profile.

<profile>
  <id>openshift</id>
  <activation>
  <property>
    <name>openshift</name>
  </property>
  </activation>
  <dependencies>
    <dependency>
      <groupId>io.quarkus</groupId>
      <artifactId>quarkus-openshift</artifactId>
    </dependency>
  </dependencies>
  <properties>
    <quarkus.kubernetes.deploy>true</quarkus.kubernetes.deploy>
    <quarkus.profile>openshift</quarkus.profile>
  </properties>
</profile>

Once we run a Maven build with the -Popenshift option, it activates the profile. Thanks to that, Quarkus handles everything required to build the image and run it on the target cluster.

$ mvn clean package -Popenshift -DskipTests

In order to test our gRPC service via the OpenShift Route, we need to expose it over SSL/TLS. We use a secret that contains both the certificate and the private key, issued by cert-manager for the Route hostname.

Quarkus Kubernetes Extension offers the ability to automatically generate Kubernetes resources based on the defaults and user-supplied configuration using dekorate. It currently supports generating resources for vanilla Kubernetes, OpenShift, and Knative.

We need to configure several things in the Quarkus app that we didn’t have to take care of when running in dev mode. First of all, we need to provide the Postgres database connection settings (1). Thanks to the Quarkus OpenShift module, we can generate the YAML manifests using configuration properties. The database is available on OpenShift under the person-db address. The credentials are taken from the person-db secret (2).

Then we mount the secret with the TLS certificate and private key into the DeploymentConfig (3). It will be available inside the pod under the /mnt/app-secret path. Next, we can enable SSL for the gRPC service by setting the certificate and private key for the server (4). We should also enable the reflection service to allow tools like grpcurl to interact with our services. Once SSL/TLS for the service is configured, we can create a passthrough Route that exposes it outside the OpenShift cluster (5).

# (1)
%prod.quarkus.datasource.username = ${POSTGRES_USER}
%prod.quarkus.datasource.password = ${POSTGRES_PASSWORD}
%prod.quarkus.datasource.reactive.url = vertx-reactive:postgresql://person-db:5432/${POSTGRES_DB}

# (2)
quarkus.openshift.env.mapping.postgres_user.from-secret = person-db
quarkus.openshift.env.mapping.postgres_user.with-key = database-user
quarkus.openshift.env.mapping.postgres_password.from-secret = person-db
quarkus.openshift.env.mapping.postgres_password.with-key = database-password
quarkus.openshift.env.mapping.postgres_db.from-secret = person-db
quarkus.openshift.env.mapping.postgres_db.with-key = database-name

# (3)
quarkus.openshift.app-secret = secure-callme-cert

# (4)
%prod.quarkus.grpc.server.ssl.certificate = /mnt/app-secret/tls.crt
%prod.quarkus.grpc.server.ssl.key = /mnt/app-secret/tls.key
%prod.quarkus.grpc.server.enable-reflection-service = true

# (5)
quarkus.openshift.route.expose = true
quarkus.openshift.route.target-port = grpc
quarkus.openshift.route.tls.termination = passthrough

quarkus.kubernetes-client.trust-certs = true
%openshift.quarkus.container-image.group = demo-grpc
%openshift.quarkus.container-image.registry = image-registry.openshift-image-registry.svc:5000

Here are the logs of the Quarkus app running on OpenShift. As you can see, the gRPC server enables the reflection service and is exposed over SSL/TLS.

quarkus-grpc-logs

Let’s also display information about our app Route. We will interact with the person-grpc-service-demo-grpc.apps-crc.testing address using the grpcurl tool.

$ oc get route
NAME                  HOST/PORT                                        PATH   SERVICES              PORT   TERMINATION   WILDCARD
person-grpc-service   person-grpc-service-demo-grpc.apps-crc.testing          person-grpc-service   grpc   passthrough   None

Finally, we have to pass the key and certificates used for securing the gRPC server as parameters of grpcurl. The call should work properly, as shown below. You can also try the other grpcurl commands used previously for the app running locally in dev mode.

$ grpcurl -key grpc.key -cert grpc.crt -cacert ca.crt \
  person-grpc-service-demo-grpc.apps-crc.testing:443 list
grpc.health.v1.Health
model.PersonsService

Final Thoughts

Quarkus simplifies the development of gRPC services. For example, it allows us to easily generate Java classes from .proto files or configure SSL/TLS for the server in application.properties. No matter if we run the app locally or remotely, e.g. on an OpenShift cluster, we can do it easily using Quarkus features like Dev Services or the Kubernetes extension. This article showed how to take advantage of Quarkus gRPC features and use gRPC services on OpenShift.

The post Introduction to gRPC with Quarkus appeared first on Piotr's TechBlog.

Contract Testing with Quarkus and Pact https://piotrminkowski.com/2023/05/09/contract-testing-with-quarkus-and-pact/ https://piotrminkowski.com/2023/05/09/contract-testing-with-quarkus-and-pact/#respond Tue, 09 May 2023 09:57:17 +0000 https://piotrminkowski.com/?p=14166 In this article, you will learn how to create contract tests for Quarkus apps using Pact. Consumer-driven contract testing is one of the most popular strategies for verifying communication between microservices. In short, it is an approach to ensure that services can successfully communicate with each other without implementing integration tests. There are some tools […]

The post Contract Testing with Quarkus and Pact appeared first on Piotr's TechBlog.

In this article, you will learn how to create contract tests for Quarkus apps using Pact. Consumer-driven contract testing is one of the most popular strategies for verifying communication between microservices. In short, it is an approach to ensure that services can successfully communicate with each other without implementing integration tests. There are some tools especially dedicated to such a type of test. One of them is Pact. We can use this code-first tool with multiple languages including .NET, Go, Ruby, and of course, Java.

Before you start, it is worth familiarizing yourself with the Quarkus framework. There are several articles about Quarkus on my blog. If you want to read about interesting and useful Quarkus features, please refer to the post “Quarkus Tips, Tricks and Techniques” available here. For some more advanced features, like testing strategies, you can read the “Advanced Testing with Quarkus” article available here.

Source Code

If you would like to try it yourself, you can always take a look at my source code. In order to do that, you need to clone my GitHub repository and follow my instructions. Let’s begin.

Architecture

In today’s exercise we will add several contract tests to the existing architecture of sample Quarkus microservices. There are three sample apps that communicate with each other over HTTP. We use the Quarkus declarative REST client to call remote HTTP endpoints. The department-service app calls the endpoint exposed by the employee-service app to get a list of employees assigned to a particular department. On the other hand, the organization-service app calls endpoints exposed by both department-service and employee-service.

We will implement some contract tests to verify described interactions. Each contract is signed between two sides of communication: the consumer and the provider. Pact assumes that contract code is generated and published by the consumer side, and then verified by the provider side. It provides a tool for storing and sharing contracts between consumers and providers – Pact Broker. Pact Broker exposes a simple RESTful API for publishing and retrieving contracts, and an embedded web dashboard for navigating the API. We will run it as a Docker container. However, our goal is also to run it during the CI build and then use it to exchange contracts between the tests.

Here’s the diagram that illustrates the described architecture.

quarkus-pact-arch

Running Pact Broker

Before we create any tests, we will start the Pact broker on the local machine. In order to do that, we need to run two containers in Docker. The Pact broker requires a database, so in the first step we will start the postgres container:

$ docker run -d --name postgres \
  -p 5432:5432 \
  -e POSTGRES_USER=pact \ 
  -e POSTGRES_PASSWORD=pact123 \
  -e POSTGRES_DB=pact \
  postgres

After that, we can run the container with the Pact broker. We will link it to the postgres container and set the authentication credentials:

$ docker run -d --name pact-broker \
  --link postgres:postgres \
  -e PACT_BROKER_DATABASE_USERNAME=pact \
  -e PACT_BROKER_DATABASE_PASSWORD=pact123 \ 
  -e PACT_BROKER_DATABASE_HOST=postgres \
  -e PACT_BROKER_DATABASE_NAME=pact \
  -p 9292:9292 \
  pactfoundation/pact-broker

If you prefer to run everything with a single command, you can use the docker-compose.yml file in the repository root directory. It runs not only Postgres and the Pact broker, but also our three sample microservices.

version: "3.7"
services:
  postgres:
    container_name: postgres
    image: postgres
    environment:
      POSTGRES_USER: pact
      POSTGRES_PASSWORD: pact123
      POSTGRES_DB: pact
    ports:
      - "5432"
  pact-broker:
    container_name: pact-broker
    image: pactfoundation/pact-broker
    ports:
      - "9292:9292"
    depends_on:
      - postgres
    links:
      - postgres
    environment:
      PACT_BROKER_DATABASE_USERNAME: pact
      PACT_BROKER_DATABASE_PASSWORD: pact123
      PACT_BROKER_DATABASE_HOST: postgres
      PACT_BROKER_DATABASE_NAME: pact
  employee:
    image: quarkus/employee-service:1.2
    ports:
      - "8080"
  department:
    image: quarkus/department-service:1.1
    ports:
      - "8080"
    links:
      - employee
  organization:
    image: quarkus/organization-service:1.1
    ports:
      - "8080"
    links:
      - employee
      - department

Since the docker-compose.yml includes the images of our sample microservices, you first need to build the Docker images of the apps. We can easily do it with Quarkus. Once we have included the quarkus-container-image-jib dependency, we can build the images using the Jib Maven plugin by activating the quarkus.container-image.build property as shown below. Additionally, don’t forget to skip the tests.

$ mvn clean package -DskipTests -Dquarkus.container-image.build=true

Then just run the following command:

$ docker compose up

Finally, you can access the Pact broker UI under the http://localhost:9292 address. Of course, there are no contracts saved there, so you just see the example pact.
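
You can also verify from the command line that the broker started correctly. The Pact Broker exposes a diagnostic heartbeat endpoint for this purpose (assuming the container from the previous step is running):

```shell
# Returns a small JSON status document when the broker is up
curl http://localhost:9292/diagnostic/status/heartbeat
```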

Create Contract Test for Consumer

Once we have started the Pact broker, we can proceed to the implementation of the tests. We will start from the consumer side. Both department-service and organization-service consume endpoints exposed by the employee-service. In the first step, we will add the Quarkus Pact Consumer extension to the Maven dependencies.

<dependency>
  <groupId>io.quarkiverse.pact</groupId>
  <artifactId>quarkus-pact-consumer</artifactId>
  <version>1.0.0.Final</version>
  <scope>provided</scope>
</dependency>

Here’s the REST client interface responsible for calling the employee-service GET /employees/department/{id} endpoint from the department-service.

@ApplicationScoped
@Path("/employees")
@RegisterRestClient(configKey = "employee")
public interface EmployeeClient {

    @GET
    @Path("/department/{departmentId}")
    @Produces(MediaType.APPLICATION_JSON)
    List<Employee> findByDepartment(@PathParam("departmentId") Long departmentId);

}

We will test the EmployeeClient directly in the contract test. In order to implement a contract test on the consumer side, we need to declare the PactConsumerTestExt JUnit 5 extension. In the callFindByDepartment method, we prepare the expected response template as a RequestResponsePact object. The method should return an array of employees, therefore we use PactDslJsonArray to construct the required object. The name of the provider is employee-service, while the name of the consumer is department-service. In order to use the Pact MockServer, I had to declare v3 of the Pact specification instead of the latest v4. Then we set the mock server address as the RestClientBuilder base URI and test the contract.

@QuarkusTest
@ExtendWith(PactConsumerTestExt.class)
public class EmployeeClientContractTests {

    @Pact(provider = "employee-service", 
          consumer = "department-service")
    public RequestResponsePact callFindByDepartment(
        PactDslWithProvider builder) {
        DslPart body = PactDslJsonArray.arrayEachLike()
                .integerType("id")
                .stringType("name")
                .stringType("position")
                .numberType("age")
                .closeObject();
        return builder.given("findByDepartment")
                .uponReceiving("findByDepartment")
                    .path("/employees/department/1")
                    .method("GET")
                .willRespondWith()
                    .status(200)
                    .body(body).toPact();
    }

    @Test
    @PactTestFor(providerName = "employee-service", 
                 pactVersion = PactSpecVersion.V3)
    public void verifyFindDepartmentPact(MockServer mockServer) {
        EmployeeClient client = RestClientBuilder.newBuilder()
                .baseUri(URI.create(mockServer.getUrl()))
                .build(EmployeeClient.class);
        List<Employee> employees = client.findByDepartment(1L);
        assertNotNull(employees);
        assertTrue(employees.size() > 0);
        assertNotNull(employees.get(0).getId());
    }
}

The test for the integration between organization-service and department-service is pretty similar. Let’s take a look at the REST client interface.

@ApplicationScoped
@Path("/departments")
@RegisterRestClient(configKey = "department")
public interface DepartmentClient {

    @GET
    @Path("/organization/{organizationId}")
    @Produces(MediaType.APPLICATION_JSON)
    List<Department> findByOrganization(@PathParam("organizationId") Long organizationId);

    @GET
    @Path("/organization/{organizationId}/with-employees")
    @Produces(MediaType.APPLICATION_JSON)
    List<Department> findByOrganizationWithEmployees(@PathParam("organizationId") Long organizationId);

}

Here’s the implementation of our contract test. However, instead of a single endpoint, we are testing two interactions: GET /departments/organization/{id} and GET /departments/organization/{id}/with-employees.

@QuarkusTest
@ExtendWith(PactConsumerTestExt.class)
public class DepartmentClientContractTests {

   @Pact(provider = "department-service", 
         consumer = "organization-service")
   public RequestResponsePact callFindDepartment(
      PactDslWithProvider builder) {
      DslPart body = PactDslJsonArray.arrayEachLike()
            .integerType("id")
            .stringType("name")
            .closeObject();
      DslPart body2 = PactDslJsonArray.arrayEachLike()
            .integerType("id")
            .stringType("name")
            .array("employees")
               .object()
                  .integerType("id")
                  .stringType("name")
                  .stringType("position")
                  .integerType("age")
               .closeObject()
            .closeArray();
      return builder
            .given("findByOrganization")
               .uponReceiving("findByOrganization")
                  .path("/departments/organization/1")
                  .method("GET")
               .willRespondWith()
                  .status(200)
                  .body(body)
            .given("findByOrganizationWithEmployees")
               .uponReceiving("findByOrganizationWithEmployees")
                  .path("/departments/organization/1/with-employees")
                  .method("GET")
               .willRespondWith()
                  .status(200)
                  .body(body2)
            .toPact();
   }

   @Test
   @PactTestFor(providerName = "department-service", 
                pactVersion = PactSpecVersion.V3)
   public void verifyFindByOrganizationPact(MockServer mockServer) {
      DepartmentClient client = RestClientBuilder.newBuilder()
             .baseUri(URI.create(mockServer.getUrl()))
             .build(DepartmentClient.class);
      List<Department> departments = client.findByOrganization(1L);
      assertNotNull(departments);
      assertTrue(departments.size() > 0);
      assertNotNull(departments.get(0).getId());

      departments = client.findByOrganizationWithEmployees(1L);
      assertNotNull(departments);
      assertTrue(departments.size() > 0);
      assertNotNull(departments.get(0).getId());
      assertFalse(departments.get(0).getEmployees().isEmpty());
   }

}

Publish Contracts to the Pact Broker

That’s not all. We are still on the consumer side. After running the tests, we need to publish the contracts to the Pact broker. This is not performed automatically by Pact. To achieve it, we first need to include the following Maven plugin:

<plugin>
  <groupId>au.com.dius.pact.provider</groupId>
  <artifactId>maven</artifactId>
  <version>4.6.0</version>
  <configuration>
    <pactBrokerUrl>http://localhost:9292</pactBrokerUrl>
  </configuration>
</plugin>

In order to publish the contracts after the tests, we need to add the pact:publish goal to the build command as shown below.

$ mvn clean package pact:publish

Now, we can switch to the Pact Broker UI. As you can see, there are several pacts generated during our tests. We can recognize them by the names of the consumer and the provider.

quarkus-pact-broker

We can go to the details of each contract. Here’s the description of the integration between the department-service and employee-service.

Create Contract Test for Provider

Once we have published the pacts to the broker, we can proceed to the implementation of contract tests on the provider side. In our case, the provider is the employee-service. First, let’s include the Quarkus Pact Provider extension in the Maven dependencies.

<dependency>
  <groupId>io.quarkiverse.pact</groupId>
  <artifactId>quarkus-pact-provider</artifactId>
  <version>1.0.0.Final</version>
  <scope>test</scope>
</dependency>

We need to annotate the test class with the @Provider annotation and pass the name of the provider used on the consumer side (1). In the @PactBroker annotation, we have to pass the address of the broker (2). The test will load the contract published by the consumer side and test it against the running instance of the Quarkus app (under the test instance port) (3). We also need to extend the test template with the PactVerificationInvocationContextProvider class (4). Thanks to that, Pact will trigger the verification of contracts for each interaction defined by the @State method (6) (7). We also let Pact publish the verification results of each contract to the Pact broker (5).

@QuarkusTest
@Provider("employee-service") // (1)
@PactBroker(url = "http://localhost:9292") // (2)
public class EmployeeContractTests {

   @ConfigProperty(name = "quarkus.http.test-port") 
   int quarkusPort;
    
   // Note: quarkusPort is not yet injected at field initialization time,
   // which is why the target is re-created in beforeEach() below
   @TestTarget
   HttpTestTarget target = new HttpTestTarget("localhost", 
      this.quarkusPort); // (3)

   @TestTemplate
   @ExtendWith(PactVerificationInvocationContextProvider.class) // (4)
   void pactVerificationTestTemplate(PactVerificationContext context) {
      context.verifyInteraction();
      System.setProperty("pact.provider.version", "1.2"); 
      System.setProperty("pact.verifier.publishResults", "true"); // (5)
   }

   @BeforeEach
   void beforeEach(PactVerificationContext context) {
      context.setTarget(new HttpTestTarget("localhost",
         this.quarkusPort));
   }

   @State("findByDepartment") // (6)
   void findByDepartment() {

   }

   @State("findByOrganization") // (7)
   void findByOrganization() {

   }
}

The value of @State refers to the name of the provider state set on the consumer side with the given(...) method. For example, line (6) in the source code above verifies the contract defined in the department-service as shown below.

As I mentioned before, the contract verification results are published to the Pact broker. You can check the verification status in the Pact Broker UI:

quarkus-pact-verification

Run Tests in the CI Pipeline

Our last goal is to prepare the CI process for running the Pact broker and the contract tests during the build of our Quarkus apps. We will use CircleCI for that. Before running the contract tests, we need to start the Pact broker container using Docker Compose. In order to do that, we first use a Linux machine as the default executor together with the docker orb (1). After that, we install Docker Compose and use it to run the configuration already prepared in our docker-compose.yml file (2) (3). Then we use the maven orb to run the tests and publish the contracts to the broker instance started during the build (4).

version: 2.1

jobs:
  analyze:
    executor: # (1)
      name: docker/machine
      image: ubuntu-2204:2022.04.2
    steps:
      - checkout
      - docker/install-docker-compose # (2)
      - maven/with_cache:
          steps:
            - run:
                name: Build Images
                command: mvn package -DskipTests -Dquarkus.container-image.build=true
      - run: # (3)
          name: Run Pact Broker
          command: docker-compose up -d
      - maven/with_cache: # (4)
          steps:
            - run:
                name: Run Tests
                command: mvn package pact:publish -Dquarkus.container-image.build=false
      - maven/with_cache:
          steps:
            - run:
                name: Sonar Analysis
                command: mvn package sonar:sonar -DskipTests -Dquarkus.container-image.build=false


orbs:
  maven: circleci/maven@1.4.1
  docker: circleci/docker@2.2.0

workflows:
  maven_test:
    jobs:
      - analyze:
          context: SonarCloud

Here’s the final result of our build.

Final Thoughts

Contract testing is a useful approach to verifying interactions between microservices. Thanks to the Quarkus Pact extensions, you can easily implement contract tests for your Quarkus apps. In this article, I showed how to use a Pact broker to store and share contracts between the tests. However, you can also use the @PactFolder option to keep the contract JSON manifests inside the Git repository.
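
For completeness, here is a minimal sketch of that folder-based variant (assuming a pacts directory inside the provider module; the rest of the test class stays as shown earlier):

```java
// Instead of @PactBroker, load the contracts from a local folder
// committed to the Git repository
@QuarkusTest
@Provider("employee-service")
@PactFolder("pacts") // relative path containing the pact JSON files
public class EmployeeContractTests {
    // ... same @TestTemplate and @State methods as before
}
```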
