quarkus-rest Archives - Piotr's TechBlog
https://piotrminkowski.com/tag/quarkus-rest/ (Java, Spring, Kotlin, microservices, Kubernetes, containers)

Consul with Quarkus and SmallRye Stork
https://piotrminkowski.com/2024/11/18/consul-with-quarkus-and-smallrye-stork/ (Mon, 18 Nov 2024)
This article will teach you how to use HashiCorp Consul as a discovery and configuration server for your Quarkus microservices. I wrote a similar article some years ago; however, there have been several significant improvements in the Quarkus ecosystem since then. What I have in mind is mainly the Quarkus Stork project. This extension focuses on service discovery and load balancing for cloud-native applications. It integrates seamlessly with Consul or Kubernetes discovery and provides various load balancer types on top of the Quarkus REST client. Our sample applications will also load configuration properties from the Consul Key-Value store and use the SmallRye Mutiny Consul client to register the app in the discovery server.

If you are looking for other interesting articles about Quarkus, you will find them on my blog. For example, you can read more about testing strategies with Quarkus and Pact here.

Source Code

If you would like to try it yourself, you may always take a look at my source code. To do that, you must clone my sample GitHub repository. Then just follow my instructions 🙂

Architecture

Before proceeding to the implementation, let's take a look at the diagram of our system architecture. There are three microservices: employee-service, department-service, and organization-service. They communicate with each other through a REST API and use the Consul Key-Value store as a distributed configuration backend. Every service instance registers itself in Consul. A load balancer is included in the application: it reads the list of registered instances of a target service from Consul using the Quarkus Stork extension, then chooses an instance using the configured algorithm.

Running Consul Instance

We will run a single-node Consul instance as a Docker container. By default, Consul exposes its HTTP API and a UI console on port 8500. Let's expose that port outside the container.

docker run -d --name=consul \
   -e CONSUL_BIND_INTERFACE=eth0 \
   -p 8500:8500 \
   consul
ShellSession
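Once the container is running, you can quickly verify that the agent responds; the /v1/status/leader endpoint is part of Consul's HTTP API:

```
# Prints the address of the current cluster leader, e.g. "172.17.0.2:8300"
curl http://localhost:8500/v1/status/leader
```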

Dependencies

Let's analyze the list of the most important Maven dependencies, using the department-service application as an example. Our application exposes REST endpoints and connects to an in-memory H2 database. We use the Quarkus REST client and the SmallRye Stork service discovery library to implement communication between the microservices. On the other hand, the io.quarkiverse.config:quarkus-config-consul module is responsible for reading configuration properties from the Consul Key-Value store. With the smallrye-mutiny-vertx-consul-client library, the application can interact directly with the Consul HTTP API. This may not be necessary in the future, once the Stork project implements its registration and deregistration mechanism; currently, it is not ready. Finally, we will use Testcontainers to run Consul and test our apps against it with Quarkus JUnit support.

	<dependencies>
		<dependency>
			<groupId>io.quarkus</groupId>
			<artifactId>quarkus-rest-jackson</artifactId>
		</dependency>
		<dependency>
			<groupId>io.quarkus</groupId>
			<artifactId>quarkus-rest-client-jackson</artifactId>
		</dependency>
		<dependency>
			<groupId>io.quarkus</groupId>
			<artifactId>quarkus-hibernate-orm-panache</artifactId>
		</dependency>
		<dependency>
			<groupId>io.quarkus</groupId>
			<artifactId>quarkus-jdbc-h2</artifactId>
		</dependency>
		<dependency>
			<groupId>com.h2database</groupId>
			<artifactId>h2</artifactId>
			<scope>runtime</scope>
		</dependency>
		<dependency>
			<groupId>io.quarkus</groupId>
			<artifactId>quarkus-smallrye-stork</artifactId>
		</dependency>
		<dependency>
			<groupId>io.smallrye.reactive</groupId>
			<artifactId>smallrye-mutiny-vertx-consul-client</artifactId>
		</dependency>
		<dependency>
			<groupId>io.smallrye.stork</groupId>
			<artifactId>stork-service-discovery-consul</artifactId>
		</dependency>
		<dependency>
			<groupId>io.smallrye.stork</groupId>
			<artifactId>stork-service-registration-consul</artifactId>
		</dependency>
		<dependency>
			<groupId>io.quarkus</groupId>
			<artifactId>quarkus-scheduler</artifactId>
		</dependency>
		<dependency>
			<groupId>io.quarkiverse.config</groupId>
			<artifactId>quarkus-config-consul</artifactId>
			<version>${quarkus-consul.version}</version>
		</dependency>
		<dependency>
			<groupId>io.rest-assured</groupId>
			<artifactId>rest-assured</artifactId>
			<scope>test</scope>
		</dependency>
		<dependency>
			<groupId>io.quarkus</groupId>
			<artifactId>quarkus-junit5</artifactId>
			<scope>test</scope>
		</dependency>
		<dependency>
			<groupId>org.testcontainers</groupId>
			<artifactId>consul</artifactId>
			<version>1.20.3</version>
			<scope>test</scope>
		</dependency>
		<dependency>
			<groupId>org.testcontainers</groupId>
			<artifactId>junit-jupiter</artifactId>
			<version>1.20.3</version>
			<scope>test</scope>
		</dependency>
	</dependencies>
XML

Discovery and Load Balancing with Quarkus Stork for Consul

Let's begin with the Quarkus Stork part. In the previous section, we included the libraries required to provide service discovery and load balancing with Stork: quarkus-smallrye-stork and stork-service-discovery-consul. Now, we can proceed to the implementation. Here's the EmployeeClient interface from the department-service, responsible for calling the GET /employees/department/{departmentId} endpoint exposed by the employee-service. Instead of setting the target URL inside the @RegisterRestClient annotation, we refer to the name of the service registered in Consul.

@Path("/employees")
@RegisterRestClient(baseUri = "stork://employee-service")
public interface EmployeeClient {

    @GET
    @Path("/department/{departmentId}")
    @Produces(MediaType.APPLICATION_JSON)
    List<Employee> findByDepartment(@PathParam("departmentId") Long departmentId);

}
Java

That service name should also be used in the configuration properties. The following property indicates that Stork will use Consul as a discovery server for the employee-service name.

quarkus.stork.employee-service.service-discovery.type = consul
Plaintext
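By default, Stork looks for the Consul agent on localhost:8500; the agent location and the refresh interval can be tuned with additional properties. A sketch for a local setup (the values are illustrative, not taken from the article's repository):

```properties
quarkus.stork.employee-service.service-discovery.type = consul
quarkus.stork.employee-service.service-discovery.consul-host = localhost
quarkus.stork.employee-service.service-discovery.consul-port = 8500
# How often Stork refreshes the instance list from Consul
quarkus.stork.employee-service.service-discovery.refresh-period = 5s
```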

Once we create a REST client with the additional annotations, we must inject it into the DepartmentResource class using the @RestClient annotation. Afterward, we can use that client to interact with the employee-service while calling the GET /departments/organization/{organizationId}/with-employees from the department-service.

@Path("/departments")
@Produces(MediaType.APPLICATION_JSON)
public class DepartmentResource {

    private Logger logger;
    private DepartmentRepository repository;
    private EmployeeClient employeeClient;

    public DepartmentResource(Logger logger,
                              DepartmentRepository repository,
                              @RestClient EmployeeClient employeeClient) {
        this.logger = logger;
        this.repository = repository;
        this.employeeClient = employeeClient;
    }

    // ... other methods for REST endpoints 

    @Path("/organization/{organizationId}")
    @GET
    public List<Department> findByOrganization(@PathParam("organizationId") Long organizationId) {
        logger.infof("Department find: organizationId=%d", organizationId);
        return repository.findByOrganization(organizationId);
    }

    @Path("/organization/{organizationId}/with-employees")
    @GET
    public List<Department> findByOrganizationWithEmployees(@PathParam("organizationId") Long organizationId) {
        logger.infof("Department find with employees: organizationId=%d", organizationId);
        List<Department> departments = repository.findByOrganization(organizationId);
        departments.forEach(d -> d.setEmployees(employeeClient.findByDepartment(d.getId())));
        return departments;
    }

}
Java

Let’s take a look at the implementation of the GET /employees/department/{departmentId} in the employee-service called by the EmployeeClient in the department-service.

@Path("/employees")
@Produces(MediaType.APPLICATION_JSON)
public class EmployeeResource {

    private Logger logger;
    private EmployeeRepository repository;

    public EmployeeResource(Logger logger,
                            EmployeeRepository repository) {
        this.logger = logger;
        this.repository = repository;
    }

    @Path("/department/{departmentId}")
    @GET
    public List<Employee> findByDepartment(@PathParam("departmentId") Long departmentId) {
        logger.infof("Employee find: departmentId=%s", departmentId);
        return repository.findByDepartment(departmentId);
    }

    @Path("/organization/{organizationId}")
    @GET
    public List<Employee> findByOrganization(@PathParam("organizationId") Long organizationId) {
        logger.infof("Employee find: organizationId=%s", organizationId);
        return repository.findByOrganization(organizationId);
    }
    
    // ... other methods for REST endpoints

}
Java

Similarly in the organization-service, we define two REST clients for interacting with employee-service and department-service.

@Path("/departments")
@RegisterRestClient(baseUri = "stork://department-service")
public interface DepartmentClient {

    @GET
    @Path("/organization/{organizationId}")
    @Produces(MediaType.APPLICATION_JSON)
    List<Department> findByOrganization(@PathParam("organizationId") Long organizationId);

    @GET
    @Path("/organization/{organizationId}/with-employees")
    @Produces(MediaType.APPLICATION_JSON)
    List<Department> findByOrganizationWithEmployees(@PathParam("organizationId") Long organizationId);

}

@Path("/employees")
@RegisterRestClient(baseUri = "stork://employee-service")
public interface EmployeeClient {

    @GET
    @Path("/organization/{organizationId}")
    @Produces(MediaType.APPLICATION_JSON)
    List<Employee> findByOrganization(@PathParam("organizationId") Long organizationId);

}
Java

This requires including the following two configuration properties, which set the discovery type for the target services.

quarkus.stork.employee-service.service-discovery.type = consul
quarkus.stork.department-service.service-discovery.type = consul
Plaintext

The OrganizationResource class injects and uses both previously created clients.

@Path("/organizations")
@Produces(MediaType.APPLICATION_JSON)
public class OrganizationResource {

    private Logger logger;
    private OrganizationRepository repository;
    private DepartmentClient departmentClient;
    private EmployeeClient employeeClient;

    public OrganizationResource(Logger logger,
                                OrganizationRepository repository,
                                @RestClient DepartmentClient departmentClient,
                                @RestClient EmployeeClient employeeClient) {
        this.logger = logger;
        this.repository = repository;
        this.departmentClient = departmentClient;
        this.employeeClient = employeeClient;
    }

    // ... other methods for REST endpoints

    @Path("/{id}/with-departments")
    @GET
    public Organization findByIdWithDepartments(@PathParam("id") Long id) {
        logger.infof("Organization find with departments: id=%d", id);
        Organization organization = repository.findById(id);
        organization.setDepartments(departmentClient.findByOrganization(organization.getId()));
        return organization;
    }

    @Path("/{id}/with-departments-and-employees")
    @GET
    public Organization findByIdWithDepartmentsAndEmployees(@PathParam("id") Long id) {
        logger.infof("Organization find with departments and employees: id=%d", id);
        Organization organization = repository.findById(id);
        organization.setDepartments(departmentClient.findByOrganizationWithEmployees(organization.getId()));
        return organization;
    }

    @Path("/{id}/with-employees")
    @GET
    public Organization findByIdWithEmployees(@PathParam("id") Long id) {
        logger.infof("Organization find with employees: id={}", id);
        Organization organization = repository.findById(id);
        organization.setEmployees(employeeClient.findByOrganization(organization.getId()));
        return organization;
    }

}
Java

Registration in Consul with Quarkus

After including Stork, the Quarkus REST client automatically splits traffic between all the instances of the application existing in the discovery server. However, each application must register itself in the discovery server; Quarkus Stork won't do that for us. Theoretically, there is the stork-service-registration-consul module, which should register the application instance on startup. As far as I know, this feature is still under active development. For now, we will include the mentioned library and use the following property to enable the registrar feature.

quarkus.stork.employee-service.service-registrar.type = consul
Plaintext

Our sample applications will interact directly with the Consul server using the SmallRye Mutiny reactive client. Let's define the ConsulClient bean. It is registered only if the quarkus.stork.employee-service.service-registrar.type property with the consul value exists.

@ApplicationScoped
public class EmployeeBeanProducer {

    @ConfigProperty(name = "consul.host", defaultValue = "localhost")  String host;
    @ConfigProperty(name = "consul.port", defaultValue = "8500") int port;

    @Produces
    @LookupIfProperty(name = "quarkus.stork.employee-service.service-registrar.type", 
                      stringValue = "consul")
    public ConsulClient consulClient(Vertx vertx) {
        return ConsulClient.create(vertx, new ConsulClientOptions()
                .setHost(host)
                .setPort(port));
    }

}
Java

The bean responsible for handling the startup and shutdown events is annotated with @ApplicationScoped. It defines two methods, onStart and onStop, and injects the ConsulClient bean. Quarkus generates the HTTP listen port dynamically on startup and saves it in the quarkus.http.port property. Therefore, the startup task needs to wait a moment to ensure that the application is running; we run it 3 seconds after receiving the startup event. Every instance of the application needs a unique id in Consul, so we retrieve the running port number and use it as the id suffix. The name of the service is taken from the quarkus.application.name property. The application instance saves its id in order to deregister itself on shutdown.

@ApplicationScoped
public class EmployeeLifecycle {

    @ConfigProperty(name = "quarkus.application.name")
    private String appName;
    private int port;

    private Logger logger;
    private Instance<ConsulClient> consulClient;
    private ScheduledExecutorService executor;

    public EmployeeLifecycle(Logger logger,
                             Instance<ConsulClient> consulClient,
                             ScheduledExecutorService executor) {
        this.logger = logger;
        this.consulClient = consulClient;
        this.executor = executor;
    }

    void onStart(@Observes StartupEvent ev) {
        if (consulClient.isResolvable()) {
            executor.schedule(() -> {
                port = ConfigProvider.getConfig().getValue("quarkus.http.port", Integer.class);
                consulClient.get().registerService(new ServiceOptions()
                                .setPort(port)
                                .setAddress("localhost")
                                .setName(appName)
                                .setId(appName + "-" + port),
                        result -> logger.infof("Service %s-%d registered", appName, port));
            }, 3000, TimeUnit.MILLISECONDS);
        }
    }

    void onStop(@Observes ShutdownEvent ev) {
        if (consulClient.isResolvable()) {
            consulClient.get().deregisterService(appName + "-" + port,
                    result -> logger.infof("Service %s-%d deregistered", appName, port));
        }
    }
}
Java

Read Configuration Properties from Consul

The io.quarkiverse.config:quarkus-config-consul module is already included in the dependencies. Once the quarkus.consul-config.enabled property is set to true, the Quarkus application tries to read properties from the Consul Key-Value store. The quarkus.consul-config.properties-value-keys property indicates the location of the properties file stored in Consul. Here are the properties that exist in the classpath application.properties. For example, the default config location for the department-service is config/department-service.

quarkus.application.name = department-service
quarkus.application.version = 1.1
quarkus.consul-config.enabled = true
quarkus.consul-config.properties-value-keys = config/${quarkus.application.name}
Plaintext

Let's switch to the Consul UI, available on the same 8500 port as the API. In the "Key/Value" section we create the configuration for all three sample applications.
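If you prefer the command line over the UI, the same keys can be created through Consul's KV HTTP API; the properties file name below is illustrative:

```
# Store the department-service properties under the config/department-service key
curl --request PUT --data-binary @department-service.properties \
  http://localhost:8500/v1/kv/config/department-service
```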

These are the configuration properties for the department-service, targeted at development mode. We enable a dynamically generated port number so that we can run several instances on the same workstation. Our application uses an in-memory H2 database and loads the import.sql script on startup to initialize demo data. We also enable Quarkus Stork service discovery for the employee-service REST client and registration in Consul.

quarkus.http.port = 0
quarkus.datasource.db-kind = h2
quarkus.hibernate-orm.database.generation = drop-and-create
quarkus.hibernate-orm.sql-load-script = src/main/resources/import.sql
quarkus.stork.employee-service.service-discovery.type = consul
quarkus.stork.department-service.service-registrar.type = consul
Plaintext

Here are the configuration properties for the employee-service.

(screenshot: employee-service configuration in the Consul Key-Value store)
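By analogy with the department-service entry above, the employee-service configuration would contain something like the following (an assumption based on the surrounding text, not a verbatim copy of the screenshot):

```properties
quarkus.http.port = 0
quarkus.datasource.db-kind = h2
quarkus.hibernate-orm.database.generation = drop-and-create
quarkus.hibernate-orm.sql-load-script = src/main/resources/import.sql
quarkus.stork.employee-service.service-registrar.type = consul
```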

Finally, let’s take a look at the organization-service configuration in Consul.

Run Applications in the Development Mode

Let's run our three sample Quarkus applications in development mode. Both the employee-service and the department-service should have two instances running. We don't have to worry about port conflicts, since port numbers are automatically generated on startup.

$ cd employee-service
$ mvn quarkus:dev
$ mvn quarkus:dev

$ cd department-service
$ mvn quarkus:dev
$ mvn quarkus:dev

$ cd organization-service
$ mvn quarkus:dev
ShellSession

Once we start all the instances we can switch to the Consul UI. You should see exactly the same services in your web console.

(screenshot: services registered in Consul)

There are two instances of the employee-service and the department-service. We can check out the list of registered instances for a selected application.

(screenshot: registered instances of a selected service)

This step is optional. To simplify tests, I also included an API gateway that integrates with Consul discovery. It listens on the static 8080 port and forwards requests to the downstream services, which listen on dynamic ports. Since Quarkus does not provide a module dedicated to the API gateway pattern, I used Spring Cloud Gateway with Spring Cloud Consul for that. Therefore, you need the following command to run the application:

$ cd gateway-service
$ mvn spring-boot:run
ShellSession

Afterward, we can make some API tests with or without the gateway. With the gateway-service, we use the 8080 port with the /api base context path. Let's call the following three endpoints. The first one is exposed by the department-service, while the other two by the organization-service.

$ curl http://localhost:8080/api/departments/organization/1/with-employees
$ curl http://localhost:8080/api/organizations/1/with-departments
$ curl http://localhost:8080/api/organizations/1/with-departments-and-employees
ShellSession

Each Quarkus service listens on a dynamic port and registers itself in Consul using that port number, which you can verify in the department-service logs during startup and test communication.

After including the quarkus-micrometer-registry-prometheus module, each application instance exposes metrics under the GET /q/metrics endpoint. Several metrics related to service discovery are published by the Quarkus Stork extension.

$ curl http://localhost:51867/q/metrics | grep stork
# TYPE stork_service_discovery_instances_count counter
# HELP stork_service_discovery_instances_count The number of service instances discovered
stork_service_discovery_instances_count_total{service_name="employee-service"} 12.0
# TYPE stork_service_selection_duration_seconds summary
# HELP stork_service_selection_duration_seconds The duration of the selection operation
stork_service_selection_duration_seconds_count{service_name="employee-service"} 6.0
stork_service_selection_duration_seconds_sum{service_name="employee-service"} 9.93934E-4
# TYPE stork_service_selection_duration_seconds_max gauge
# HELP stork_service_selection_duration_seconds_max The duration of the selection operation
stork_service_selection_duration_seconds_max{service_name="employee-service"} 0.0
# TYPE stork_service_discovery_failures counter
# HELP stork_service_discovery_failures The number of failures during service discovery
stork_service_discovery_failures_total{service_name="employee-service"} 0.0
# TYPE stork_service_discovery_duration_seconds_max gauge
# HELP stork_service_discovery_duration_seconds_max The duration of the discovery operation
stork_service_discovery_duration_seconds_max{service_name="employee-service"} 0.0
# TYPE stork_service_discovery_duration_seconds summary
# HELP stork_service_discovery_duration_seconds The duration of the discovery operation
stork_service_discovery_duration_seconds_count{service_name="employee-service"} 6.0
stork_service_discovery_duration_seconds_sum{service_name="employee-service"} 2.997176541
# TYPE stork_service_selection_failures counter
# HELP stork_service_selection_failures The number of failures during service selection
stork_service_selection_failures_total{service_name="employee-service"} 0.0
ShellSession

Advanced Load Balancing with Quarkus Stork and Consul

Quarkus Stork provides several load-balancing strategies to efficiently distribute requests across multiple instances of an application, ensuring optimal resource usage, better performance, and high availability. By default, Quarkus Stork uses the round-robin algorithm. To override the default strategy, we first need to include a library providing the selected load-balancing algorithm. For example, let's choose the least-response-time strategy, which collects the response times of calls made to service instances and picks an instance based on that information.

<dependency>
    <groupId>io.smallrye.stork</groupId>
    <artifactId>stork-load-balancer-least-response-time</artifactId>
</dependency>
XML

Then, we have to change the default strategy in the configuration properties for the selected client. Let's add the following property to the config/department-service key in the Consul Key-Value store.

quarkus.stork.employee-service.load-balancer.type=least-response-time
Plaintext

After that, we can restart the department-service instance and retest the communication between the services.
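For intuition, the default round-robin strategy that we just replaced simply cycles through the discovered instances. A minimal plain-Java sketch of the idea (this is not Stork's actual implementation):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative only: cycles through a fixed list of service instances
class RoundRobin<T> {

    private final List<T> instances;
    private final AtomicInteger next = new AtomicInteger();

    RoundRobin(List<T> instances) {
        this.instances = instances;
    }

    // Returns successive instances, wrapping around at the end of the list
    T select() {
        int idx = Math.floorMod(next.getAndIncrement(), instances.size());
        return instances.get(idx);
    }
}
```

With two registered instances of the employee-service, successive calls to select() would alternate between them, which is exactly the traffic pattern we observed earlier.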

Testing Integration Between Quarkus and Consul

We have already included the org.testcontainers:consul artifact in the Maven dependencies. Thanks to that, we can create JUnit tests with Quarkus and the Testcontainers Consul module. Since Quarkus doesn't provide built-in support for a Consul test container, we need to create a class that implements the QuarkusTestResourceLifecycleManager interface. It is responsible for starting and stopping the Consul container during JUnit tests. After starting the container, we add the configuration properties required to enable in-memory database creation and service registration in Consul.

public class ConsulResource implements QuarkusTestResourceLifecycleManager {

    private ConsulContainer consulContainer;

    @Override
    public Map<String, String> start() {
        consulContainer = new ConsulContainer("hashicorp/consul:latest")
                .withConsulCommand(
                """
                kv put config/department-service - <<EOF
                department.name=abc
                quarkus.datasource.db-kind=h2
                quarkus.hibernate-orm.database.generation=drop-and-create
                quarkus.stork.department-service.service-registrar.type=consul
                EOF
                """
                );

        consulContainer.start();

        String url = consulContainer.getHost() + ":" + consulContainer.getFirstMappedPort();

        return ImmutableMap.of(
                "quarkus.consul-config.agent.host-port", url,
                "consul.host", consulContainer.getHost(),
                "consul.port", consulContainer.getFirstMappedPort().toString()
        );
    }

    @Override
    public void stop() {
        consulContainer.stop();
    }
}
Java

To start the Consul container during the test, we need to annotate the test class with @QuarkusTestResource(ConsulResource.class). The test loads configuration properties from Consul on startup and registers the service. Then, it verifies that the REST endpoints exposed by the department-service work fine and that the registered service exists in Consul.

@QuarkusTest
@QuarkusTestResource(ConsulResource.class)
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
public class DepartmentResourceConsulTests {

    @ConfigProperty(name = "department.name", defaultValue = "")
    private String name;
    @Inject
    ConsulClient consulClient;

    @Test
    @Order(1)
    void add() {
        Department d = new Department();
        d.setOrganizationId(1L);
        d.setName(name);

        given().body(d).contentType(ContentType.JSON)
                .when().post("/departments").then()
                .statusCode(200)
                .body("id", notNullValue())
                .body("name", is(name));
    }

    @Test
    @Order(2)
    void findAll() {
        when().get("/departments").then()
                .statusCode(200)
                .body("size()", is(4));
    }

    @Test
    @Order(3)
    void checkRegister() throws InterruptedException {
        Thread.sleep(5000);
        Uni<ServiceList> uni = Uni.createFrom().completionStage(() -> consulClient.catalogServices().toCompletionStage());
        List<Service> services = uni.await().atMost(Duration.ofSeconds(3)).getList();
        final long count = services.stream()
                .filter(svc -> svc.getName().equals("department-service")).count();
        assertEquals(1, count);
    }
}
Java

Final Thoughts

This article introduces Quarkus Stork for Consul discovery and client-side load balancing. It shows how to integrate Quarkus with the Consul Key-Value store for distributed configuration. It also covers topics like integration testing with Testcontainers, metrics, service registration, and advanced load-balancing strategies.

Distributed Tracing with Istio, Quarkus and Jaeger
https://piotrminkowski.com/2022/01/31/distributed-tracing-with-istio-quarkus-and-jaeger/ (Mon, 31 Jan 2022)
In this article, you will learn how to configure distributed tracing for your service mesh with Istio and Quarkus. For test purposes, we will build and run Quarkus microservices on Kubernetes. The communication between them is going to be managed by Istio. Istio service mesh uses Jaeger as a distributed tracing system.

This time I won't tell you about Istio basics. Although our configuration is not complicated, you may want to read the following introduction before we start.

Source Code

If you would like to try it yourself, you may always take a look at my source code. To do that, clone my GitHub repository and go to the mesh-with-db directory. After that, just follow my instructions 🙂

Service Mesh Architecture

Let's start with our microservices architecture. There are two applications: person-app and insurance-app. As you probably guessed, the person-app stores and returns information about insured people, while the insurance-app keeps insurance data. Each service has a separate database. We deploy the person-app in two versions; the v2 version contains one additional field, externalId.

The following picture illustrates our scenario. Istio splits the traffic between the two versions of the person-app, by default 50% to 50%. If it receives the X-Version header in the request, it calls that particular version of the person-app. The possible values of the header are v1 or v2.
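The routing just described, a 50/50 split unless the X-Version header picks a concrete version, can be expressed with an Istio VirtualService. A sketch under the assumption that the v1 and v2 subsets are defined in a matching DestinationRule (the names are illustrative, not the article's actual manifests):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: person-app
spec:
  hosts:
    - person-app
  http:
    # Route explicitly when the X-Version header is present
    - match:
        - headers:
            X-Version:
              exact: v2
      route:
        - destination:
            host: person-app
            subset: v2
    - match:
        - headers:
            X-Version:
              exact: v1
      route:
        - destination:
            host: person-app
            subset: v1
    # Otherwise split the traffic 50/50 between the two versions
    - route:
        - destination:
            host: person-app
            subset: v1
          weight: 50
        - destination:
            host: person-app
            subset: v2
          weight: 50
```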

(diagram: service mesh architecture with two versions of the person-app)

Distributed Tracing with Istio

Istio generates distributed trace spans for each managed service. It means that every request sent inside Istio carries a set of tracing HTTP headers; according to the Istio documentation, these are x-request-id, x-b3-traceid, x-b3-spanid, x-b3-parentspanid, x-b3-sampled, x-b3-flags, and x-ot-span-context.

So, every single request incoming from the Istio gateway contains X-B3-SpanId, X-B3-TraceId, and some other B3 headers. The X-B3-SpanId indicates the position of the current operation in the trace tree, while every span in a trace shares the same X-B3-TraceId header. At first glance, you may be surprised that Istio does not propagate B3 headers in client calls. To clarify: if one service communicates with another service using e.g. a REST client, you will see two different traces. The first of them is related to the API endpoint call, while the second to the client call of the other API endpoint. That's not exactly what we would like to achieve, right?

Let's visualize our problem. If you call the insurance-app through the Istio gateway, you get the first trace in Jaeger. During that call, the insurance-app calls an endpoint from the person-app using the Quarkus REST client. That's another, separate trace in Jaeger. Our goal is to propagate all the required B3 headers to the person-app as well. You can find the list of required headers in the Istio documentation.

(diagram: trace propagation between the insurance-app and the person-app)

Of course, that's not the only thing we will do today. We will also prepare Istio rules to simulate latency in our communication, which is a good scenario for using a tracing tool. I'm also going to show you how to easily deploy a microservice on Kubernetes using the Quarkus features for that.
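Latency simulation is done with Istio's fault injection. A sketch of such a rule, delaying half of the requests to the person-app (the delay and percentage values are illustrative):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: person-app-delay
spec:
  hosts:
    - person-app
  http:
    - fault:
        delay:
          # Inject a fixed 3s delay into half of the requests
          percentage:
            value: 50.0
          fixedDelay: 3s
      route:
        - destination:
            host: person-app
```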

I’m running Istio and Jaeger on OpenShift. More precisely, I’m using OpenShift Service Mesh, which is Red Hat’s service mesh implementation based on Istio. It doesn’t have any impact on the exercise, so you may as well repeat all the steps on plain Kubernetes.

Create Microservices with Quarkus

Let’s begin with the insurance-app. Here’s the class responsible for the REST endpoint implementation. There are several methods there. However, the most important for us is the getInsuranceDetailsById method that calls the GET /persons/{id} endpoint of the person-app. In order to use the Quarkus REST client extension, we need to inject the client bean with the @RestClient annotation.

@Path("/insurances")
public class InsuranceResource {

   @Inject
   Logger log;
   @Inject
   InsuranceRepository insuranceRepository;
   @Inject @RestClient
   PersonService personService;

   @POST
   @Transactional
   public Insurance addInsurance(Insurance insurance) {
      insuranceRepository.persist(insurance);
      return insurance;
   }

   @GET
   public List<Insurance> getInsurances() {
      return insuranceRepository.listAll();
   }

   @GET
   @Path("/{id}")
   public Insurance getInsuranceById(@PathParam("id") Long id) {
      return insuranceRepository.findById(id);
   }

   @GET
   @Path("/{id}/details")
   public InsuranceDetails getInsuranceDetailsById(@PathParam("id") Long id, @HeaderParam("X-Version") String version) {
      log.infof("getInsuranceDetailsById: id=%d, version=%s", id, version);
      Insurance insurance = insuranceRepository.findById(id);
      InsuranceDetails insuranceDetails = new InsuranceDetails();
      insuranceDetails.setPersonId(insurance.getPersonId());
      insuranceDetails.setAmount(insurance.getAmount());
      insuranceDetails.setType(insurance.getType());
      insuranceDetails.setExpiry(insurance.getExpiry());
      insuranceDetails.setPerson(personService.getPersonById(insurance.getPersonId()));
      return insuranceDetails;
   }

}

As you probably remember from the previous section, we need to propagate several headers responsible for Istio tracing to the downstream Quarkus service. Let’s take a look at the REST client declared in the PersonService interface. In order to send additional headers with the request, we need to annotate the interface with @RegisterClientHeaders. Then we have two options: we can provide our custom headers factory as shown below, or we may use the property org.eclipse.microprofile.rest.client.propagateHeaders with a list of headers.

@Path("/persons")
@RegisterRestClient
@RegisterClientHeaders(RequestHeaderFactory.class)
public interface PersonService {

   @GET
   @Path("/{id}")
   Person getPersonById(@PathParam("id") Long id);
}
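For comparison, the second option based on the default headers factory would boil down to annotating the interface with a bare @RegisterClientHeaders (no factory class) and listing the headers to forward in application.properties. A sketch, with the header list assumed from our scenario:

```properties
org.eclipse.microprofile.rest.client.propagateHeaders=X-Version,X-Request-Id,X-B3-TraceId,X-B3-SpanId,X-B3-ParentSpanId,X-B3-Sampled,X-B3-Flags
```

In this article we stick with the custom factory, since it also lets us add logging.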

Let’s also take a look at the implementation of the REST endpoint in the person-app. The method getPersonById at the bottom is responsible for finding a person by the id field. It is the target method called by the PersonService client.

@Path("/persons")
public class PersonResource {

   @Inject
   Logger log;
   @Inject
   PersonRepository personRepository;

   @POST
   @Transactional
   public Person addPerson(Person person) {
      personRepository.persist(person);
      return person;
   }

   @GET
   public List<Person> getPersons() {
      return personRepository.listAll();
   }

   @GET
   @Path("/{id}")
   public Person getPersonById(@PathParam("id") Long id) {
      log.infof("getPersonById: id=%d", id);
      Person p = personRepository.findById(id);
      log.infof("getPersonById: %s", p);
      return p;
   }
}

Finally, here’s the implementation of our custom client headers factory. It needs to implement the ClientHeadersFactory interface and its update() method. We are not doing anything complicated here: we forward the B3 tracing headers and the X-Version header used by Istio to route between the two versions of the person-app. I also added some logs. In this case, I didn’t use the already mentioned header propagation based on the property org.eclipse.microprofile.rest.client.propagateHeaders.

@ApplicationScoped
public class RequestHeaderFactory implements ClientHeadersFactory {

   @Inject
   Logger log;

   @Override
   public MultivaluedMap<String, String> update(MultivaluedMap<String, String> inHeaders,
                                                 MultivaluedMap<String, String> outHeaders) {
      String version = inHeaders.getFirst("x-version");
      log.infof("Version Header: %s", version);
      String traceId = inHeaders.getFirst("x-b3-traceid");
      log.infof("Trace Header: %s", traceId);
      MultivaluedMap<String, String> result = new MultivaluedHashMap<>();
      result.add("X-Version", version);
      result.add("X-B3-TraceId", traceId);
      result.add("X-B3-SpanId", inHeaders.getFirst("x-b3-spanid"));
      result.add("X-B3-ParentSpanId", inHeaders.getFirst("x-b3-parentspanid"));
      return result;
   }
}

Run Quarkus Applications on Kubernetes

Before we test Istio tracing, we need to deploy our Quarkus microservices on Kubernetes. We may do it in several different ways. One of the methods is provided directly by Quarkus: we can generate the Deployment manifests automatically during the build. It turns out that we can apply Istio manifests to the Kubernetes cluster as well. Firstly, we need to include the following two modules.

<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-openshift</artifactId>
</dependency>
<dependency>
  <groupId>me.snowdrop</groupId>
  <artifactId>istio-client</artifactId>
  <version>1.7.7.1</version>
</dependency>

If you deploy on Kubernetes, just replace the quarkus-openshift module with the quarkus-kubernetes module. In the next step, we need to provide some configuration settings in the application.properties file. In order to enable deployment during the build, we need to set the property quarkus.kubernetes.deploy to true. We can configure several aspects of the Kubernetes Deployment. For example, we may enable the Istio proxy by setting the annotation sidecar.istio.io/inject to true, or add the labels required for routing: app and version (in our case). Finally, our application connects to a database, so we need to inject values from a Kubernetes Secret.

quarkus.container-image.group = demo-mesh
quarkus.container-image.build = true
quarkus.kubernetes.deploy = true
quarkus.kubernetes.deployment-target = openshift
quarkus.kubernetes-client.trust-certs = true

quarkus.openshift.deployment-kind = Deployment
quarkus.openshift.labels.app = quarkus-insurance-app
quarkus.openshift.labels.version = v1
quarkus.openshift.annotations."sidecar.istio.io/inject" = true
quarkus.openshift.env.mapping.postgres_user.from-secret = insurance-db
quarkus.openshift.env.mapping.postgres_user.with-key = database-user
quarkus.openshift.env.mapping.postgres_password.from-secret = insurance-db
quarkus.openshift.env.mapping.postgres_password.with-key = database-password
quarkus.openshift.env.mapping.postgres_db.from-secret = insurance-db
quarkus.openshift.env.mapping.postgres_db.with-key = database-name

Here’s the part of the configuration responsible for the database connection settings. The name of the key after quarkus.openshift.env.mapping maps to the name of an environment variable: for example, the postgres_password key maps to the POSTGRES_PASSWORD env.

quarkus.datasource.db-kind = postgresql
quarkus.datasource.username = ${POSTGRES_USER}
quarkus.datasource.password = ${POSTGRES_PASSWORD}
quarkus.datasource.jdbc.url = jdbc:postgresql://person-db:5432/${POSTGRES_DB}

If there are any additional manifests to apply, we should place them inside the src/main/kubernetes directory. This applies, for example, to the Istio configuration. So, now the only thing we need to do is to build the application. First, go to the quarkus-person-app directory and run the following command. Then go to the quarkus-insurance-app directory and do the same.

$ mvn clean package

Traffic Management with Istio

There are two versions of the person-app application. So, let’s create the DestinationRule object containing two subsets v1 and v2 based on the version label.

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: quarkus-person-app-dr
spec:
  host: quarkus-person-app
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2

In the next step, we need to create the VirtualService object for the quarkus-person-app service. The routing between versions is based on the X-Version header. If it is not set, Istio sends 50% of the traffic to the v1 version and 50% to the v2 version. Also, we inject a delay into the v2 route using the Istio HTTPFaultInjection object. It adds a 3-second delay for 50% of the incoming requests.

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: quarkus-person-app-vs
spec:
  hosts:
    - quarkus-person-app
  http:
    - match:
        - headers:
            X-Version:
              exact: v1
      route:
        - destination:
            host: quarkus-person-app
            subset: v1
    - match:
        - headers:
            X-Version:
              exact: v2
      route:
        - destination:
            host: quarkus-person-app
            subset: v2
      fault:
        delay:
          fixedDelay: 3s
          percentage:
            value: 50
    - route:
        - destination:
            host: quarkus-person-app
            subset: v1
          weight: 50
        - destination:
            host: quarkus-person-app
            subset: v2
          weight: 50
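The weighted split above can be understood as a cumulative-weight draw: pick a number in [0, total weight) and walk the routes until the running sum exceeds it. A minimal sketch of that idea (not Istio’s actual code):

```java
public class WeightedRouting {

    // Given route weights and a draw in [0, sum of weights),
    // return the index of the chosen route.
    static int pick(int[] weights, int draw) {
        int cumulative = 0;
        for (int i = 0; i < weights.length; i++) {
            cumulative += weights[i];
            if (draw < cumulative) {
                return i;
            }
        }
        throw new IllegalArgumentException("draw out of range");
    }
}
```

With weights 50/50, draws below 50 land on v1 and the rest on v2, which matches the behavior described above.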

Now, let’s create the Istio Gateway object. Replace the CLUSTER_DOMAIN variable with your cluster’s domain name:

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: microservices-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - quarkus-insurance-app.apps.$CLUSTER_DOMAIN
        - quarkus-person-app.apps.$CLUSTER_DOMAIN

In order to forward traffic from the gateway, the VirtualService needs to refer to that gateway.

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: quarkus-insurance-app-vs
spec:
  hosts:
    - quarkus-insurance-app.apps.$CLUSTER_DOMAIN
  gateways:
    - microservices-gateway
  http:
    - match:
        - uri:
            prefix: "/insurance"
      rewrite:
        uri: " "
      route:
        - destination:
            host: quarkus-insurance-app
          weight: 100

Now you can call the insurance-app service:

$ curl http://quarkus-insurance-app.apps.${CLUSTER_DOMAIN}/insurance/insurances/1/details

We can verify all the existing Istio objects using Kiali.

Testing Istio Tracing with Quarkus

First of all, you can generate many requests using the siege tool. There are multiple ways to run it. We can prepare the file with example requests as shown below:

http://quarkus-person-app.apps.${CLUSTER_DOMAIN}/person/persons/1
http://quarkus-person-app.apps.${CLUSTER_DOMAIN}/person/persons/2
http://quarkus-person-app.apps.${CLUSTER_DOMAIN}/person/persons/3
http://quarkus-person-app.apps.${CLUSTER_DOMAIN}/person/persons/4
http://quarkus-person-app.apps.${CLUSTER_DOMAIN}/person/persons/5
http://quarkus-person-app.apps.${CLUSTER_DOMAIN}/person/persons/6
http://quarkus-person-app.apps.${CLUSTER_DOMAIN}/person/persons/7
http://quarkus-person-app.apps.${CLUSTER_DOMAIN}/person/persons/8
http://quarkus-person-app.apps.${CLUSTER_DOMAIN}/person/persons/9
http://quarkus-person-app.apps.${CLUSTER_DOMAIN}/person/persons
http://quarkus-insurance-app.apps.${CLUSTER_DOMAIN}/insurance/insurances/1/details
http://quarkus-insurance-app.apps.${CLUSTER_DOMAIN}/insurance/insurances/2/details
http://quarkus-insurance-app.apps.${CLUSTER_DOMAIN}/insurance/insurances/3/details
http://quarkus-insurance-app.apps.${CLUSTER_DOMAIN}/insurance/insurances/4/details
http://quarkus-insurance-app.apps.${CLUSTER_DOMAIN}/insurance/insurances/5/details
http://quarkus-insurance-app.apps.${CLUSTER_DOMAIN}/insurance/insurances/6/details
http://quarkus-insurance-app.apps.${CLUSTER_DOMAIN}/insurance/insurances/1
http://quarkus-insurance-app.apps.${CLUSTER_DOMAIN}/insurance/insurances/2
http://quarkus-insurance-app.apps.${CLUSTER_DOMAIN}/insurance/insurances/3
http://quarkus-insurance-app.apps.${CLUSTER_DOMAIN}/insurance/insurances/4
http://quarkus-insurance-app.apps.${CLUSTER_DOMAIN}/insurance/insurances/5
http://quarkus-insurance-app.apps.${CLUSTER_DOMAIN}/insurance/insurances/6

Now we need to set that file as an input for the siege command. We can also set the number of repeats (-r) and concurrent threads (-c).

$ siege -f k8s/traffic/urls.txt -i -v -r 500 -c 10 --no-parser

It takes some time for the command to finish. In the meantime, let’s try to send a single request to the insurance-app with the X-Version header set to v2. 50% of such requests are delayed by the Istio quarkus-person-app-vs VirtualService. Repeat the request until you get a response with a 3-second delay:

Here’s the access log from the Quarkus application:

Then, let’s switch to the Jaeger console. We can find the request by the guid:x-request-id tag.

Here’s the result of our search:

quarkus-istio-tracing-jaeger

We have a full trace of the request, including the communication between the Istio gateway and the insurance-app, and also between the insurance-app and the person-app. In order to print the details of the trace, just click the record. You will see the trace timeline and the request/response structure. You can easily verify that the latency occurs somewhere between the insurance-app and the person-app, because the request was processed for only 4.46 ms by the person-app.

quarkus-istio-tracing-jaeger-timeline

The post Distributed Tracing with Istio, Quarkus and Jaeger appeared first on Piotr's TechBlog.

Quarkus Microservices with Consul Discovery https://piotrminkowski.com/2020/11/24/quarkus-microservices-with-consul-discovery/ Tue, 24 Nov 2020 07:59:51 +0000

The post Quarkus Microservices with Consul Discovery appeared first on Piotr's TechBlog.

In this article, I’ll show you how to run Quarkus microservices outside Kubernetes with Consul service discovery and a KV store. Firstly, we are going to create a custom integration with Consul discovery, since Quarkus does not offer it. On the other hand, we may take advantage of built-in support for configuration properties from the Consul KV store. We will also learn how to customize the Quarkus REST client to integrate it with an external service discovery mechanism. The client will follow a load balancing pattern based on a round-robin algorithm.

If you feel you need to enhance your knowledge about the Quarkus framework visit the site with guides. For more advanced information you may read the articles Guide to Quarkus with Kotlin and Guide to Quarkus on Kubernetes.

The Architecture

Before proceeding to the implementation, let’s take a look at the diagram of our system architecture. There are three microservices: employee-service, department-service, and organization-service. They communicate with each other through a REST API. They use the Consul KV store as a distributed configuration backend. Every instance of a microservice registers itself in Consul. The load balancer is on the client side: it reads the list of registered instances of a target service from Consul, and then chooses a single instance using a round-robin algorithm.

quarkus-consul-arch

Source code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that, you need to clone my repository sample-quarkus-microservices-consul. Then you should just follow my instructions 🙂

Run the Consul instance

In order to run Consul on the local machine, we use its Docker image. By default, Consul exposes its API and a web console on port 8500. We just need to expose that port outside the container.

$ docker run -d --name=consul \
   -e CONSUL_BIND_INTERFACE=eth0 \
   -p 8500:8500 \
   consul

Register Quarkus Microservice in Consul

Our application exposes a REST API on the HTTP server and connects to an in-memory database H2. It also uses the Java Consul client to interact with a Consul API. Therefore, we need to include at least the following dependencies.

<dependencies>
   <dependency>
      <groupId>io.quarkus</groupId>
      <artifactId>quarkus-resteasy-jackson</artifactId>
   </dependency>
   <dependency>
      <groupId>io.quarkus</groupId>
      <artifactId>quarkus-hibernate-orm-panache</artifactId>
   </dependency>
   <dependency>
      <groupId>io.quarkus</groupId>
      <artifactId>quarkus-jdbc-h2</artifactId>
   </dependency>
   <dependency>
      <groupId>com.h2database</groupId>
      <artifactId>h2</artifactId>
      <scope>runtime</scope>
   </dependency>
   <dependency>
      <groupId>com.orbitz.consul</groupId>
      <artifactId>consul-client</artifactId>
      <version>${consul-client.version}</version>
   </dependency>
</dependencies>

Since we will run all our applications locally, it is worth enabling the HTTP random port feature. To do that, we should set the property quarkus.http.port to 0.

quarkus.http.port=0

Then we create the Consul client bean. By default, it tries to connect to a server on localhost, port 8500, so we don’t need to provide any additional configuration.

@ApplicationScoped
public class EmployeeBeansProducer {

   @Produces
   Consul consulClient = Consul.builder().build();

}

Every single instance of a Quarkus application should register itself in Consul just after startup. Consequently, it needs to be able to deregister itself on shutdown. Therefore, you should first implement a bean responsible for intercepting startup and shutdown events. It is not hard with Quarkus.

The bean responsible for catching the startup and shutdown events is annotated with @ApplicationScoped. It defines two methods: onStart and onStop. It also injects the Consul client bean. Quarkus generates the number of the HTTP listen port on startup and saves it in the quarkus.http.port property. Therefore, the startup task needs to wait a moment to ensure that the application is running; we run it 5 seconds after receiving the startup event. In order to register an application in Consul, we use the Consul agent client. Every instance of the application needs to have a unique id in Consul, so we retrieve the number of running instances and use that number as the id suffix. The name of the service is taken from the quarkus.application.name property. The instance of the application should save the id in order to be able to deregister itself on shutdown.

@ApplicationScoped
public class EmployeeLifecycle {

   private static final Logger LOGGER = LoggerFactory
         .getLogger(EmployeeLifecycle.class);
   private String instanceId;

   @Inject
   Consul consulClient;
   @ConfigProperty(name = "quarkus.application.name")
   String appName;
   @ConfigProperty(name = "quarkus.application.version")
   String appVersion;

   void onStart(@Observes StartupEvent ev) {
      ScheduledExecutorService executorService = Executors
            .newSingleThreadScheduledExecutor();
      executorService.schedule(() -> {
         HealthClient healthClient = consulClient.healthClient();
         List<ServiceHealth> instances = healthClient
               .getHealthyServiceInstances(appName).getResponse();
         instanceId = appName + "-" + instances.size();
         ImmutableRegistration registration = ImmutableRegistration.builder()
               .id(instanceId)
               .name(appName)
               .address("127.0.0.1")
               .port(Integer.parseInt(System.getProperty("quarkus.http.port")))
               .putMeta("version", appVersion)
               .build();
         consulClient.agentClient().register(registration);
         LOGGER.info("Instance registered: id={}", registration.getId());
      }, 5000, TimeUnit.MILLISECONDS);
   }

   void onStop(@Observes ShutdownEvent ev) {
      consulClient.agentClient().deregister(instanceId);
      LOGGER.info("Instance de-registered: id={}", instanceId);
   }

}

Run Quarkus microservices locally

Thanks to the HTTP random port feature we don’t have to care about port conflicts between applications. So, we can run as many instances as we need. To run a single instance of application we should use the quarkus:dev Maven command.

$ mvn compile quarkus:dev

Let’s look at the logs after the employee-service startup. The application successfully called the Consul API using the Consul agent. After a 5-second delay, it sent an instance id and a port number.

Let’s take a look at the list of services registered in Consul.

quarkus-consul-services

I ran two instances of every microservice. We may take a look at the list of instances registered, for example, by the employee-service.

quarkus-consul-instances

Integrate Quarkus REST client with Consul discovery

Both the department-service and organization-service applications use the Quarkus REST client module to communicate with other microservices.

<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-rest-client</artifactId>
</dependency>

Let’s take a look at the EmployeeClient interface inside the department-service. We won’t use @RegisterRestClient on it. It is just annotated with @Path and contains a single @GET method.

@Path("/employees")
public interface EmployeeClient {

    @GET
    @Path("/department/{departmentId}")
    @Produces(MediaType.APPLICATION_JSON)
    List<Employee> findByDepartment(@PathParam("departmentId") Long departmentId);

}

We don’t provide a target address of the service, but just its name in the discovery server. The base URI is set in the application.properties file.

client.employee.uri=http://employee

The REST client uses a filter to detect the list of running instances registered in Consul. The filter implements a round-robin load balancer. Consequently, it replaces the service name in the target URI with a particular IP address and port number.

public class LoadBalancedFilter implements ClientRequestFilter {

   private static final Logger LOGGER = LoggerFactory
         .getLogger(LoadBalancedFilter.class);

   private Consul consulClient;
   private AtomicInteger counter = new AtomicInteger();

   public LoadBalancedFilter(Consul consulClient) {
      this.consulClient = consulClient;
   }

   @Override
   public void filter(ClientRequestContext ctx) {
      URI uri = ctx.getUri();
      HealthClient healthClient = consulClient.healthClient();
      List<ServiceHealth> instances = healthClient
            .getHealthyServiceInstances(uri.getHost()).getResponse();
      instances.forEach(it ->
            LOGGER.info("Instance: uri={}:{}",
                  it.getService().getAddress(),
                  it.getService().getPort()));
      // Wrap the counter so the index always stays within the list bounds.
      int index = Math.floorMod(counter.getAndIncrement(), instances.size());
      ServiceHealth instance = instances.get(index);
      URI u = UriBuilder.fromUri(uri)
            .host(instance.getService().getAddress())
            .port(instance.getService().getPort())
            .build();
      ctx.setUri(u);
   }

}
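One subtle point in round-robin balancing is the index arithmetic: an ever-growing counter must be wrapped to the list size, and Math.floorMod keeps the index non-negative even after the int counter overflows. A standalone sketch of that logic:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class RoundRobin {

    private final AtomicInteger counter = new AtomicInteger();

    // Pick the next instance in round-robin order, wrapping safely
    // even after the int counter overflows to negative values.
    <T> T next(List<T> instances) {
        int index = Math.floorMod(counter.getAndIncrement(), instances.size());
        return instances.get(index);
    }
}
```

A plain `instances.get(counter.getAndIncrement())` would throw IndexOutOfBoundsException as soon as the counter reaches the list size.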

Finally, we need to inject the filter bean into the REST client builder. After that, our Quarkus application is fully integrated with the Consul discovery.

@ApplicationScoped
public class DepartmentBeansProducer {

   @ConfigProperty(name = "client.employee.uri")
   String employeeUri;
   @Produces
   Consul consulClient = Consul.builder().build();

   @Produces
   LoadBalancedFilter filter = new LoadBalancedFilter(consulClient);

   @Produces
   EmployeeClient employeeClient() throws URISyntaxException {
      URIBuilder builder = new URIBuilder(employeeUri);
      return RestClientBuilder.newBuilder()
            .baseUri(builder.build())
            .register(filter)
            .build(EmployeeClient.class);
   }

}

Read configuration properties from Consul

Although Quarkus does not provide built-in integration with Consul discovery, it is able to read configuration properties from it. Firstly, we need to include the Quarkus Consul Config module in the Maven dependencies.

<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-consul-config</artifactId>
</dependency>

Then, we enable the mechanism with the quarkus.consul-config.enabled property.

quarkus.application.name=employee
quarkus.consul-config.enabled=true
quarkus.consul-config.properties-value-keys=config/${quarkus.application.name}

The Quarkus Consul Config client reads properties from the KV store based on the location set in the quarkus.consul-config.properties-value-keys property. Let’s create the settings responsible for the database connection and for enabling the random HTTP port feature.

quarkus-consul-config
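For illustration, the value stored under the config/employee key could hold a properties-formatted payload like the one below. The exact values are hypothetical and depend on your setup:

```properties
quarkus.http.port=0
quarkus.datasource.username=sa
quarkus.datasource.password=password
quarkus.datasource.jdbc.url=jdbc:h2:mem:employee
```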

Finally, we can run the application. The effect is the same as if the properties were stored in the standard application.properties file. The configuration for department-service and organization-service looks pretty similar, but it also contains the URLs used by the HTTP clients to call other microservices. For some reason, the property quarkus.datasource.db-kind=h2 always needs to be set inside the application.properties file.

Testing Quarkus Consul discovery with gateway

All the applications listen on a random HTTP port. In order to simplify testing, we should run an API gateway that listens on a fixed port. Since Quarkus does not provide any implementation of an API gateway, we are going to use Spring Cloud Gateway. We can easily integrate it with Consul using the Spring Cloud discovery client.

<dependency>
   <groupId>org.springframework.cloud</groupId>
   <artifactId>spring-cloud-starter-gateway</artifactId>
</dependency>
<dependency>
   <groupId>org.springframework.cloud</groupId>
   <artifactId>spring-cloud-starter-loadbalancer</artifactId>
</dependency>
<dependency>
   <groupId>org.springframework.cloud</groupId>
   <artifactId>spring-cloud-starter-consul-discovery</artifactId>
</dependency>

The configuration of Spring Cloud Gateway contains a list of routes. We need to create three routes for all our sample applications.

spring:
  application:
    name: gateway-service
  cloud:
    gateway:
      discovery:
        locator:
          enabled: true
      routes:
        - id: employee-service
          uri: lb://employee
          predicates:
            - Path=/api/employees/**
          filters:
            - StripPrefix=1
        - id: department-service
          uri: lb://department
          predicates:
            - Path=/api/departments/**
          filters:
            - StripPrefix=1
        - id: organization-service
          uri: lb://organization
          predicates:
            - Path=/api/organizations/**
          filters:
            - StripPrefix=1
    loadbalancer:
      ribbon:
        enabled: false

Now, you may perform some test calls by yourself. The API gateway is available on port 8080 and uses the /api prefix. Here are some curl commands to list all available employees, departments, and organizations.

$ curl http://localhost:8080/api/employees
$ curl http://localhost:8080/api/departments
$ curl http://localhost:8080/api/organizations

Conclusion

Although Quarkus is a Kubernetes-native framework, we can use it to run microservices outside Kubernetes. The only problem we may encounter is the lack of support for external discovery. This article shows how to solve it. As a result, we created a microservices architecture based on our custom discovery mechanism and the built-in support for configuration properties in Consul. It is worth saying that Quarkus also provides integration with other third-party configuration solutions like Vault or Spring Cloud Config. If you are interested in a competitive solution based on Spring Boot and Spring Cloud, you should read the article Microservices with Spring Boot, Spring Cloud Gateway and Consul Cluster.

Quarkus OAuth2 and security with Keycloak https://piotrminkowski.com/2020/09/16/quarkus-oauth2-and-security-with-keycloak/ Wed, 16 Sep 2020 07:27:40 +0000

The post Quarkus OAuth2 and security with Keycloak appeared first on Piotr's TechBlog.

Quarkus OAuth2 support is based on the WildFly Elytron Security project. In this article, you will learn how to integrate your Quarkus application with an OAuth2 authorization server like Keycloak.

Before starting with Quarkus security, it is worth finding out how to build microservices in the Quick guide to microservices with Quarkus on OpenShift, and how to easily deploy your application on Kubernetes in the Guide to Quarkus on Kubernetes.

Source code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that, you need to clone my repository sample-quarkus-applications. Then go to the employee-secure-service directory and just follow my instructions 🙂 A good idea is to read the article Guide to Quarkus with Kotlin before you move on.

Using Quarkus OAuth2 for securing endpoints

In the first step, we need to include the Quarkus modules for REST and OAuth2. Of course, our application uses some other modules, but those two are required.

<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-elytron-security-oauth2</artifactId>
</dependency>
<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-resteasy-jackson</artifactId>
</dependency>

Let’s discuss a typical implementation of a REST controller with Quarkus. Quarkus OAuth2 provides a set of annotations for setting permissions. We can allow any user to call an endpoint with the @PermitAll annotation. The @DenyAll annotation indicates that the given endpoint cannot be accessed by anyone. We can also define a list of roles allowed to call a given endpoint with @RolesAllowed.

The controller contains different types of CRUD methods. I defined three roles: viewer, manager, and admin. The viewer role allows calling only GET methods. The manager role allows calling GET and POST methods. Finally, the admin role allows calling all the methods. You can see the final implementation of the controller class below.

@Path("/employees")
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
class EmployeeResource(val repository: EmployeeRepository) {

    @POST
    @Transactional
    @RolesAllowed(value = ["manager", "admin"])
    fun add(employee: Employee): Response {
        repository.persist(employee)
        return Response.ok(employee).status(201).build()
    }

    @DELETE
    @Path("/{id}")
    @Transactional
    @RolesAllowed("admin")
    fun delete(@PathParam id: Long) {
        repository.deleteById(id)
    }

    @GET
    @PermitAll
    fun findAll(): List<Employee> = repository.listAll()

    @GET
    @Path("/{id}")
    @RolesAllowed(value = ["manager", "admin", "viewer"])
    fun findById(@PathParam id: Long): Employee?
            = repository.findById(id)

    @GET
    @Path("/first-name/{firstName}/last-name/{lastName}")
    @RolesAllowed(value = ["manager", "admin", "viewer"])
    fun findByFirstNameAndLastName(@PathParam firstName: String,
                          @PathParam lastName: String): List<Employee>
            = repository.findByFirstNameAndLastName(firstName, lastName)

    @GET
    @Path("/salary/{salary}")
    @RolesAllowed(value = ["manager", "admin", "viewer"])
    fun findBySalary(@PathParam salary: Int): List<Employee>
            = repository.findBySalary(salary)

    @GET
    @Path("/salary-greater-than/{salary}")
    @RolesAllowed(value = ["manager", "admin", "viewer"])
    fun findBySalaryGreaterThan(@PathParam salary: Int): List<Employee>
            = repository.findBySalaryGreaterThan(salary)

}

Running Keycloak

We are running Keycloak in a Docker container. By default, Keycloak exposes its API and web console on port 8080. However, that port number must be different from the Quarkus application port, so we are overriding it with 8888. We also need to set a username and password for the admin console.

$ docker run -d --name keycloak -p 8888:8080 -e KEYCLOAK_USER=quarkus -e KEYCLOAK_PASSWORD=quarkus123 jboss/keycloak

Create client on Keycloak

First, we need to create a client with a given name. Let's say this name is quarkus. The client credentials are used during the authorization process. It is important to choose confidential in the "Access Type" section and to enable the "Direct Access Grants" option.

quarkus-oauth2-keycloak-client

Then we may switch to the “Credentials” tab, and copy the client secret.

Configure Quarkus OAuth2 connection to Keycloak

In the next steps, we will use two HTTP endpoints exposed by Keycloak. The first of them, token_endpoint, allows you to generate new access tokens. The second one, introspection_endpoint, is used to retrieve the active state of a token. In other words, you can use it to validate an access or refresh token.

The Quarkus OAuth2 module expects three configuration properties: the client's name, the client's secret, and the address of the introspection endpoint. The last property, quarkus.oauth2.role-claim, is responsible for setting the name of the claim used to load the roles. The list of roles is a part of the response returned by the introspection endpoint. Let's take a look at the final list of configuration properties for integration with my local instance of Keycloak.

quarkus.oauth2.client-id=quarkus
quarkus.oauth2.client-secret=7dd4d516-e06d-4d81-b5e7-3a15debacebf
quarkus.oauth2.introspection-url=http://localhost:8888/auth/realms/master/protocol/openid-connect/token/introspect
quarkus.oauth2.role-claim=roles

Create users and roles on Keycloak

Our application uses three roles: viewer, manager, and admin. Therefore, we will create three test users on Keycloak, each with a single role assigned. The manager role is a composite role that contains the viewer role. The same applies to admin, which contains both the manager and viewer roles. Here's the full list of test users.

quarkus-oauth2-keycloak-users

Of course, we also need to define roles. In the picture below, I highlighted the roles used by our application.

Before proceeding to the tests, we need to do one more thing. We have to edit the client scope responsible for displaying the list of roles. To do that, go to the "Client Scopes" section and find the roles scope. After editing it, switch to the "Mappers" tab. Finally, find and edit the "realm roles" entry. The value of the "Token Claim Name" field should be the same as the value set in the quarkus.oauth2.role-claim property. I highlighted it in the picture below. In the next section, I'll show you how Quarkus OAuth2 retrieves roles from the introspection endpoint.

quarkus-oauth2-keycloak-clientclaim

Analyzing Quarkus OAuth2 authorization process

In the first step, we are calling the Keycloak token endpoint to obtain a valid access token. We may choose between five supported grant types. Because I want to authorize with a user password, I'm setting the grant_type parameter to password. We also need to set client_id, client_secret, and of course the user credentials. The test user in the request below is test_viewer, which has the viewer role assigned.

$ curl -X POST http://localhost:8888/auth/realms/master/protocol/openid-connect/token \
-d "grant_type=password" \
-d "client_id=quarkus" \
-d "client_secret=7dd4d516-e06d-4d81-b5e7-3a15debacebf" \
-d "username=test_viewer" \
-d "password=123456"

{
    "access_token": "eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIilWRfdX...",
    "expires_in": 1800,
    "refresh_expires_in": 1800,
    "refresh_token": "eyJhbGciOiJIUzI1NiIsInR5cCIgOiAiSldUIiwia2...",
    "token_type": "bearer",
    "not-before-policy": 1600100798,
    "session_state": "cf9862b0-f97a-43a7-abbb-a267fff5e71e",
    "scope": "email profile"
}
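The access_token itself is a JWT, so you can also inspect its claims locally without calling Keycloak. A minimal Kotlin sketch of decoding the payload part (the token in main is a hypothetical one assembled for illustration only, since the real access_token above is truncated):

```kotlin
import java.util.Base64

// A JWT consists of three dot-separated, Base64URL-encoded parts:
// header.payload.signature. This helper decodes the payload (claims) part.
fun jwtPayload(token: String): String {
    val parts = token.split(".")
    require(parts.size == 3) { "not a JWS compact token" }
    return String(Base64.getUrlDecoder().decode(parts[1]), Charsets.UTF_8)
}

fun main() {
    // Hypothetical token assembled for illustration only -- the real
    // access_token returned by Keycloak is truncated in the listing above.
    val enc = Base64.getUrlEncoder().withoutPadding()
    val fake = enc.encodeToString("""{"alg":"RS256","typ":"JWT"}""".toByteArray()) +
            "." +
            enc.encodeToString("""{"preferred_username":"test_viewer","roles":["viewer"]}""".toByteArray()) +
            ".signature"
    println(jwtPayload(fake))
    // -> {"preferred_username":"test_viewer","roles":["viewer"]}
}
```

Note that decoding a token locally does not tell you whether it is still active; that is exactly what the introspection endpoint described below is for.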

Once we have successfully generated an access token, we may use it to authorize requests sent to the Quarkus application. But before that, we can verify our token with the Keycloak introspection endpoint. It is an optional step; however, it shows what type of information the introspection endpoint returns, which is then used by the Quarkus OAuth2 module. You can see the request and response for the token value generated in the previous step below. Pay close attention to how it returns the list of the user’s roles.

$ curl -X POST http://localhost:8888/auth/realms/master/protocol/openid-connect/token/introspect \
-d "token=eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIilWRfdX..." \
-H "Authorization: Basic cXVhcmt1czo3ZGQ0ZDUxNi1lMDZkLTRkODEtYjVlNy0zYTE1ZGViYWNlYmY="

{
    "exp": 1600200132,
    "iat": 1600198332,
    "jti": "af160b82-ad41-45d3-8c7d-28096beb2509",
    "iss": "http://localhost:8888/auth/realms/master",
    "sub": "f41828f6-d597-41cb-9081-46c2d7a4d76b",
    "typ": "Bearer",
    "azp": "quarkus",
    "session_state": "0fdbbd83-35f9-4f4f-912a-c17979c2a87b",
    "preferred_username": "test_viewer",
    "email": "test_viewer@example.com",
    "email_verified": true,
    "acr": "1",
    "scope": "email profile",
    "roles": [
        "viewer"
    ],
    "client_id": "quarkus",
    "username": "test_viewer",
    "active": true
}
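By the way, the Basic credentials in the Authorization header of the introspection request above are nothing more than the Base64-encoded client_id:client_secret pair. You can reproduce the value with a few lines of Kotlin, using the client credentials from this article:

```kotlin
import java.util.Base64

// HTTP Basic credentials are simply Base64("client_id:client_secret").
fun basicAuthHeader(clientId: String, clientSecret: String): String =
    "Basic " + Base64.getEncoder()
        .encodeToString("$clientId:$clientSecret".toByteArray(Charsets.UTF_8))

fun main() {
    // The client credentials created earlier in this article
    println(basicAuthHeader("quarkus", "7dd4d516-e06d-4d81-b5e7-3a15debacebf"))
    // -> Basic cXVhcmt1czo3ZGQ0ZDUxNi1lMDZkLTRkODEtYjVlNy0zYTE1ZGViYWNlYmY=
}
```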

The generated access token is valid, so now the only thing we need to do is to set it in the Authorization header of the request. The viewer role is allowed for the GET /employees/{id} endpoint, so the HTTP response status is 200 OK or 204 No Content (the latter when no employee with the given id exists).

$ curl -v http://localhost:8080/employees/1 -H "Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIilWRfdX..."
*   Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 8080 (#0)
> GET /employees/1 HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.55.1
> Accept: */*
> Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIilWRfdX...
>
< HTTP/1.1 204 No Content
<
* Connection #0 to host localhost left intact

Now, let’s try to call an endpoint that is disallowed for the viewer role. In the request visible below, we are calling the DELETE /employees/{id} endpoint. As expected, the HTTP response status is 403 Forbidden.

$ curl -v -X DELETE http://localhost:8080/employees/1 -H "Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIilWRfdX..."
*   Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 8080 (#0)
> DELETE /employees/1 HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.55.1
> Accept: */*
> Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIilWRfdX...
>
< HTTP/1.1 403 Forbidden
< Content-Length: 0
<
* Connection #0 to host localhost left intact
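The authorization decision behind this 403 comes down to a simple rule: the request is allowed if the caller has at least one of the roles declared in @RolesAllowed on the endpoint. Here is a simplified Kotlin model of that check, as an illustration of the rule only, not the actual Quarkus/Elytron implementation:

```kotlin
// Simplified model of the @RolesAllowed check: access is granted when the
// caller has at least one of the roles listed on the endpoint.
fun isAllowed(userRoles: Set<String>, rolesAllowed: Set<String>): Boolean =
    userRoles.any { it in rolesAllowed }

fun main() {
    // Roles as returned by the introspection endpoint for test_viewer
    val viewer = setOf("viewer")
    // GET /employees/{id} allows viewer, manager, and admin -> granted
    println(isAllowed(viewer, setOf("manager", "admin", "viewer"))) // -> true
    // DELETE /employees/{id} allows only admin -> 403 Forbidden
    println(isAllowed(viewer, setOf("admin"))) // -> false
}
```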

Conclusion

It is relatively easy to configure and implement OAuth2 support with Quarkus. However, you may spend a lot of time on the Keycloak configuration. That's why I explained step by step how to set up OAuth2 authorization there. Enjoy 🙂

The post Quarkus OAuth2 and security with Keycloak appeared first on Piotr's TechBlog.
