testcontainers Archives - Piotr's TechBlog https://piotrminkowski.com/tag/testcontainers/ Java, Spring, Kotlin, microservices, Kubernetes, containers Mon, 18 Nov 2024 12:50:11 +0000

Consul with Quarkus and SmallRye Stork https://piotrminkowski.com/2024/11/18/consul-with-quarkus-and-smallrye-stork/ Mon, 18 Nov 2024 12:34:11 +0000

The post Consul with Quarkus and SmallRye Stork appeared first on Piotr's TechBlog.

This article will teach you to use HashiCorp Consul as a discovery and configuration server for your Quarkus microservices. I wrote a similar article some years ago. However, there have been several significant improvements in the Quarkus ecosystem since that time. What I have in mind is mainly the Quarkus Stork project. This extension focuses on service discovery and load balancing for cloud-native applications. It can seamlessly integrate with Consul or Kubernetes discovery and provide various load balancer types over the Quarkus REST client. Our sample applications will also load configuration properties from the Consul Key-Value store and use the SmallRye Mutiny Consul client to register the app in the discovery server.

If you are looking for other interesting articles about Quarkus, you can find them on my blog. For example, you can read more about testing strategies with Quarkus and Pact here.

Source Code

If you would like to try it yourself, you may always take a look at my source code. To do that, you must clone my sample GitHub repository. Then you should just follow my instructions 🙂

Architecture

Before proceeding to the implementation, let’s take a look at the diagram of our system architecture. There are three microservices: employee-service, department-service, and organization-service. They communicate with each other through a REST API. They use the Consul Key-Value store as a distributed configuration backend. Every service instance registers itself in Consul. A load balancer is included in the application. It reads the list of registered instances of a target service from Consul using the Quarkus Stork extension and then chooses an instance using the configured algorithm.

Running Consul Instance

We will run a single-node Consul instance as a Docker container. By default, Consul exposes the HTTP API and a UI console on port 8500. Let’s expose that port outside the container. Note that the standalone consul image on Docker Hub has been deprecated; recent versions are published as hashicorp/consul.

docker run -d --name=consul \
   -e CONSUL_BIND_INTERFACE=eth0 \
   -p 8500:8500 \
   hashicorp/consul
ShellSession

Dependencies

Let’s analyze a list of the most important Maven dependencies using the department-service application as an example. Our application exposes REST endpoints and connects to the in-memory H2 database. We use the Quarkus REST client and the SmallRye Stork service discovery library to implement communication between the microservices. On the other hand, the io.quarkiverse.config:quarkus-config-consul extension is responsible for reading configuration properties from the Consul Key-Value store. With the smallrye-mutiny-vertx-consul-client library, the application is able to interact directly with the Consul HTTP API. This may not be necessary in the future, once the Stork project implements the registration and deregistration mechanism, but currently that feature is not ready. Finally, we will use Testcontainers to run Consul and test our apps against it with the Quarkus JUnit support.

	<dependencies>
		<dependency>
			<groupId>io.quarkus</groupId>
			<artifactId>quarkus-rest-jackson</artifactId>
		</dependency>
		<dependency>
			<groupId>io.quarkus</groupId>
			<artifactId>quarkus-rest-client-jackson</artifactId>
		</dependency>
		<dependency>
			<groupId>io.quarkus</groupId>
			<artifactId>quarkus-hibernate-orm-panache</artifactId>
		</dependency>
		<dependency>
			<groupId>io.quarkus</groupId>
			<artifactId>quarkus-jdbc-h2</artifactId>
		</dependency>
		<dependency>
			<groupId>com.h2database</groupId>
			<artifactId>h2</artifactId>
			<scope>runtime</scope>
		</dependency>
		<dependency>
			<groupId>io.quarkus</groupId>
			<artifactId>quarkus-smallrye-stork</artifactId>
		</dependency>
		<dependency>
			<groupId>io.smallrye.reactive</groupId>
			<artifactId>smallrye-mutiny-vertx-consul-client</artifactId>
		</dependency>
		<dependency>
			<groupId>io.smallrye.stork</groupId>
			<artifactId>stork-service-discovery-consul</artifactId>
		</dependency>
		<dependency>
			<groupId>io.smallrye.stork</groupId>
			<artifactId>stork-service-registration-consul</artifactId>
		</dependency>
		<dependency>
			<groupId>io.quarkus</groupId>
			<artifactId>quarkus-scheduler</artifactId>
		</dependency>
		<dependency>
			<groupId>io.quarkiverse.config</groupId>
			<artifactId>quarkus-config-consul</artifactId>
			<version>${quarkus-consul.version}</version>
		</dependency>
		<dependency>
			<groupId>io.rest-assured</groupId>
			<artifactId>rest-assured</artifactId>
			<scope>test</scope>
		</dependency>
		<dependency>
			<groupId>io.quarkus</groupId>
			<artifactId>quarkus-junit5</artifactId>
			<scope>test</scope>
		</dependency>
		<dependency>
			<groupId>org.testcontainers</groupId>
			<artifactId>consul</artifactId>
			<version>1.20.3</version>
			<scope>test</scope>
		</dependency>
		<dependency>
			<groupId>org.testcontainers</groupId>
			<artifactId>junit-jupiter</artifactId>
			<version>1.20.3</version>
			<scope>test</scope>
		</dependency>
	</dependencies>
XML

Discovery and Load Balancing with Quarkus Stork for Consul

Let’s begin with the Quarkus Stork part. In the previous section, we included libraries required to provide service discovery and load balancing with Stork: quarkus-smallrye-stork and stork-service-discovery-consul. Now, we can proceed to the implementation. Here’s the EmployeeClient interface from the department-service responsible for calling the GET /employees/department/{departmentId} endpoint exposed by the employee-service. Instead of setting the target URL inside the @RegisterRestClient annotation we should refer to the name of the service registered in Consul.

@Path("/employees")
@RegisterRestClient(baseUri = "stork://employee-service")
public interface EmployeeClient {

    @GET
    @Path("/department/{departmentId}")
    @Produces(MediaType.APPLICATION_JSON)
    List<Employee> findByDepartment(@PathParam("departmentId") Long departmentId);

}
Java

That service name should also be used in the configuration properties. The following property indicates that Stork will use Consul as a discovery server for the employee-service name.

quarkus.stork.employee-service.service-discovery.type = consul
Plaintext
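
Stork also accepts per-service Consul connection parameters. The attribute names below (consul-host, consul-port) follow the SmallRye Stork Consul discovery configuration; verify them against the Stork version you use, since Stork falls back to localhost:8500 by default anyway:

```properties
quarkus.stork.employee-service.service-discovery.type = consul
quarkus.stork.employee-service.service-discovery.consul-host = localhost
quarkus.stork.employee-service.service-discovery.consul-port = 8500
```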

Once we create a REST client with the additional annotations, we must inject it into the DepartmentResource class using the @RestClient annotation. Afterward, we can use that client to interact with the employee-service while calling the GET /departments/organization/{organizationId}/with-employees from the department-service.

@Path("/departments")
@Produces(MediaType.APPLICATION_JSON)
public class DepartmentResource {

    private Logger logger;
    private DepartmentRepository repository;
    private EmployeeClient employeeClient;

    public DepartmentResource(Logger logger,
                              DepartmentRepository repository,
                              @RestClient EmployeeClient employeeClient) {
        this.logger = logger;
        this.repository = repository;
        this.employeeClient = employeeClient;
    }

    // ... other methods for REST endpoints 

    @Path("/organization/{organizationId}")
    @GET
    public List<Department> findByOrganization(@PathParam("organizationId") Long organizationId) {
        logger.infof("Department find: organizationId=%d", organizationId);
        return repository.findByOrganization(organizationId);
    }

    @Path("/organization/{organizationId}/with-employees")
    @GET
    public List<Department> findByOrganizationWithEmployees(@PathParam("organizationId") Long organizationId) {
        logger.infof("Department find with employees: organizationId=%d", organizationId);
        List<Department> departments = repository.findByOrganization(organizationId);
        departments.forEach(d -> d.setEmployees(employeeClient.findByDepartment(d.getId())));
        return departments;
    }

}
Java

Let’s take a look at the implementation of the GET /employees/department/{departmentId} in the employee-service called by the EmployeeClient in the department-service.

@Path("/employees")
@Produces(MediaType.APPLICATION_JSON)
public class EmployeeResource {

    private Logger logger;
    private EmployeeRepository repository;

    public EmployeeResource(Logger logger,
                            EmployeeRepository repository) {
        this.logger = logger;
        this.repository = repository;
    }

    @Path("/department/{departmentId}")
    @GET
    public List<Employee> findByDepartment(@PathParam("departmentId") Long departmentId) {
        logger.infof("Employee find: departmentId=%s", departmentId);
        return repository.findByDepartment(departmentId);
    }

    @Path("/organization/{organizationId}")
    @GET
    public List<Employee> findByOrganization(@PathParam("organizationId") Long organizationId) {
        logger.infof("Employee find: organizationId=%s", organizationId);
        return repository.findByOrganization(organizationId);
    }
    
    // ... other methods for REST endpoints

}
Java

Similarly in the organization-service, we define two REST clients for interacting with employee-service and department-service.

@Path("/departments")
@RegisterRestClient(baseUri = "stork://department-service")
public interface DepartmentClient {

    @GET
    @Path("/organization/{organizationId}")
    @Produces(MediaType.APPLICATION_JSON)
    List<Department> findByOrganization(@PathParam("organizationId") Long organizationId);

    @GET
    @Path("/organization/{organizationId}/with-employees")
    @Produces(MediaType.APPLICATION_JSON)
    List<Department> findByOrganizationWithEmployees(@PathParam("organizationId") Long organizationId);

}

@Path("/employees")
@RegisterRestClient(baseUri = "stork://employee-service")
public interface EmployeeClient {

    @GET
    @Path("/organization/{organizationId}")
    @Produces(MediaType.APPLICATION_JSON)
    List<Employee> findByOrganization(@PathParam("organizationId") Long organizationId);

}
Java

It involves the need to include the following two configuration properties that set the discovery service type for the target services.

quarkus.stork.employee-service.service-discovery.type = consul
quarkus.stork.department-service.service-discovery.type = consul
Plaintext

The OrganizationResource class injects and uses both previously created clients.

@Path("/organizations")
@Produces(MediaType.APPLICATION_JSON)
public class OrganizationResource {

    private Logger logger;
    private OrganizationRepository repository;
    private DepartmentClient departmentClient;
    private EmployeeClient employeeClient;

    public OrganizationResource(Logger logger,
                                OrganizationRepository repository,
                                @RestClient DepartmentClient departmentClient,
                                @RestClient EmployeeClient employeeClient) {
        this.logger = logger;
        this.repository = repository;
        this.departmentClient = departmentClient;
        this.employeeClient = employeeClient;
    }

    // ... other methods for REST endpoints

    @Path("/{id}/with-departments")
    @GET
    public Organization findByIdWithDepartments(@PathParam("id") Long id) {
        logger.infof("Organization find with departments: id=%d", id);
        Organization organization = repository.findById(id);
        organization.setDepartments(departmentClient.findByOrganization(organization.getId()));
        return organization;
    }

    @Path("/{id}/with-departments-and-employees")
    @GET
    public Organization findByIdWithDepartmentsAndEmployees(@PathParam("id") Long id) {
        logger.infof("Organization find with departments and employees: id=%d", id);
        Organization organization = repository.findById(id);
        organization.setDepartments(departmentClient.findByOrganizationWithEmployees(organization.getId()));
        return organization;
    }

    @Path("/{id}/with-employees")
    @GET
    public Organization findByIdWithEmployees(@PathParam("id") Long id) {
        logger.infof("Organization find with employees: id=%d", id);
        Organization organization = repository.findById(id);
        organization.setEmployees(employeeClient.findByOrganization(organization.getId()));
        return organization;
    }

}
Java

Registration in Consul with Quarkus

After including Stork, the Quarkus REST client automatically splits traffic between all the instances of the application existing in the discovery server. However, each application must register itself in the discovery server, and Quarkus Stork won’t do that for us. Theoretically, there is the stork-service-registration-consul module that should register the application instance on startup, but as far as I know, this feature is still under active development. For now, we will include the mentioned library and set the following property to enable the registrar feature.

quarkus.stork.employee-service.service-registrar.type = consul
Plaintext

Our sample applications will interact directly with the Consul server using the SmallRye Mutiny reactive client. Let’s define the ConsulClient bean. It is registered only if the quarkus.stork.employee-service.service-registrar.type property exists with the consul value.

@ApplicationScoped
public class EmployeeBeanProducer {

    @ConfigProperty(name = "consul.host", defaultValue = "localhost")  String host;
    @ConfigProperty(name = "consul.port", defaultValue = "8500") int port;

    @Produces
    @LookupIfProperty(name = "quarkus.stork.employee-service.service-registrar.type", 
                      stringValue = "consul")
    public ConsulClient consulClient(Vertx vertx) {
        return ConsulClient.create(vertx, new ConsulClientOptions()
                .setHost(host)
                .setPort(port));
    }

}
Java

The bean responsible for handling the startup and shutdown events is annotated with @ApplicationScoped. It defines two methods: onStart and onStop. It also injects the ConsulClient bean. Quarkus dynamically generates the HTTP listen port number on startup and saves it in the quarkus.http.port property. Therefore, the startup task needs to wait a moment to ensure that the application is running. We will run it 3 seconds after receiving the startup event. Every instance of the application needs a unique id in Consul, so we retrieve the HTTP port number and use it as the id suffix. The name of the service is taken from the quarkus.application.name property. Each instance keeps its id so that it can deregister itself on shutdown.

@ApplicationScoped
public class EmployeeLifecycle {

    @ConfigProperty(name = "quarkus.application.name")
    private String appName;
    private int port;

    private Logger logger;
    private Instance<ConsulClient> consulClient;
    private ScheduledExecutorService executor;

    public EmployeeLifecycle(Logger logger,
                             Instance<ConsulClient> consulClient,
                             ScheduledExecutorService executor) {
        this.logger = logger;
        this.consulClient = consulClient;
        this.executor = executor;
    }

    void onStart(@Observes StartupEvent ev) {
        if (consulClient.isResolvable()) {
            executor.schedule(() -> {
                port = ConfigProvider.getConfig().getValue("quarkus.http.port", Integer.class);
                consulClient.get().registerService(new ServiceOptions()
                                .setPort(port)
                                .setAddress("localhost")
                                .setName(appName)
                                .setId(appName + "-" + port),
                        result -> logger.infof("Service %s-%d registered", appName, port));
            }, 3000, TimeUnit.MILLISECONDS);
        }
    }

    void onStop(@Observes ShutdownEvent ev) {
        if (consulClient.isResolvable()) {
            consulClient.get().deregisterService(appName + "-" + port,
                    result -> logger.infof("Service %s-%d deregistered", appName, port));
        }
    }
}
Java

Read Configuration Properties from Consul

The io.quarkiverse.config:quarkus-config-consul extension is already included in the dependencies. Once the quarkus.consul-config.enabled property is set to true, the Quarkus application tries to read properties from the Consul Key-Value store. The quarkus.consul-config.properties-value-keys property indicates the location of the properties file stored in Consul. Here are the properties defined in the application.properties file on the classpath. For example, the default config location for the department-service is config/department-service.

quarkus.application.name = department-service
quarkus.application.version = 1.1
quarkus.consul-config.enabled = true
quarkus.consul-config.properties-value-keys = config/${quarkus.application.name}
Plaintext
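
The ${quarkus.application.name} placeholder in the property above is a config property expression resolved from the other properties at startup. A minimal stdlib sketch of such interpolation (illustrative only, not the actual MicroProfile Config implementation):

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PropertyExpansion {
    private static final Pattern EXPR = Pattern.compile("\\$\\{([^}]+)\\}");

    // Replaces each ${key} with its value from the given properties map;
    // unknown keys are left as-is.
    static String expand(String value, Map<String, String> props) {
        Matcher m = EXPR.matcher(value);
        StringBuilder sb = new StringBuilder();
        while (m.find()) {
            m.appendReplacement(sb, Matcher.quoteReplacement(
                    props.getOrDefault(m.group(1), m.group(0))));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> props = Map.of("quarkus.application.name", "department-service");
        System.out.println(expand("config/${quarkus.application.name}", props));
        // prints config/department-service
    }
}
```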

Let’s switch to the Consul UI. It is available under the same 8500 port as the API. In the “Key/Value” section we create configuration for all three sample applications.

These are the configuration properties for department-service. They target the development mode. We enable a dynamically generated port number to run several instances on the same workstation. Our application uses an in-memory H2 database. It loads the import.sql script on startup to initialize a demo data store. We also enable Quarkus Stork service discovery for the employee-service REST client and registration in Consul.

quarkus.http.port = 0
quarkus.datasource.db-kind = h2
quarkus.hibernate-orm.database.generation = drop-and-create
quarkus.hibernate-orm.sql-load-script = src/main/resources/import.sql
quarkus.stork.employee-service.service-discovery.type = consul
quarkus.stork.department-service.service-registrar.type = consul
Plaintext

Here are the configuration properties for the employee-service.

(screenshot: employee-service configuration in the Consul Key-Value store)

Finally, let’s take a look at the organization-service configuration in Consul.

Run Applications in the Development Mode

Let’s run our three sample Quarkus applications in the development mode. Both employee-service and department-service should have two running instances. We don’t have to worry about port conflicts, since the ports are automatically generated on startup.

$ cd employee-service
$ mvn quarkus:dev
$ mvn quarkus:dev

$ cd department-service
$ mvn quarkus:dev
$ mvn quarkus:dev

$ cd organization-service
$ mvn quarkus:dev
ShellSession

Once we start all the instances we can switch to the Consul UI. You should see exactly the same services in your web console.

(screenshot: services registered in the Consul UI)

There are two instances of the employee-service and department-service. We can check out the list of registered instances for the selected application.

(screenshot: the list of registered instances for a selected service in the Consul UI)

This step is optional. To simplify tests, I also included an API gateway that integrates with Consul discovery. It listens on the static port 8080 and forwards requests to the downstream services, which listen on dynamic ports. Since Quarkus does not provide a module dedicated to the API gateway, I used Spring Cloud Gateway with Spring Cloud Consul for that. Therefore, you need to use the following command to run the application:

$ cd gateway-service
$ mvn spring-boot:run
ShellSession

Afterward, we can make some API tests with or without the gateway. With the gateway-service, we use port 8080 with the /api base context path. Let’s call the following three endpoints. The first one is exposed by the department-service, while the other two are exposed by the organization-service.

$ curl http://localhost:8080/api/departments/organization/1/with-employees
$ curl http://localhost:8080/api/organizations/1/with-departments
$ curl http://localhost:8080/api/organizations/1/with-departments-and-employees
ShellSession

Each Quarkus service listens on a dynamic port and registers itself in Consul using that port number. Here are the department-service logs from startup and during test communication.

After including the quarkus-micrometer-registry-prometheus module each application instance exposes metrics under the GET /q/metrics endpoint. There are several metrics related to service discovery published by the Quarkus Stork extension.

$ curl http://localhost:51867/q/metrics | grep stork
# TYPE stork_service_discovery_instances_count counter
# HELP stork_service_discovery_instances_count The number of service instances discovered
stork_service_discovery_instances_count_total{service_name="employee-service"} 12.0
# TYPE stork_service_selection_duration_seconds summary
# HELP stork_service_selection_duration_seconds The duration of the selection operation
stork_service_selection_duration_seconds_count{service_name="employee-service"} 6.0
stork_service_selection_duration_seconds_sum{service_name="employee-service"} 9.93934E-4
# TYPE stork_service_selection_duration_seconds_max gauge
# HELP stork_service_selection_duration_seconds_max The duration of the selection operation
stork_service_selection_duration_seconds_max{service_name="employee-service"} 0.0
# TYPE stork_service_discovery_failures counter
# HELP stork_service_discovery_failures The number of failures during service discovery
stork_service_discovery_failures_total{service_name="employee-service"} 0.0
# TYPE stork_service_discovery_duration_seconds_max gauge
# HELP stork_service_discovery_duration_seconds_max The duration of the discovery operation
stork_service_discovery_duration_seconds_max{service_name="employee-service"} 0.0
# TYPE stork_service_discovery_duration_seconds summary
# HELP stork_service_discovery_duration_seconds The duration of the discovery operation
stork_service_discovery_duration_seconds_count{service_name="employee-service"} 6.0
stork_service_discovery_duration_seconds_sum{service_name="employee-service"} 2.997176541
# TYPE stork_service_selection_failures counter
# HELP stork_service_selection_failures The number of failures during service selection
stork_service_selection_failures_total{service_name="employee-service"} 0.0
ShellSession
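
To consume such metrics programmatically, the Prometheus text format can be parsed with a few lines of plain Java. This is a sketch independent of any client library; the metric and label names come from the scrape output above:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PrometheusScrape {

    // Matches lines like: metric_name{service_name="employee-service"} 12.0
    private static final Pattern LINE =
            Pattern.compile("(\\w+)\\{service_name=\"([^\"]+)\"\\}\\s+([0-9.Ee+-]+)");

    // Extracts metric values keyed by "metric:service" from Prometheus text output.
    static Map<String, Double> parse(String body) {
        Map<String, Double> values = new HashMap<>();
        for (String line : body.split("\n")) {
            if (line.startsWith("#")) continue; // skip TYPE/HELP comment lines
            Matcher m = LINE.matcher(line.trim());
            if (m.matches()) {
                values.put(m.group(1) + ":" + m.group(2), Double.parseDouble(m.group(3)));
            }
        }
        return values;
    }

    public static void main(String[] args) {
        String scrape = """
                # TYPE stork_service_discovery_instances_count counter
                stork_service_discovery_instances_count_total{service_name="employee-service"} 12.0
                stork_service_discovery_failures_total{service_name="employee-service"} 0.0
                """;
        Map<String, Double> v = parse(scrape);
        System.out.println(
                v.get("stork_service_discovery_instances_count_total:employee-service"));
        // prints 12.0
    }
}
```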

Advanced Load Balancing with Quarkus Stork and Consul

Quarkus Stork provides several load-balancing strategies to efficiently distribute requests across multiple instances of an application. It can ensure optimal resource usage, better performance, and high availability. By default, Quarkus Stork uses the round-robin algorithm. To override the default strategy, we first need to include a library responsible for providing the selected load-balancing algorithm. For example, let’s choose the least-response-time strategy, which collects the response times of calls made to service instances and picks an instance based on this information.

<dependency>
    <groupId>io.smallrye.stork</groupId>
    <artifactId>stork-load-balancer-least-response-time</artifactId>
</dependency>
XML

Then, we have to change the default strategy in configuration properties for the selected client. Let’s add the following property to the config/department-service in Consul Key-Value store.

quarkus.stork.employee-service.load-balancer.type=least-response-time
Plaintext

After that, we can restart the instance of department-service and retest the communication between services.
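
To make the two strategies concrete, here is a minimal plain-Java sketch of their selection logic. This is illustrative only and not Stork’s actual implementation, which also handles weights, failure detection, and thread safety:

```java
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

public class SelectionPolicies {

    // Round-robin: cycle through the instances in order (the default behavior).
    static class RoundRobin {
        private final AtomicInteger next = new AtomicInteger();

        String select(List<String> instances) {
            int idx = Math.floorMod(next.getAndIncrement(), instances.size());
            return instances.get(idx);
        }
    }

    // Least-response-time: pick the instance with the lowest recorded latency.
    static class LeastResponseTime {
        private final Map<String, Long> lastLatencyMillis = new HashMap<>();

        void record(String instance, long millis) {
            lastLatencyMillis.put(instance, millis);
        }

        String select(List<String> instances) {
            return instances.stream()
                    .min(Comparator.comparingLong(
                            (String i) -> lastLatencyMillis.getOrDefault(i, 0L)))
                    .orElseThrow();
        }
    }

    public static void main(String[] args) {
        RoundRobin rr = new RoundRobin();
        List<String> pool = List.of("inst-a", "inst-b");
        System.out.println(rr.select(pool)); // inst-a
        System.out.println(rr.select(pool)); // inst-b
        System.out.println(rr.select(pool)); // inst-a

        LeastResponseTime lrt = new LeastResponseTime();
        lrt.record("inst-a", 120);
        lrt.record("inst-b", 40);
        System.out.println(lrt.select(pool)); // inst-b
    }
}
```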

Testing Integration Between Quarkus and Consul

We have already included the org.testcontainers:consul artifact in the Maven dependencies. Thanks to that, we can create JUnit tests with Quarkus and the Testcontainers Consul module. Since Quarkus doesn’t provide built-in support for testing with a Consul container, we need to create a class that implements the QuarkusTestResourceLifecycleManager interface. It is responsible for starting and stopping the Consul container during JUnit tests. After starting the container, we add the configuration properties required to enable in-memory database creation and service registration in Consul.

public class ConsulResource implements QuarkusTestResourceLifecycleManager {

    private ConsulContainer consulContainer;

    @Override
    public Map<String, String> start() {
        consulContainer = new ConsulContainer("hashicorp/consul:latest")
                .withConsulCommand(
                """
                kv put config/department-service - <<EOF
                department.name=abc
                quarkus.datasource.db-kind=h2
                quarkus.hibernate-orm.database.generation=drop-and-create
                quarkus.stork.department-service.service-registrar.type=consul
                EOF
                """
                );

        consulContainer.start();

        String url = consulContainer.getHost() + ":" + consulContainer.getFirstMappedPort();

        return ImmutableMap.of(
                "quarkus.consul-config.agent.host-port", url,
                "consul.host", consulContainer.getHost(),
                "consul.port", consulContainer.getFirstMappedPort().toString()
        );
    }

    @Override
    public void stop() {
        consulContainer.stop();
    }
}
Java

To start Consul container during the test, we need to annotate the test class with @QuarkusTestResource(ConsulResource.class). The test loads configuration properties from Consul on startup and registers the service. Then, it verifies that REST endpoints exposed by the department-service work fine and the registered service exists in Consul.

@QuarkusTest
@QuarkusTestResource(ConsulResource.class)
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
public class DepartmentResourceConsulTests {

    @ConfigProperty(name = "department.name", defaultValue = "")
    private String name;
    @Inject
    ConsulClient consulClient;

    @Test
    @Order(1)
    void add() {
        Department d = new Department();
        d.setOrganizationId(1L);
        d.setName(name);

        given().body(d).contentType(ContentType.JSON)
                .when().post("/departments").then()
                .statusCode(200)
                .body("id", notNullValue())
                .body("name", is(name));
    }

    @Test
    @Order(2)
    void findAll() {
        when().get("/departments").then()
                .statusCode(200)
                .body("size()", is(4));
    }

    @Test
    @Order(3)
    void checkRegister() throws InterruptedException {
        Thread.sleep(5000);
        Uni<ServiceList> uni = Uni.createFrom().completionStage(() -> consulClient.catalogServices().toCompletionStage());
        List<Service> services = uni.await().atMost(Duration.ofSeconds(3)).getList();
        final long count = services.stream()
                .filter(svc -> svc.getName().equals("department-service")).count();
        assertEquals(1, count);
    }
}
Java
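
The fixed Thread.sleep(5000) in checkRegister is simple but brittle. A small stdlib polling helper could replace it; this is a sketch, and libraries such as Awaitility do the same thing more robustly:

```java
import java.time.Duration;
import java.util.function.BooleanSupplier;

public class Poll {

    // Polls the condition until it returns true or the timeout elapses.
    static boolean until(BooleanSupplier condition, Duration timeout, Duration interval)
            throws InterruptedException {
        long deadline = System.nanoTime() + timeout.toNanos();
        while (System.nanoTime() < deadline) {
            if (condition.getAsBoolean()) return true;
            Thread.sleep(interval.toMillis());
        }
        return condition.getAsBoolean();
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // The condition becomes true after ~200 ms instead of waiting a fixed 5 s.
        boolean ok = until(() -> System.currentTimeMillis() - start > 200,
                Duration.ofSeconds(5), Duration.ofMillis(50));
        System.out.println(ok); // true
    }
}
```

In the test, the sleep-then-assert pair could then become a single call that polls the Consul catalog until the service appears or the timeout expires.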

Final Thoughts

This article introduces Quarkus Stork for Consul discovery and client-side load balancing. It shows how to integrate Quarkus with the Consul Key-Value store for distributed configuration. It also covers topics like integration testing with Testcontainers, metrics, service registration, and advanced load-balancing strategies.

Spring Cloud Kubernetes with Spring Boot 3 https://piotrminkowski.com/2023/06/08/spring-cloud-kubernetes-with-spring-boot-3/ Thu, 08 Jun 2023 08:29:52 +0000

The post Spring Cloud Kubernetes with Spring Boot 3 appeared first on Piotr's TechBlog.

In this article, you will learn how to create, test, and run apps with Spring Cloud Kubernetes and Spring Boot 3. You will see how to use tools like Skaffold, Testcontainers, Spring Boot Admin, and the Fabric8 client in the Kubernetes environment. The main goal of this article is to update you on the latest version of the Spring Cloud Kubernetes project. There are several other posts on my blog with similar content. You can refer to the following article describing the best practices for running Java apps on Kubernetes. You can also read about microservices with Spring Cloud Kubernetes in a post published some years ago. It is quite outdated, so I’ll show what has changed since then. Let’s begin!

Source Code

If you would like to try it yourself, you may always take a look at my source code. In order to do that, you need to clone my GitHub repository. Then you should just follow my instructions.

Firstly, let’s discuss our repository. It contains five apps. There are three microservices (employee-service, department-service, organization-service) communicating with each other through the REST client and connecting to the Mongo database. There is also the API gateway (gateway-service) created with the Spring Cloud Gateway project. Finally, the admin-service directory contains the Spring Boot Admin app used for monitoring all other apps. You can easily deploy all the apps from the source code using a single Skaffold command. If you run the following command from the repository root directory it will build the images with Jib Maven Plugin and deploy all apps on your Kubernetes cluster:

$ skaffold run

On the other hand, you can go to the particular app directory and deploy only it using exactly the same command. All the required Kubernetes YAML manifests for each app are placed inside the k8s directories. There is also a global configuration with e.g. Mongo deployment in the project root k8s directory. Here’s the structure of our sample repo:

How It Works

In our sample architecture, we will use Spring Cloud Kubernetes Config for injecting configuration via ConfigMap and Secret and Spring Cloud Kubernetes Discovery for inter-service communication with the OpenFeign client. All our apps are running within the same namespace, but we could as well deploy them across several different namespaces and handle communication between them with OpenFeign. The only thing we should do in that case is to set the property spring.cloud.kubernetes.discovery.all-namespaces to true. For more details, you can refer to the following article.
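
For the cross-namespace case mentioned above, the property would go into the application configuration, for example (assuming the default Spring Cloud Kubernetes property names):

```yaml
spring:
  cloud:
    kubernetes:
      discovery:
        all-namespaces: true
```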

In front of our services, there is an API gateway. It is a separate app, but we could as well install it on Kubernetes using the native CRD integration. For more details, you can refer to the following post on the Spring blog. In our case, this is a standard Spring Boot 3 app that just includes and uses the Spring Cloud Gateway module. It also uses Spring Cloud Kubernetes Discovery together with Spring Cloud OpenFeign to locate and call the downstream services. Here’s the diagram that illustrates our architecture.

spring-cloud-kubernetes-arch

Using Spring Cloud Kubernetes Config

I’ll describe the implementation details using the example of department-service. It exposes some REST endpoints but also calls the endpoints exposed by the employee-service. Besides the standard modules, we need to include Spring Cloud Kubernetes in the Maven dependencies. Here, we have to decide whether we use the Fabric8 client or the Kubernetes Java Client. Personally, I have experience with Fabric8, so I’ll use the spring-cloud-starter-kubernetes-fabric8-all starter to include both the config and discovery modules.

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-kubernetes-fabric8-all</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-data-mongodb</artifactId>
</dependency>

As you can see, our app connects to the Mongo database. Let’s provide the connection details and credentials required by the app. In the k8s directory, you will find the configmap.yaml file. It contains the address of Mongo and the database name. Those properties are injected into the pod as the application.properties file. And now the most important thing: the name of the ConfigMap has to be the same as the name of our app. The name of the Spring Boot app is indicated by the spring.application.name property.

kind: ConfigMap
apiVersion: v1
metadata:
  name: department
data:
  application.properties: |-
    spring.data.mongodb.host: mongodb
    spring.data.mongodb.database: admin
    spring.data.mongodb.authentication-database: admin

In the current case, the name of the app is department. Here’s the application.yml file inside the app:

spring:
  application:
    name: department
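As a side note, the application.properties payload from the ConfigMap is parsed by the standard Java properties rules, where a colon is a valid key-value separator, so the "key: value" lines above become regular properties. Here is a small illustrative snippet (not part of the sample project) showing how that payload resolves:

```java
import java.io.StringReader;
import java.util.Properties;

public class ConfigMapPayload {

    // Parse an application.properties payload the way the .properties
    // format defines it: ':' and '=' are both key-value separators,
    // and leading whitespace in the value is trimmed
    public static Properties parse(String payload) throws Exception {
        Properties props = new Properties();
        props.load(new StringReader(payload));
        return props;
    }

    public static void main(String[] args) throws Exception {
        Properties props = parse("""
                spring.data.mongodb.host: mongodb
                spring.data.mongodb.database: admin
                spring.data.mongodb.authentication-database: admin
                """);
        // prints "mongodb"
        System.out.println(props.getProperty("spring.data.mongodb.host"));
    }
}
```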

The same naming rule applies to Secret. We are keeping sensitive data like the username and password to the Mongo database inside the following Secret. You can also find that content inside the secret.yaml file in the k8s directory.

kind: Secret
apiVersion: v1
metadata:
  name: department
data:
  spring.data.mongodb.password: UGlvdF8xMjM=
  spring.data.mongodb.username: cGlvdHI=
type: Opaque
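Keep in mind that Secret values are only base64-encoded, not encrypted. A quick, illustrative way (not part of the sample project) to verify what the manifest above actually stores:

```java
import java.util.Base64;

public class SecretValues {

    // Kubernetes Secret data values are base64-encoded plain text;
    // this helper recovers the original string
    public static String decode(String encoded) {
        return new String(Base64.getDecoder().decode(encoded));
    }

    public static void main(String[] args) {
        // the values from the Secret manifest above
        System.out.println(decode("cGlvdHI="));     // prints "piotr"
        System.out.println(decode("UGlvdF8xMjM=")); // prints "Piot_123"
    }
}
```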

Now, let’s proceed to the Deployment manifest. We will clarify the first two points later. Spring Cloud Kubernetes requires special privileges on Kubernetes to interact with the master API (1). We don’t have to provide a tag for the image – Skaffold will handle it (2). In order to enable loading properties from a ConfigMap, we need to set the spring.config.import=kubernetes: property (the new way) or set the spring.cloud.bootstrap.enabled property to true (the old way). Instead of using the properties directly, we will set the corresponding environment variables on the Deployment (3). By default, consuming Secrets through the API is disabled for security reasons. In order to enable it, we will set the SPRING_CLOUD_KUBERNETES_SECRETS_ENABLEAPI environment variable to true (4).

apiVersion: apps/v1
kind: Deployment
metadata:
  name: department
  labels:
    app: department
spec:
  replicas: 1
  selector:
    matchLabels:
      app: department
  template:
    metadata:
      labels:
        app: department
    spec:
      serviceAccountName: spring-cloud-kubernetes # (1)
      containers:
      - name: department
        image: piomin/department # (2)
        ports:
        - containerPort: 8080
        env:
          - name: SPRING_CLOUD_BOOTSTRAP_ENABLED # (3)
            value: "true"
          - name: SPRING_CLOUD_KUBERNETES_SECRETS_ENABLEAPI # (4)
            value: "true"

Using Spring Cloud Kubernetes Discovery

We have already included the Spring Cloud Kubernetes Discovery module in the previous section using the spring-cloud-starter-kubernetes-fabric8-all starter. In order to provide a declarative REST client we will also include the Spring Cloud OpenFeign module:

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-openfeign</artifactId>
</dependency>

Now, we can declare the @FeignClient interface. The important thing here is the name of a discovered service. It should be the same as the name of the Kubernetes Service defined for the employee-service app.

@FeignClient(name = "employee")
public interface EmployeeClient {

    @GetMapping("/department/{departmentId}")
    List<Employee> findByDepartment(@PathVariable("departmentId") String departmentId);

    @GetMapping("/department-with-delay/{departmentId}")
    List<Employee> findByDepartmentWithDelay(@PathVariable("departmentId") String departmentId);
}

Here’s the Kubernetes Service manifest for the employee-service app. The name of the service is employee (1). The label spring-boot is set for Spring Boot Admin discovery purposes (2). You can find the following YAML in the employee-service/k8s directory.

apiVersion: v1
kind: Service
metadata:
  name: employee # (1)
  labels:
    app: employee
    spring-boot: "true" # (2)
spec:
  ports:
    - port: 8080
      protocol: TCP
  selector:
    app: employee
  type: ClusterIP

Just to clarify – here’s the implementation of the employee-service API methods called by the OpenFeign client in the department-service.

@RestController
public class EmployeeController {

    private static final Logger LOGGER = LoggerFactory
        .getLogger(EmployeeController.class);
	
    @Autowired
    EmployeeRepository repository;

    // ... other endpoints implementation 

    @GetMapping("/department/{departmentId}")
    public List<Employee> findByDepartment(@PathVariable("departmentId") String departmentId) {
        LOGGER.info("Employee find: departmentId={}", departmentId);
        return repository.findByDepartmentId(departmentId);
    }

    @GetMapping("/department-with-delay/{departmentId}")
    public List<Employee> findByDepartmentWithDelay(@PathVariable("departmentId") String departmentId) throws InterruptedException {
        LOGGER.info("Employee find: departmentId={}", departmentId);
        Thread.sleep(2000);
        return repository.findByDepartmentId(departmentId);
    }
	
}

That’s all we have to do. Now, we can just call the endpoint using the OpenFeign client from department-service. For example, on the “delayed” endpoint, we can use Spring Cloud Circuit Breaker with Resilience4J.

@RestController
public class DepartmentController {

    private static final Logger LOGGER = LoggerFactory
        .getLogger(DepartmentController.class);

    DepartmentRepository repository;
    EmployeeClient employeeClient;
    Resilience4JCircuitBreakerFactory circuitBreakerFactory;

    public DepartmentController(
        DepartmentRepository repository, 
        EmployeeClient employeeClient,
        Resilience4JCircuitBreakerFactory circuitBreakerFactory) {
            this.repository = repository;
            this.employeeClient = employeeClient;
            this.circuitBreakerFactory = circuitBreakerFactory;
    }

    @GetMapping("/{id}/with-employees-and-delay")
    public Department findByIdWithEmployeesAndDelay(@PathVariable("id") String id) {
        LOGGER.info("Department findByIdWithEmployees: id={}", id);
        Department department = repository.findById(id).orElseThrow();
        CircuitBreaker circuitBreaker = circuitBreakerFactory.create("delayed-circuit");
        List<Employee> employees = circuitBreaker.run(() ->
                employeeClient.findByDepartmentWithDelay(department.getId()));
        department.setEmployees(employees);
        return department;
    }

    @GetMapping("/organization/{organizationId}/with-employees")
    public List<Department> findByOrganizationWithEmployees(@PathVariable("organizationId") String organizationId) {
        LOGGER.info("Department find: organizationId={}", organizationId);
        List<Department> departments = repository.findByOrganizationId(organizationId);
        departments.forEach(d -> d.setEmployees(employeeClient.findByDepartment(d.getId())));
        return departments;
    }

}

Testing with Fabric8 Kubernetes

We have already finished the implementation of our service. All the Kubernetes YAML manifests are prepared and ready to deploy. Now, the question is – can we easily test that everything works fine before we proceed to the deployment on a real cluster? The answer is – yes. Moreover, we can choose between several tools. Let’s begin with the simplest option – the Kubernetes mock server. In order to use it, we need to include an additional Maven dependency:

<dependency>
  <groupId>io.fabric8</groupId>
  <artifactId>kubernetes-server-mock</artifactId>
  <version>6.7.1</version>
  <scope>test</scope>
</dependency>

Then, we can proceed to the test. In the first step, we need to provide several test annotations. Inside @SpringBootTest we should simulate the Kubernetes platform with the spring.main.cloud-platform property set to KUBERNETES (1). Normally, Spring Boot is able to autodetect if it is running on Kubernetes. In this case, we need to “trick” it, because we are just simulating the API, not running the test on Kubernetes. We also need to enable the old way of ConfigMap injection with the spring.cloud.bootstrap.enabled=true property.

Once we annotate the test class with @EnableKubernetesMockClient (2), we can use an auto-configured static instance of the Fabric8 KubernetesClient (3). During the test, the Fabric8 library runs a web server that mocks all the API requests sent by the client. By the way, we are using Testcontainers for running Mongo (4). In the next step, we create the ConfigMap that injects the Mongo connection settings into the Spring Boot app (5). Thanks to Spring Cloud Kubernetes Config, it is automatically loaded by the app, and the app is able to connect to the Mongo database on the dynamically generated port.

Spring Cloud Kubernetes comes with an auto-configured Fabric8 KubernetesClient. We need to force it to connect to the mock API server. Therefore, we should override the kubernetes.master property used by the Fabric8 KubernetesClient with the master URL taken from the test “mocked” instance (6). Finally, we can just implement the test methods in the standard way.

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT,
        properties = {
                "spring.main.cloud-platform=KUBERNETES",
                "spring.cloud.bootstrap.enabled=true"}) // (1)
@EnableKubernetesMockClient(crud = true) // (2)
@Testcontainers
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
public class EmployeeKubernetesMockTest {

    private static final Logger LOG = LoggerFactory
        .getLogger(EmployeeKubernetesMockTest.class);

    static KubernetesClient client; // (3)

    @Container // (4)
    static MongoDBContainer mongodb = new MongoDBContainer("mongo:5.0");

    @BeforeAll
    static void setup() {

        ConfigMap cm = client.configMaps()
                .create(buildConfigMap(mongodb.getMappedPort(27017)));
        LOG.info("!!! {}", cm); // (5)

        // (6)
        System.setProperty(Config.KUBERNETES_MASTER_SYSTEM_PROPERTY, 
            client.getConfiguration().getMasterUrl());
        System.setProperty(Config.KUBERNETES_TRUST_CERT_SYSTEM_PROPERTY, "true");
        System.setProperty(Config.KUBERNETES_NAMESPACE_SYSTEM_PROPERTY, "default");
    }

    private static ConfigMap buildConfigMap(int port) {
        return new ConfigMapBuilder().withNewMetadata()
                .withName("employee").withNamespace("default")
                .endMetadata()
                .addToData("application.properties",
                        """
                        spring.data.mongodb.host=localhost
                        spring.data.mongodb.port=%d
                        spring.data.mongodb.database=test
                        spring.data.mongodb.authentication-database=test
                        """.formatted(port))
                .build();
    }

    @Autowired
    TestRestTemplate restTemplate;

    @Test
    @Order(1)
    void addEmployeeTest() {
        Employee employee = new Employee("1", "1", "Test", 30, "test");
        employee = restTemplate.postForObject("/", employee, Employee.class);
        assertNotNull(employee);
        assertNotNull(employee.getId());
    }

    @Test
    @Order(2)
    void addAndThenFindEmployeeByIdTest() {
        Employee employee = new Employee("1", "2", "Test2", 20, "test2");
        employee = restTemplate.postForObject("/", employee, Employee.class);
        assertNotNull(employee);
        assertNotNull(employee.getId());
        employee = restTemplate
                .getForObject("/{id}", Employee.class, employee.getId());
        assertNotNull(employee);
        assertNotNull(employee.getId());
    }

    @Test
    @Order(3)
    void findAllEmployeesTest() {
        Employee[] employees =
                restTemplate.getForObject("/", Employee[].class);
        assertEquals(2, employees.length);
    }

    @Test
    @Order(3)
    void findEmployeesByDepartmentTest() {
        Employee[] employees =
                restTemplate.getForObject("/department/1", Employee[].class);
        assertEquals(1, employees.length);
    }

    @Test
    @Order(3)
    void findEmployeesByOrganizationTest() {
        Employee[] employees =
                restTemplate.getForObject("/organization/1", Employee[].class);
        assertEquals(2, employees.length);
    }

}

Now, after running the tests, we can take a look at the logs. As you can see, our test loads properties from the employee ConfigMap.

Finally, it is able to successfully connect to Mongo on the dynamic port and run all the tests against that instance.

Testing with Testcontainers on k3s

As I mentioned before, there are several tools we can use for testing with Kubernetes. This time we will see how to do it with Testcontainers. We have already used it in the previous section for running the Mongo database. But there is also a Testcontainers module for Rancher’s k3s Kubernetes distribution. Currently, it is in an incubating state, but that doesn’t stop us from trying it. In order to use it in the project, we need to include the following Maven dependency:

<dependency>
  <groupId>org.testcontainers</groupId>
  <artifactId>k3s</artifactId>
  <scope>test</scope>
</dependency>

Here’s the implementation of the same tests as in the previous section, but this time with the k3s container. We don’t have to create any mocks. Instead, we create the K3sContainer object (1). Before running the tests, we need to create and initialize a KubernetesClient. The Testcontainers K3sContainer provides the getKubeConfigYaml() method for getting the kubeconfig data. With the Fabric8 Config object, we can initialize the client from that kubeconfig (2) (3). After that, we create the ConfigMap with the Mongo connection details (4). Finally, we have to override the master URL for the Spring Cloud Kubernetes auto-configured Fabric8 client. In comparison to the previous section, we also need to set the Kubernetes client certificates and keys (5).

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT,
        properties = {
                "spring.main.cloud-platform=KUBERNETES",
                "spring.cloud.bootstrap.enabled=true"})
@Testcontainers
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
public class EmployeeKubernetesTest {

   private static final Logger LOG = LoggerFactory
      .getLogger(EmployeeKubernetesTest.class);

   @Container
   static MongoDBContainer mongodb = new MongoDBContainer("mongo:5.0");
   @Container
   static K3sContainer k3s = new K3sContainer(DockerImageName
      .parse("rancher/k3s:v1.21.3-k3s1")); // (1)

   @BeforeAll
   static void setup() {
      Config config = Config
         .fromKubeconfig(k3s.getKubeConfigYaml()); // (2)
      DefaultKubernetesClient client = new 
         DefaultKubernetesClient(config); // (3)

      ConfigMap cm = client.configMaps().inNamespace("default")
         .create(buildConfigMap(mongodb.getMappedPort(27017)));
      LOG.info("!!! {}", cm); // (4)

      System.setProperty(Config.KUBERNETES_MASTER_SYSTEM_PROPERTY, 
         client.getConfiguration().getMasterUrl());
      
      // (5) 
      System.setProperty(Config.KUBERNETES_CLIENT_CERTIFICATE_DATA_SYSTEM_PROPERTY,
         client.getConfiguration().getClientCertData());
      System.setProperty(Config.KUBERNETES_CA_CERTIFICATE_DATA_SYSTEM_PROPERTY,
         client.getConfiguration().getCaCertData());
       System.setProperty(Config.KUBERNETES_CLIENT_KEY_DATA_SYSTEM_PROPERTY,
         client.getConfiguration().getClientKeyData());
      System.setProperty(Config.KUBERNETES_TRUST_CERT_SYSTEM_PROPERTY, 
         "true");
      System.setProperty(Config.KUBERNETES_NAMESPACE_SYSTEM_PROPERTY, 
         "default");
    }

    private static ConfigMap buildConfigMap(int port) {
        return new ConfigMapBuilder().withNewMetadata()
                .withName("employee").withNamespace("default")
                .endMetadata()
                .addToData("application.properties",
                        """
                        spring.data.mongodb.host=localhost
                        spring.data.mongodb.port=%d
                        spring.data.mongodb.database=test
                        spring.data.mongodb.authentication-database=test
                        """.formatted(port))
                .build();
    }

    @Autowired
    TestRestTemplate restTemplate;

    @Test
    @Order(1)
    void addEmployeeTest() {
        Employee employee = new Employee("1", "1", "Test", 30, "test");
        employee = restTemplate.postForObject("/", employee, Employee.class);
        assertNotNull(employee);
        assertNotNull(employee.getId());
    }

    @Test
    @Order(2)
    void addAndThenFindEmployeeByIdTest() {
        Employee employee = new Employee("1", "2", "Test2", 20, "test2");
        employee = restTemplate
           .postForObject("/", employee, Employee.class);
        assertNotNull(employee);
        assertNotNull(employee.getId());
        employee = restTemplate
                .getForObject("/{id}", Employee.class, employee.getId());
        assertNotNull(employee);
        assertNotNull(employee.getId());
    }

    @Test
    @Order(3)
    void findAllEmployeesTest() {
        Employee[] employees =
                restTemplate.getForObject("/", Employee[].class);
        assertEquals(2, employees.length);
    }

    @Test
    @Order(3)
    void findEmployeesByDepartmentTest() {
        Employee[] employees =
                restTemplate.getForObject("/department/1", Employee[].class);
        assertEquals(1, employees.length);
    }

    @Test
    @Order(3)
    void findEmployeesByOrganizationTest() {
        Employee[] employees =
                restTemplate.getForObject("/organization/1", Employee[].class);
        assertEquals(2, employees.length);
    }

}

Run Spring Kubernetes Apps on Minikube

In this exercise, I’m using Minikube, but you can as well use any other distribution like Kind or k3s. Spring Cloud Kubernetes requires additional privileges on Kubernetes to be able to interact with the master API. So, before running the apps, we will create the spring-cloud-kubernetes ServiceAccount with the required privileges. Our role needs access to configmaps, pods, services, endpoints and secrets. If we do not enable discovery across all namespaces (the spring.cloud.kubernetes.discovery.all-namespaces property), it can be a Role within the namespace. Otherwise, we should create a ClusterRole.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: spring-cloud-kubernetes
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: spring-cloud-kubernetes
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["configmaps", "pods", "services", "endpoints", "secrets"]
    verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: spring-cloud-kubernetes
  namespace: default
subjects:
  - kind: ServiceAccount
    name: spring-cloud-kubernetes
    namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: spring-cloud-kubernetes

Of course, you don’t have to apply the manifests visible above by yourself. As I mentioned at the beginning of the article, there is a skaffold.yaml file in the repository root directory that contains the whole configuration. It runs the manifests with the Mongo Deployment (1) and with the privileges (2) together with all the services.

apiVersion: skaffold/v4beta5
kind: Config
metadata:
  name: sample-spring-microservices-kubernetes
build:
  artifacts:
    - image: piomin/admin
      jib:
        project: admin-service
    - image: piomin/department
      jib:
        project: department-service
        args:
          - -DskipTests
    - image: piomin/employee
      jib:
        project: employee-service
        args:
          - -DskipTests
    - image: piomin/gateway
      jib:
        project: gateway-service
    - image: piomin/organization
      jib:
        project: organization-service
        args:
          - -DskipTests
  tagPolicy:
    gitCommit: {}
manifests:
  rawYaml:
    - k8s/mongodb-*.yaml # (1)
    - k8s/privileges.yaml # (2)
    - admin-service/k8s/*.yaml
    - department-service/k8s/*.yaml
    - employee-service/k8s/*.yaml
    - gateway-service/k8s/*.yaml
    - organization-service/k8s/*.yaml

All we need to do is deploy all the apps by executing the following skaffold command:

$ skaffold dev

Once we do that, we can display the list of running pods:

$ kubectl get pod
NAME                            READY   STATUS    RESTARTS   AGE
admin-5f8c8498f-vtstx           1/1     Running   0          2m38s
department-746774879b-llrdn     1/1     Running   0          2m38s
employee-5bbf6b765f-7hsv7       1/1     Running   0          2m37s
gateway-578cb64558-m9n7f        1/1     Running   0          2m37s
mongodb-7f68b8b674-dbfnb        1/1     Running   0          2m38s
organization-5688c58656-bv8n6   1/1     Running   0          2m37s

We can also display a list of services. Some of them, like admin or gateway, are exposed as NodePort. Thanks to that we can easily access them outside of our Kubernetes cluster.

$ kubectl get svc
NAME           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
admin          NodePort    10.101.220.141   <none>        8080:31368/TCP   3m53s
department     ClusterIP   10.108.144.90    <none>        8080/TCP         3m52s
employee       ClusterIP   10.99.75.2       <none>        8080/TCP         3m52s
gateway        NodePort    10.96.7.237      <none>        8080:31518/TCP   3m52s
kubernetes     ClusterIP   10.96.0.1        <none>        443/TCP          38h
mongodb        ClusterIP   10.108.198.233   <none>        27017/TCP        3m53s
organization   ClusterIP   10.107.102.26    <none>        8080/TCP         3m52s

Let’s obtain the Minikube IP address on our local machine:

$ minikube ip

Now, we can use that IP address to access e.g. the Spring Boot Admin Server on the target port. For me, it’s 31368. Spring Boot Admin should successfully discover all three microservices and connect to the /actuator endpoints exposed by those apps.
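Putting the two values together, the externally reachable address is just the node IP plus the node port taken from the service listing. Here is a minimal sketch (the IP address is a typical Minikube value, assumed only for illustration):

```java
public class NodePortUrl {

    // Combine the Minikube node IP with the node port extracted from
    // the "PORT(S)" column, e.g. "8080:31368/TCP" -> 31368
    public static String buildUrl(String nodeIp, String portSpec) {
        String nodePort = portSpec.substring(
                portSpec.indexOf(':') + 1, portSpec.indexOf('/'));
        return "http://" + nodeIp + ":" + nodePort;
    }

    public static void main(String[] args) {
        // "192.168.49.2" is a hypothetical Minikube IP
        System.out.println(buildUrl("192.168.49.2", "8080:31368/TCP"));
        // prints "http://192.168.49.2:31368"
    }
}
```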

spring-cloud-kubernetes-admin

We can go to the details of each Spring Boot app. As you can see, department-service is running on my local Minikube.

spring-cloud-kubernetes-services

Once you stop the skaffold dev command, all the apps and configuration will be removed from your Kubernetes cluster.

Final Thoughts

If you are running only Spring Boot apps on your Kubernetes cluster, Spring Cloud Kubernetes is an interesting option. It allows us to easily integrate with Kubernetes discovery, config maps, and secrets. Thanks to that we can take advantage of other Spring Cloud components like load balancer, circuit breaker, etc. However, if you are running apps written in different languages and frameworks, and using language-agnostic tools like service mesh (Istio, Linkerd), Spring Cloud Kubernetes may not be the best choice.

The post Spring Cloud Kubernetes with Spring Boot 3 appeared first on Piotr's TechBlog.

Spring Boot Development Mode with Testcontainers and Docker https://piotrminkowski.com/2023/05/26/spring-boot-development-mode-with-testcontainers-and-docker/ https://piotrminkowski.com/2023/05/26/spring-boot-development-mode-with-testcontainers-and-docker/#comments Fri, 26 May 2023 14:26:38 +0000 https://piotrminkowski.com/?p=14207 In this article, you will learn how to use Spring Boot built-in support for Testcontainers and Docker Compose to run external services in development mode. Spring Boot introduces those features in the current latest version 3.1. Of course, you can already take advantage of Testcontainers in your Spring Boot app tests. However, the ability to […]

In this article, you will learn how to use Spring Boot’s built-in support for Testcontainers and Docker Compose to run external services in development mode. Spring Boot introduced those features in the latest version, 3.1. Of course, you could already take advantage of Testcontainers in your Spring Boot app tests. However, the ability to run external databases, message brokers, or other external services on app startup was something I was waiting for. Especially since the competing framework, Quarkus, already provides a similar feature called Dev Services, which I find very useful during development. We should also not forget about another exciting feature – integration with Docker Compose. Let’s begin.

If you are looking for more articles related to Spring Boot 3 you can refer to the following one, about microservices with Spring Cloud.

Source Code

If you would like to try it by yourself, you can always take a look at my source code. Since I use Testcontainers often, you can find examples in several of my repositories. Here’s a list of the repositories we will use today:

You can clone them and then follow my instructions to see how to leverage Spring Boot’s built-in support for Testcontainers and Docker Compose in development mode.

Use Testcontainers in Tests

Let’s start with the standard usage example. The first repository has a single Spring Boot app that connects to the Mongo database. In order to build automated tests we have to include the following Maven dependencies:

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-test</artifactId>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>org.testcontainers</groupId>
  <artifactId>mongodb</artifactId>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>org.testcontainers</groupId>
  <artifactId>junit-jupiter</artifactId>
  <scope>test</scope>
</dependency>

Now, we can create the tests. We need to annotate our test class with @Testcontainers. Then, we have to declare the MongoDBContainer bean. Before Spring Boot 3.1, we would have to use DynamicPropertyRegistry to set the Mongo address automatically generated by Testcontainers.

@SpringBootTest(webEnvironment = 
   SpringBootTest.WebEnvironment.RANDOM_PORT)
@Testcontainers
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
public class PersonControllerTest {

   @Container
   static MongoDBContainer mongodb = 
      new MongoDBContainer("mongo:5.0");

   @DynamicPropertySource
   static void registerMongoProperties(DynamicPropertyRegistry registry) {
      registry.add("spring.data.mongodb.uri", mongodb::getReplicaSetUrl);
   }

   // ... test methods

}

Fortunately, beginning from Spring Boot 3.1 we can simplify that notation with @ServiceConnection annotation. Here’s the full test implementation with the latest approach. It verifies some REST endpoints exposed by the app.

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@Testcontainers
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
public class PersonControllerTest {

    private static String id;

    @Container
    @ServiceConnection
    static MongoDBContainer mongodb = new MongoDBContainer("mongo:5.0");

    @Autowired
    TestRestTemplate restTemplate;

    @Test
    @Order(1)
    void add() {
        Person p = new Person(null, "Test", "Test", 100, Gender.FEMALE);
        Person personAdded = restTemplate
            .postForObject("/persons", p, Person.class);
        assertNotNull(personAdded);
        assertNotNull(personAdded.getId());
        assertEquals(p.getLastName(), personAdded.getLastName());
        id = personAdded.getId();
    }

    @Test
    @Order(2)
    void findById() {
        Person person = restTemplate
            .getForObject("/persons/{id}", Person.class, id);
        assertNotNull(person);
        assertNotNull(person.getId());
        assertEquals(id, person.getId());
    }

    @Test
    @Order(2)
    void findAll() {
        Person[] persons = restTemplate
            .getForObject("/persons", Person[].class);
        assertEquals(6, persons.length);
    }

}

Now, we can build the project with the standard Maven command. Then Testcontainers will automatically start the Mongo database before the test. Of course, we need to have Docker running on our machine.

$ mvn clean package

Tests run fine. But what happens if we want to run our app locally for development? We can do it by running the app’s main class directly from the IDE or with the mvn spring-boot:run Maven command. Here’s our main class:

@SpringBootApplication
@EnableMongoRepositories
public class SpringBootOnKubernetesApp implements ApplicationListener<ApplicationReadyEvent> {

    public static void main(String[] args) {
        SpringApplication.run(SpringBootOnKubernetesApp.class, args);
    }

    @Autowired
    PersonRepository repository;

    @Override
    public void onApplicationEvent(ApplicationReadyEvent applicationReadyEvent) {
        if (repository.count() == 0) {
            repository.save(new Person(null, "XXX", "FFF", 20, Gender.MALE));
            repository.save(new Person(null, "AAA", "EEE", 30, Gender.MALE));
            repository.save(new Person(null, "ZZZ", "DDD", 40, Gender.FEMALE));
            repository.save(new Person(null, "BBB", "CCC", 50, Gender.MALE));
            repository.save(new Person(null, "YYY", "JJJ", 60, Gender.FEMALE));
        }
    }
}

Of course, unless we start the Mongo database, our app won’t be able to connect to it. If we use Docker, we first need to execute a docker run command that starts MongoDB and exposes it on a local port (e.g. docker run -d -p 27017:27017 mongo:5.0).

spring-boot-testcontainers-logs

Use Testcontainers in Development Mode with Spring Boot

Fortunately, with Spring Boot 3.1 we can simplify that process. We don’t have to run Mongo before starting the app. What we need to do is enable development mode with Testcontainers. Firstly, we should include the following Maven dependency in the test scope:

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-testcontainers</artifactId>
  <scope>test</scope>
</dependency>

Then we need to prepare the @TestConfiguration class with the definition of containers we want to start together with the app. For me, it is just a single MongoDB container as shown below:

@TestConfiguration
public class MongoDBContainerDevMode {

    @Bean
    @ServiceConnection
    MongoDBContainer mongoDBContainer() {
        return new MongoDBContainer("mongo:5.0");
    }

}

After that, we have to “override” the Spring Boot main class. It should have the same name as the main class, with the Test suffix. Then we pass the current main method to the SpringApplication.from(...) method. We also need to set the @TestConfiguration class using the with(...) method.

public class SpringBootOnKubernetesAppTest {

    public static void main(String[] args) {
        SpringApplication.from(SpringBootOnKubernetesApp::main)
                .with(MongoDBContainerDevMode.class)
                .run(args);
    }

}

Finally, we can start our “test” main class directly from the IDE or we can just execute the following Maven command:

$ mvn spring-boot:test-run

Once the app starts, you will see that the Mongo container is up and running and the connection to it has been established.

Since we are in dev mode, we will also include the Spring Devtools module to automatically restart the app after a source code change.

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-devtools</artifactId>
  <optional>true</optional>
</dependency>

Let’s see what happens. Once we make a change in the source code, Spring Devtools will restart both the app and the Mongo container. You can verify it in the app logs and also on the list of running Docker containers. As you see, the Testcontainers ryuk container was started a minute ago, while Mongo was restarted together with the app 9 seconds ago.

In order to prevent the container from restarting on every Devtools-triggered app restart, we need to annotate the MongoDBContainer bean with @RestartScope.

@TestConfiguration
public class MongoDBContainerDevMode {

    @Bean
    @ServiceConnection
    @RestartScope
    MongoDBContainer mongoDBContainer() {
        return new MongoDBContainer("mongo:5.0");
    }

}

Now, Devtools just restarts the app without restarting the container.

[Image: spring-boot-testcontainers-containers]

Sharing Container across Multiple Apps

In the previous example, we had a single app that connects to the database on a single container. Now, we will switch to the repository with some microservices that communicate with each other via a Kafka broker. Let’s say I want to develop and test all three apps simultaneously. Of course, our services need to have the following Maven dependencies:

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-testcontainers</artifactId>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>org.testcontainers</groupId>
  <artifactId>kafka</artifactId>
  <version>1.18.1</version>
  <scope>test</scope>
</dependency>

Then we need to do a very similar thing as before – declare the @TestConfiguration bean with a list of required containers. However, this time we need to make our Kafka container reusable between several apps. In order to do that, we will invoke withReuse(true) on the KafkaContainer. By the way, it is also possible to use Kafka Raft (KRaft) mode instead of ZooKeeper.

@TestConfiguration
public class KafkaContainerDevMode {

    @Bean
    @ServiceConnection
    public KafkaContainer kafka() {
        return new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:7.4.0"))
                .withKraft()
                .withReuse(true);
    }

}
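Note that withReuse(true) only takes effect if reuse is also enabled globally on the Testcontainers side. A minimal opt-in, placed in the .testcontainers.properties file in your home directory, looks like this:

testcontainers.reuse.enable=true

Without that global flag, Testcontainers will start a fresh container per app regardless of the withReuse(true) call.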

The same as before, we have to create a “test” main class that uses the @TestConfiguration bean. We will do the same thing for the two other apps inside the repository: payment-service and stock-service.

public class OrderAppTest {

    public static void main(String[] args) {
        SpringApplication.from(OrderApp::main)
                .with(KafkaContainerDevMode.class)
                .run(args);
    }

}

Let’s run our three microservices. Just to remind you, it is possible to run the “test” main class directly from the IDE or with the mvn spring-boot:test-run command. As you see, I ran all three apps.

[Image: spring-boot-testcontainers-microservices]

Now, if we display a list of running containers, there is only one Kafka broker shared between all the apps.

Use Spring Boot support for Docker Compose

Beginning from version 3.1 Spring Boot provides built-in support for Docker Compose. Let’s switch to our last sample repository. It consists of several microservices that connect to the Mongo database and the Netflix Eureka discovery server. We can go to the directory with one of the microservices, e.g. customer-service. Assuming we include the following Maven dependency, Spring Boot looks for a Docker Compose configuration file in the current working directory. Let’s activate that mechanism only for a specific Maven profile:

<profiles>
  <profile>
    <id>compose</id>
    <dependencies>
      <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-docker-compose</artifactId>
        <optional>true</optional>
      </dependency>
    </dependencies>
  </profile>
</profiles>

Our goal is to run all the required external services before running the customer-service app. The customer-service app connects to Mongo and Eureka, and calls an endpoint exposed by the account-service. Here’s the declaration of the REST client that communicates with the account-service.

@FeignClient("account-service")
public interface AccountClient {

    @RequestMapping(method = RequestMethod.GET, value = "/accounts/customer/{customerId}")
    List<Account> getAccounts(@PathVariable("customerId") String customerId);

}

We need to prepare the docker-compose.yml with all the required container definitions. As you see, there is the mongo service and two applications, discovery-service and account-service, which use local Docker images.

version: "3.8"
services:
  mongo:
    image: mongo:5.0
    ports:
      - "27017:27017"
  discovery-service:
    image: sample-spring-microservices-advanced/discovery-service:1.0-SNAPSHOT
    ports:
      - "8761:8761"
    healthcheck:
      test: curl --fail http://localhost:8761/eureka/v2/apps || exit 1
      interval: 4s
      timeout: 2s
      retries: 3
    environment:
      SPRING_PROFILES_ACTIVE: docker
  account-service:
    image: sample-spring-microservices-advanced/account-service:1.0-SNAPSHOT
    ports:
      - "8080"
    depends_on:
      discovery-service:
        condition: service_healthy
    links:
      - mongo
      - discovery-service
    environment:
      SPRING_PROFILES_ACTIVE: docker

Before we run the service, let’s build the images with our apps. We could as well use the built-in Spring Boot mechanism based on Buildpacks, but I ran into some problems with it. Jib works fine in my case.

<profile>
  <id>build-image</id>
  <build>
    <plugins>
      <plugin>
        <groupId>com.google.cloud.tools</groupId>
        <artifactId>jib-maven-plugin</artifactId>
        <version>3.3.2</version>
        <configuration>
          <to>
            <image>sample-spring-microservices-advanced/${project.artifactId}:${project.version}</image>
          </to>
        </configuration>
        <executions>
          <execution>
            <goals>
              <goal>dockerBuild</goal>
            </goals>
            <phase>package</phase>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</profile>

Let’s execute the following command on the repository root directory:

$ mvn clean package -Pbuild-image -DskipTests

After a successful build, we can verify a list of available images with the docker images command. As you see, there are two images used in our docker-compose.yml file:

Finally, the only thing you need to do is to run the customer-service app. Let’s switch to the customer-service directory once again and execute the mvn spring-boot:run with a profile that includes the spring-boot-docker-compose dependency:

$ mvn spring-boot:run -Pcompose

As you see, our app locates docker-compose.yml.
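By default, Spring Boot looks for compose.yml or docker-compose.yml in the working directory. If the file lives elsewhere, or you want to change how Spring Boot manages the container lifecycle, this can be tuned with a few properties. Here is a sketch (property names per the Spring Boot 3.1 docs; values are illustrative):

spring:
  docker:
    compose:
      file: ./docker/docker-compose.yml
      lifecycle-management: start-and-stop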

[Image: spring-boot-testcontainers-docker-compose]

Once we start our app, it also starts all required containers.

For example, we can take a look at the Eureka dashboard available at http://localhost:8761. There are two apps registered there. The account-service is running on Docker, while the customer-service has been started locally.

Final Thoughts

Spring Boot 3.1 comes with several improvements in the area of containerization. Especially the ability to run Testcontainers in development mode together with the app was something I was waiting for. I hope this article clarifies how you can take advantage of the latest Spring Boot features for better integration with Testcontainers and Docker Compose.

The post Spring Boot Development Mode with Testcontainers and Docker appeared first on Piotr's TechBlog.

Best Practices for Java Apps on Kubernetes https://piotrminkowski.com/2023/02/13/best-practices-for-java-apps-on-kubernetes/ https://piotrminkowski.com/2023/02/13/best-practices-for-java-apps-on-kubernetes/#comments Mon, 13 Feb 2023 16:18:43 +0000 https://piotrminkowski.com/?p=13990 In this article, you will read about the best practices for running Java apps on Kubernetes. Most of these recommendations will also be valid for other languages. However, I’m considering all the rules in the scope of Java characteristics and also showing solutions and tools available for JVM-based apps. Some of these Kubernetes recommendations are […]

In this article, you will read about the best practices for running Java apps on Kubernetes. Most of these recommendations will also be valid for other languages. However, I’m considering all the rules in the scope of Java characteristics and also showing solutions and tools available for JVM-based apps. Some of these Kubernetes recommendations are forced by design when using the most popular Java frameworks like Spring Boot or Quarkus. I’ll show you how to effectively leverage them to simplify developer life.

I’m writing a lot about topics related to both Kubernetes and Java. You can find many practical examples on my blog. Some time ago I published a similar article to that one – but mostly focused on best practices for microservices-based apps. You can find it here.

Don’t Set Limits Too Low

Should we set limits for Java apps on Kubernetes or not? The answer seems to be obvious. There are many tools that validate your Kubernetes YAML manifests, and for sure they will print a warning if you don’t set CPU or memory limits. However, there are some “hot discussions” in the community about that. Here’s an interesting article that does not recommend setting any CPU limits. Here’s another article written as a counterpoint to the previous one. They consider CPU limits, but we could as well begin a similar discussion for memory limits. Especially in the context of Java apps 🙂

However, for memory management, the recommendation seems to be quite different. Let’s read another article – this time about memory limits and requests. In short, it recommends always setting the memory limit. Moreover, the limit should be the same as the request. In the context of Java apps, it is also important that we may limit the memory with JVM parameters like -Xmx, -XX:MaxMetaspaceSize or -XX:ReservedCodeCacheSize. Anyway, from the Kubernetes perspective, the pod receives the resources it requests. The limit has nothing to do with it.

It all leads me to the first recommendation today – don’t set your limits too low. Even if you set a CPU limit, it shouldn’t impact your app. For example, as you probably know, even if your Java app doesn’t consume much CPU during normal work, it requires a lot of CPU to start fast. For my simple Spring Boot app that connects to MongoDB on Kubernetes, the difference between no limit and even 0.5 core is significant. Normally it starts in under 10 seconds:

[Image: kubernetes-java-startup]

With the CPU limit set to 500 millicores, it starts in ~30 seconds:

Of course, there are some exceptions to that rule. We will discuss them in the next sections.

Beginning from the 1.27 version of Kubernetes you may take advantage of the feature called “In-Place Vertical Pod Scaling”. It allows users to resize CPU/memory resources allocated to pods without restarting the containers. Such an approach may help us to speed up Java startup on Kubernetes and keep adequate resource limits (especially CPU limits) for the app at the same time. You can read more about that in the following article: https://piotrminkowski.com/2023/08/22/resize-cpu-limit-to-speed-up-java-startup-on-kubernetes/.

Consider Memory Usage First

Let’s focus just on the memory limit. If you run a Java app on Kubernetes, you have two levels of limiting maximum usage: the container and the JVM. However, there are also some defaults if you don’t specify any settings for the JVM. The JVM sets its maximum heap size to approximately 25% of the available RAM in case you don’t set the -Xmx parameter. This value is calculated based on the memory visible inside the container. If you don’t set a limit at the container level, the JVM will see the whole memory of the node.

Before running the app on Kubernetes, you should at least measure how much memory it consumes at the expected load. Fortunately, there are tools that may optimize memory configuration for Java apps running in containers. For example, Paketo Buildpacks comes with a built-in memory calculator. It calculates the -Xmx JVM flag using the formula Heap = Total Container Memory - Non-Heap - Headroom. On the other hand, the non-heap value is calculated using the following formula: Non-Heap = Direct Memory + Metaspace + Reserved Code Cache + (Thread Stack * Thread Count).
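To make the formula concrete, here is a small self-contained sketch of the calculation in plain Java (the non-heap sizes below are illustrative assumptions, not the exact buildpack defaults):

```java
public class MemoryCalculator {

    // All values in MiB. Heap = Total - NonHeap - Headroom, where
    // NonHeap = Direct + Metaspace + ReservedCodeCache + ThreadStack * ThreadCount.
    static long heap(long total, long direct, long metaspace,
                     long codeCache, long threadStack, long threadCount,
                     long headroom) {
        long nonHeap = direct + metaspace + codeCache + threadStack * threadCount;
        return total - nonHeap - headroom;
    }

    public static void main(String[] args) {
        // Illustrative defaults: 10M direct, 88M metaspace, 240M code cache,
        // 1M stack x 50 threads, no headroom.
        System.out.println("-Xmx" + heap(512, 10, 88, 240, 1, 50, 0) + "M");  // -Xmx124M
        System.out.println("-Xmx" + heap(1024, 10, 88, 240, 1, 50, 0) + "M"); // -Xmx636M
    }
}
```

As you see, the non-heap part is roughly constant, so shrinking the container limit eats almost exclusively into the heap.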

Paketo Buildpacks is currently the default option for building Spring Boot apps (with the mvn spring-boot:build-image command). Let’s try it for our sample app. Assuming we set the memory limit to 512M, it will calculate -Xmx at a level of 130M.

[Image: kubernetes-java-memory]

Is it fine for my app? I should at least perform some load tests to verify how my app performs under heavy traffic. But once again – don’t set the limits too low. For example, with the 1024M limit, the -Xmx equals 650M.

As you see, we take care of memory usage with JVM parameters. It protects us from the OOM kills described in the article mentioned in the first section. Therefore, setting the request at the same level as the limit does not make much sense. I would recommend setting the request a little higher than normal usage – let’s say 20% more.
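Expressed as a container spec fragment, that advice could look roughly like this (the numbers are illustrative):

resources:
  requests:
    memory: "768Mi"   # ~20% above the observed normal usage
  limits:
    memory: "1024Mi"  # the real cap is enforced together with JVM flags like -Xmx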

Proper Liveness and Readiness Probes

Introduction

It is essential to understand the difference between liveness and readiness probes in Kubernetes. If both these probes are not implemented carefully, they can degrade the overall operation of a service, for example by causing unnecessary restarts. The third type of probe, the startup probe, is a relatively new feature in Kubernetes. It allows us to avoid setting initialDelaySeconds on liveness or readiness probes and therefore is especially useful if your app startup takes a lot of time. For more details about Kubernetes probes in general and best practices, I can recommend that very interesting article.

A liveness probe is used to decide whether to restart the container or not. If an application is unavailable for any reason, restarting the container sometimes can make sense. On the other hand, a readiness probe is used to decide if a container can handle incoming traffic. If a pod has been recognized as not ready, it is removed from load balancing. Failure of the readiness probe does not result in pod restart. The most typical liveness or readiness probe for web applications is realized via an HTTP endpoint.

Since subsequent failures of the liveness probe result in pod restart, it should not check the availability of your app integrations. Such things should be verified by the readiness probe.
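Sketched as a Deployment fragment, the three probe types could look as follows (the paths and ports are assumptions matching a typical Spring Boot Actuator setup; the startup probe replaces initialDelaySeconds for slow-starting apps):

livenessProbe:
  httpGet:
    path: /actuator/health/liveness
    port: 8081
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /actuator/health/readiness
    port: 8081
  periodSeconds: 10
startupProbe:
  httpGet:
    path: /actuator/health/liveness
    port: 8081
  failureThreshold: 30
  periodSeconds: 5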

Configuration Details

The good news is that the most popular Java frameworks like Spring Boot or Quarkus provide an auto-configured implementation of both Kubernetes probes. They follow best practices, so we usually don’t have to care about the basics. However, in Spring Boot, besides including the Actuator module, you need to enable them with the following property:

management:
  endpoint: 
    health:
      probes:
        enabled: true

Since Spring Boot Actuator provides several endpoints (e.g. metrics, traces), it is a good idea to expose it on a different port than the default one (usually 8080). Of course, the same rule applies to other popular Java frameworks. On the other hand, a good practice is to check your main app port – especially in the readiness probe. Since it defines whether our app is ready to process incoming requests, it should also listen on the main port. It is just the opposite with the liveness probe. If, let’s say, the whole worker thread pool is busy, I don’t want to restart my app; I just don’t want it to receive incoming traffic for some time.

We can also customize other aspects of Kubernetes probes. Let’s say that our app connects to an external system, but we don’t want to verify that integration in our readiness probe, as it is not critical and doesn’t have a direct impact on our operational status. Here’s a configuration that allows us to include only a selected set of integrations in the probe (1) and also exposes readiness on the main server port (2).

spring:
  application:
    name: sample-spring-boot-on-kubernetes
  data:
    mongodb:
      host: ${MONGO_URL}
      port: 27017
      username: ${MONGO_USERNAME}
      password: ${MONGO_PASSWORD}
      database: ${MONGO_DATABASE}
      authentication-database: admin

management:
  endpoint.health:
    show-details: always
    group:
      readiness:
        include: mongo # (1)
        additional-path: server:/readiness # (2)
    probes:
      enabled: true
  server:
    port: 8081

Our application can hardly ever exist without external solutions like databases, message brokers, or simply other applications. When configuring the readiness probe, we should consider the connection settings to those systems carefully. First of all, you should consider what happens when an external service is not available: how will you handle it? I suggest decreasing the connection timeouts to lower values, as shown below.

spring:
  application:
    name: sample-spring-kotlin-microservice
  datasource:
    url: jdbc:postgresql://postgres:5432/postgres
    username: postgres
    password: postgres123
    hikari:
      connection-timeout: 2000
      initialization-fail-timeout: 0
  jpa:
    database-platform: org.hibernate.dialect.PostgreSQLDialect
  rabbitmq:
    host: rabbitmq
    port: 5672
    connection-timeout: 2000

Choose The Right JDK

If you have already built images with a Dockerfile, it is possible that you were using the official OpenJDK base image from Docker Hub. However, the announcement on the image site currently says that it is officially deprecated and all users should find suitable replacements. I guess it may be quite confusing, so you will find a detailed explanation of the reasons here.

All right, so let’s consider which alternative we should choose. Several vendors provide replacements. If you are looking for a detailed comparison between them, you should go to the following site. It recommends using Eclipse Temurin in version 21.

On the other hand, the most popular image build tools like Jib or Cloud Native Buildpacks automatically choose a vendor for you. By default, Jib uses Eclipse Temurin, while Paketo Buildpacks uses Bellsoft Liberica implementation. Of course, you can easily override these settings. I think it might make sense if you, for example, run your app in the environment matched to the JDK provider, like AWS and Amazon Corretto.

Let’s say we use Paketo Buildpacks and Skaffold for deploying Java apps on Kubernetes. In order to replace the default Bellsoft Liberica buildpack with another one, we just need to set it explicitly in the buildpacks section. Here’s an example that leverages the Amazon Corretto buildpack.

apiVersion: skaffold/v2beta22
kind: Config
metadata:
  name: sample-spring-boot-on-kubernetes
build:
  artifacts:
    - image: piomin/sample-spring-boot-on-kubernetes
      buildpacks:
        builder: paketobuildpacks/builder:base
        buildpacks:
          - paketo-buildpacks/amazon-corretto
          - paketo-buildpacks/java
        env:
          - BP_JVM_VERSION=21

We can also easily test the performance of our apps with different JDK vendors. If you are looking for an example of such a comparison you can read my article describing such tests and results. I measured the different JDK performance for the Spring Boot 3 app that interacts with the Mongo database using several available Paketo Java Buildpacks.

Consider Migration To Native Compilation 

Native compilation is a real “game changer” in the Java world. But I can bet that not many of you use it – especially in production. Of course, there were (and still are) numerous challenges in migrating existing apps to native compilation. The static code analysis performed by GraalVM at build time can result in errors like ClassNotFound or MethodNotFound. To overcome these challenges, we need to provide several hints to let GraalVM know about the dynamic elements of the code. The number of those hints usually depends on the number of libraries and the general number of language features used in the app.
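For illustration, such a reflection hint can be supplied to GraalVM as a JSON entry in META-INF/native-image/reflect-config.json (the class name below is hypothetical):

[
  {
    "name": "com.example.Person",
    "allDeclaredConstructors": true,
    "allDeclaredMethods": true,
    "allDeclaredFields": true
  }
]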

Java frameworks like Quarkus or Micronaut try to address the challenges related to native compilation by design. For example, they avoid using reflection wherever possible. Spring Boot has also improved native compilation support a lot through the Spring Native project. So, my advice in this area is: if you are creating a new application, prepare it in a way that makes it ready for native compilation. For example, with Quarkus you can simply generate a Maven configuration that contains a dedicated profile for building a native executable.

<profiles>
  <profile>
    <id>native</id>
    <activation>
      <property>
        <name>native</name>
      </property>
    </activation>
    <properties>
      <skipITs>false</skipITs>
      <quarkus.package.type>native</quarkus.package.type>
    </properties>
  </profile>
</profiles>

Once you add it, you can run a native build with the following command:

$ mvn clean package -Pnative

Then you can analyze whether there are any issues during the build. Even if you do not run native apps in production now (for example, your organization doesn’t approve it), you should place GraalVM compilation as a step in your acceptance pipeline. You can easily build a Java native image for your app with the most popular frameworks. For example, with Spring Boot you just need to provide the following configuration in your Maven pom.xml as shown below:

<plugin>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-maven-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>build-info</goal>
        <goal>build-image</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <image>
      <builder>paketobuildpacks/builder:tiny</builder>
      <env>
        <BP_NATIVE_IMAGE>true</BP_NATIVE_IMAGE>
        <BP_NATIVE_IMAGE_BUILD_ARGUMENTS>
          --allow-incomplete-classpath
        </BP_NATIVE_IMAGE_BUILD_ARGUMENTS>
      </env>
    </image>
  </configuration>
</plugin>

Configure Logging Properly

Logging is probably not the first thing you are thinking about when writing your Java apps. However, at the cluster scope it becomes very important, since we need to be able to collect and store log data, and finally search for and display a particular entry quickly. The best practice is to write your application logs to the standard output (stdout) and standard error (stderr) streams.

Fluentd is a popular open-source log aggregator that allows you to collect logs from the Kubernetes cluster, process them, and then ship them to a data storage backend of your choice. It integrates seamlessly with Kubernetes deployments. Fluentd tries to structure data as JSON to unify logging across different sources and destinations. Given that, probably the best approach is to emit logs in this format in the first place. With the JSON format we can also easily include additional fields for tagging logs and then search them in a visualization tool using various criteria.

In order to format our logs as JSON readable by Fluentd, we can include the Logstash Logback Encoder library in our Maven dependencies.

<dependency>
   <groupId>net.logstash.logback</groupId>
   <artifactId>logstash-logback-encoder</artifactId>
   <version>7.2</version>
</dependency>

Then we just need to set a default console log appender for our Spring Boot application in the file logback-spring.xml.

<configuration>
    <appender name="consoleAppender" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
    </appender>
    <logger name="jsonLogger" additivity="false" level="DEBUG">
        <appender-ref ref="consoleAppender"/>
    </logger>
    <root level="INFO">
        <appender-ref ref="consoleAppender"/>
    </root>
</configuration>
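With this encoder, each log event is written as a single JSON line. The output looks roughly like the following (the exact field set depends on the encoder version and configuration, and the message here is made up for illustration):

{"@timestamp":"2023-02-13T16:18:43.123+01:00","@version":"1","message":"Started application in 8.5 seconds","logger_name":"o.s.boot.StartupInfoLogger","thread_name":"main","level":"INFO","level_value":20000}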

Should we avoid additional log appenders and print logs only to the standard output? From my experience, the answer is no. You can still use alternative mechanisms for shipping the logs, especially if you use more than one log-collecting stack in your organization – for example, an internal stack on Kubernetes and a global stack outside of it. Personally, to avoid performance problems, I’m using a message broker as a proxy. In Spring Boot we can easily use RabbitMQ for that. Just include the following starter:

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-amqp</artifactId>
</dependency>

Then you need to provide a similar appender configuration in the logback-spring.xml:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>

  <springProperty name="destination" source="app.amqp.url" />

  <appender name="AMQP"
		class="org.springframework.amqp.rabbit.logback.AmqpAppender">
    <layout>
      <pattern>
{
  "time": "%date{ISO8601}",
  "thread": "%thread",
  "level": "%level",
  "class": "%logger{36}",
  "message": "%message"
}
      </pattern>
    </layout>

    <addresses>${destination}</addresses>	
    <applicationId>api-service</applicationId>
    <routingKeyPattern>logs</routingKeyPattern>
    <declareExchange>true</declareExchange>
    <exchangeName>ex_logstash</exchangeName>

  </appender>

  <root level="INFO">
    <appender-ref ref="AMQP" />
  </root>

</configuration>

Create Integration Tests

Ok, I know – it’s not directly related to Kubernetes. However, since we use Kubernetes to manage and orchestrate containers, we should also run integration tests on the containers. Fortunately, with Java frameworks, we can simplify that process a lot. For example, Quarkus allows us to annotate the test with @QuarkusIntegrationTest. It is a really powerful solution in conjunction with the Quarkus containers build feature. We can run the tests against an already-built image containing the app. First, let’s include the Quarkus Jib module:

<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-container-image-jib</artifactId>
</dependency>

Then we have to enable the container build by setting the quarkus.container-image.build property to true in the application.properties file. In the test class, we can use the @TestHTTPResource and @TestHTTPEndpoint annotations to inject the test server URL. Then we create a client with the RestClientBuilder and call the service running in the container. The name of the test class is not accidental: in order to be automatically detected as an integration test, it has the IT suffix.

@QuarkusIntegrationTest
public class EmployeeControllerIT {

    @TestHTTPEndpoint(EmployeeController.class)
    @TestHTTPResource
    URL url;

    @Test
    void add() {
        EmployeeService service = RestClientBuilder.newBuilder()
                .baseUrl(url)
                .build(EmployeeService.class);
        Employee employee = new Employee(1L, 1L, "Josh Stevens", 
                                         23, "Developer");
        employee = service.add(employee);
        assertNotNull(employee.getId());
    }

    @Test
    public void findAll() {
        EmployeeService service = RestClientBuilder.newBuilder()
                .baseUrl(url)
                .build(EmployeeService.class);
        Set<Employee> employees = service.findAll();
        assertTrue(employees.size() >= 3);
    }

    @Test
    public void findById() {
        EmployeeService service = RestClientBuilder.newBuilder()
                .baseUrl(url)
                .build(EmployeeService.class);
        Employee employee = service.findById(1L);
        assertNotNull(employee.getId());
    }
}

You can find more details about that process in my previous article about advanced testing with Quarkus. The final effect is visible in the picture below. When we run the tests during the build with the mvn clean verify command, our test is executed after building the container image.

[Image: kubernetes-java-integration-tests]

That Quarkus feature is based on the Testcontainers framework. We can also use Testcontainers with Spring Boot. Here’s a sample test of a Spring REST app and its integration with a PostgreSQL database.

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@Testcontainers
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
public class PersonControllerTests {

    @Autowired
    TestRestTemplate restTemplate;

    @Container
    static PostgreSQLContainer<?> postgres = 
       new PostgreSQLContainer<>("postgres:15.1")
            .withExposedPorts(5432);

    @DynamicPropertySource
    static void registerPostgresProperties(DynamicPropertyRegistry registry) {
        registry.add("spring.datasource.url", postgres::getJdbcUrl);
        registry.add("spring.datasource.username", postgres::getUsername);
        registry.add("spring.datasource.password", postgres::getPassword);
    }

    @Test
    @Order(1)
    void add() {
        Person person = Instancio.of(Person.class)
                .ignore(Select.field("id"))
                .create();
        person = restTemplate.postForObject("/persons", person, Person.class);
        Assertions.assertNotNull(person);
        Assertions.assertNotNull(person.getId());
    }

    @Test
    @Order(2)
    void updateAndGet() {
        final Integer id = 1;
        Person person = Instancio.of(Person.class)
                .set(Select.field("id"), id)
                .create();
        restTemplate.put("/persons", person);
        Person updated = restTemplate.getForObject("/persons/{id}", Person.class, id);
        Assertions.assertNotNull(updated);
        Assertions.assertNotNull(updated.getId());
        Assertions.assertEquals(id, updated.getId());
    }

}
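As a side note, with Spring Boot 3.1 the @DynamicPropertySource boilerplate in the test above can be replaced by the @ServiceConnection annotation shown earlier in this article. A sketch:

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@Testcontainers
public class PersonControllerTests {

    // @ServiceConnection derives the spring.datasource.* properties
    // from the running container automatically
    @Container
    @ServiceConnection
    static PostgreSQLContainer<?> postgres =
        new PostgreSQLContainer<>("postgres:15.1");

    // ... test methods unchanged
}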

Final Thoughts

I hope that this article will help you avoid some common pitfalls when running Java apps on Kubernetes. Treat it as a summary of other people’s recommendations I found in similar articles and my private experience in that area. Maybe you will find some of those rules are quite controversial. Feel free to share your opinions and feedback in the comments. It will also be valuable for me. If you like this article, once again, I recommend reading another one from my blog – more focused on running microservices-based apps on Kubernetes – Best Practices For Microservices on Kubernetes. But it also contains several useful (I hope) recommendations.

The post Best Practices for Java Apps on Kubernetes appeared first on Piotr's TechBlog.

Advanced Testing with Quarkus https://piotrminkowski.com/2023/02/08/advanced-testing-with-quarkus/ https://piotrminkowski.com/2023/02/08/advanced-testing-with-quarkus/#comments Wed, 08 Feb 2023 09:52:45 +0000 https://piotrminkowski.com/?p=13971 This article will teach you how to build advanced testing scenarios with Quarkus. We will focus mainly on the integration tests. Quarkus simplifies them by leveraging the Testcontainers project. In many cases, it is a smooth integration process. You won’t even notice you are using Testcontainers under the hood. Before starting with the test it […]

This article will teach you how to build advanced testing scenarios with Quarkus. We will focus mainly on the integration tests. Quarkus simplifies them by leveraging the Testcontainers project. In many cases, it is a smooth integration process. You won’t even notice you are using Testcontainers under the hood.

Before starting with the test it is worth reading about the Quarkus framework. If you are familiar with Spring Boot, I especially recommend the following article about Quarkus. It shows some useful and interesting features of the Quarkus framework that Spring Boot doesn’t provide.

Introduction: The Basics

Let’s begin with the basics. Quarkus has three different launch modes: dev, test, and prod. It defines a built-in profile for each of those modes. As you probably guessed, today we will focus on the test mode. It is automatically activated when running tests during the build. The class containing tests should be annotated with @QuarkusTest. We may provide a configuration dedicated to the particular mode using the following semantics in the application.properties file:

%prod.quarkus.datasource.db-kind = postgresql
%prod.quarkus.datasource.username = ${PG_USER}
%prod.quarkus.datasource.password = ${PG_PASS}
%prod.quarkus.datasource.jdbc.url = jdbc:postgresql://pg:5432/${PG_DB}
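The same prefix notation works for the other built-in profiles. For example, a test-mode override could look like the sketch below (both property names are standard Quarkus settings; the values are just illustrative):

```properties
# Change the port used in test mode (8081 by default) and raise log verbosity
%test.quarkus.http.test-port=8082
%test.quarkus.log.level=DEBUG
```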

Let’s assume we have a very simple endpoint that returns an object by its id:

@Path("/persons")
public class PersonResource {

   @Inject
   InMemoryPersonRepository inMemoryRepository;

   @GET
   @Path("/{id}")
   public Person getPersonById(@PathParam("id") Long id) {
      return inMemoryRepository.findById(id);
   }

}

Now, we have to create a test. We don’t need to take care of a port. By default, Quarkus runs on the 8081 port in test mode. It also automatically configures the Rest Assured library to interact with the server.

@QuarkusTest
public class PersonResourceTests {

   @Test
   void getById() {
      given().get("/persons/{id}", 1L)
         .then()
         .statusCode(200)
         .body("id", notNullValue());
   }

}

Source Code

If you would like to try it by yourself, you may always take a look at my source code. This time, we have multiple repositories with examples. All those repositories contain Quarkus testing scenarios for the different use cases. You can clone the repository with a single app that connects to the Postgres database. There are two other repositories with microservices. Here’s the repository with simple microservices. There is another one with Consul configuration and discovery. Then you should just follow my instructions.

Testing with External Services

Let’s include a database in our scenario. We will use Postgres. In order to interact with the database, we will leverage the Quarkus Panache ORM module. Firstly, we need to add the following two dependencies:

<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-hibernate-orm-panache</artifactId>
</dependency>
<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-jdbc-postgresql</artifactId>
</dependency>

Here’s our Person entity:

@Entity
public class Person extends PanacheEntityBase {

   @Id
   @GeneratedValue(strategy = GenerationType.IDENTITY)
   public Long id;
   public String name;
   public int age;
   @Enumerated(EnumType.STRING)
   public Gender gender;

}

We also need to create the repository:

@ApplicationScoped
public class PersonRepository implements PanacheRepository<Person> {

   public List<Person> findByName(String name) {
      return find("name", name).list();
   }

   public List<Person> findByAgeGreaterThan(int age) {
      return find("age > ?1", age).list();
   }

}

Finally, we have a resource endpoints implementation:

@Path("/persons")
public class PersonResource {

   private PersonRepository repository;

   public PersonResource(PersonRepository repository) {
      this.repository = repository;
   }

   @GET
   public List<Person> findAll() {
      return repository.findAll().list();
   }

   @GET
   @Path("/name/{name}")
   public List<Person> findByName(@PathParam("name") String name) {
      return repository.findByName(name);
   }

   @GET
   @Path("/{id}")
   public Person findById(@PathParam("id") Long id) {
      return repository.findById(id);
   }
}

The most important thing here is not to set any datasource address for the test mode. We may set it, for example, only for the prod mode. Here’s our test. It doesn’t differ much from the previous one. We don’t need to add any special annotations, dependencies, or objects. Everything happens automatically. The only thing we need to guarantee is access to the Docker host. Quarkus will automatically start the Postgres container there and configure connection settings for the app.

@QuarkusTest
public class PersonResourceTest {

    @Test
    public void findAll() {
        given()
          .when().get("/persons")
          .then()
             .statusCode(200)
             .assertThat().body("size()", is(20));
    }

    @Test
    public void findById() {
        Person person = given()
                .when().get("/persons/1")
                .then()
                .statusCode(200)
                .extract()
                .body().as(Person.class);
        assertNotNull(person);
        assertEquals(1L, person.id);
    }

}

Let’s run our tests. Here’s a fragment of the logs. Before running the tests, Quarkus starts the Postgres container using Testcontainers:

quarkus-testing-postgres

Then it runs our tests and exposes the app at the 8081 port.
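By default, Quarkus Dev Services picks the database image for us. If you want the test database version to be deterministic, you can pin the image explicitly — an optional tweak sketched below (the property is the standard Dev Services setting; the tag is just an example):

```properties
# Pin the Postgres image started by Dev Services in test mode
%test.quarkus.datasource.devservices.image-name=postgres:15.1
```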

Ok, our tests work fine on the local machine. However, the goal is to run them as a part of the CI process. This time we will use CircleCI. The process needs to have access to the Docker host. We may use a dedicated Linux machine as an executor or take advantage of Testcontainers Cloud. Here’s a build configuration for the second option.

version: 2.1

orbs:
  maven: circleci/maven@1.4.0
  tcc: atomicjar/testcontainers-cloud-orb@0.1.0

executors:
  j17:
    docker:
      - image: 'cimg/openjdk:17.0'

workflows:
  maven_test:
    jobs:
      - maven/test:
          executor: j17
          context: Testcontainers
          pre-steps:
            - tcc/setup

Now, we just need to create a job in CircleCI and run the pipeline.

Integration Testing with Quarkus

Instead of @QuarkusTest, we can annotate our test with @QuarkusIntegrationTest. It is a really powerful solution in conjunction with the Quarkus container build feature. It allows us to run the tests against an already-built image containing the app. First, let’s include the Quarkus Jib module:

<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-container-image-jib</artifactId>
</dependency>

We need to enable container build in the application.properties:

quarkus.container-image.build = true

This time, instead of REST Assured, we will use a real HTTP client. Quarkus provides a convenient method for creating declarative REST clients. We need to define an interface with endpoint methods:

public interface EmployeeService {

   @POST
   Employee add(@Valid Employee employee);

   @GET
   Set<Employee> findAll();

   @Path("/{id}")
   @GET
   Employee findById(@PathParam("id") Long id);

}

In the test class, we use the @TestHTTPResource and @TestHTTPEndpoint annotations to inject the test URL. Then we create a client with the RestClientBuilder and call the service running in the container. The name of the test class is not accidental: in order to be automatically detected as an integration test, it has the IT suffix.

@QuarkusIntegrationTest
public class EmployeeControllerIT {

    @TestHTTPEndpoint(EmployeeController.class)
    @TestHTTPResource
    URL url;

    @Test
    void add() {
        EmployeeService service = RestClientBuilder.newBuilder()
                .baseUrl(url)
                .build(EmployeeService.class);
        Employee employee = new Employee(1L, 1L, "Josh Stevens", 
                                         23, "Developer");
        employee = service.add(employee);
        assertNotNull(employee.getId());
    }

    @Test
    public void findAll() {
        EmployeeService service = RestClientBuilder.newBuilder()
                .baseUrl(url)
                .build(EmployeeService.class);
        Set<Employee> employees = service.findAll();
        assertTrue(employees.size() >= 3);
    }

    @Test
    public void findById() {
        EmployeeService service = RestClientBuilder.newBuilder()
                .baseUrl(url)
                .build(EmployeeService.class);
        Employee employee = service.findById(1L);
        assertNotNull(employee.getId());
    }
}

Now, we just need to include the maven-failsafe-plugin. It will run our test during the verify or integration-test Maven phase.

<plugin>
  <artifactId>maven-failsafe-plugin</artifactId>
  <version>${surefire-plugin.version}</version>
  <executions>
    <execution>
      <goals>
        <goal>integration-test</goal>
        <goal>verify</goal>
      </goals>
    </execution>
  </executions>
</plugin>

Let’s see how it works. In order to run the tests we need to execute the following Maven command:

$ mvn clean verify

Before running the tests, Quarkus builds the app image using Jib:

Then, Maven runs the integration tests. The Quarkus app starts as a container on Docker and exposes its endpoint at the default test URL http://localhost:8081.

quarkus-testing-integration

Testcontainers with Quarkus 

By default, Quarkus automatically runs several third-party services as containers. It includes databases, brokers like Kafka or RabbitMQ, and some other tools. Here’s a full list of supported software. What if we have a tool that’s not on that list? Let’s consider HashiCorp Consul. There’s a dedicated module for integrating it via Testcontainers:

<dependency>
  <groupId>org.testcontainers</groupId>
  <artifactId>consul</artifactId>
  <version>1.17.6</version>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>org.testcontainers</groupId>
  <artifactId>junit-jupiter</artifactId>
  <version>1.17.6</version>
  <scope>test</scope>
</dependency>

In order to start the container during the test, we need to create a class that implements the QuarkusTestResourceLifecycleManager interface. It defines two methods: start() and stop(). Inside the start() method, we create and run the Consul container. Once it starts successfully, we put a new key department.name under the config/department path. Then we override the Consul address used by the test with the dynamic address of the container started by Testcontainers.

public class ConsulResource implements QuarkusTestResourceLifecycleManager {

   private ConsulContainer consul;

   @Override
   public Map<String, String> start() {
      consul = new ConsulContainer("consul:1.14")
         .withConsulCommand("kv put config/department department.name=abc");

      consul.start();

      String url = consul.getHost() + ":" + consul.getFirstMappedPort();

      return ImmutableMap.of("quarkus.consul-config.agent.host-port", url);
   }

   @Override
   public void stop() {
      consul.stop();
   }
}

Here are the application settings related to the Consul instance. We use Consul only for storing configuration keys and values.

quarkus.consul-config.enabled=true
quarkus.consul-config.properties-value-keys=config/${quarkus.application.name}

Finally, we can go to the test implementation. In order to start the Consul container defined inside the ConsulResource class during the test we need to annotate the whole test with @QuarkusTestResource. By default, all test resources are global, even if they are defined on a test class or custom profile, which means they will all be activated for all tests. If you want to only enable a test resource on a single test class or test profile, you need to set the restrictToAnnotatedClass field to true. In the following test, I’m injecting the property department.name defined in our Consul instance under the /config/department key.

@QuarkusTest
@QuarkusTestResource(ConsulResource.class, 
                     restrictToAnnotatedClass = true)
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
public class DepartmentResourceConsulTests {

   @ConfigProperty(name = "department.name", defaultValue = "")
   private String name;

   @Test
   @Order(1)
   void add() {
      Department d = new Department();
      d.setOrganizationId(1L);
      d.setName(name);

      given().body(d).contentType(ContentType.JSON)
              .when().post("/departments").then()
              .statusCode(200)
              .body("id", notNullValue());
   }

   @Test
   @Order(2)
   void findAll() {
       when().get("/departments").then()
              .statusCode(200)
              .body("size()", is(1));
   }
}

Once again, let’s run the test using the mvn clean verify command. Of course, don’t forget to run Docker on your laptop. Then, you can verify the result.

Enable Profiles

Instead of creating an integration test that runs the Consul container, we can create a simple unit test with disabled interaction with Consul. In order to do that, we need to override a configuration setting. Of course, we can do it globally for the test profile using the following notation:

%test.quarkus.consul-config.enabled = false

However, assuming there are several different tests in the project, we may want a different set of configuration properties per test class. In that case, we still have the integration test that interacts with the Consul container started on Docker. For such a scenario, Quarkus provides the QuarkusTestProfile interface. We need to create a class that implements it and overrides the value of the quarkus.consul-config.enabled property inside the getConfigOverrides() method.

public class DisableExternalProfile implements QuarkusTestProfile {

    @Override
    public Map<String, String> getConfigOverrides() {
        return Map.of("quarkus.consul-config.enabled", "false");
    }
}

Then, we just need to annotate the test class with the @TestProfile annotation pointing to our implementation of the QuarkusTestProfile interface.

@QuarkusTest
@TestProfile(DisableExternalProfile.class)
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
public class DepartmentResourceTests {

    @Test
    @Order(1)
    void add() {
        Department d = new Department();
        d.setName("test");
        d.setOrganizationId(1L);

        given().body(d).contentType(ContentType.JSON)
                .when().post("/departments").then()
                .statusCode(200)
                .body("id", notNullValue());
    }

    @Test
    @Order(2)
    void findAll() {
        when().get("/departments").then()
                .statusCode(200)
                .body("size()", is(1));
    }

    @Test
    @Order(2)
    void findById() {
        when().get("/departments/{id}", 1).then()
                .statusCode(200)
                .body("id", is(1));
    }

}

Testing with Quarkus on Kubernetes

This topic couldn’t be missed in my article. The last part of the article will show how to deploy the app on Kubernetes and run tests against an application pod. There is no built-in support in Quarkus for the whole scenario, but we can simplify it with some separate features. First of all, Quarkus provides built-in support for building the image (we have already done it in one of the previous sections) and generating YAML manifests for Kubernetes. In order to use it, we need to include the Quarkus Kubernetes extension in the Maven dependencies. We will also include the Fabric8 Kubernetes client for test purposes:

<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-kubernetes</artifactId>
</dependency>
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-kubernetes-client</artifactId>
  <scope>test</scope>
</dependency>

Now, if we set the property quarkus.kubernetes.deploy to true during the build, Quarkus will try to deploy the image to the current Kubernetes cluster. We can customize this process using Quarkus configuration properties. Of course, we need to build the image and push it to the container registry. In order to easily test the app with @QuarkusIntegrationTest, we will expose it with a NodePort Kubernetes Service. Here’s the full set of configuration properties needed to perform the test.

quarkus.container-image.build = true
quarkus.container-image.group = piomin
quarkus.container-image.push = true

quarkus.kubernetes.deploy = true
quarkus.kubernetes.namespace = default
quarkus.kubernetes.service-type = node-port

It is important to run the test after the Maven build phase so that it deploys an already created and pushed image. As in one of the previous scenarios, we can use @QuarkusIntegrationTest for that. We can use the KubernetesClient to detect the target port of the employee-service running on Kubernetes. Then, we use the Quarkus REST client to call the target service as shown below.

@QuarkusIntegrationTest
public class EmployeeAppKubernetesIT {

   KubernetesClient client = new KubernetesClientBuilder().build();

   @Test
   void api() throws MalformedURLException {
      Service service = client.services()
         .inNamespace("default")
         .withName("employee-service")
         .get();
      ServicePort port = service.getSpec().getPorts().get(0);
      EmployeeService employeeService = RestClientBuilder.newBuilder()
         .baseUrl(new URL("http://localhost:" + port.getNodePort() + "/employees"))
         .build(EmployeeService.class);
      Employee employee = new Employee(1L, 1L, "Josh Stevens", 23, "Developer");
      employee = employeeService.add(employee);
      assertNotNull(employee.getId());
   }
}

That’s it. Let’s run the test. In the first step, the Quarkus Maven plugin builds the app image using Jib and pushes it to the registry:

Only after that, it tries to deploy the image on the current Kubernetes cluster. It creates Deployment and Service with the NodePort type.

quarkus-testing-kubernetes

Finally, it will run the test against the current Kubernetes cluster. As I mentioned before, there is no full built-in support for that scenario. So, for example, Quarkus still tries to run the Docker container with the app. In this scenario, our test ignores it and connects to the app deployed on Kubernetes.

Final Thoughts

Quarkus simplifies several things in automation testing. It effectively uses containers in the integration tests. Most of the things work out of the box without any additional configuration or annotations. Finally, we can easily include Kubernetes in our testing scenarios thanks to the Quarkus Kubernetes extension. I just included the most interesting Quarkus testing features. For more detailed information, you may refer to the docs.

The post Advanced Testing with Quarkus appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2023/02/08/advanced-testing-with-quarkus/feed/ 6 13971
Manage Multiple GitHub Repositories with Renovate and CircleCI https://piotrminkowski.com/2023/01/12/manage-multiple-github-repositories-with-renovate-and-circleci/ https://piotrminkowski.com/2023/01/12/manage-multiple-github-repositories-with-renovate-and-circleci/#comments Thu, 12 Jan 2023 11:37:55 +0000 https://piotrminkowski.com/?p=13895 In this article, you will learn how to automatically update your GitHub repositories with Renovate and CircleCI. The problem we will try to solve today is strictly related to my blogging. As I always attach code examples to my posts, I have a lot of repositories to manage. I know that sometimes it is more […]

The post Manage Multiple GitHub Repositories with Renovate and CircleCI appeared first on Piotr's TechBlog.

]]>
In this article, you will learn how to automatically update your GitHub repositories with Renovate and CircleCI. The problem we will try to solve today is strictly related to my blogging. As I always attach code examples to my posts, I have a lot of repositories to manage. I know that sometimes it is more convenient to have a single repo for all the demos, but I prefer not to do that. My repo is always related to a particular technology or even to the case it demonstrates.

Let’s consider what’s the problem with that approach. I usually share the same repository across multiple articles if they are closely related to each other. But despite that, I have more than 100 repositories with code examples. Once I create a repository, I usually don’t have time to keep it up to date. I need a tool that will do that automatically for me. This, however, forces me to improve my automated tests. If I configure a tool that automatically updates code in GitHub repositories, I need to verify that the change is valid and will not break the demo app.

There is another problem related to that. A classic of the genre – lack of automated tests… I was always focusing on creating the example app to show the use case described in the post, but not on building valuable tests. It’s time to fix that! This is my first New Year’s resolution 🙂 As you probably guessed, my work is still in progress. But even now, I can show you which tools I’m using for that and how to configure them. I will also share some first thoughts. Let’s begin!

First Problem: Not Maintained Repositories

Did you ever try to run an app from source code created some years ago? In theory, everything should go fine. But in practice, several things may have changed. I may now use a different version of, e.g., Java or Maven than before. Even if I have automated tests, they may not work, especially since I didn’t use any tool to run the build and tests remotely. Of course, I don’t have that many old, unmaintained repositories. Sometimes, I was updating them manually, in particular those more popular and shared across several articles.

Let’s just take a look at this example. It is from the following repository. I’m trying to generate a class definition from the Protocol Buffers schema file. As you see, the plugin used for that is not able to find the protoc executable. Honestly, I don’t remember how it worked before. Maybe I installed something on my laptop… Anyway, the solution was to use another plugin that doesn’t require any additional executables. Of course, I need to do it manually.

Let’s analyze another example. This time it fails during integration tests from another repository. The test is trying to connect to the Docker container. The problem here is that I was using Windows some years ago and Docker Toolbox was, by default, available under the 192.168.99.100 address. I should not leave such an address in the test. However, once again, I was just running all the tests locally, and at that time they finished successfully.

By the way, moving such a test to the CircleCI pipeline is not a simple thing to do. In order to run some containers (pact-broker with postgresql) before the pipeline, I decided to use Docker Compose. To run containers with Docker Compose, I had to enable remote Docker for CircleCI as described here.
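A minimal sketch of such a job, assuming a docker-compose.yml with the pact-broker and postgresql services sits in the repository root (the job name and image tag are illustrative):

```yaml
version: 2.1

jobs:
  pact-tests:
    docker:
      - image: cimg/openjdk:17.0
    steps:
      - checkout
      # Provision a separate remote Docker engine for this job
      - setup_remote_docker
      - run:
          name: Start Pact Broker stack
          command: docker-compose up -d
      - run:
          name: Run integration tests
          command: mvn verify
```

Keep in mind that with setup_remote_docker the containers run on a remote host, not in the build container, so the tests cannot reach them via localhost directly – the ports have to be handled accordingly (e.g. by running the tests inside a container attached to the same Docker network).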

Second Problem: Updating Dependencies

If you manage application repositories that use several libraries, you probably know that an update is sometimes not just a formality. Even if that’s a patch or a minor update. Although my applications are usually not very complicated, the update of the Spring Boot version may be challenging. In the following example of Netflix DGS usage (GraphQL framework), I tried to update from the 2.4.2 to the 2.7.7 version. Here’s the result.

In that particular case, my app was initializing the H2 database with some data from the data.sql file. But since Spring Boot 2.5, the records from data.sql are loaded before database schema initialization. The solution is to replace that file with the import.sql script or add the property spring.jpa.defer-datasource-initialization=true to the application properties. After choosing the second option we solved the problem… and then another one occurred. This time it is related to the Netflix DGS and GraphQL Java libraries as described here.

Currently, according to the comments, there is no perfect solution to that problem with Maven. Probably I will have to wait for the next release of Netflix DGS or until they propose the right solution.

Let’s analyze another example – once again with the Spring Boot update. This time it is related to Spring Data and Embedded Mongo. The case is very interesting since it fails only on the remote builder. When I run the test on my local machine, everything works perfectly fine.

A similar issue has been described here. However, the described solution doesn’t help me anymore. Probably I will decide to migrate my tests to Testcontainers. By the way, it is also a very interesting example, since it has an impact only on the tests. So, even with a high level of automation, you will still need to do some manual work.

Third Problem: Lack of Automated Tests

It is some kind of paradox – although I write a lot about continuous delivery and tests, I have a lot of repositories without any tests. Of course, when I was creating real applications for several companies, I added many tests to ensure they would work fine in production. But even for simple demo apps it is worth adding several tests that verify everything works fine. In that case, I don’t have many small unit tests but rather a test that runs the whole app and verifies e.g. all the endpoints. Fortunately, frameworks like Spring Boot or Quarkus provide intuitive tools for that. There are helpers for almost all popular solutions. Here’s my @SpringBootTest for GraphQL queries.

@SpringBootTest(webEnvironment = 
      SpringBootTest.WebEnvironment.RANDOM_PORT)
public class EmployeeQueryResolverTests {

    @Autowired
    GraphQLTestTemplate template;

    @Test
    void employees() throws IOException {
        Employee[] employees = template
           .postForResource("employees.graphql")
           .get("$.data.employees", Employee[].class);
        Assertions.assertTrue(employees.length > 0);
    }

    @Test
    void employeeById() throws IOException {
        Employee employee = template
           .postForResource("employeeById.graphql")
           .get("$.data.employee", Employee.class);
        Assertions.assertNotNull(employee);
        Assertions.assertNotNull(employee.getId());
    }

    @Test
    void employeesWithFilter() throws IOException {
        Employee[] employees = template
           .postForResource("employeesWithFilter.graphql")
           .get("$.data.employeesWithFilter", Employee[].class);
        Assertions.assertTrue(employees.length > 0);
    }
}

In the previous test, I’m using an in-memory H2 database in the background. If I want to test something with a “real” database, I can use Testcontainers. This tool runs the required container on Docker during the test. In the following example, we run PostgreSQL. After that, the Spring Boot application automatically connects to the database thanks to the @DynamicPropertySource annotation that sets the generated URL as a Spring property.

@SpringBootTest(webEnvironment = 
      SpringBootTest.WebEnvironment.RANDOM_PORT)
@Testcontainers
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
public class PersonControllerTests {

   @Autowired
   TestRestTemplate restTemplate;

   @Container
   static PostgreSQLContainer<?> postgres = 
      new PostgreSQLContainer<>("postgres:15.1")
           .withExposedPorts(5432);

   @DynamicPropertySource
   static void registerPostgresProperties(DynamicPropertyRegistry registry) {
       registry.add("spring.datasource.url", 
          postgres::getJdbcUrl);
       registry.add("spring.datasource.username", 
          postgres::getUsername);
       registry.add("spring.datasource.password", 
          postgres::getPassword);
   }

   @Test
   @Order(1)
   void add() {
       Person person = Instancio.of(Person.class)
               .ignore(Select.field("id"))
               .create();
       person = restTemplate
          .postForObject("/persons", person, Person.class);
       Assertions.assertNotNull(person);
       Assertions.assertNotNull(person.getId());
   }

   @Test
   @Order(2)
   void updateAndGet() {
       final Integer id = 1;
       Person person = Instancio.of(Person.class)
               .set(Select.field("id"), id)
               .create();
       restTemplate.put("/persons", person);
       Person updated = restTemplate
          .getForObject("/persons/{id}", Person.class, id);
       Assertions.assertNotNull(updated);
       Assertions.assertNotNull(updated.getId());
       Assertions.assertEquals(id, updated.getId());
   }

   @Test
   @Order(3)
   void getAll() {
       Person[] persons = restTemplate
          .getForObject("/persons", Person[].class);
       Assertions.assertEquals(1, persons.length);
   }

   @Test
   @Order(4)
   void deleteAndGet() {
       restTemplate.delete("/persons/{id}", 1);
       Person person = restTemplate
          .getForObject("/persons/{id}", Person.class, 1);
       Assertions.assertNull(person);
   }

}

In some cases, we may have multiple applications (or microservices) communicating with each other. We can mock that communication with libraries like Mockito. On the other hand, we can simulate real HTTP traffic with libraries like Hoverfly or WireMock. Here’s an example with Hoverfly and the Spring Boot Test module.

@SpringBootTest(properties = { "POD_NAME=abc", "POD_NAMESPACE=default"}, 
   webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@ExtendWith(HoverflyExtension.class)
public class CallerControllerTests {

   @LocalServerPort
   int port;
   @Autowired
   TestRestTemplate restTemplate;

   @Test
   void ping(Hoverfly hoverfly) {
      String msg = "callme-service v1.0-SNAPSHOT (id=1): abc in default";
      hoverfly.simulate(dsl(
            service("http://callme-service.serverless.svc.cluster.local")
               .get("/callme/ping")
               .willReturn(success(msg, "text/plain"))));

      String response = restTemplate
         .getForObject("/caller/ping", String.class);
      assertNotNull(response);

      String c = "caller-service(id=1): abc in default is calling " + msg;
      assertEquals(c, response);
   }
}

Of course, these are just examples of tests. There are a lot of different tests and technologies used in all my repositories. Some others will be added in the near future 🙂 Now, let’s get to the point.

Choosing the Right Tools

As mentioned in the introduction, I will use CircleCI and Renovate for managing my GitHub repositories. CircleCI is probably the most popular choice for running builds of open-source projects stored in GitHub repositories. GitHub also provides a tool for updating dependencies called Dependabot. However, Renovate has some significant advantages over Dependabot. It provides a lot of configuration options, may be run anywhere (including Kubernetes – more details here), and can also integrate with GitLab or Bitbucket. We will also use SonarCloud for static code quality analysis.

Renovate is able to analyze not only the descriptors of traditional package managers like npm, Maven, or Gradle but also e.g. CircleCI configuration files or Docker image tags. Here’s a list of my requirements that the tool needs to meet:

  1. It should be able to perform different actions depending on the dependency update type (major, patch, or minor)
  2. It needs to create a PR on change and auto-merge it only if the build performed by CircleCI finishes successfully. Therefore, it needs to wait for the status of that build
  3. Auto-merge should not be enabled for major updates. They require approval from the repository admin

Renovate meets all these requirements. We can also easily install Renovate on GitHub and use it to update CircleCI configuration files inside repositories. In order to install Renovate on GitHub, you need to go to the marketplace. After you install it, go to Settings, and then the Applications menu item. In order to set the list of repositories enabled for Renovate, click the Configure button.

Then in the Repository Access section, you can enable all your repositories or choose several from the whole list.

github-renovate-circleci-conf

Configure Renovate and CircleCI inside the GitHub Repository

Each GitHub repository has to contain CircleCI and Renovate configuration files. Renovate tries to detect the renovate.json file in the repository root directory. We don’t need to provide many configuration settings to achieve the expected results. By default, Renovate creates a pull request once it detects a new version of a dependency, but does not auto-merge it. We want to auto-merge all non-major changes. Therefore, we need to set a list of all update types merged automatically (minor, patch, pin, and digest).

By default, Renovate creates the PR just after it creates a branch with a new version of the dependency. Because we are auto-merging all non-major PRs, we need to force Renovate to create them only after the build on CircleCI finishes successfully. Once all the tests on the newly created branch pass, Renovate creates the PR and auto-merges it if it does not contain major changes. Otherwise, it leaves the PR for approval. To achieve this, we need to set the prCreation property to not-pending. Here’s the renovate.json file I’m using for all my GitHub repositories.

{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": [
    "config:base",":dependencyDashboard"
  ],
  "packageRules": [
    {
      "matchUpdateTypes": ["minor", "patch", "pin", "digest"],
      "automerge": true
    }
  ],
  "prCreation": "not-pending"
}
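The packageRules section can be made more granular if needed. For example, to keep a single dependency out of auto-merge entirely, you could add a rule based on Renovate's documented matchPackageNames option (a sketch; the package name below is just an illustration, and later rules take precedence over earlier ones):

```json
{
  "packageRules": [
    {
      "matchUpdateTypes": ["minor", "patch", "pin", "digest"],
      "automerge": true
    },
    {
      "matchPackageNames": ["org.springframework.boot:spring-boot-starter-parent"],
      "automerge": false
    }
  ]
}
```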

The CircleCI configuration is stored in the .circleci/config.yml file. I mostly use Maven as a build tool. Here’s a typical CircleCI configuration file for my repositories. It defines two jobs: a standard maven/test job for building the project and running unit tests and a job for running SonarQube analysis.

version: 2.1

jobs:
  analyze:
    docker:
      - image: 'cimg/openjdk:17.0'
    steps:
      - checkout
      - run:
          name: Analyze on SonarCloud
          command: mvn verify sonar:sonar

executors:
  j17:
    docker:
      - image: 'cimg/openjdk:17.0'

orbs:
  maven: circleci/maven@1.4.0

workflows:
  maven_test:
    jobs:
      - maven/test:
          executor: j17
      - analyze:
          context: SonarCloud

By default, CircleCI runs builds in Docker containers. However, this approach is not suitable everywhere. For Testcontainers, we need a machine executor that has full access to the Docker process. Thanks to that, it is able to run additional containers during tests, e.g. with databases.

version: 2.1

jobs:
  analyze:
    docker:
      - image: 'cimg/openjdk:11.0'
    steps:
      - checkout
      - run:
          name: Analyze on SonarCloud
          command: mvn verify sonar:sonar -DskipTests

orbs:
  maven: circleci/maven@1.3.0

executors:
  machine_executor_amd64:
    machine:
      image: ubuntu-2204:2022.04.2
    environment:
      architecture: "amd64"
      platform: "linux/amd64"

workflows:
  maven_test:
    jobs:
      - maven/test:
          executor: machine_executor_amd64
      - analyze:
          context: SonarCloud

Finally, the last part of the configuration: the integration between CircleCI and SonarCloud. We need to add some properties to the Maven pom.xml to enable the SonarCloud context.

<properties>
  <sonar.projectKey>piomin_sample-spring-redis</sonar.projectKey>
  <sonar.organization>piomin</sonar.organization>
  <sonar.host.url>https://sonarcloud.io</sonar.host.url>
</properties>

How It Works

Let’s verify how it works. Once you provide the required configuration for Renovate, CircleCI, and SonarCloud in your GitHub repository, the process starts. Renovate initially detects a list of required dependency updates. Since I enabled the dependency dashboard, Renovate immediately creates an issue with a list of changes as shown below. It provides a summary view of all the changes in the dependencies.

github-renovate-circleci-dashboard

Here’s a list of detected package managers in this repository. Besides Maven and CircleCI, there are also a Dockerfile and a GitLab CI configuration file.

Some pull requests have already been auto-merged by Renovate, since the build on CircleCI finished successfully.

github-renovate-circleci-pr

Some other pull requests are still waiting in the Open state – either for approval (a major update from Java 11 to Java 17) or for a fix because the build on CircleCI failed.

We can go into the details of the selected PR. Let’s do that for the first PR (#11) on the list visible above. Renovate is trying to update Spring Boot from 2.6.1 to the latest 2.7.7. It created the branch renovate/spring-boot that contains the required changes.

github-renovate-circleci-pr-details

The PR could be merged automatically. However, the build failed, so it didn’t happen.

github-renovate-circleci-pr-checks

We can go to the details of the build. As you see in the CircleCI dashboard, all the tests failed. In this particular case, I had already tried to fix the build by updating the version of embedded Mongo. However, it didn’t solve the problem.

Here’s a list of commits in the master branch. As you see, Renovate is automatically updating the repository after the build of the particular branch finishes successfully.

As you see, each time a new branch is created CircleCI runs a build to verify if it does not break the tests.

github-renovate-circleci-builds

Conclusion

I have some conclusions after making the described changes in my repository:

  1. Include automated tests in your projects even if you are creating an app as a demo showcase, not for production usage. It will help you get back to the project after some time. It will also ensure that everything in your demo works fine and will help other people when using it.
  2. All these tools, like Renovate, CircleCI, or SonarCloud, can easily be used with your GitHub project for free. You don’t need to spend a lot of time configuring them, but the effect can be significant.
  3. Keeping the repositories up to date is important. Sometimes people write to me that something doesn’t work properly in my examples. Even now, I found some small bugs in the code logic. Thanks to the described approach, I hope to give you better-quality examples – as you are my blog followers.

If you see something like that on a repository’s main page, it means that I have already reviewed the project and added all the mechanisms described in this article.

The post Manage Multiple GitHub Repositories with Renovate and CircleCI appeared first on Piotr's TechBlog.

Local Development with Redpanda, Quarkus and Testcontainers https://piotrminkowski.com/2022/04/20/local-development-with-redpanda-quarkus-and-testcontainers/ https://piotrminkowski.com/2022/04/20/local-development-with-redpanda-quarkus-and-testcontainers/#comments Wed, 20 Apr 2022 08:13:36 +0000 https://piotrminkowski.com/?p=11098 In this article, you will learn how to speed up your local development with Redpanda and Quarkus. The main goal is to show that you can replace Apache KafkaⓇ with Redpanda without any changes in the source code. Instead, you will get a fast way to run your existing Kafka applications without Zookeeper and JVM. […]

The post Local Development with Redpanda, Quarkus and Testcontainers appeared first on Piotr's TechBlog.

In this article, you will learn how to speed up your local development with Redpanda and Quarkus. The main goal is to show that you can replace Apache KafkaⓇ with Redpanda without any changes in the source code. Instead, you will get a fast way to run your existing Kafka applications without Zookeeper and JVM. You will also see how Quarkus uses Redpanda as a local instance for development. Finally, we are going to run all containers in the Testcontainers Cloud.

For the current exercise, we use the same examples as described in one of my previous articles about Quarkus and Kafka Streams. Just to remind you: we are building a simplified version of the stock market platform. The stock-service application receives and handles incoming orders. There are two types of orders: purchase (BUY) and sale (SELL). While the stock-service consumes Kafka streams, the order-service generates and sends events to the orders.buy and orders.sell topics. Here’s the diagram with our architecture. As you see, the stock-service also uses PostgreSQL as a database.

quarkus-redpanda-arch

Source Code

If you would like to try this exercise yourself, you may always take a look at my source code. In order to do that, you need to clone my GitHub repository. Then switch to the dev branch. After that, you should just follow my instructions. Let’s begin.

Install Redpanda

This step is not required. However, it is worth installing Redpanda since it provides a useful CLI called Redpanda Keeper (rpk) to manage a cluster. To install Redpanda on macOS just run the following command:

$ brew install redpanda-data/tap/redpanda

Now, we can easily create and run a new cluster. For development purposes, we only need a single-node Redpanda cluster. In order to run it, you need to have Docker on your laptop.

$ rpk container start

Before proceeding to the next steps, let’s just remove the current cluster. Quarkus will create everything for us automatically.

$ rpk container purge

Quarkus with Kafka and Postgres

Let’s begin with the stock-service. It consumes streams from Kafka topics and connects to the PostgreSQL database, as I mentioned before. So, the first step is to include the following dependencies:

<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-kafka-streams</artifactId>
</dependency>
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-hibernate-orm-panache</artifactId>
</dependency>
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-jdbc-postgresql</artifactId>
</dependency>

Now, we may proceed to the implementation. The topology for all the streams is provided inside the following method:

@Produces
public Topology buildTopology() {
   ...
}

There are some different streams defined there. But let’s just take a look at the fragment of topology responsible for creating transactions from incoming orders

final String ORDERS_BUY_TOPIC = "orders.buy";
final String ORDERS_SELL_TOPIC = "orders.sell";
final String TRANSACTIONS_TOPIC = "transactions";

// ... other streams

KStream<Long, Order> orders = builder.stream(
   ORDERS_SELL_TOPIC,
   Consumed.with(Serdes.Long(), orderSerde));

builder.stream(ORDERS_BUY_TOPIC, Consumed.with(Serdes.Long(), orderSerde))
   .merge(orders)
   .peek((k, v) -> {
      log.infof("New: %s", v);
      logic.add(v);
   });

builder.stream(ORDERS_BUY_TOPIC, Consumed.with(Serdes.Long(), orderSerde))
   .selectKey((k, v) -> v.getProductId())
   .join(orders.selectKey((k, v) -> v.getProductId()),
      this::execute,
      JoinWindows.of(Duration.ofSeconds(10)),
      StreamJoined.with(Serdes.Integer(), orderSerde, orderSerde))
   .filterNot((k, v) -> v == null)
   .map((k, v) -> new KeyValue<>(v.getId(), v))
   .peek((k, v) -> log.infof("Done -> %s", v))
   .to(TRANSACTIONS_TOPIC, Produced.with(Serdes.Long(), transactionSerde));

The whole implementation is more advanced. For the details, you may refer to the article I mentioned in the introduction. Now, let’s imagine we are still developing our stock market app. Firstly, we should run PostgreSQL and a local Kafka cluster. We use Redpanda, which is easy to run locally. After that, we would typically provide the addresses of both the database and the broker in application.properties. But thanks to a feature called Quarkus Dev Services, the only things we need to configure now are the names of the topics consumed by Kafka Streams and the application id. Both are required by Kafka Streams.

Now, the most important thing: you just need to start the Quarkus app. Nothing more. DO NOT run any external tools by yourself and DO NOT provide any addresses for them in the configuration settings. Just add the two lines you see below:

quarkus.kafka-streams.application-id = stock
quarkus.kafka-streams.topics = orders.buy,orders.sell

Run Quarkus in dev mode with Redpanda

Before you run the Quarkus app, make sure you have Docker running on your laptop. When you do, the only thing you need is to start both test apps. Let’s begin with the stock-service since it receives orders generated by the order-service. Go to the stock-service directory and run the following command:

$ cd stock-service
$ mvn quarkus:dev

If you see the following logs, it means that everything went well. Our application has been started in 13 seconds. During this time, Quarkus also started Kafka, PostgreSQL on Docker, and built Kafka Streams. Everything in 13 seconds with a single command and without any additional configuration. Nice, right? Let’s check out what happened in the background:

Firstly, let’s find the following line of logs beginning with the sentence “Dev Services for Kafka started”. It perfectly describes the feature of Quarkus called Dev Services. Our Kafka instance has been started as a Docker container and is available under a dynamically generated port. The application connects to it. All other Quarkus apps you would run now will share the same instance of a broker. You can disable that feature by setting the property quarkus.kafka.devservices.shared to false.

It may be a little surprising, but Quarkus Dev Services for Kafka uses Redpanda to run a broker. Of course, Redpanda is a Kafka-compatible solution. Since it starts in ~one second and does not require Zookeeper, it is a great choice for local development.
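If the defaults do not fit, the broker started by Dev Services can be tuned in application.properties. The shared flag is the one mentioned above; the fixed-port property is taken from the Quarkus Kafka Dev Services configuration, so double-check the name against your Quarkus version:

```properties
# run a dedicated broker for this application instead of sharing one across apps
quarkus.kafka.devservices.shared=false
# use a fixed port instead of a randomly generated one
quarkus.kafka.devservices.port=55001
```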

In order to run tools like brokers or databases on Docker, Quarkus uses Testcontainers. If you are interested in more details about Quarkus Dev Services for Kafka, read the following documentation. For now, let’s display a list of running containers using the docker ps command. There is a container with Redpanda, PostgreSQL, and Testcontainers.

quarkus-redpanda-containers

Manage Kafka Streams with Redpanda and Quarkus

Let’s verify how everything works on the application side. After running the application, we can take advantage of another useful Quarkus feature called Dev UI. Our UI console is available under the address http://localhost:8080/q/dev/. After accessing it, you can display a topology of Kafka Streams by clicking the button inside the Apache Kafka Streams tile.

Here you will see a summary of available streams. For me, it is 12 topics and 15 state stores. You may also see a visualization of Kafka Streams’ topology. The following picture shows the fragment of topology. You can download the full image by clicking the green download button, visible on the right side of the screen.

quarkus-redpanda-dev

Now, let’s use the Redpanda CLI to display a list of created topics. In my case, Redpanda is available locally under the port 55001. All the topics are automatically created by Quarkus during application startup. We need to define the names of the topics used in communication between both our test apps. Those topics are: orders.buy, orders.sell and transactions. They are configured and created by the order-service. The stock-service creates all the other topics visible below, which are responsible for handling streams.

$ rpk topic list --brokers localhost:55001
NAME                                                    PARTITIONS  REPLICAS
orders.buy                                              1           1
orders.sell                                             1           1
stock-KSTREAM-JOINOTHER-0000000016-store-changelog      1           1
stock-KSTREAM-JOINOTHER-0000000043-store-changelog      1           1
stock-KSTREAM-JOINOTHER-0000000065-store-changelog      1           1
stock-KSTREAM-JOINTHIS-0000000015-store-changelog       1           1
stock-KSTREAM-JOINTHIS-0000000042-store-changelog       1           1
stock-KSTREAM-JOINTHIS-0000000064-store-changelog       1           1
stock-KSTREAM-KEY-SELECT-0000000005-repartition         1           1
stock-KSTREAM-KEY-SELECT-0000000006-repartition         1           1
stock-KSTREAM-KEY-SELECT-0000000032-repartition         1           1
stock-KSTREAM-KEY-SELECT-0000000033-repartition         1           1
stock-KSTREAM-KEY-SELECT-0000000054-repartition         1           1
stock-KSTREAM-KEY-SELECT-0000000055-repartition         1           1
stock-transactions-all-summary-changelog                1           1
stock-transactions-all-summary-repartition              1           1
stock-transactions-per-product-summary-30s-changelog    1           1
stock-transactions-per-product-summary-30s-repartition  1           1
stock-transactions-per-product-summary-changelog        1           1
stock-transactions-per-product-summary-repartition      1           1
transactions                                            1           1

In order to do a full test, we also need to run order-service. It is generating orders continuously and sending them to the orders.buy or orders.sell topics. Let’s do that.

Send messages to Redpanda with Quarkus

Before we run order-service, let’s see some implementation details. On the producer side, we need to include a single dependency responsible for integration with a Kafka broker:

<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-smallrye-reactive-messaging-kafka</artifactId>
</dependency>

Our application generates and sends random orders to the orders.buy or orders.sell topics. There are two methods for that, each of them dedicated to a single topic. Let’s just see the method for generating BUY orders. We need to annotate it with @Outgoing and set the channel name (orders-buy). Our method generates a single order every 500 milliseconds.

@Outgoing("orders-buy")
public Multi<Record<Long, Order>> buyOrdersGenerator() {
   return Multi.createFrom().ticks().every(Duration.ofMillis(500))
      .map(tick -> {
         Integer productId = random.nextInt(10) + 1;
         int price = prices.get(productId) + random.nextInt(200);
         Order o = new Order(
            incrementOrderId(),
            random.nextInt(1000) + 1,
            productId,
            100 * (random.nextInt(5) + 1),
            LocalDateTime.now(),
            OrderType.BUY,
            price);
         log.infof("Sent: %s", o);
         return Record.of(o.getId(), o);
      });
}
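The bounds of those random calls are easy to misread. The standalone snippet below (toy code, not part of the application) reproduces just the ranges used by the generator: product ids fall in 1..10, order amounts are multiples of 100 up to 500, and customer ids fall in 1..1000.

```java
import java.util.Random;

public class OrderRandomDemo {

    static final Random random = new Random();

    // nextInt(10) yields 0..9, so productId is 1..10
    static int randomProductId() {
        return random.nextInt(10) + 1;
    }

    // nextInt(5) yields 0..4, so amount is 100, 200, 300, 400 or 500
    static int randomAmount() {
        return 100 * (random.nextInt(5) + 1);
    }

    // nextInt(1000) yields 0..999, so customerId is 1..1000
    static int randomCustomerId() {
        return random.nextInt(1000) + 1;
    }

    public static void main(String[] args) {
        System.out.printf("productId=%d amount=%d customerId=%d%n",
                randomProductId(), randomAmount(), randomCustomerId());
    }
}
```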

After that, we need to map the channel name into a target topic name. Another required operation is to set the serializer for the message key and value.

mp.messaging.outgoing.orders-buy.connector = smallrye-kafka
mp.messaging.outgoing.orders-buy.topic = orders.buy
mp.messaging.outgoing.orders-buy.key.serializer = org.apache.kafka.common.serialization.LongSerializer
mp.messaging.outgoing.orders-buy.value.serializer = io.quarkus.kafka.client.serialization.ObjectMapperSerializer
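The SELL side needs the analogous mapping. The fragment below is inferred by symmetry with the BUY configuration above – the orders-sell channel name is an assumption, so check it against the @Outgoing annotation in the source code:

```properties
mp.messaging.outgoing.orders-sell.connector = smallrye-kafka
mp.messaging.outgoing.orders-sell.topic = orders.sell
mp.messaging.outgoing.orders-sell.key.serializer = org.apache.kafka.common.serialization.LongSerializer
mp.messaging.outgoing.orders-sell.value.serializer = io.quarkus.kafka.client.serialization.ObjectMapperSerializer
```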

Finally, go to the order-service directory and run the application.

$ cd order-service
$ mvn quarkus:dev

Once you start order-service, it will create topics and start sending orders. It uses the same instance of Redpanda as stock-service. You can run the docker ps command once again to verify it.

Now, just do a simple change in stock-service to reload the application. It will also reload the Kafka Streams topology. After that, it starts receiving orders from the topics created by the order-service. Finally, it creates transactions from incoming orders and stores them in the transactions topic.

Use Testcontainers Cloud

In our development process, we need to have a locally installed Docker ecosystem. But what if we don’t have it? That’s where Testcontainers Cloud comes in. Testcontainers Cloud is a developer-first SaaS platform for modern integration testing with real databases, message brokers, cloud services, or any other component of application infrastructure. To simplify: we will do the same thing as before, but our instances of Redpanda and PostgreSQL will run not on the local Docker, but on the remote Testcontainers Cloud platform.

What do you need to do to enable Testcontainers Cloud? Firstly, download the agent from the following site. You also need to be a beta tester to obtain an authorization token. Finally, just run the agent and kill your local Docker daemon. You should see the Testcontainers icon in the running apps with information about the connection to the cloud.

quarkus-redpanda-testcontainers

Docker should not run locally.

The same as before, just run both applications with the quarkus:dev command. Your Redpanda broker is running on the Testcontainers Cloud but, thanks to the agent, you may access it over localhost.

Once again you can verify a list of topics using the following command for the new broker:

$ rpk topic list --brokers localhost:59779

Final Thoughts

In this article, I focused on showing you how new and exciting technologies like Quarkus, Redpanda, and Testcontainers can work together. Local development is one of the use cases, but you may as well use them to write integration tests.

Quarkus Tips, Tricks and Techniques https://piotrminkowski.com/2021/10/12/quarkus-tips-tricks-and-techniques/ https://piotrminkowski.com/2021/10/12/quarkus-tips-tricks-and-techniques/#respond Tue, 12 Oct 2021 10:05:51 +0000 https://piotrminkowski.com/?p=10117 In this article, you will learn some useful tips and tricks related to the Quarkus framework. We will focus on the features that stand Quarkus out from the other frameworks. For those who use Spring Boot, there is a similar article Spring Boot Tips, Tricks and Techniques. If you run your applications on Kubernetes, Quarkus […]

The post Quarkus Tips, Tricks and Techniques appeared first on Piotr's TechBlog.

In this article, you will learn some useful tips and tricks related to the Quarkus framework. We will focus on the features that make Quarkus stand out from other frameworks. For those who use Spring Boot, there is a similar article: Spring Boot Tips, Tricks and Techniques.

If you run your applications on Kubernetes, Quarkus is obviously a good choice. It starts fast and does not consume much memory. You may easily compile it natively with GraalVM. It provides a lot of useful developer features, e.g. hot reload. I hope you will find here tips and techniques that help boost your productivity in Quarkus development – or maybe they will just convince you to take a look at Quarkus if you don’t have any experience with it yet.

I have already published all these Quarkus tips on Twitter in a graphical form visible below. You may access them using the #QuarkusTips hashtag. I’m a huge fan of Quarkus (and to be honest Spring Boot also :)). So, if you have suggestions or your own favorite features just ping me on Twitter (@piotr_minkowski). I will definitely retweet your tweet.

quarkus-tips-twitter

Tip 1. Use Quarkus command-line tool

How do you start a new application when using one of the popular Java frameworks? You can go to the online generator website, which is usually provided by those frameworks. Did you hear about Spring Initializr? Quarkus offers a similar site available at https://code.quarkus.io/. But what you may not know is that there is also the Quarkus CLI tool. It allows you to create projects, manage extensions, and execute build and dev commands. For example, you can create the source code for a new application using a single command as shown below.

$ quarkus create app --package-name=pl.piomin.samples.quarkus \
  -x resteasy-jackson,hibernate-orm-panache,jdbc-postgresql \
  -o person-service \
  pl.piomin.samples:person-service

After executing the command visible above, you should see a similar screen.

quarkus-tips-cli

This command creates a simple REST application that uses the PostgreSQL database and Quarkus ORM layer. Also, it sets the name of the application, Maven groupId, and artifactId. After that, you can just run the application. To do that go to the generated directory and run the following command. Alternatively, you can execute the mvn quarkus:dev command.

$ quarkus dev

The application does not start successfully, since there is no database connection configured. Do we have to configure it? No! Let’s proceed to the next section to see why.

Tip 2. Use Dev Services with databases

Did you hear about Testcontainers? It is a Java library that allows you to run containers automatically during JUnit tests. You can run common databases, Selenium web browsers, or anything else that can run in a Docker container. Quarkus provides built-in integration with Testcontainers when running applications in dev or test modes. This feature is called Dev Services. Moreover, you don’t have to do anything to enable it. Just DO NOT PROVIDE connection URL and credentials.

Let’s get back to our scenario. We have already created the application using the Quarkus CLI. It contains all the required libraries. So, the only thing we need to do now is run a Docker daemon. Thanks to that, Quarkus will try to run PostgreSQL with Testcontainers in development mode. What’s the final result? Our application is working, and it is connected to the PostgreSQL instance started with Docker as shown below.

Then, we may proceed to the development. With the quarkus dev command we have already enabled dev mode. Thanks to this, we may take advantage of the live reload feature.
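If at some point you want to opt out of this behavior in dev mode – for example, to point the app at an already running database – Dev Services for the datasource can be disabled explicitly. The property name below is taken from the Quarkus datasource configuration, so verify it against your Quarkus version:

```properties
# switch off the automatic Testcontainers-backed database in dev mode
%dev.quarkus.datasource.devservices.enabled=false
```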

Tip 3. Use simplified ORM with Panache

Let’s add some code to our sample application. We will implement a data layer using Quarkus Panache ORM. That’s a very interesting module that focuses on making your entities trivial and fun to write in Quarkus. Here’s our entity class. Thanks to the Quarkus field access rewrite, when you read person.name you will actually call your getName() accessor, and similarly for field writes and the setter. This allows for proper encapsulation at runtime, as all field calls will be replaced by the corresponding getter or setter calls. The PanacheEntity also takes care of the primary key implementation.

@Entity
public class Person extends PanacheEntity {
   public String name;
   public int age;
   @Enumerated(EnumType.STRING)
   public Gender gender;
}

In the next step, we are going to define the repository class. Since it implements the PanacheRepository interface, we only need to add our custom find methods.

@ApplicationScoped
public class PersonRepository implements PanacheRepository<Person> {

    public List<Person> findByName(String name) {
        return find("name", name).list();
    }

    public List<Person> findByAgeGreaterThan(int age) {
        return find("age > ?1", age).list();
    }
}

Finally, let’s add a resource class with REST endpoints.

@Path("/persons")
public class PersonResource {

    @Inject
    PersonRepository personRepository;

    @POST
    @Transactional
    public Person addPerson(Person person) {
        personRepository.persist(person);
        return person;
    }

    @GET
    public List<Person> getPersons() {
        return personRepository.listAll();
    }

    @GET
    @Path("/name/{name}")
    public List<Person> getPersonsByName(@PathParam("name") String name) {
        return personRepository.findByName(name);
    }

    @GET
    @Path("/age-greater-than/{age}")
    public List<Person> getPersonsByName(@PathParam("age") int age) {
        return personRepository.findByAgeGreaterThan(age);
    }

    @GET
    @Path("/{id}")
    public Person getPersonById(@PathParam("id") Long id) {
        return personRepository.findById(id);
    }

}

Also, let’s create the import.sql file in the src/main/resources directory. It loads SQL statements when Hibernate ORM starts.

insert into person(id, name, age, gender) values(1, 'John Smith', 25, 'MALE');
insert into person(id, name, age, gender) values(2, 'Paul Walker', 65, 'MALE');
insert into person(id, name, age, gender) values(3, 'Lewis Hamilton', 35, 'MALE');
insert into person(id, name, age, gender) values(4, 'Veronica Jones', 20, 'FEMALE');
insert into person(id, name, age, gender) values(5, 'Anne Brown', 60, 'FEMALE');
insert into person(id, name, age, gender) values(6, 'Felicia Scott', 45, 'FEMALE');

Finally, we can call our REST endpoint.

$ curl http://localhost:8080/persons

Tip 4. Unified configuration as an option

Assuming we don’t want to run a database on Docker, we should configure the connection in application.properties. By default, Quarkus provides 3 profiles: prod, test, dev. We can define properties for multiple profiles inside a single application.properties using the syntax %{profile-name}.config.name. In our case, there is an H2 instance used in dev and test modes, and an external PostgreSQL instance in prod mode.

quarkus.datasource.db-kind=postgresql
quarkus.datasource.username=${POSTGRES_USER}
quarkus.datasource.password=${POSTGRES_PASSWORD}
quarkus.datasource.jdbc.url=jdbc:postgresql://person-db:5432/${POSTGRES_DB}

%test.quarkus.datasource.db-kind=h2
%test.quarkus.datasource.username=sa
%test.quarkus.datasource.password=password
%test.quarkus.datasource.jdbc.url=jdbc:h2:mem:testdb

%dev.quarkus.datasource.db-kind=h2
%dev.quarkus.datasource.username=sa
%dev.quarkus.datasource.password=password
%dev.quarkus.datasource.jdbc.url=jdbc:h2:mem:testdb

Before running the new version of the application, we have to include the H2 dependency in the Maven pom.xml.

<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-jdbc-h2</artifactId>
</dependency>

You can also define your custom profile and provide properties using it as a prefix. Of course, you may still define profile-specific files like application-{profile}.properties.
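The resolution rule can be illustrated with a few lines of plain Java: look up the %{profile}-prefixed key first and fall back to the unprefixed one. This is a toy sketch of the behavior, not Quarkus’ actual implementation:

```java
import java.util.Map;

public class ProfileConfigDemo {

    // Return the profile-specific value if present, otherwise the default one.
    static String lookup(Map<String, String> props, String profile, String key) {
        return props.getOrDefault("%" + profile + "." + key, props.get(key));
    }

    public static void main(String[] args) {
        Map<String, String> props = Map.of(
                "quarkus.datasource.db-kind", "postgresql",
                "%dev.quarkus.datasource.db-kind", "h2");

        System.out.println(lookup(props, "dev", "quarkus.datasource.db-kind"));  // h2
        System.out.println(lookup(props, "prod", "quarkus.datasource.db-kind")); // postgresql
    }
}
```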

Tip 5. Deploy to Kubernetes with Maven

Quarkus is a Kubernetes-native framework. You may easily deploy your Quarkus application to a Kubernetes cluster without creating any YAML files manually. For more advanced configurations, e.g. mapping secrets to environment variables, you can use application.properties. Other things, e.g. health checks, are detected in the source code. To enable this, we need to include the quarkus-kubernetes module. There is also an implementation for OpenShift.

<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-openshift</artifactId>
</dependency>

After that, Quarkus will generate deployment manifests during the Maven build. We can enable automatic deployment to the current Kubernetes cluster by setting the property quarkus.kubernetes.deploy to true. For OpenShift deployment, we have to change the default deployment target from kubernetes to openshift.

quarkus.container-image.build = true
quarkus.kubernetes.deploy = true
quarkus.kubernetes.deployment-target = openshift

Let’s assume we have some custom configuration to set on the Deployment manifest. Our application will run in two pods and is automatically exposed outside the cluster. It also injects values from Secret in order to connect with the PostgreSQL database.

quarkus.openshift.expose = true
quarkus.openshift.replicas = 2
quarkus.openshift.labels.app = person-app
quarkus.openshift.annotations.app-type = demo
quarkus.openshift.env.mapping.postgres_user.from-secret = person-db
quarkus.openshift.env.mapping.postgres_user.with-key = database-user
quarkus.openshift.env.mapping.postgres_password.from-secret = person-db
quarkus.openshift.env.mapping.postgres_password.with-key = database-password
quarkus.openshift.env.mapping.postgres_db.from-secret = person-db
quarkus.openshift.env.mapping.postgres_db.with-key = database-name

Then we just need to build our application with Maven. Alternatively, we may remove the quarkus.kubernetes.deploy property from application.properties and enable it on the Maven command.

$ mvn clean package -Dquarkus.kubernetes.deploy=true

Tip 6. Access Dev UI console

After running the Quarkus app in dev mode (mvn quarkus:dev), you can access the Dev UI console at http://localhost:8080/q/dev. The more modules you include, the more options you can configure there. One of my favorite features here is the ability to deploy applications to OpenShift. Instead of running a Maven command to build the application, we can just run it in dev mode and deploy using the graphical UI.


Tip 7. Test continuously

Quarkus supports continuous testing, where tests run immediately after code changes. This allows you to get instant feedback on your changes. Quarkus detects which tests cover which code and uses this information to run only the relevant tests when the code changes. After running the application in dev mode, you will be prompted to enable that feature. Just press r to enable it.

Ok, so let’s add some tests for our sample application. Firstly, we need to include the Quarkus Test module and REST Assured library.

<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-junit5</artifactId>
   <scope>test</scope>
</dependency>
<dependency>
   <groupId>io.rest-assured</groupId>
   <artifactId>rest-assured</artifactId>
   <scope>test</scope>
</dependency>

Then we will add some simple API tests. The test class has to be annotated with @QuarkusTest. The rest of the implementation is typical for the REST Assured library.

@QuarkusTest
public class PersonResourceTests {

    @Test
    void getPersons() {
        List<Person> persons = given().when().get("/persons")
                .then()
                .statusCode(200)
                .extract()
                .body()
                .jsonPath().getList(".", Person.class);
        assertEquals(6, persons.size());
    }

    @Test
    void getPersonById() {
        Person person = given()
                .pathParam("id", 1)
                .when().get("/persons/{id}")
                .then()
                .statusCode(200)
                .extract()
                .body().as(Person.class);
        assertNotNull(person);
        assertEquals(1L, person.id);
    }

    @Test
    void newPersonAdd() {
        Person newPerson = new Person();
        newPerson.age = 22;
        newPerson.name = "TestNew";
        newPerson.gender = Gender.FEMALE;
        Person person = given()
                .body(newPerson)
                .contentType(ContentType.JSON)
                .when().post("/persons")
                .then()
                .statusCode(200)
                .extract()
                .body().as(Person.class);
        assertNotNull(person);
        assertNotNull(person.id);
    }
}

We can run those JUnit tests from the Dev UI console as well. Firstly, go to the Dev UI console. At the bottom of the page, you will find the panel responsible for the testing module. Just click the Test result icon to see the results of your tests.

Tip 8. Compile natively with GraalVM on OpenShift

You can easily build and run a native Quarkus GraalVM image on OpenShift using a single command and the ubi-quarkus-native-s2i builder. OpenShift builds the application using the S2I (Source-to-Image) approach. Of course, you just need a running OpenShift cluster (e.g. a local CRC instance or the Developer Sandbox https://developers.redhat.com/products/codeready-containers/overview) and the oc client installed locally.

$ oc new-app --name person-native \
             --context-dir basic-with-db/person-app \
  quay.io/quarkus/ubi-quarkus-native-s2i:21.2-java11~https://github.com/piomin/openshift-quickstart.git

Tip 9. Rollback transaction after each test

If you need to roll back changes in the data after each test, avoid doing it manually. Instead, just annotate your test class with @TestTransaction. A rollback is performed each time a test method completes.

@QuarkusTest
@TestTransaction
public class PersonRepositoryTests {

    @Inject
    PersonRepository personRepository;

    @Test
    void addPerson() {
        Person newPerson = new Person();
        newPerson.age = 22;
        newPerson.name = "TestNew";
        newPerson.gender = Gender.FEMALE;
        personRepository.persist(newPerson);
        Assertions.assertNotNull(newPerson.id);
    }
}

Tip 10. Take advantage of GraphQL support

That’s the last Quarkus tip in this article. However, it covers one of my favorite Quarkus features. GraphQL support is not a strong point of Spring Boot. Quarkus, on the other hand, provides very cool and simple GraphQL extensions for both the client and the server side.

Firstly, let’s add the Quarkus modules responsible for GraphQL support.

<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-smallrye-graphql</artifactId>
</dependency>
<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-smallrye-graphql-client</artifactId>
   <scope>test</scope>
</dependency>

Then we may create the code responsible for exposing the GraphQL API. The class has to be annotated with @GraphQLApi. Quarkus automatically generates the GraphQL schema from the source code.

@GraphQLApi
public class EmployeeFetcher {

    private EmployeeRepository repository;

    public EmployeeFetcher(EmployeeRepository repository){
        this.repository = repository;
    }

    @Query("employees")
    public List<Employee> findAll() {
        return repository.listAll();
    }

    @Query("employee")
    public Employee findById(@Name("id") Long id) {
        return repository.findById(id);
    }

    @Query("employeesWithFilter")
    public List<Employee> findWithFilter(@Name("filter") EmployeeFilter filter) {
        return repository.findByCriteria(filter);
    }

}
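For illustration, a query matching the operations exposed above might look as follows. The id and name fields on Employee are assumed here purely for the example, since the entity class is not shown.

```graphql
query {
  # corresponds to the @Query("employees") method
  employees {
    id
    name
  }
  # corresponds to the @Query("employee") method
  employee(id: 1) {
    id
    name
  }
}
```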

Then, let’s create a client interface to call two endpoints. We need to annotate that interface with @GraphQLClientApi.

@GraphQLClientApi(configKey = "employee-client")
public interface EmployeeClient {

    List<Employee> employees();
    Employee employee(Long id);
}

Finally, we can add a simple JUnit test. We just need to inject EmployeeClient and then call its methods. If you are interested in more details about Quarkus GraphQL support, read my article An Advanced GraphQL with Quarkus.

@QuarkusTest
public class EmployeeFetcherTests {

    @Inject
    EmployeeClient employeeClient;

    @Test
    void fetchAll() {
        List<Employee> employees = employeeClient.employees();
        Assertions.assertEquals(10, employees.size());
    }

    @Test
    void fetchById() {
        Employee employee = employeeClient.employee(10L);
        Assertions.assertNotNull(employee);
    }
}

Final Thoughts

In my opinion, Quarkus is a very interesting and promising framework. With these tips, you can easily start developing your first application with Quarkus. Each new release brings interesting new features. So maybe I will have to update this list of Quarkus tips soon 🙂

The post Quarkus Tips, Tricks and Techniques appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2021/10/12/quarkus-tips-tricks-and-techniques/feed/ 0 10117
Secure Rate Limiting with Spring Cloud Gateway https://piotrminkowski.com/2021/05/21/secure-rate-limiting-with-spring-cloud-gateway/ https://piotrminkowski.com/2021/05/21/secure-rate-limiting-with-spring-cloud-gateway/#comments Fri, 21 May 2021 11:12:04 +0000 https://piotrminkowski.com/?p=9733 In this article, you will learn how to enable rate limiting for an authenticated user with Spring Cloud Gateway. Why it is important? API gateway is an entry point to your microservices system. Therefore, you should provide there a right level of security. Rate limiting can prevent your API against DoS attacks and limit web scraping. You can easily […]

The post Secure Rate Limiting with Spring Cloud Gateway appeared first on Piotr's TechBlog.

]]>
In this article, you will learn how to enable rate limiting for an authenticated user with Spring Cloud Gateway. Why is it important? The API gateway is an entry point to your microservices system. Therefore, you should provide the right level of security there. Rate limiting can protect your API against DoS attacks and limit web scraping.

You can easily configure rate limiting with Spring Cloud Gateway. For a basic introduction to this feature, you may refer to my article Rate Limiting in Spring Cloud Gateway with Redis. Similarly, today we will also use Redis as a backend for the rate limiter. Moreover, we will configure HTTP basic authentication. Of course, you can use more advanced authentication mechanisms like an X509 certificate or OAuth2 login. If you are considering that, read my article Spring Cloud Gateway OAuth2 with Keycloak.

Source Code

If you would like to try it yourself, you may always take a look at my source code. To do that, clone my repository sample-spring-cloud-gateway. Then go to the src/test/java directory and follow my instructions in the next sections.

1. Dependencies

Let’s start with dependencies. Since we will create an integration test, we need some additional libraries. Firstly, we will use the Testcontainers library, which allows us to run Docker containers during JUnit tests. We will use it for running Redis and a mock server responsible for mocking a downstream service. Of course, we need to include the Spring Cloud Gateway starter and Spring Data Redis. To implement HTTP basic authentication, we also need Spring Security. Here’s the full list of required dependencies in Maven pom.xml.

<dependency>
   <groupId>org.springframework.cloud</groupId>
   <artifactId>spring-cloud-starter-gateway</artifactId>
</dependency>
<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-data-redis-reactive</artifactId>
</dependency>
<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-security</artifactId>
</dependency>
<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-test</artifactId>
   <scope>test</scope>
</dependency>
<dependency>
   <groupId>org.testcontainers</groupId>
   <artifactId>mockserver</artifactId>
   <scope>test</scope>
</dependency>
<dependency>
   <groupId>org.mock-server</groupId>
   <artifactId>mockserver-client-java</artifactId>
   <scope>test</scope>
</dependency>

2. Configure an HTTP Basic Authentication

In order to configure HTTP basic authentication, we need to create a @Configuration bean annotated with @EnableWebFluxSecurity. That’s because Spring Cloud Gateway is built on top of Spring WebFlux and Netty. We will also create a set of test users with MapReactiveUserDetailsService.

@Configuration
@EnableWebFluxSecurity
public class SecurityConfig {

   @Bean
   public SecurityWebFilterChain filterChain(ServerHttpSecurity http) {
      http.authorizeExchange(exchanges -> 
         exchanges.anyExchange().authenticated())
            .httpBasic();
      http.csrf().disable();
      return http.build();
   }

   @Bean
   public MapReactiveUserDetailsService users() {
      UserDetails user1 = User.builder()
            .username("user1")
            .password("{noop}1234")
            .roles("USER")
            .build();
      UserDetails user2 = User.builder()
            .username("user2")
            .password("{noop}1234")
            .roles("USER")
            .build();
      UserDetails user3 = User.builder()
            .username("user3")
            .password("{noop}1234")
            .roles("USER")
            .build();
      return new MapReactiveUserDetailsService(user1, user2, user3);
   }
}

3. Configure Spring Cloud Gateway Rate Limiter key

The request rate limiter feature needs to be enabled using a component called GatewayFilter. This filter takes an optional keyResolver parameter. The KeyResolver interface allows you to create pluggable strategies to derive the key used for limiting requests. In our case, it will be the user login. Once a user has been successfully authenticated, their login is stored in the Spring SecurityContext. To retrieve the context in a reactive application, we should use ReactiveSecurityContextHolder.

@Bean
KeyResolver authUserKeyResolver() {
   return exchange -> ReactiveSecurityContextHolder.getContext()
           .map(ctx -> ctx.getAuthentication()
              .getPrincipal().toString());
}

4. Test Scenario

In the test scenario, we are going to simulate incoming traffic. Every single request needs to have an Authorization header with the user credentials. A single user may send 4 requests per minute. After exceeding that limit, Spring Cloud Gateway returns the HTTP code HTTP 429 - Too Many Requests. The traffic is addressed to a downstream service; therefore, we are running a mock server using Testcontainers.


5. Testing Spring Cloud Gateway secure rate limiter

Finally, we may proceed to the test implementation. I will use JUnit 4, since I used it before for the other examples in the sample repository. We have three parameters used for the rate limiter configuration: replenishRate, burstCapacity and requestedTokens. Since we allow less than 1 request per second, we need to set the right values for burstCapacity and requestedTokens. In short, the requestedTokens property sets how many tokens a single request costs. On the other hand, the burstCapacity property is the maximum number of tokens the bucket may hold, i.e. the maximum total cost a user may spend in a burst.
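To double-check the arithmetic behind these settings, here is a plain-Java sketch (not part of the gateway code; the helper methods are illustrative) of the token-bucket math with the values used in this test: replenishRate=1, burstCapacity=60 and requestedTokens=15 yield at most 4 requests per user per minute.

```java
public class Main {
    // Sustained request rate: tokens replenished per minute divided by the cost of one request
    static int requestsPerMinute(int replenishRate, int requestedTokens) {
        return replenishRate * 60 / requestedTokens;
    }

    // Burst size: how many requests a full token bucket can absorb at once
    static int maxBurstRequests(int burstCapacity, int requestedTokens) {
        return burstCapacity / requestedTokens;
    }

    public static void main(String[] args) {
        // Values from this test: replenishRate=1, burstCapacity=60, requestedTokens=15
        System.out.println("Sustained requests per minute: " + requestsPerMinute(1, 15)); // 4
        System.out.println("Max burst requests: " + maxBurstRequests(60, 15));            // 4
    }
}
```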

During the test, we randomly pick the username from user1, user2 and user3. The test is repeated 20 times.

@SpringBootTest(webEnvironment = 
   SpringBootTest.WebEnvironment.DEFINED_PORT,
                properties = {"rateLimiter.secure=true"})
@RunWith(SpringRunner.class)
public class GatewaySecureRateLimiterTest {

   private static final Logger LOGGER = 
      LoggerFactory.getLogger(GatewaySecureRateLimiterTest.class);
   private Random random = new Random();

   @Rule
   public TestRule benchmarkRun = new BenchmarkRule();

   @ClassRule
   public static MockServerContainer mockServer = 
      new MockServerContainer();
   @ClassRule
   public static GenericContainer redis = 
      new GenericContainer("redis:5.0.6").withExposedPorts(6379);

   @Autowired
   TestRestTemplate template;

   @BeforeClass
   public static void init() {
      System.setProperty("spring.cloud.gateway.routes[0].id", "account-service");
      System.setProperty("spring.cloud.gateway.routes[0].uri", "http://" + mockServer.getHost() + ":" + mockServer.getServerPort());
      System.setProperty("spring.cloud.gateway.routes[0].predicates[0]", "Path=/account/**");
      System.setProperty("spring.cloud.gateway.routes[0].filters[0]", "RewritePath=/account/(?<path>.*), /$\\{path}");
      System.setProperty("spring.cloud.gateway.routes[0].filters[1].name", "RequestRateLimiter");
      System.setProperty("spring.cloud.gateway.routes[0].filters[1].args.redis-rate-limiter.replenishRate", "1");
      System.setProperty("spring.cloud.gateway.routes[0].filters[1].args.redis-rate-limiter.burstCapacity", "60");
      System.setProperty("spring.cloud.gateway.routes[0].filters[1].args.redis-rate-limiter.requestedTokens", "15");
      System.setProperty("spring.redis.host", redis.getHost());
      System.setProperty("spring.redis.port", "" + redis.getMappedPort(6379));
      new MockServerClient(mockServer.getContainerIpAddress(), mockServer.getServerPort())
            .when(HttpRequest.request()
                    .withPath("/1"))
            .respond(response()
                    .withBody("{\"id\":1,\"number\":\"1234567890\"}")
                    .withHeader("Content-Type", "application/json"));
   }

   @Test
   @BenchmarkOptions(warmupRounds = 0, concurrency = 1, benchmarkRounds = 20)
   public void testAccountService() {
      String username = "user" + (random.nextInt(3) + 1);
      HttpHeaders headers = createHttpHeaders(username,"1234");
      HttpEntity<String> entity = new HttpEntity<String>(headers);
      ResponseEntity<Account> r = template
         .exchange("/account/{id}", HttpMethod.GET, entity, Account.class, 1);
      LOGGER.info("Received({}): status->{}, payload->{}, remaining->{}",
            username, r.getStatusCodeValue(), r.getBody(), r.getHeaders().get("X-RateLimit-Remaining"));
    }

   private HttpHeaders createHttpHeaders(String user, String password) {
      String notEncoded = user + ":" + password;
      String encodedAuth = Base64.getEncoder().encodeToString(notEncoded.getBytes());
      HttpHeaders headers = new HttpHeaders();
      headers.setContentType(MediaType.APPLICATION_JSON);
      headers.add("Authorization", "Basic " + encodedAuth);
      return headers;
   }

}

Let’s run the test. Thanks to the junit-benchmarks library, we may configure the number of rounds for the test. Each time, I log the response from the gateway, which includes the username, the HTTP status, the payload, and the X-RateLimit-Remaining header showing the number of remaining tokens.

The post Secure Rate Limiting with Spring Cloud Gateway appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2021/05/21/secure-rate-limiting-with-spring-cloud-gateway/feed/ 7 9733
Timeouts and Retries In Spring Cloud Gateway https://piotrminkowski.com/2020/02/23/timeouts-and-retries-in-spring-cloud-gateway/ https://piotrminkowski.com/2020/02/23/timeouts-and-retries-in-spring-cloud-gateway/#comments Sun, 23 Feb 2020 22:47:09 +0000 http://piotrminkowski.com/?p=7772 In this article I’m going to describe two features of Spring Cloud Gateway: retrying based on GatewayFilter pattern and timeout handling based on a global configuration. In some previous articles in this series I have described rate limiting based on Redis, and a circuit breaker pattern built with Resilience4J. For more details about those two […]

The post Timeouts and Retries In Spring Cloud Gateway appeared first on Piotr's TechBlog.

]]>
In this article, I’m going to describe two features of Spring Cloud Gateway: retries based on the GatewayFilter pattern and timeout handling based on a global configuration. In some previous articles in this series, I described rate limiting based on Redis and a circuit breaker pattern built with Resilience4J. For more details about those two features, you may refer to my previous blog posts.

Example

We use the same repository as in the two previous articles about Spring Cloud Gateway. The repository address is https://github.com/piomin/sample-spring-cloud-gateway.git. The test class dedicated to the current article is GatewayRetryTest.

Implementation and testing

As you probably know, most of the operations in Spring Cloud Gateway are realized using the filter pattern, implemented by the GatewayFilter interface. Here, we can modify incoming requests and outgoing responses before or after sending the downstream request.
As in the examples described in my two previous articles about Spring Cloud Gateway, we will build a JUnit test class. It leverages the Testcontainers MockServer module for running a mock server exposing REST endpoints.
Before running the test, we need to prepare a sample route containing the Retry filter. When defining this type of GatewayFilter, we may set multiple parameters. Typically, you will use the following three of them:

  • retries – the number of retries that should be attempted for a single incoming request. The default value is 3.
  • statuses – the list of HTTP status codes that should be retried, represented using org.springframework.http.HttpStatus enum names.
  • backoff – the policy used for calculating the delay between subsequent retry attempts. By default, this property is disabled.
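The interplay of retries and statuses can be sketched in plain Java. The callWithRetries helper below is hypothetical, not a Spring Cloud Gateway API; it simulates the filter's decision loop: retryable 5XX responses are re-attempted up to the configured limit, while any other status is returned immediately.

```java
import java.util.List;

public class Main {
    // Hypothetical helper: re-attempts the call while the downstream returns a
    // retryable (5XX) status, up to maxRetries extra attempts after the first call.
    static int callWithRetries(List<Integer> downstreamResponses, int maxRetries) {
        int status = 0;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            status = downstreamResponses.get(Math.min(attempt, downstreamResponses.size() - 1));
            if (status < 500) {
                return status; // success or non-retryable status: stop retrying
            }
        }
        return status; // retries exhausted, last failing status is returned
    }

    public static void main(String[] args) {
        // Like the mock below: exactly 3 responses with HTTP 500, then HTTP 200
        List<Integer> responses = List.of(500, 500, 500, 200);
        System.out.println(callWithRetries(responses, 3)); // 1 initial call + 3 retries -> 200
        System.out.println(callWithRetries(responses, 2)); // only 2 retries -> still 500
    }
}
```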

Let’s start with the simplest scenario – using the default parameter values. In that case, we just need to set the name of the GatewayFilter for a route – Retry.

@ClassRule
public static MockServerContainer mockServer = new MockServerContainer();

@BeforeClass
public static void init() {
   System.setProperty("spring.cloud.gateway.routes[0].id", "account-service");
   System.setProperty("spring.cloud.gateway.routes[0].uri", "http://" + mockServer.getContainerIpAddress() + ":" + mockServer.getServerPort());
   System.setProperty("spring.cloud.gateway.routes[0].predicates[0]", "Path=/account/**");
   System.setProperty("spring.cloud.gateway.routes[0].filters[0]", "RewritePath=/account/(?<path>.*), /$\\{path}");
   System.setProperty("spring.cloud.gateway.routes[0].filters[1].name", "Retry");
   MockServerClient client = new MockServerClient(mockServer.getContainerIpAddress(), mockServer.getServerPort());
   client.when(HttpRequest.request()
      .withPath("/1"), Times.exactly(3))
      .respond(response()
         .withStatusCode(500)
         .withBody("{\"errorCode\":\"5.01\"}")
         .withHeader("Content-Type", "application/json"));
   client.when(HttpRequest.request()
      .withPath("/1"))
      .respond(response()
         .withBody("{\"id\":1,\"number\":\"1234567891\"}")
         .withHeader("Content-Type", "application/json"));
   // OTHER RULES
}

Our test method is very simple. It just uses Spring Boot’s TestRestTemplate to perform a single call to the test endpoint.

@Autowired
TestRestTemplate template;

@Test
public void testAccountService() {
   LOGGER.info("Sending /1...");
   ResponseEntity<Account> r = template.exchange("/account/{id}", HttpMethod.GET, null, Account.class, 1);
   LOGGER.info("Received: status->{}, payload->{}", r.getStatusCodeValue(), r.getBody());
   Assert.assertEquals(200, r.getStatusCodeValue());
}

Before running the test, we will change the logging level for Spring Cloud Gateway to see additional information about the retry process.


logging.level.org.springframework.cloud.gateway.filter.factory: TRACE

Since we didn’t set any backoff policy, the subsequent attempts were performed without any delay. As the logs show, the default number of retries is 3, and the filter retries all HTTP 5XX codes (SERVER_ERROR).


Now, let’s provide a slightly more advanced configuration. We can change the number of retries and set an exact HTTP status code for retrying instead of a whole series of codes. In our case, the retried status code is HTTP 500, since it is returned by our mock endpoint. We can also enable a backoff retry policy starting from 50ms up to a maximum of 500ms. The factor is 2, which means that the backoff is calculated using the formula prevBackoff * factor. The formula becomes slightly different when you set the basedOnPreviousValue property to false: firstBackoff * (factor ^ n). Here’s the appropriate configuration for our current test.

@ClassRule
public static MockServerContainer mockServer = new MockServerContainer();

@BeforeClass
public static void init() {
   System.setProperty("spring.cloud.gateway.routes[0].id", "account-service");
   System.setProperty("spring.cloud.gateway.routes[0].uri", "http://" + mockServer.getContainerIpAddress() + ":" + mockServer.getServerPort());
   System.setProperty("spring.cloud.gateway.routes[0].predicates[0]", "Path=/account/**");
   System.setProperty("spring.cloud.gateway.routes[0].filters[0]", "RewritePath=/account/(?<path>.*), /$\\{path}");
   System.setProperty("spring.cloud.gateway.routes[0].filters[1].name", "Retry");
   System.setProperty("spring.cloud.gateway.routes[0].filters[1].args.retries", "10");
   System.setProperty("spring.cloud.gateway.routes[0].filters[1].args.statuses", "INTERNAL_SERVER_ERROR");
   System.setProperty("spring.cloud.gateway.routes[0].filters[1].args.backoff.firstBackoff", "50ms");
   System.setProperty("spring.cloud.gateway.routes[0].filters[1].args.backoff.maxBackoff", "500ms");
   System.setProperty("spring.cloud.gateway.routes[0].filters[1].args.backoff.factor", "2");
   System.setProperty("spring.cloud.gateway.routes[0].filters[1].args.backoff.basedOnPreviousValue", "true");
   MockServerClient client = new MockServerClient(mockServer.getContainerIpAddress(), mockServer.getServerPort());
   client.when(HttpRequest.request()
      .withPath("/1"), Times.exactly(3))
      .respond(response()
         .withStatusCode(500)
         .withBody("{\"errorCode\":\"5.01\"}")
         .withHeader("Content-Type", "application/json"));
   client.when(HttpRequest.request()
      .withPath("/1"))
      .respond(response()
         .withBody("{\"id\":1,\"number\":\"1234567891\"}")
         .withHeader("Content-Type", "application/json"));
   // OTHER RULES
}

If you run the same test one more time with the new configuration, the logs look a little different. The current number of retries is 10, and only the HTTP 500 status is retried. After setting the backoff policy, the first retry attempt is performed after 50ms, the second after 100ms, the third after 200ms, and so on.
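The backoff sequence observed in the logs can be reproduced with a short plain-Java sketch. The backoffDelays helper is illustrative, not a Spring Cloud Gateway API; with firstBackoff=50ms, maxBackoff=500ms and factor=2 (as configured above), each delay doubles until it is capped at 500ms.

```java
public class Main {
    // Backoff with basedOnPreviousValue = true: each delay is the previous delay
    // multiplied by the factor, capped at maxBackoff.
    static long[] backoffDelays(long firstBackoff, long maxBackoff, int factor, int attempts) {
        long[] delays = new long[attempts];
        long backoff = firstBackoff;
        for (int i = 0; i < attempts; i++) {
            delays[i] = backoff;
            backoff = Math.min(backoff * factor, maxBackoff);
        }
        return delays;
    }

    public static void main(String[] args) {
        // firstBackoff = 50ms, maxBackoff = 500ms, factor = 2, as in the test configuration
        System.out.println(java.util.Arrays.toString(backoffDelays(50, 500, 2, 6)));
        // -> [50, 100, 200, 400, 500, 500]
    }
}
```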


We have already analyzed the retry mechanism in Spring Cloud Gateway. Timeouts are another important aspect of request routing. With Spring Cloud Gateway, we may easily set global read and connect timeouts. Alternatively, we may also define them for each route separately. Let’s add the following property to our test configuration. It sets a global response timeout of 100ms. Now, our test route contains the Retry filter with the newly added global read timeout of 100ms.

System.setProperty("spring.cloud.gateway.httpclient.response-timeout", "100ms");

Alternatively, we may set the timeout per single route. If we prefer such a solution, here is the line we should add to our sample test.

System.setProperty("spring.cloud.gateway.routes[1].metadata.response-timeout", "100");

Then we define another test endpoint, available under the context path /2, with a 200ms delay. Our current test method is pretty similar to the previous one, except that we expect HTTP 504 as a result.

@Test
public void testAccountServiceFail() {
   LOGGER.info("Sending /2...");
   ResponseEntity<Account> r = template.exchange("/account/{id}", HttpMethod.GET, null, Account.class, 2);
   LOGGER.info("Received: status->{}, payload->{}", r.getStatusCodeValue(), r.getBody());
   Assert.assertEquals(504, r.getStatusCodeValue());
}

Let’s run our test. After several failed retry attempts, the delay between subsequent attempts reaches the maximum backoff time – 500ms. Since each attempt times out after 100ms, the visible interval between retry attempts is around 600ms. Moreover, the Retry filter by default handles IOException and TimeoutException, which is visible in the logs (the exceptions parameter).


Summary

The current article is the last in the series about traffic management in Spring Cloud Gateway. I have already described the following patterns: rate limiting, circuit breaker, fallback, failure retries, and timeout handling. That is only a part of the Spring Cloud Gateway feature set. I hope my articles help you build an API gateway for your microservices in an optimal way.

The post Timeouts and Retries In Spring Cloud Gateway appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2020/02/23/timeouts-and-retries-in-spring-cloud-gateway/feed/ 2 7772