MicroProfile Archives - Piotr's TechBlog
https://piotrminkowski.com/tag/microprofile/
Java, Spring, Kotlin, microservices, Kubernetes, containers

Microprofile Java Microservices on WildFly
https://piotrminkowski.com/2020/12/14/microprofile-java-microservices-on-wildfly/
Mon, 14 Dec 2020

In this guide, you will learn how to implement the most popular Java microservices patterns with the MicroProfile project. We’ll look at how to create a RESTful application using JAX-RS and CDI. Then, we will run our microservices on WildFly as bootable JARs. Finally, we will deploy them on OpenShift in order to use its service discovery and config maps.

The MicroProfile project breathes new life into Java EE. Since the rise of microservices, Java EE has lost its dominant position in the JVM enterprise area, and application servers and EJBs have been replaced by lightweight frameworks like Spring Boot. MicroProfile is an answer to that: it defines Java EE standards for building microservices. Therefore, it can be treated as a base for more advanced frameworks like Quarkus or KumuluzEE.

If you are interested in frameworks built on top of MicroProfile, Quarkus is a good example: see Quick Guide to Microservices with Quarkus on OpenShift. You can also plug a custom service discovery implementation into MicroProfile microservices, for example with Consul: Quarkus Microservices with Consul Discovery.

Source code

If you would like to try it out yourself, you may always take a look at my source code. To do that, clone my repository sample-microprofile-microservices. Then go to the employee-service and department-service directories and follow my instructions 🙂

1. Running on WildFly

A few weeks ago, WildFly introduced the “Fat JAR” packaging feature, which has been fully supported since WildFly 21. We can apply it during a Maven build by including wildfly-jar-maven-plugin in the pom.xml file. Importantly, we don’t have to redesign an application to run it inside a bootable JAR.

In order to use the “Fat JAR” packaging feature, we need to add the package execution goal. Then we should declare two layers inside the configuration section. The first of them, jaxrs-server, allows us to build a typical REST application. The second, microprofile-platform, enables MicroProfile on the WildFly server.

<profile>
   <id>bootable-jar</id>
   <activation>
      <activeByDefault>true</activeByDefault>
   </activation>
   <build>
      <finalName>${project.artifactId}</finalName>
      <plugins>
         <plugin>
            <groupId>org.wildfly.plugins</groupId>
            <artifactId>wildfly-jar-maven-plugin</artifactId>
            <version>2.0.2.Final</version>
            <executions>
               <execution>
                  <goals>
                     <goal>package</goal>
                  </goals>
               </execution>
            </executions>
            <configuration>
               <feature-pack-location>
                  wildfly@maven(org.jboss.universe:community-universe)#${version.wildfly}
               </feature-pack-location>
               <layers>
                  <layer>jaxrs-server</layer>
                  <layer>microprofile-platform</layer>
               </layers>
            </configuration>
         </plugin>
      </plugins>
   </build>
</profile>

Finally, we just need to execute the following command to build and run our “Fat JAR” application on WildFly.

$ mvn package wildfly-jar:run

If we run multiple applications on the same machine, we have to override the default HTTP and management ports. To do that, we add the jvmArguments section inside configuration, where we may put any number of JVM arguments. In this case, the required arguments are jboss.http.port and jboss.management.http.port.

<configuration>
   ...
   <jvmArguments>
      <jvmArgument>-Djboss.http.port=8090</jvmArgument>
      <jvmArgument>-Djboss.management.http.port=9090</jvmArgument>
   </jvmArguments>
</configuration>
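Assuming the defaults shown above, the same properties can also be passed on the command line when launching the generated bootable JAR directly (a sketch; the `-bootable.jar` suffix and the argument passing follow the wildfly-jar-maven-plugin conventions, so verify them against your build output):

```shell
# run a second instance with overridden ports (sketch)
java -jar target/employee-service-bootable.jar \
  -Djboss.http.port=8090 -Djboss.management.http.port=9090
```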

2. Creating JAX-RS applications

In the first step, we will create simple REST applications with JAX-RS. WildFly provides all the required libraries at runtime, but we need to include both artifacts below in the provided scope for the compilation phase.

<dependency>
   <groupId>org.jboss.spec.javax.ws.rs</groupId>
   <artifactId>jboss-jaxrs-api_2.1_spec</artifactId>
   <scope>provided</scope>
</dependency>
<dependency>
   <groupId>jakarta.enterprise</groupId>
   <artifactId>jakarta.enterprise.cdi-api</artifactId>
   <scope>provided</scope>
</dependency>

Then, we should set the dependencyManagement section. We will use the BOMs provided by WildFly for both Jakarta EE and MicroProfile.

<dependencyManagement>
   <dependencies>
      <dependency>
         <groupId>org.wildfly.bom</groupId>
         <artifactId>wildfly-jakartaee8-with-tools</artifactId>
         <version>${version.wildfly}</version>
         <type>pom</type>
         <scope>import</scope>
      </dependency>
      <dependency>
         <groupId>org.wildfly.bom</groupId>
         <artifactId>wildfly-microprofile</artifactId>
         <version>${version.wildfly}</version>
         <type>pom</type>
         <scope>import</scope>
      </dependency>
   </dependencies>
</dependencyManagement>

Here’s the JAX-RS controller inside employee-service. It uses an in-memory repository bean. It also injects a random delay into all exposed HTTP endpoints with the @Delay annotation. To clarify, I’m setting it up for future use, in order to present the metrics and fault tolerance features.

@Path("/employees")
@ApplicationScoped
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
@Delay
public class EmployeeController {

   @Inject
   EmployeeRepository repository;

   @POST
   public Employee add(Employee employee) {
      return repository.add(employee);
   }

   @GET
   @Path("/{id}")
   public Employee findById(@PathParam("id") Long id) {
      return repository.findById(id);
   }

   @GET
   public List<Employee> findAll() {
      return repository.findAll();
   }

   @GET
   @Path("/department/{departmentId}")
   public List<Employee> findByDepartment(@PathParam("departmentId") Long departmentId) {
      return repository.findByDepartment(departmentId);
   }

   @GET
   @Path("/organization/{organizationId}")
   public List<Employee> findByOrganization(@PathParam("organizationId") Long organizationId) {
      return repository.findByOrganization(organizationId);
   }

}

Here’s the definition of the delay interceptor class. It is annotated with the base @Interceptor annotation and the custom @Delay binding. It injects a random delay between 0 and 1000 milliseconds into each method invocation.

@Interceptor
@Delay
public class AddDelayInterceptor {

   Random r = new Random();

   @AroundInvoke
   public Object call(InvocationContext invocationContext) throws Exception {
      Thread.sleep(r.nextInt(1000));
      System.out.println("Intercept");
      return invocationContext.proceed();
   }

}

Finally, let’s take a look at the custom @Delay annotation.

@InterceptorBinding
@Target({METHOD, TYPE})
@Retention(RUNTIME)
public @interface Delay {
}
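One caveat: a CDI interceptor is not active until it is enabled. Unless AddDelayInterceptor declares a @Priority annotation, it has to be listed in beans.xml (a minimal sketch; the fully-qualified class name is an assumption based on the sample project layout):

```xml
<beans xmlns="http://xmlns.jcp.org/xml/ns/javaee"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee
                           http://xmlns.jcp.org/xml/ns/javaee/beans_2_0.xsd"
       bean-discovery-mode="all" version="2.0">
   <interceptors>
      <!-- fully-qualified interceptor class name (assumed package) -->
      <class>pl.piomin.services.employee.interceptor.AddDelayInterceptor</class>
   </interceptors>
</beans>
```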

3. Enable metrics for MicroProfile microservices

Metrics is one of the core MicroProfile modules. Data is exposed via REST over HTTP under the /metrics base path in two different formats for GET requests: JSON and OpenMetrics. The OpenMetrics text format is the one supported by Prometheus. In order to enable MicroProfile metrics, we need to include the following dependency in the Maven pom.xml.

<dependency>
   <groupId>org.eclipse.microprofile.metrics</groupId>
   <artifactId>microprofile-metrics-api</artifactId>
   <scope>provided</scope>
</dependency>

To enable the basic metrics we just need to annotate the controller class with @Timed.

@Path("/employees")
@ApplicationScoped
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
@Delay
@Timed
public class EmployeeController {
   ...
}

The /metrics endpoint is available under the management port. First, let’s send some test requests, for example to the GET /employees endpoint; the employee-service application is available at http://localhost:8080/. Then let’s call http://localhost:9990/metrics. Here’s the full list of metrics generated for the findAll method. Similar metrics are generated for all other HTTP endpoints.

4. Generate OpenAPI specification

The REST API specification is another essential thing for microservices, so it is no surprise that the OpenAPI module is part of the MicroProfile core. The API specification is generated automatically after including the microprofile-openapi-api module, which is part of the microprofile-platform layer defined for wildfly-jar-maven-plugin.

After starting the application we may access OpenAPI documentation by calling http://localhost:8080/openapi endpoint. Then, we can copy the result to the Swagger editor. The graphical representation of the employee-service API is visible below.

[Image: microprofile-java-microservices-openapi]

5. Microservices inter-communication with MicroProfile REST client

The department-service calls endpoint GET /employees/department/{departmentId} from the employee-service. Then it returns a department with a list of all assigned employees.

@Getter
@Setter
@NoArgsConstructor
@AllArgsConstructor
public class Department {
   private Long id;
   private String name;
   private Long organizationId;
   private List<Employee> employees = new ArrayList<>();
}

Of course, we need to include the REST client module in the Maven dependencies.

<dependency>
   <groupId>org.eclipse.microprofile.rest.client</groupId>
   <artifactId>microprofile-rest-client-api</artifactId>
   <scope>provided</scope>
</dependency>

The MicroProfile REST Client module allows us to define a client declaratively. We should annotate the client interface with @RegisterRestClient. The rest of the implementation is rather self-explanatory.

@Path("/employees")
@RegisterRestClient(baseUri = "http://employee-service:8080")
public interface EmployeeClient {

   @GET
   @Path("/department/{departmentId}")
   @Produces(MediaType.APPLICATION_JSON)
   List<Employee> findByDepartment(@PathParam("departmentId") Long departmentId);
}
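As a side note, the base URI does not have to be hard-coded in the annotation. MicroProfile REST Client can also read it from configuration under the key `<fully-qualified interface name>/mp-rest/url`; a sketch (the package name is an assumption):

```properties
# src/main/resources/META-INF/microprofile-config.properties
# the key prefix must be the fully-qualified name of the client interface
pl.piomin.services.department.client.EmployeeClient/mp-rest/url=http://employee-service:8080
```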

Finally, we just need to inject the EmployeeClient bean into the controller class.

@Path("/departments")
@ApplicationScoped
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
@Timed
public class DepartmentController {

   @Inject
   DepartmentRepository repository;
   @Inject
   EmployeeClient employeeClient;

   @POST
   public Department add(Department department) {
      return repository.add(department);
   }

   @GET
   @Path("/{id}")
   public Department findById(@PathParam("id") Long id) {
      return repository.findById(id);
   }

   @GET
   public List<Department> findAll() {
      return repository.findAll();
   }

   @GET
   @Path("/organization/{organizationId}")
   public List<Department> findByOrganization(@PathParam("organizationId") Long organizationId) {
      return repository.findByOrganization(organizationId);
   }

   @GET
   @Path("/organization/{organizationId}/with-employees")
   public List<Department> findByOrganizationWithEmployees(@PathParam("organizationId") Long organizationId) {
      List<Department> departments = repository.findByOrganization(organizationId);
      departments.forEach(d -> d.setEmployees(employeeClient.findByDepartment(d.getId())));
      return departments;
   }

}

The MicroProfile project does not define a service discovery pattern. Some frameworks built on top of MicroProfile provide such an implementation, for example KumuluzEE. If you do not deploy the applications on OpenShift, you may add the following entry to your /etc/hosts file to test them locally.

127.0.0.1 employee-service

Finally, let’s call the endpoint GET /departments/organization/{organizationId}/with-employees. The result is visible in the picture below.

6. Java microservices fault tolerance with MicroProfile

To be honest, fault tolerance handling is my favorite feature of MicroProfile. We may configure it on the controller methods using annotations: @Timeout, @Retry, @Fallback and @CircuitBreaker. It is also possible to mix those annotations on a single method. As you probably remember, we injected a random delay between 0 and 1000 milliseconds into all the endpoints exposed by employee-service. Now, let’s consider the method inside department-service that calls the endpoint GET /employees/department/{departmentId} from employee-service. First, we will annotate that method with @Timeout as shown below. The configured timeout is 500 ms.

public class DepartmentController {

   @Inject
   DepartmentRepository repository;
   @Inject
   EmployeeClient employeeClient;

   ...

   @GET
   @Path("/organization/{organizationId}/with-employees")
   @Timeout(500)
   public List<Department> findByOrganizationWithEmployees(@PathParam("organizationId") Long organizationId) {
      List<Department> departments = repository.findByOrganization(organizationId);
      departments.forEach(d -> d.setEmployees(employeeClient.findByDepartment(d.getId())));
      return departments;
   }

}

Before calling the method, let’s create an exception mapper. If a TimeoutException occurs, the department-service endpoint will return status HTTP 504 Gateway Timeout.

@Provider
public class TimeoutExceptionMapper implements 
      ExceptionMapper<TimeoutException> {

   public Response toResponse(TimeoutException e) {
      return Response.status(Response.Status.GATEWAY_TIMEOUT).build();
   }

}

Then, we may proceed to call our test endpoint. Roughly half of the requests will finish with the result visible below.

On the other hand, we may enable a retry mechanism for such an endpoint. After that, the chance of receiving status HTTP 200 OK becomes much higher than before.

@GET
@Path("/organization/{organizationId}/with-employees")
@Timeout(500)
@Retry(retryOn = TimeoutException.class)
public List<Department> findByOrganizationWithEmployees(@PathParam("organizationId") Long organizationId) {
   List<Department> departments = repository.findByOrganization(organizationId);
   departments.forEach(d -> d.setEmployees(employeeClient.findByDepartment(d.getId())));
   return departments;
}
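The combined semantics of @Timeout(500) and @Retry can be illustrated with a plain-Java sketch (a hand-rolled simulation for illustration only, not the MicroProfile Fault Tolerance implementation): each attempt gets a 500 ms budget, and a timed-out attempt is cancelled and retried. The delays are fixed here (800 ms, then 100 ms) to make the behavior deterministic, instead of the random delay used by employee-service.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.concurrent.atomic.AtomicInteger;

public class RetryWithTimeoutDemo {

    // daemon worker thread so the JVM can exit without an explicit shutdown
    static final ExecutorService POOL = Executors.newSingleThreadExecutor(r -> {
        Thread t = new Thread(r);
        t.setDaemon(true);
        return t;
    });
    static final AtomicInteger ATTEMPTS = new AtomicInteger();

    // Stand-in for the remote call: the first attempt is slow (800 ms),
    // later attempts are fast (100 ms) -- mimicking the random @Delay.
    static String slowCall() throws InterruptedException {
        int attempt = ATTEMPTS.incrementAndGet();
        Thread.sleep(attempt == 1 ? 800 : 100);
        return "employees";
    }

    // Each attempt gets a 500 ms budget (like @Timeout(500)); a timed-out
    // attempt is cancelled and retried (like @Retry), up to maxRetries times.
    static String callWithRetry(int maxRetries) throws Exception {
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            Future<String> future = POOL.submit(RetryWithTimeoutDemo::slowCall);
            try {
                return future.get(500, TimeUnit.MILLISECONDS);
            } catch (TimeoutException e) {
                future.cancel(true); // interrupt the sleeping call
            }
        }
        throw new TimeoutException("all attempts timed out");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(callWithRetry(3) + " after " + ATTEMPTS.get() + " attempts");
    }
}
```

Running it prints `employees after 2 attempts`: the slow first attempt is cut off at 500 ms and the fast retry succeeds, which is exactly the effect the two annotations achieve for the department-service endpoint.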

7. Deploy MicroProfile microservices on OpenShift

We can easily deploy MicroProfile Java microservices on OpenShift using the JKube plugin. It is the successor of the deprecated Fabric8 Maven Plugin. Eclipse JKube is a collection of plugins and libraries for building container images using the Docker, JIB or S2I build strategies. It also generates and deploys Kubernetes and OpenShift manifests at compile time. So, let’s add openshift-maven-plugin to the pom.xml file.

The configuration visible below sets 2 replicas for the deployment and enforces the use of health checks. In addition, openshift-maven-plugin generates the rest of the deployment config based on the Maven pom.xml structure. For example, it generates employee-service-deploymentconfig.yml, employee-service-route.yml, and employee-service-service.yml for the employee-service application.

<plugin>
   <groupId>org.eclipse.jkube</groupId>
   <artifactId>openshift-maven-plugin</artifactId>
   <version>1.0.2</version>
   <executions>
      <execution>
         <id>jkube</id>
         <goals>
            <goal>resource</goal>
            <goal>build</goal>
         </goals>
      </execution>
   </executions>
   <configuration>
      <resources>
         <replicas>2</replicas>
      </resources>
      <enricher>
         <config>
            <jkube-healthcheck-wildfly-jar>
               <enforceProbes>true</enforceProbes>
            </jkube-healthcheck-wildfly-jar>
         </config>
      </enricher>
   </configuration>
</plugin>

In order to deploy the application on OpenShift we need to run the following command.

$ mvn oc:deploy -P bootable-jar-openshift

Since the enforceProbes property is enabled, openshift-maven-plugin adds liveness and readiness probes to the DeploymentConfig. Therefore, we need to implement both endpoints in our MicroProfile applications. MicroProfile provides a simple mechanism for creating liveness and readiness health checks: we just need to annotate a class with @Liveness or @Readiness and implement the HealthCheck interface. Here’s an example implementation of the liveness endpoint.

@Liveness
@ApplicationScoped
public class LivenessEndpoint implements HealthCheck {
   @Override
   public HealthCheckResponse call() {
      return HealthCheckResponse.up("Server up");
   }
}

On the other hand, the implementation of the readiness probe also verifies the status of the repository bean. Of course, it is just a simple example.

@Readiness
@ApplicationScoped
public class ReadinessEndpoint implements HealthCheck {
   @Inject
   DepartmentRepository repository;

   @Override
   public HealthCheckResponse call() {
      HealthCheckResponseBuilder responseBuilder = HealthCheckResponse
         .named("Repository up");
      List<Department> departments = repository.findAll();
      if (departments != null && !departments.isEmpty())
         responseBuilder.up();
      else
         responseBuilder.down();
      return responseBuilder.build();
   }
}
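With both checks registered, the aggregated /health endpoint (served on the management interface) returns the MicroProfile Health JSON format; for the two checks above, the response should look roughly like this:

```json
{
  "status": "UP",
  "checks": [
    { "name": "Server up", "status": "UP" },
    { "name": "Repository up", "status": "UP" }
  ]
}
```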

After deploying both the employee-service and department-service applications, we may verify the list of DeploymentConfigs.

We can also navigate to the OpenShift console. Let’s take a look at a list of running pods. There are two instances of the employee-service and a single instance of department-service.

[Image: microprofile-java-microservices-openshift-pods]

8. MicroProfile OpenTracing with Jaeger

Tracing is another important pattern in a microservices architecture. The OpenTracing module is part of the MicroProfile specification. Besides the microprofile-opentracing-api library, we also need to include the opentracing-api module.

<dependency>
   <groupId>org.eclipse.microprofile.opentracing</groupId>
   <artifactId>microprofile-opentracing-api</artifactId>
   <scope>provided</scope>
</dependency>
<dependency>
   <groupId>io.opentracing</groupId>
   <artifactId>opentracing-api</artifactId>
   <version>0.31.0</version>
</dependency>

By default, MicroProfile OpenTracing integrates the application with Jaeger. If you are testing the sample microservices on OpenShift, you may install Jaeger using an operator. Otherwise, we may just start it in a Docker container. The Jaeger UI is available at http://localhost:16686.

$ docker run -d --name jaeger \
-p 6831:6831/udp \
-p 16686:16686 \
jaegertracing/all-in-one:1.16.0
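The Jaeger client reads its configuration from standard environment variables. A sketch for employee-service (the sampler settings make every request traced; JAEGER_SERVICE_NAME is discussed further below):

```shell
# standard Jaeger client environment variables
export JAEGER_SERVICE_NAME=employee-service   # service name shown in the Jaeger UI
export JAEGER_SAMPLER_TYPE=const              # constant sampling decision...
export JAEGER_SAMPLER_PARAM=1                 # ...always sample
```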

We don’t have to do anything more than add the required dependencies to enable tracing. However, it is worth overriding the names of the recorded operations. We may do that by annotating a particular method with @Traced and setting its operationName parameter. The implementation of the findByOrganizationWithEmployees method in department-service is visible below.

public class DepartmentController {

   @Inject
   DepartmentRepository repository;
   @Inject
   EmployeeClient employeeClient;

   ...

   @GET
   @Path("/organization/{organizationId}/with-employees")
   @Timeout(500)
   @Retry(retryOn = TimeoutException.class)
   @Traced(operationName = "findByOrganizationWithEmployees")
   public List<Department> findByOrganizationWithEmployees(@PathParam("organizationId") Long organizationId) {
      List<Department> departments = repository.findByOrganization(organizationId);
      departments.forEach(d -> d.setEmployees(employeeClient.findByDepartment(d.getId())));
      return departments;
   }
   
}

We can also take a look at a fragment of the EmployeeController implementation.

public class EmployeeController {

   @Inject
   EmployeeRepository repository;

   ...
   
   @GET
   @Traced(operationName = "findAll")
   public List<Employee> findAll() {
      return repository.findAll();
   }

   @GET
   @Path("/department/{departmentId}")
   @Traced(operationName = "findByDepartment")
   public List<Employee> findByDepartment(@PathParam("departmentId") Long departmentId) {
      return repository.findByDepartment(departmentId);
   }
   
}

Before running the applications, we should at least set the JAEGER_SERVICE_NAME environment variable, which configures the service name visible in Jaeger. For example, before starting the employee-service application we should set JAEGER_SERVICE_NAME=employee-service. Finally, let’s send some test requests to the department-service endpoint GET /departments/organization/{organizationId}/with-employees.

$ curl http://localhost:8090/departments/organization/1/with-employees
$ curl http://localhost:8090/departments/organization/2/with-employees

After sending some test requests we may go to the Jaeger UI. The picture visible below shows the history of requests processed by the method findByOrganizationWithEmployees inside department-service.

As you probably remember, this method calls a method from the employee-service, and configures timeout and retries in case of failure. The picture below shows the details about a single request processed by the method findByOrganizationWithEmployees. To clarify, it has been retried once.

[Image: microprofile-java-microservices-jeager-details]

Conclusion

This article guides you through the most important steps of building Java microservices with MicroProfile. You have learned how to implement tracing, health checks, OpenAPI, and inter-service communication with a REST client. After reading it, you are able to run your MicroProfile Java microservices locally on WildFly and, moreover, deploy them on OpenShift using a single Maven command. Enjoy 🙂

Quick Guide to Microservices with Quarkus on Openshift
https://piotrminkowski.com/2020/08/18/quick-guide-to-microservices-with-quarkus-on-openshift/
Tue, 18 Aug 2020

In this article, I will show you how to use the Quarkus OpenShift module. Quarkus is a framework for building Java applications in the era of microservices and serverless architectures. If you compare it with other frameworks like Spring Boot / Spring Cloud or Micronaut, the first difference is native support for running on Kubernetes or OpenShift platforms. It is built on top of well-known Java standards like CDI, JAX-RS, and Eclipse MicroProfile, which also distinguishes it from Spring Boot or Micronaut.
Some other features that may convince you to use Quarkus are an extremely fast boot time, a minimal memory footprint optimized for running in containers, and a lower time-to-first-request. Also, even though it is a relatively new framework, it has a lot of extensions, including support for Hibernate, Kafka, RabbitMQ, OpenAPI, Vert.x, and many more.
I’m going to guide you through building microservices with Quarkus and running them on OpenShift 4. We will cover the following topics:

  • Building REST-based application with input validation
  • Communication between microservices with RestClient
  • Exposing health checks (liveness, readiness)
  • Exposing OpenAPI/Swagger documentation
  • Running applications on the local machine with Quarkus Maven plugin
  • Testing with JUnit and RestAssured
  • Deploying and running Quarkus applications on OpenShift using source-2-image

Source code

The source code of application is available on GitHub: https://github.com/piomin/sample-quarkus-microservices.git.

If you are interested in more materials related to the Quarkus framework, you can read my previous articles about it. In Guide to Quarkus with Kotlin, I show how to build a simple REST-based application written in Kotlin. In Guide to Quarkus on Kubernetes, I show how to deploy it using Quarkus’ built-in support for Kubernetes.

1. Dependencies required for Quarkus OpenShift

When creating a new application, you may execute a single Maven command that uses quarkus-maven-plugin. The list of extensions should be declared in the -Dextensions parameter.


mvn io.quarkus:quarkus-maven-plugin:1.7.0.Final:create \
    -DprojectGroupId=pl.piomin.services \
    -DprojectArtifactId=employee-service \
    -DclassName="pl.piomin.services.employee.controller.EmployeeController" \
    -Dpath="/employees" \
    -Dextensions="resteasy-jackson, hibernate-validator"

Here’s the structure of our pom.xml:

<properties>
   <quarkus.version>1.7.0.Final</quarkus.version>
   <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
   <maven.compiler.source>11</maven.compiler.source>
   <maven.compiler.target>11</maven.compiler.target>
</properties>
<dependencyManagement>
   <dependencies>
      <dependency>
         <groupId>io.quarkus</groupId>
         <artifactId>quarkus-bom</artifactId>
         <version>${quarkus.version}</version>
         <type>pom</type>
         <scope>import</scope>
      </dependency>
   </dependencies>
</dependencyManagement>
<build>
   <plugins>
      <plugin>
         <groupId>io.quarkus</groupId>
         <artifactId>quarkus-maven-plugin</artifactId>
         <version>${quarkus.version}</version>
         <executions>
            <execution>
               <goals>
                  <goal>build</goal>
               </goals>
            </execution>
         </executions>
      </plugin>
   </plugins>
</build>

For building a simple REST-based application with input validation, we don’t need to include many modules. As you have probably noticed, I declared just two extensions, which is equivalent to the following list of dependencies in the Maven pom.xml:

<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-resteasy-jackson</artifactId>
</dependency>
<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-hibernate-validator</artifactId>
</dependency>

2. Source code

What might be a bit surprising for Spring Boot or Micronaut users: there is no main runnable class with a static main method. A resource/controller class is de facto the main class. Quarkus resource/controller classes and methods should be marked using annotations from the javax.ws.rs library. Here’s the implementation of the REST controller inside employee-service:

@Path("/employees")
@Produces(MediaType.APPLICATION_JSON)
public class EmployeeController {

   private static final Logger LOGGER = LoggerFactory.getLogger(EmployeeController.class);
	
   @Inject
   EmployeeRepository repository;
	
   @POST
   public Employee add(@Valid Employee employee) {
      LOGGER.info("Employee add: {}", employee);
      return repository.add(employee);
   }
	
   @Path("/{id}")
   @GET
   public Employee findById(@PathParam("id") Long id) {
      LOGGER.info("Employee find: id={}", id);
      return repository.findById(id);
   }

   @GET
   public Set<Employee> findAll() {
      LOGGER.info("Employee find");
      return repository.findAll();
   }
	
   @Path("/department/{departmentId}")
   @GET
   public Set<Employee> findByDepartment(@PathParam("departmentId") Long departmentId) {
      LOGGER.info("Employee find: departmentId={}", departmentId);
      return repository.findByDepartment(departmentId);
   }
	
   @Path("/organization/{organizationId}")
   @GET
   public Set<Employee> findByOrganization(@PathParam("organizationId") Long organizationId) {
      LOGGER.info("Employee find: organizationId={}", organizationId);
      return repository.findByOrganization(organizationId);
   }
	
}

We use CDI for dependency injection and SLF4J for logging. The controller class uses an in-memory repository bean for storing and retrieving data. The repository bean is annotated with CDI’s @ApplicationScoped and injected into the controller:

@ApplicationScoped
public class EmployeeRepository {

   private Set<Employee> employees = new HashSet<>();

   public EmployeeRepository() {
      add(new Employee(1L, 1L, "John Smith", 30, "Developer"));
      add(new Employee(1L, 1L, "Paul Walker", 40, "Architect"));
   }

   public Employee add(Employee employee) {
      employee.setId((long) (employees.size()+1));
      employees.add(employee);
      return employee;
   }
   
   public Employee findById(Long id) {
      Optional<Employee> employee = employees.stream()
            .filter(a -> a.getId().equals(id))
            .findFirst();
      if (employee.isPresent())
         return employee.get();
      else
         return null;
   }

   public Set<Employee> findAll() {
      return employees;
   }
   
   public Set<Employee> findByDepartment(Long departmentId) {
      return employees.stream()
            .filter(a -> a.getDepartmentId().equals(departmentId))
            .collect(Collectors.toSet());
   }
   
   public Set<Employee> findByOrganization(Long organizationId) {
      return employees.stream()
            .filter(a -> a.getOrganizationId().equals(organizationId))
            .collect(Collectors.toSet());
   }

}

And the last component is the domain class with validation:

public class Employee {

   private Long id;
   @NotNull
   private Long organizationId;
   @NotNull
   private Long departmentId;
   @NotBlank
   private String name;
   @Min(1)
   @Max(100)
   private int age;
   @NotBlank
   private String position;
   
   // ... GETTERS AND SETTERS
   
}

3. Unit Testing

Unit testing with Quarkus is very simple. If you are testing a REST-based web application, you should include the following dependencies in your pom.xml:


<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-junit5</artifactId>
   <scope>test</scope>
</dependency>
<dependency>
   <groupId>io.rest-assured</groupId>
   <artifactId>rest-assured</artifactId>
   <scope>test</scope>
</dependency>

Let’s analyze the test class from organization-service (another of our microservices, along with employee-service and department-service). A test class should be annotated with @QuarkusTest. We may inject other beans via the @Inject annotation. The rest is typical for JUnit and RestAssured: we are testing the API methods exposed by the controller. Because we are using an in-memory repository, we don’t have to mock anything except inter-service communication (we discuss it later in this article). We have some positive scenarios for the GET and POST methods and a single negative scenario that does not pass input validation (testInvalidAdd).

@QuarkusTest
public class OrganizationControllerTests {

   @Inject
   OrganizationRepository repository;

   @Test
   public void testFindAll() {
      given().when().get("/organizations")
             .then()
             .statusCode(200)
             .body(notNullValue());
   }

   @Test
   public void testFindById() {
      Organization organization = new Organization("Test3", "Address3");
      organization = repository.add(organization);
      given().when().get("/organizations/{id}", organization.getId()).then().statusCode(200)
             .body("id", equalTo(organization.getId().intValue()))
             .body("name", equalTo(organization.getName()));
   }

   @Test
   public void testFindByIdWithDepartments() {
      given().when().get("/organizations/{id}/with-departments", 1L).then().statusCode(200)
             .body(notNullValue())
             .body("departments.size()", is(1));
   }

   @Test
   public void testAdd() {
      Organization organization = new Organization("Test5", "Address5");
      given().contentType("application/json").body(organization)
             .when().post("/organizations").then().statusCode(200)
             .body("id", notNullValue())
             .body("name", equalTo(organization.getName()));
   }

   @Test
   public void testInvalidAdd() {
      Organization organization = new Organization();
      given().contentType("application/json").body(organization)
             .when()
             .post("/organizations")
             .then()
             .statusCode(400);
   }

}

4. Inter-service communication

Since Quarkus is designed to run on Kubernetes, it does not provide any built-in support for third-party service discovery (for example, through Consul or Netflix Eureka), nor an HTTP client integrated with such discovery. However, it provides dedicated client support for REST communication. To use it, we first need to include the following dependency:


<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-rest-client</artifactId>
</dependency>

Quarkus provides a declarative REST client based on MicroProfile REST Client. You need to create an interface with the required methods and annotate it with @RegisterRestClient. The other annotations are much the same as on the server side. The @RegisterRestClient annotation lets Quarkus know that this interface is meant to be available for CDI injection as a REST client.

@Singleton
@Path("/departments")
@RegisterRestClient
public interface DepartmentClient {

   @GET
   @Path("/organization/{organizationId}")
   @Produces(MediaType.APPLICATION_JSON)
   List<Department> findByOrganization(@PathParam("organizationId") Long organizationId);

   @GET
   @Path("/organization/{organizationId}/with-employees")
   @Produces(MediaType.APPLICATION_JSON)
   List<Department> findByOrganizationWithEmployees(@PathParam("organizationId") Long organizationId);
   
}

Now, let’s take a look at the controller class inside organization-service. Together with @Inject, we need to use the @RestClient annotation to inject the REST client bean properly. After that, you can use the interface methods to call endpoints exposed by other services.

@Path("/organizations")
@Produces(MediaType.APPLICATION_JSON)
public class OrganizationController {

   private static final Logger LOGGER = LoggerFactory.getLogger(OrganizationController.class);
   
   @Inject
   OrganizationRepository repository;
   @Inject
   @RestClient
   DepartmentClient departmentClient;
   @Inject
   @RestClient
   EmployeeClient employeeClient;
   
   // ... OTHER FIND METHODS

   @Path("/{id}/with-departments")
   @GET
   public Organization findByIdWithDepartments(@PathParam("id") Long id) {
      LOGGER.info("Organization find: id={}", id);
      Organization organization = repository.findById(id);
      organization.setDepartments(departmentClient.findByOrganization(organization.getId()));
      return organization;
   }
   
   @Path("/{id}/with-departments-and-employees")
   @GET
   public Organization findByIdWithDepartmentsAndEmployees(@PathParam("id") Long id) {
      LOGGER.info("Organization find: id={}", id);
      Organization organization = repository.findById(id);
      organization.setDepartments(departmentClient.findByOrganizationWithEmployees(organization.getId()));
      return organization;
   }
   
   @Path("/{id}/with-employees")
   @GET
   public Organization findByIdWithEmployees(@PathParam("id") Long id) {
      LOGGER.info("Organization find: id={}", id);
      Organization organization = repository.findById(id);
      organization.setEmployees(employeeClient.findByOrganization(organization.getId()));
      return organization;
   }
   
}

The last missing thing required for communication is the addresses of the target services. We may provide them using the baseUri field of the @RegisterRestClient annotation. However, a better solution is to place them inside application.properties. The property name needs to contain the fully qualified name of the client interface and the suffix mp-rest/url. The address used for communication in development mode is different than in production mode, when the application is deployed on Kubernetes or OpenShift. That’s why I’m using the %dev prefix in the name of the property that sets the target URL.


%dev.pl.piomin.services.organization.client.DepartmentClient/mp-rest/url=http://localhost:8090
%dev.pl.piomin.services.organization.client.EmployeeClient/mp-rest/url=http://localhost:8080

I have already mentioned unit testing and inter-service communication in the previous sections. To test an API method that communicates with other applications, we need to mock the REST client. Here’s a sample mock created for DepartmentClient. It should be visible only during tests, so we place it inside src/test/java. If we annotate it with @Mock and @RestClient, Quarkus automatically uses this bean instead of the declarative REST client defined inside src/main/java.

@Mock
@ApplicationScoped
@RestClient
public class MockDepartmentClient implements DepartmentClient {

    @Override
    public List<Department> findByOrganization(Long organizationId) {
        return Collections.singletonList(new Department("Test1"));
    }

    @Override
    public List<Department> findByOrganizationWithEmployees(Long organizationId) {
        return null;
    }

}

5. Monitoring and Documentation

We can easily expose health checks and API documentation with Quarkus. The API documentation is built using OpenAPI/Swagger. Quarkus leverages libraries available within the SmallRye project. We should include the following dependencies in our pom.xml:


<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-smallrye-openapi</artifactId>
</dependency>
<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-smallrye-health</artifactId>
</dependency>

We can define two types of health checks: readiness and liveness. They are available under the /health/ready and /health/live context paths. To expose them outside the application, we need to define a bean that implements the MicroProfile HealthCheck interface. A readiness check should be annotated with @Readiness, a liveness check with @Liveness.

@ApplicationScoped
@Readiness
public class ReadinessHealthcheck implements HealthCheck {

    @Override
    public HealthCheckResponse call() {
        return HealthCheckResponse.named("Employee Health Check").up().build();
    }

}
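Once the application is running, calling GET /health/ready should return a JSON document in the MicroProfile Health response format. A sketch of the expected payload for the check defined above (the exact field set may vary slightly between spec versions):

```json
{
  "status": "UP",
  "checks": [
    {
      "name": "Employee Health Check",
      "status": "UP"
    }
  ]
}
```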

To enable Swagger documentation we don’t need to do anything more than add a dependency. Quarkus also provides a built-in UI for Swagger. By default, it is enabled only in development mode, so if you want to use it in production you should add the line quarkus.swagger-ui.always-include=true to your application.properties file. Now, if you run employee-service locally in development mode by executing the Maven command mvn compile quarkus:dev, you may view the API specification under the URL http://localhost:8080/swagger-ui.

[screenshot: Swagger UI with the employee-service API specification]

Here’s my log from the application startup. It prints the listening port and the list of loaded extensions.

[screenshot: Quarkus startup log]

6. Running Quarkus Microservices on the Local Machine

Because we would like to run more than one application on the same machine, we need to override the default HTTP listening port. While employee-service still runs on the default port 8080, the other microservices use different ports, as shown below.

department-service:
[screenshot: department-service startup log with an overridden HTTP port]

organization-service:
[screenshot: organization-service startup log with an overridden HTTP port]
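The port override itself is a single property. For example, department-service (which the REST-client configuration in this article expects on port 8090 in development mode) might use the following entry in its application.properties – a sketch, with the %dev prefix limiting the override to development mode:

```properties
# department-service: override the default HTTP port in development mode
%dev.quarkus.http.port=8090
```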

Let’s test inter-service communication from the Swagger UI. I called the endpoint GET /organizations/{id}/with-departments, which calls the endpoint GET /departments/organization/{organizationId} exposed by department-service. The result is visible below.

[screenshot: result of the inter-service call in Swagger UI]

7. Running Quarkus on OpenShift

We have already finished the implementation of our sample microservices architecture and run it on the local machine. Now we can proceed to the last step and deploy these applications on OpenShift or Minishift. There are several approaches to deploying a Quarkus application on OpenShift. Today I’ll show you how to leverage the S2I build mechanism.
We are going to use the Quarkus GraalVM Native S2I builder. It is available on quay.io as quarkus/ubi-quarkus-native-s2i. I’m using an OpenShift 4 cluster running on Azure, but you can also try it on Minishift, the local version of OpenShift 3. Before deploying our applications we need to start Minishift. According to the Quarkus documentation, a GraalVM-based native build consumes a lot of memory and CPU, so you should assign 6 GB of memory and 4 cores to Minishift.


$ minishift start --vm-driver=virtualbox --memory=6G --cpus=4

Also, we need to modify the source code of our applications a little. As you probably remember, we used JDK 11 to run them locally. We also need to include a declaration of the native profile, as shown below:

<properties>
   <quarkus.version>1.7.0.Final</quarkus.version>
   <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
   <maven.compiler.source>1.8</maven.compiler.source>
   <maven.compiler.target>1.8</maven.compiler.target>
</properties>
...
<profiles>
   <profile>
      <id>native</id>
      <activation>
         <property>
            <name>native</name>
         </property>
      </activation>
      <build>
         <plugins>
            <plugin>
               <groupId>io.quarkus</groupId>
               <artifactId>quarkus-maven-plugin</artifactId>
               <version>${quarkus.version}</version>
               <executions>
                  <execution>
                     <goals>
                        <goal>native-image</goal>
                     </goals>
                     <configuration>
                        <enableHttpUrlHandler>true</enableHttpUrlHandler>
                     </configuration>
                  </execution>
               </executions>
            </plugin>
            <plugin>
               <artifactId>maven-failsafe-plugin</artifactId>
               <version>2.22.1</version>
               <executions>
                  <execution>
                     <goals>
                        <goal>integration-test</goal>
                        <goal>verify</goal>
                     </goals>
                     <configuration>
                        <systemProperties>
                           <native.image.path>${project.build.directory}/${project.build.finalName}-runner</native.image.path>
                        </systemProperties>
                     </configuration>
                  </execution>
               </executions>
            </plugin>
         </plugins>
      </build>
   </profile>
</profiles>

Two other changes should be performed inside the application.properties file. We don’t have to override the port number, since OpenShift dynamically assigns a virtual IP to every pod. Inter-service communication is realized via OpenShift service discovery, so we just need to set the service name instead of localhost. The addresses can be set in the default profile, since the properties with the %dev prefix are used only in development mode.

quarkus.swagger-ui.always-include=true
pl.piomin.services.organization.client.DepartmentClient/mp-rest/url=http://department:8080
pl.piomin.services.organization.client.EmployeeClient/mp-rest/url=http://employee:8080

Finally, we may deploy our applications on OpenShift. To do that, execute the following commands using your oc client:

$ oc new-app quay.io/quarkus/ubi-quarkus-native-s2i:20.1.0-java11~https://github.com/piomin/sample-quarkus-microservices.git --context-dir=employee-service --name=employee
$ oc new-app quay.io/quarkus/ubi-quarkus-native-s2i:20.1.0-java11~https://github.com/piomin/sample-quarkus-microservices.git --context-dir=department-service --name=department
$ oc new-app quay.io/quarkus/ubi-quarkus-native-s2i:20.1.0-java11~https://github.com/piomin/sample-quarkus-microservices.git --context-dir=organization-service --name=organization

As you can see, the repository with the applications’ source code is available on my GitHub account at https://github.com/piomin/sample-quarkus-microservices.git. Because all the applications are stored within a single repository, we need to define the context-dir parameter for every single deployment.
I was quite disappointed with the build performance. Since we are using GraalVM for compilation, the build’s memory consumption is pretty high, and the whole build process takes around 10 minutes.

[screenshot: OpenShift S2I build log]

Here’s the list of performed builds.

[screenshot: the list of performed builds]

Although the build process consumes a lot of memory, the runtime memory usage of Quarkus applications compiled with GraalVM is just amazing.

[screenshot: memory usage of the application pods]

To execute some test calls we need to expose applications outside the OpenShift cluster.

$ oc expose svc employee
$ oc expose svc department
$ oc expose svc organization

In my OpenShift cluster, they are available under addresses such as http://department-quarkus.apps.np9zir0r.westeurope.aroapp.io. You can open the Swagger UI by calling the /swagger-ui context path on every application.

[screenshot: the list of OpenShift routes]

The post Quick Guide to Microservices with Quarkus on Openshift appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2020/08/18/quick-guide-to-microservices-with-quarkus-on-openshift/feed/ 2 7219
Guide to Quarkus with Kotlin https://piotrminkowski.com/2020/08/09/guide-to-quarkus-with-kotlin/ https://piotrminkowski.com/2020/08/09/guide-to-quarkus-with-kotlin/#comments Sun, 09 Aug 2020 08:28:56 +0000 http://piotrminkowski.com/?p=8353 Quarkus is a lightweight Java framework developed by RedHat. It is dedicated for cloud-native applications that require a small memory footprint and a fast startup time. Its programming model is built on top of proven standards like Eclipse MicroProfile. Recently it is growing in popularity. It may be considered as an alternative to Spring Boot […]

The post Guide to Quarkus with Kotlin appeared first on Piotr's TechBlog.

]]>
Quarkus is a lightweight Java framework developed by Red Hat. It is dedicated to cloud-native applications that require a small memory footprint and a fast startup time. Its programming model is built on top of proven standards like Eclipse MicroProfile. Recently it has been growing in popularity. It may be considered an alternative to the Spring Boot framework, especially if you are running your applications on Kubernetes or OpenShift.
In this guide, you will learn how to implement a simple Quarkus Kotlin application, that exposes REST endpoints and connects to a database. We will discuss the following topics:

  • Implementation of REST endpoints
  • Integration with an H2 database using Hibernate and the Panache project
  • Generating and exposing OpenAPI/Swagger documentation
  • Exposing health checks
  • Exposing basic metrics
  • Logging request and response
  • Testing REST endpoints with RestAssured library

Source code

The source code with the sample Quarkus Kotlin applications is available on GitHub. First, you need to clone the following repository: https://github.com/piomin/sample-quarkus-applications.git. Then, you need to go to the employee-service directory.

1. Enable Quarkus Kotlin support

To enable Kotlin support in Quarkus, we need to include the quarkus-kotlin module. We also have to add the kotlin-stdlib library.

<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-kotlin</artifactId>
</dependency>
<dependency>
   <groupId>org.jetbrains.kotlin</groupId>
   <artifactId>kotlin-stdlib</artifactId>
</dependency>

In the next step we need to include kotlin-maven-plugin. Besides the standard configuration, we have to use the all-open Kotlin compiler plugin. The all-open plugin makes classes annotated with specific annotations, and their members, open without the explicit open keyword. Since classes annotated with @Path, @ApplicationScoped, or @QuarkusTest must not be final, we need to add all those annotations to the pluginOptions section.

<build>
   <sourceDirectory>src/main/kotlin</sourceDirectory>
   <testSourceDirectory>src/test/kotlin</testSourceDirectory>
   <plugins>
      <plugin>
         <groupId>io.quarkus</groupId>
         <artifactId>quarkus-maven-plugin</artifactId>
         <version>${quarkus-plugin.version}</version>
         <executions>
            <execution>
               <goals>
                  <goal>build</goal>
               </goals>
            </execution>
         </executions>
      </plugin>
      <plugin>
         <groupId>org.jetbrains.kotlin</groupId>
         <artifactId>kotlin-maven-plugin</artifactId>
         <version>${kotlin.version}</version>
         <executions>
            <execution>
               <id>compile</id>
               <goals>
                  <goal>compile</goal>
               </goals>
            </execution>
            <execution>
               <id>test-compile</id>
               <goals>
                  <goal>test-compile</goal>
               </goals>
            </execution>
         </executions>
         <dependencies>
            <dependency>
               <groupId>org.jetbrains.kotlin</groupId>
               <artifactId>kotlin-maven-allopen</artifactId>
               <version>${kotlin.version}</version>
            </dependency>
         </dependencies>
         <configuration>
            <javaParameters>true</javaParameters>
            <jvmTarget>11</jvmTarget>
            <compilerPlugins>
               <plugin>all-open</plugin>
            </compilerPlugins>
            <pluginOptions>
               <option>all-open:annotation=javax.ws.rs.Path</option>
               <option>all-open:annotation=javax.enterprise.context.ApplicationScoped</option>
               <option>all-open:annotation=io.quarkus.test.junit.QuarkusTest</option>
            </pluginOptions>
         </configuration>
      </plugin>
   </plugins>
</build>

2. Implement REST endpoint

In Quarkus, support for REST is built on top of the RESTEasy and JAX-RS libraries. You can choose between two available extensions for JSON serialization/deserialization: JSON-B and Jackson. Since I decided to use Jackson, I need to include the quarkus-resteasy-jackson dependency. It also includes the quarkus-resteasy module.

<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-resteasy-jackson</artifactId>
</dependency>

We mostly use JAX-RS annotations for mapping controller methods and fields to HTTP endpoints. We may also use RESTEasy annotations like @PathParam, which, unlike its JAX-RS counterpart, does not require specifying the parameter name. In order to interact with the database, we inject a repository bean.

@Path("/employees")
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
class EmployeeResource(val repository: EmployeeRepository) {

    @POST
    @Transactional
    fun add(employee: Employee): Response {
        repository.persist(employee)
        return Response.ok(employee).status(201).build()
    }

    @DELETE
    @Path("/{id}")
    @Transactional
    fun delete(@PathParam id: Long) {
        repository.deleteById(id)
    }

    @GET
    fun findAll(): List<Employee> = repository.listAll()

    @GET
    @Path("/{id}")
    fun findById(@PathParam id: Long): Employee? = repository.findById(id)

    @GET
    @Path("/first-name/{firstName}/last-name/{lastName}")
    fun findByFirstNameAndLastName(@PathParam firstName: String, @PathParam lastName: String): List<Employee>
            = repository.findByFirstNameAndLastName(firstName, lastName)

    @GET
    @Path("/salary/{salary}")
    fun findBySalary(@PathParam salary: Int): List<Employee> = repository.findBySalary(salary)

    @GET
    @Path("/salary-greater-than/{salary}")
    fun findBySalaryGreaterThan(@PathParam salary: Int): List<Employee>
            = repository.findBySalaryGreaterThan(salary)

}

3. Integration with database

Quarkus provides the Panache extension to simplify working with Hibernate ORM. It also provides driver extensions for the most popular SQL databases like PostgreSQL, MySQL, or H2. To enable both of these features for the H2 in-memory database, we need to include the following dependencies.

<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-hibernate-orm-panache-kotlin</artifactId>
</dependency>
<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-jdbc-h2</artifactId>
</dependency>

We should also configure the connection settings inside the application.properties file.


quarkus.datasource.db-kind=h2
quarkus.datasource.username=sa
quarkus.datasource.password=password
quarkus.datasource.jdbc.url=jdbc:h2:mem:testdb

The Panache extension allows us to use the well-known repository pattern. To use it, we should first define an entity that extends the PanacheEntity class.

@Entity
data class Employee(var firstName: String = "",
                    var lastName: String = "",
                    var position: String = "",
                    var salary: Int = 0,
                    var organizationId: Int? = null,
                    var departmentId: Int? = null): PanacheEntity()

In the next step, we define a repository bean that implements the PanacheRepository interface. It comes with some basic methods like persist, deleteById, or listAll. We may also use those basic methods to implement more advanced queries and operations.

@ApplicationScoped
class EmployeeRepository: PanacheRepository<Employee> {
    fun findByFirstNameAndLastName(firstName: String, lastName: String): List<Employee> =
           list("firstName = ?1 and lastName = ?2", firstName, lastName)

    fun findBySalary(salary: Int): List<Employee> = list("salary", salary)

    fun findBySalaryGreaterThan(salary: Int): List<Employee> = list("salary > ?1", salary)
}

4. Enable OpenAPI documentation for Quarkus Kotlin

It is possible to generate an OpenAPI v3 specification automatically. To do that, we need to include the SmallRye OpenAPI extension. The specification is available under the /openapi path.

<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-smallrye-openapi</artifactId>
</dependency>

We may add some additional information to the generated OpenAPI specification, like a description or a version number. To do that, we need to create an application class that extends javax.ws.rs.core.Application and annotate it with @OpenAPIDefinition, as shown below.

@OpenAPIDefinition(info = Info(title = "Employee API", version = "1.0"))
class EmployeeApplication: Application()

Usually, we want to expose the OpenAPI specification using Swagger UI. This feature may be enabled with the configuration property quarkus.swagger-ui.always-include=true.

[screenshot: Swagger UI]

5. Health checks

We may expose the built-in health check implementation by including the SmallRye Health extension.

<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-smallrye-health</artifactId>
</dependency>

It exposes three REST endpoints compliant with Kubernetes health checks pattern:

  • /health/live – The application is up and running (Kubernetes liveness probe).
  • /health/ready – The application is ready to serve requests (Kubernetes readiness probe).
  • /health – Accumulating all health check procedures in the application.

The default implementation of the readiness health check verifies the database connection status, while the liveness check just determines whether the application is running.
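So, with the H2 datasource configured, a call to GET /health/ready should report the database check. An illustrative response sketch (the check name comes from the SmallRye implementation and may differ between versions):

```json
{
  "status": "UP",
  "checks": [
    {
      "name": "Database connections health check",
      "status": "UP"
    }
  ]
}
```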

[screenshot: readiness health check response]

6. Expose metrics

We may enable metrics collection by adding the SmallRye Metrics extension. By default, it collects only JVM, CPU, and process metrics.

<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-smallrye-metrics</artifactId>
</dependency>

We may also make the library collect metrics from JAX-RS endpoints. To do that, we need to annotate the selected endpoints with @Timed.

@POST
@Transactional
@Timed(name = "add", unit = MetricUnits.MILLISECONDS)
fun add(employee: Employee): Response {
   repository.persist(employee)
   return Response.ok(employee).status(201).build()
}

Now, we may call the endpoint POST /employees 100 times in a row. Here’s the list of metrics generated for that single endpoint. If you would like to ensure compatibility with the Micrometer metrics format, you need to set the following configuration property: quarkus.smallrye-metrics.micrometer.compatibility=true.

[screenshot: metrics generated for the endpoint]

7. Logging request and response for Quarkus Kotlin application

There is no built-in mechanism for logging HTTP requests and responses. We may implement a custom logging filter that implements the ContainerRequestFilter and ContainerResponseFilter interfaces.

@Provider
class LoggingFilter: ContainerRequestFilter, ContainerResponseFilter {

    private val logger: Logger = LoggerFactory.getLogger(LoggingFilter::class.java)

    @Context
    lateinit var info: UriInfo
    @Context
    lateinit var request: HttpServerRequest

    override fun filter(ctx: ContainerRequestContext) {
        logger.info("Request {} {}", ctx.method, info.path)
    }

    override fun filter(r: ContainerRequestContext, ctx: ContainerResponseContext) {
        logger.info("Response {} {}: {}", r.method, info.path, ctx.status)
    }
    
}

8. Testing

The quarkus-junit5 module is required for testing, as it provides the @QuarkusTest annotation that controls the testing framework. The rest-assured extension is not required, but it is a convenient way to test HTTP endpoints.

<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-junit5</artifactId>
   <scope>test</scope>
</dependency>
<dependency>
   <groupId>io.rest-assured</groupId>
   <artifactId>kotlin-extensions</artifactId>
   <scope>test</scope>
</dependency>

We add a new Employee in the first test. Then the second test verifies that there is a single Employee stored in the in-memory database (note that it relies on the first test having run before it).

@QuarkusTest
class EmployeeResourceTest {

    @Test
    fun testAddEmployee() {
        val emp = Employee(firstName = "John", lastName = "Smith", position = "Developer", salary = 20000)
        given().body(emp).contentType(ContentType.JSON)
                .post("/employees")
                .then()
                .statusCode(201)
    }

    @Test
    fun testGetAll() {
        given().get("/employees")
                .then()
                .statusCode(200)
                .assertThat().body("size()", `is`(1))
    }

}

Conclusion

In this guide, I showed you how to build a Quarkus Kotlin application that connects to a database and follows some best practices, like exposing health checks and metrics or logging incoming requests and outgoing responses. The last step is to run our sample application. To do that in development mode, we just need to execute the command mvn compile quarkus:dev. Here’s my start screen. You can see there, for example, the list of included Quarkus modules.

[screenshot: Quarkus start screen]

If you are interested in the Quarkus framework, the next useful article for you is Guide to Quarkus on Kubernetes.

The post Guide to Quarkus with Kotlin appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2020/08/09/guide-to-quarkus-with-kotlin/feed/ 5 8353
JavaEE MicroProfile with KumuluzEE https://piotrminkowski.com/2017/07/31/javaee-microprofile-with-kumuluzee/ https://piotrminkowski.com/2017/07/31/javaee-microprofile-with-kumuluzee/#respond Mon, 31 Jul 2017 08:45:13 +0000 https://piotrminkowski.wordpress.com/?p=5194 Preface Enterprise Java seems to be a step back from the others when it comes to microservices architecture. Some weeks ago I took a part in Code Europe – the programming conference in Warsaw. One of the speakers was Ivar Grimstad who was talking about MicroProfile – an open initiative for optimizing Enterprise Java for […]

The post JavaEE MicroProfile with KumuluzEE appeared first on Piotr's TechBlog.

]]>
Preface

Enterprise Java seems to be a step behind the competition when it comes to microservices architecture. Some weeks ago I took part in Code Europe – a programming conference in Warsaw. One of the speakers was Ivar Grimstad, who was talking about MicroProfile – an open initiative for optimizing Enterprise Java for a microservices architecture. This idea is very interesting, but at the moment it is rather at the beginning of the road.
However, while I was reading about the MicroProfile initiative, I came across information about a JavaEE framework developed by a Slovenian company – KumuluzEE. The solution seemed interesting enough that I decided to take a closer look at it. Well, we can read on the website that KumuluzEE is a Java Duke’s Choice Award winner, so there is still hope for JavaEE and microservices 🙂

What’s KumuluzEE

Can KumuluzEE be a competitor to the Spring Cloud framework? It is certainly not as popular, nor as advanced in its microservices solutions, as Spring Cloud, but it has basic modules for service registration, discovery, distributed configuration propagation, circuit breaking, and metrics, plus support for Docker and Kubernetes. It uses CDI on the JBoss Weld container for dependency injection and Jersey as a REST API provider. The modules for configuration and discovery are based on Consul or etcd and are at a rather early stage of development (1.0.0-SNAPSHOT), but let’s try them out.

Preparation

I’ll show you a sample application which consists of two independent microservices: account-service and customer-service. Both of them expose a REST API, and one of customer-service’s methods invokes a method from account-service. Every microservice registers itself in Consul and is able to fetch configuration properties from Consul. The sample application source code is available on GitHub. Before we begin, let’s start a Consul instance in a Docker container.

[code]
docker run -d --name consul -p 8500:8500 -p 8600:8600 consul
[/code]

We should also add some KumuluzEE dependencies to the Maven pom.xml.

[code language="xml"]
<dependency>
   <groupId>com.kumuluz.ee</groupId>
   <artifactId>kumuluzee-core</artifactId>
</dependency>
<dependency>
   <groupId>com.kumuluz.ee</groupId>
   <artifactId>kumuluzee-servlet-jetty</artifactId>
</dependency>
<dependency>
   <groupId>com.kumuluz.ee</groupId>
   <artifactId>kumuluzee-jax-rs-jersey</artifactId>
</dependency>
<dependency>
   <groupId>com.kumuluz.ee</groupId>
   <artifactId>kumuluzee-cdi-weld</artifactId>
</dependency>
[/code]

Service Registration

To enable service registration, we should add one additional dependency to our pom.xml. I chose Consul as the registration and discovery server (kumuluzee-discovery-consul), but you can also use etcd (kumuluzee-discovery-etcd).

[code language="xml"]
<dependency>
   <groupId>com.kumuluz.ee.discovery</groupId>
   <artifactId>kumuluzee-discovery-consul</artifactId>
   <version>1.0.0-SNAPSHOT</version>
</dependency>
[/code]

Inside the application configuration file, we should set the discovery properties and the server URL. For me, it is 192.168.99.100.

[code]
kumuluzee:
  service-name: account-service
  env: dev
  version: 1.0.0
  discovery:
    consul:
      agent: http://192.168.99.100:8500
      hosts: http://192.168.99.100:8500
    ttl: 20
    ping-interval: 15
[/code]

Here’s the account microservice’s main class. As you have probably guessed, the @RegisterService annotation enables registration on the discovery server.

[code language="java"]
@RegisterService("account-service")
@ApplicationPath("v1")
public class AccountApplication extends Application {

}
[/code]

We start the application by running java -cp target/classes;target/dependency/* com.kumuluz.ee.EeApplication (on Linux or macOS use : as the classpath separator). Remember to override the default port by setting the PORT environment variable. I started two instances of the account microservice and one instance of the customer microservice.


Service Discovery

The customer microservice exposes its own API, but it also invokes an API method of account-service, so it has to discover and connect to that service. The Maven dependencies and configuration settings are the same as for account-service. The only difference is the resource class. Here's a CustomerResource fragment with the GET /customers/{id} endpoint.

[code language="java"]
@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
@Path("customers")
@RequestScoped
public class CustomerResource {

    private List<Customer> customers;

    @Inject
    @DiscoverService(value = "account-service", version = "1.0.x", environment = "dev")
    private WebTarget target;

    @GET
    @Path("{id}")
    @Log(value = LogParams.METRICS, methodCall = true)
    public Customer findById(@PathParam("id") Integer id) {
        Customer customer = customers.stream()
                .filter(it -> it.getId().intValue() == id.intValue())
                .findFirst()
                .get();
        WebTarget t = target.path("v1/accounts/customer/" + customer.getId());
        List<Account> accounts = t.request().buildGet().invoke(List.class);
        customer.setAccounts(accounts);
        return customer;
    }

}
[/code]

There is one pretty cool thing about discovery with KumuluzEE. As you can see in @DiscoverService, we can specify the version and environment of the account-service instances to target. The version and environment of a microservice are read automatically from config.yml during registration in the discovery server. So we can maintain many versions of a single microservice and freely invoke them from other microservices. Requests are automatically load balanced between all microservice instances matching the conditions from the @DiscoverService annotation.
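To build an intuition for how a range like "1.0.x" selects instances, here is a small, runnable sketch of such version matching. Note this is my own simplified illustration, not KumuluzEE's actual resolver, and the class and method names are made up for the example.

[code language="java"]
import java.util.List;
import java.util.stream.Collectors;

public class VersionMatchDemo {

    // Returns true when a concrete version (e.g. "1.0.2") falls into
    // a "major.minor.x" range such as "1.0.x"; exact strings must match otherwise.
    static boolean matches(String range, String version) {
        if (range.endsWith(".x")) {
            String prefix = range.substring(0, range.length() - 1); // "1.0.x" -> "1.0."
            return version.startsWith(prefix);
        }
        return range.equals(version);
    }

    // Filters the registered versions down to those eligible for the range.
    static List<String> eligible(String range, List<String> registered) {
        return registered.stream()
                .filter(v -> matches(range, v))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Instances registered under different versions of account-service
        List<String> versions = List.of("1.0.0", "1.0.5", "1.1.0", "2.0.0");
        System.out.println(eligible("1.0.x", versions)); // [1.0.0, 1.0.5]
    }
}
[/code]

With this behavior, two registered 1.0.* instances would both be eligible targets for the injected WebTarget, and requests rotate between them.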

We can also monitor metrics such as response time by declaring @Log(value = LogParams.METRICS, methodCall = true) on an API method. Here's a log fragment for account-service.

[code]
2017-07-28 13:57:01,114 TRACE ENTRY[ METHOD ] Entering method. {class=pl.piomin.services.kumuluz.account.resource.AccountResource, method=findByCustomer, parameters=[1]}
2017-07-28 13:57:01,118 TRACE EXIT[ METHOD ] Exiting method. {class=pl.piomin.services.kumuluz.account.resource.AccountResource, method=findByCustomer, parameters=[1], response-time=3, result=[pl.piomin.services.kumuluz.account.model.Account@1eb26fe3, pl.piomin.services.kumuluz.account.model.Account@2dda41c5]}
[/code]

Distributed configuration

To enable KumuluzEE Config with the Consul implementation, add the following dependency to pom.xml.

[code language="xml"]
<dependency>
    <groupId>com.kumuluz.ee.config</groupId>
    <artifactId>kumuluzee-config-consul</artifactId>
    <version>1.0.0-SNAPSHOT</version>
</dependency>
[/code]

I do not use a Consul agent running on localhost, so I need to override some properties in config.yml. I also defined one configuration property, blacklist.

[code]
kumuluzee:
  config:
    start-retry-delay-ms: 500
    max-retry-delay-ms: 900000
    consul:
      agent: http://192.168.99.100:8500

rest-config:
  blacklist:
[/code]

Here's the class that loads the configuration properties. Declaring @ConfigValue(watch = true) on a property enables dynamic updates on any change in the configuration source.

[code language="java"]
@ApplicationScoped
@ConfigBundle("rest-config")
public class AccountConfiguration {

    @ConfigValue(watch = true)
    private String blacklist;

    public String getBlacklist() {
        return blacklist;
    }

    public void setBlacklist(String blacklist) {
        this.blacklist = blacklist;
    }

}
[/code]

We use the configuration property blacklist in the resource class for filtering out accounts with blacklisted ids.

[code language="java"]
@GET
@Log(value = LogParams.METRICS, methodCall = true)
public List<Account> findAll() {
    final String blacklist = ConfigurationUtil.getInstance().get("rest-config.blacklist").orElse("");
    final List<Integer> blacklistIds = Arrays.stream(blacklist.split(","))
            .filter(it -> !it.isEmpty()) // guard against an empty or missing property
            .map(Integer::parseInt)
            .collect(Collectors.toList());
    return accounts.stream()
            .filter(it -> !blacklistIds.contains(it.getId()))
            .collect(Collectors.toList());
}
[/code]
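The filtering logic above can be exercised without a running Consul instance. In the sketch below the literal "2,3" stands in for what ConfigurationUtil would return, and the minimal Account record is a stand-in for the model class from the sample application.

[code language="java"]
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class BlacklistDemo {

    // Minimal stand-in for the sample application's Account model
    record Account(Integer id, String number) { }

    // Same pipeline as in the resource method: parse the comma-separated
    // blacklist, then drop the accounts whose ids appear on it.
    static List<Account> filter(List<Account> accounts, String blacklist) {
        List<Integer> blacklistIds = Arrays.stream(blacklist.split(","))
                .map(String::trim)
                .filter(s -> !s.isEmpty()) // tolerate an empty property
                .map(Integer::parseInt)
                .collect(Collectors.toList());
        return accounts.stream()
                .filter(a -> !blacklistIds.contains(a.id()))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Account> accounts = List.of(
                new Account(1, "111111"),
                new Account(2, "222222"),
                new Account(3, "333333"));
        // With blacklist "2,3" only account 1 survives the filter
        System.out.println(filter(accounts, "2,3"));
    }
}
[/code]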

The configuration property should be defined in the Consul UI dashboard under the KEY/VALUE tab. KumuluzEE enforces a certain format of the key name. In this case it has to be environments/dev/services/account-service/1.0.0/config/rest-config/blacklist. You can update the property value and test the changes by invoking http://localhost:2222/v1/accounts.
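The key path is not arbitrary: it is assembled from the registration metadata (environment, service name, version) plus the @ConfigBundle and field names. The sketch below just illustrates that composition with plain string formatting, mirroring the key quoted above; it is not KumuluzEE's internal code.

[code language="java"]
public class ConsulKeyDemo {

    // Composes a KumuluzEE-style Consul config key from its parts:
    // environments/<env>/services/<service>/<version>/config/<bundle>/<field>
    static String configKey(String env, String service, String version,
                            String bundle, String field) {
        return String.format("environments/%s/services/%s/%s/config/%s/%s",
                env, service, version, bundle, field);
    }

    public static void main(String[] args) {
        System.out.println(
                configKey("dev", "account-service", "1.0.0", "rest-config", "blacklist"));
        // environments/dev/services/account-service/1.0.0/config/rest-config/blacklist
    }
}
[/code]

Keeping this structure in mind makes it easy to predict where any watched property of any service version lives in the Consul key/value store.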


Final Words

Creating microservices with KumuluzEE is pretty easy. I have shown you the main capabilities of this framework. KumuluzEE also has modules for circuit breaking with Hystrix, streaming with Apache Kafka, and security with OAuth2/OpenID. I will keep a close eye on this library and I hope it will continue to be developed.

The post JavaEE MicroProfile with KumuluzEE appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2017/07/31/javaee-microprofile-with-kumuluzee/feed/ 0 5194