Piotr's TechBlog: java microservices archive
https://piotrminkowski.com/tag/java-microservices/

Microprofile Java Microservices on WildFly
Mon, 14 Dec 2020
https://piotrminkowski.com/2020/12/14/microprofile-java-microservices-on-wildfly/

In this guide, you will learn how to implement the most popular Java microservices patterns with the MicroProfile project. We’ll look at how to create a RESTful application using JAX-RS and CDI. Then, we will run our microservices on WildFly as bootable JARs. Finally, we will deploy them on OpenShift in order to use its service discovery and config maps.

The MicroProfile project breathes new life into Java EE. Since the rise of microservices, Java EE has lost its dominant position in the JVM enterprise area, and application servers and EJBs have been replaced by lightweight frameworks like Spring Boot. MicroProfile is an answer to that: it defines Java EE standards for building microservices. Therefore, it can be treated as a foundation for more advanced frameworks like Quarkus or KumuluzEE.

If you are interested in frameworks built on top of MicroProfile, Quarkus is a good example: see Quick Guide to Microservices with Quarkus on OpenShift. You can also plug a custom service discovery implementation into MicroProfile microservices, for example with Consul: Quarkus Microservices with Consul Discovery.

Source code

If you would like to try it yourself, you may always take a look at my source code. In order to do that, clone my repository sample-microprofile-microservices. Then go to the employee-service and department-service directories and just follow my instructions 🙂

1. Running on WildFly

A few weeks ago WildFly introduced the bootable JAR ("fat JAR") packaging feature, which has been fully supported since WildFly 21. We can apply it during a Maven build by including wildfly-jar-maven-plugin in the pom.xml file. Importantly, we don't have to redesign an application to run it inside a bootable JAR.

In order to use the bootable JAR packaging feature, we need to add the package execution goal. Then we should install two layers inside the configuration section: the jaxrs-server layer, which allows us to build a typical REST application, and the microprofile-platform layer, which enables MicroProfile on the WildFly server.

<profile>
   <id>bootable-jar</id>
   <activation>
      <activeByDefault>true</activeByDefault>
   </activation>
   <build>
      <finalName>${project.artifactId}</finalName>
      <plugins>
         <plugin>
            <groupId>org.wildfly.plugins</groupId>
            <artifactId>wildfly-jar-maven-plugin</artifactId>
            <version>2.0.2.Final</version>
            <executions>
               <execution>
                  <goals>
                     <goal>package</goal>
                  </goals>
               </execution>
            </executions>
            <configuration>
               <feature-pack-location>
                  wildfly@maven(org.jboss.universe:community-universe)#${version.wildfly}
               </feature-pack-location>
               <layers>
                  <layer>jaxrs-server</layer>
                  <layer>microprofile-platform</layer>
               </layers>
            </configuration>
         </plugin>
      </plugins>
   </build>
</profile>

Finally, we just need to execute the following command to build and run our “Fat JAR” application on WildFly.

$ mvn package wildfly-jar:run

If we run multiple applications on the same machine, we have to override the default HTTP and management ports. To do that, we add the jvmArguments section inside configuration, where we may insert any number of JVM arguments. In this case, the required arguments are jboss.http.port and jboss.management.http.port.

<configuration>
   ...
   <jvmArguments>
      <jvmArgument>-Djboss.http.port=8090</jvmArgument>
      <jvmArgument>-Djboss.management.http.port=9090</jvmArgument>
   </jvmArguments>
</configuration>

2. Creating JAX-RS applications

In the first step, we will create simple REST applications with JAX-RS. WildFly provides all the required libraries at runtime, but we need to include both of these artifacts for the compilation phase.

<dependency>
   <groupId>org.jboss.spec.javax.ws.rs</groupId>
   <artifactId>jboss-jaxrs-api_2.1_spec</artifactId>
   <scope>provided</scope>
</dependency>
<dependency>
   <groupId>jakarta.enterprise</groupId>
   <artifactId>jakarta.enterprise.cdi-api</artifactId>
   <scope>provided</scope>
</dependency>

Then, we should set the dependencyManagement section. We will use the BOMs provided by WildFly for both MicroProfile and Jakarta EE.

<dependencyManagement>
   <dependencies>
      <dependency>
         <groupId>org.wildfly.bom</groupId>
         <artifactId>wildfly-jakartaee8-with-tools</artifactId>
         <version>${version.wildfly}</version>
         <type>pom</type>
         <scope>import</scope>
      </dependency>
      <dependency>
         <groupId>org.wildfly.bom</groupId>
         <artifactId>wildfly-microprofile</artifactId>
         <version>${version.wildfly}</version>
         <type>pom</type>
         <scope>import</scope>
      </dependency>
   </dependencies>
</dependencyManagement>

Here's the JAX-RS controller inside employee-service. It uses an in-memory repository bean. It also injects a random delay into all exposed HTTP endpoints via the @Delay annotation. To clarify, I'm adding it now for future use, in order to present the metrics and fault tolerance features.

@Path("/employees")
@ApplicationScoped
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
@Delay
public class EmployeeController {

   @Inject
   EmployeeRepository repository;

   @POST
   public Employee add(Employee employee) {
      return repository.add(employee);
   }

   @GET
   @Path("/{id}")
   public Employee findById(@PathParam("id") Long id) {
      return repository.findById(id);
   }

   @GET
   public List<Employee> findAll() {
      return repository.findAll();
   }

   @GET
   @Path("/department/{departmentId}")
   public List<Employee> findByDepartment(@PathParam("departmentId") Long departmentId) {
      return repository.findByDepartment(departmentId);
   }

   @GET
   @Path("/organization/{organizationId}")
   public List<Employee> findByOrganization(@PathParam("organizationId") Long organizationId) {
      return repository.findByOrganization(organizationId);
   }

}

Here's the definition of the delay interceptor class. It is annotated with the base @Interceptor annotation and the custom @Delay binding. It injects a random delay between 0 and 1000 milliseconds into each method invocation.

@Interceptor
@Delay
public class AddDelayInterceptor {

   Random r = new Random();

   @AroundInvoke
   public Object call(InvocationContext invocationContext) throws Exception {
      Thread.sleep(r.nextInt(1000));
      System.out.println("Intercept");
      return invocationContext.proceed();
   }

}

Finally, let's just take a look at the custom @Delay annotation.

@InterceptorBinding
@Target({METHOD, TYPE})
@Retention(RUNTIME)
public @interface Delay {
}
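One detail worth remembering: CDI interceptors are not active by default. They have to be enabled either in the beans.xml descriptor or with a @Priority annotation on the interceptor class. Here's a minimal sketch of the second option (shown only for illustration; the sample repository may wire it up via beans.xml instead):

```java
import javax.annotation.Priority;
import javax.interceptor.Interceptor;

// @Priority both enables the interceptor without beans.xml
// and orders it relative to other interceptors.
@Interceptor
@Delay
@Priority(Interceptor.Priority.APPLICATION)
public class AddDelayInterceptor {
   // ... same implementation as shown above
}
```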

3. Enable metrics for MicroProfile microservices

Metrics is one of the core MicroProfile modules. Data is exposed via REST over HTTP under the /metrics base path in two different formats for GET requests: JSON and OpenMetrics. The OpenMetrics text format is the one supported by Prometheus. In order to enable MicroProfile metrics, we need to include the following dependency in the Maven pom.xml.

<dependency>
   <groupId>org.eclipse.microprofile.metrics</groupId>
   <artifactId>microprofile-metrics-api</artifactId>
   <scope>provided</scope>
</dependency>

To enable the basic metrics we just need to annotate the controller class with @Timed.

@Path("/employees")
@ApplicationScoped
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
@Delay
@Timed
public class EmployeeController {
   ...
}

The /metrics endpoint is available under the management port. First, let's send some test requests, for example to the GET /employees endpoint (the employee-service application is available at http://localhost:8080/). Then let's call http://localhost:9990/metrics. It returns a full list of metrics generated for the findAll method; similar metrics are generated for all other HTTP endpoints.
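Besides @Timed, the MicroProfile Metrics API provides further annotations such as @Counted. A short sketch of how a single method could be instrumented with both (the metric names below are hypothetical, not taken from the sample code):

```java
import org.eclipse.microprofile.metrics.annotation.Counted;
import org.eclipse.microprofile.metrics.annotation.Timed;

public class EmployeeController {

   // ...

   @GET
   @Timed(name = "findAll-timer")      // records invocation durations
   @Counted(name = "findAll-counter")  // counts the number of invocations
   public List<Employee> findAll() {
      return repository.findAll();
   }
}
```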

4. Generate OpenAPI specification

A REST API specification is another essential thing for microservices, so it is no surprise that the OpenAPI module is part of the MicroProfile core. The API specification is generated automatically after including the microprofile-openapi-api module, which is part of the microprofile-platform layer defined for wildfly-jar-maven-plugin.

After starting the application, we may access the OpenAPI documentation by calling the http://localhost:8080/openapi endpoint. Then, we can copy the result to the Swagger editor. The graphical representation of the employee-service API is visible below.

[Figure: the employee-service API rendered in the Swagger editor]
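The generated specification can also be enriched with MicroProfile OpenAPI annotations. A hypothetical sketch (the sample code simply relies on the defaults):

```java
import org.eclipse.microprofile.openapi.annotations.Operation;
import org.eclipse.microprofile.openapi.annotations.responses.APIResponse;

public class EmployeeController {

   // ...

   @GET
   @Path("/{id}")
   @Operation(summary = "Find an employee by id")
   @APIResponse(responseCode = "200", description = "A single employee object")
   public Employee findById(@PathParam("id") Long id) {
      return repository.findById(id);
   }
}
```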

5. Microservices inter-communication with MicroProfile REST client

The department-service calls the endpoint GET /employees/department/{departmentId} exposed by the employee-service. Then it returns a department with a list of all assigned employees.

@Getter
@Setter
@NoArgsConstructor
@AllArgsConstructor
public class Department {
   private Long id;
   private String name;
   private Long organizationId;
   private List<Employee> employees = new ArrayList<>();
}

Of course, we need to include the REST client module in the Maven dependencies.

<dependency>
   <groupId>org.eclipse.microprofile.rest.client</groupId>
   <artifactId>microprofile-rest-client-api</artifactId>
   <scope>provided</scope>
</dependency>

The MicroProfile REST Client module allows us to define a client declaratively. We should annotate the client interface with @RegisterRestClient. The rest of the implementation is rather straightforward.

@Path("/employees")
@RegisterRestClient(baseUri = "http://employee-service:8080")
public interface EmployeeClient {

   @GET
   @Path("/department/{departmentId}")
   @Produces(MediaType.APPLICATION_JSON)
   List<Employee> findByDepartment(@PathParam("departmentId") Long departmentId);
}
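Rather than hardcoding the base URI, the MicroProfile REST Client specification also allows overriding it with a configuration property keyed by the fully qualified name of the client interface. A sketch, assuming a hypothetical package name (adjust it to wherever EmployeeClient actually lives):

```properties
# microprofile-config.properties
# the package prefix below is hypothetical
pl.piomin.services.department.client.EmployeeClient/mp-rest/url=http://employee-service:8080
```

This is handy on OpenShift, where the value can be supplied through a config map instead of a rebuild.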

Finally, we just need to inject the EmployeeClient bean into the controller class. Per the MicroProfile REST Client specification, the injection point should carry the @RestClient qualifier.

@Path("/departments")
@ApplicationScoped
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
@Timed
public class DepartmentController {

   @Inject
   DepartmentRepository repository;
   @Inject
   @RestClient
   EmployeeClient employeeClient;

   @POST
   public Department add(Department department) {
      return repository.add(department);
   }

   @GET
   @Path("/{id}")
   public Department findById(@PathParam("id") Long id) {
      return repository.findById(id);
   }

   @GET
   public List<Department> findAll() {
      return repository.findAll();
   }

   @GET
   @Path("/organization/{organizationId}")
   public List<Department> findByOrganization(@PathParam("organizationId") Long organizationId) {
      return repository.findByOrganization(organizationId);
   }

   @GET
   @Path("/organization/{organizationId}/with-employees")
   public List<Department> findByOrganizationWithEmployees(@PathParam("organizationId") Long organizationId) {
      List<Department> departments = repository.findByOrganization(organizationId);
      departments.forEach(d -> d.setEmployees(employeeClient.findByDepartment(d.getId())));
      return departments;
   }

}

The MicroProfile project does not define a service discovery pattern. Some frameworks built on top of MicroProfile provide such an implementation, for example KumuluzEE. If you do not deploy the applications on OpenShift, you may add the following entry to your /etc/hosts file to test everything locally.

127.0.0.1 employee-service

Finally, let's call the endpoint GET /departments/organization/{organizationId}/with-employees. The result is visible in the picture below.

6. Java microservices fault tolerance with MicroProfile

To be honest, fault tolerance handling is my favorite feature of MicroProfile. We may configure fault-tolerance policies on the controller methods using annotations: @Timeout, @Retry, @Fallback and @CircuitBreaker. It is also possible to mix those annotations on a single method. As you probably remember, we injected a random delay between 0 and 1000 milliseconds into all the endpoints exposed by employee-service. Now, let's consider the method inside department-service that calls the endpoint GET /employees/department/{departmentId} from employee-service. First, we will annotate that method with @Timeout as shown below. The configured timeout is 500 ms.

public class DepartmentController {

   @Inject
   DepartmentRepository repository;
   @Inject
   @RestClient
   EmployeeClient employeeClient;

   ...

   @GET
   @Path("/organization/{organizationId}/with-employees")
   @Timeout(500)
   public List<Department> findByOrganizationWithEmployees(@PathParam("organizationId") Long organizationId) {
      List<Department> departments = repository.findByOrganization(organizationId);
      departments.forEach(d -> d.setEmployees(employeeClient.findByDepartment(d.getId())));
      return departments;
   }

}

Before calling the method, let's create an exception mapper: if a TimeoutException occurs, the department-service endpoint will return status HTTP 504 Gateway Timeout.

@Provider
public class TimeoutExceptionMapper implements 
      ExceptionMapper<TimeoutException> {

   public Response toResponse(TimeoutException e) {
      return Response.status(Response.Status.GATEWAY_TIMEOUT).build();
   }

}

Then, we may proceed to call our test endpoint. Roughly 50% of requests should time out and return HTTP 504 Gateway Timeout.
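We can sanity-check that expectation with plain Java: with a uniform random delay in the range [0, 1000) ms and a 500 ms timeout, about half of the simulated calls exceed the timeout. This is just a back-of-envelope simulation, not part of the sample services:

```java
import java.util.Random;

public class TimeoutEstimate {
    public static void main(String[] args) {
        Random r = new Random();
        int total = 100_000;
        int timeouts = 0;
        for (int i = 0; i < total; i++) {
            // simulate the delay injected by AddDelayInterceptor
            int delay = r.nextInt(1000);
            if (delay >= 500) {
                timeouts++; // this call would exceed the 500 ms @Timeout
            }
        }
        // prints a ratio close to 0.5
        System.out.println((double) timeouts / total);
    }
}
```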

On the other hand, we may enable a retry mechanism for such an endpoint. After that, the chance of receiving status HTTP 200 OK becomes much higher than before: assuming the default of three retries, roughly 1 - 0.5^4, i.e. about 94%.

@GET
@Path("/organization/{organizationId}/with-employees")
@Timeout(500)
@Retry(retryOn = TimeoutException.class)
public List<Department> findByOrganizationWithEmployees(@PathParam("organizationId") Long organizationId) {
   List<Department> departments = repository.findByOrganization(organizationId);
   departments.forEach(d -> d.setEmployees(employeeClient.findByDepartment(d.getId())));
   return departments;
}
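If even retries do not help, we could additionally define a fallback. The sketch below is a hypothetical extension, not present in the sample repository: it returns the departments without their employee lists when employee-service stays unavailable.

```java
@GET
@Path("/organization/{organizationId}/with-employees")
@Timeout(500)
@Retry(retryOn = TimeoutException.class)
@Fallback(fallbackMethod = "findByOrganizationWithoutEmployees")
public List<Department> findByOrganizationWithEmployees(@PathParam("organizationId") Long organizationId) {
   // ... same implementation as above
}

// The fallback method must have the same parameter list as the original one.
public List<Department> findByOrganizationWithoutEmployees(Long organizationId) {
   return repository.findByOrganization(organizationId);
}
```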

7. Deploy MicroProfile microservices on OpenShift

We can easily deploy MicroProfile Java microservices on OpenShift using the JKube plugin, the successor to the deprecated Fabric8 Maven Plugin. Eclipse JKube is a collection of plugins and libraries for building container images using Docker, JIB or S2I build strategies. It also generates and deploys Kubernetes and OpenShift manifests at compile time. So, let's add openshift-maven-plugin to the pom.xml file.

The configuration visible below sets 2 replicas for the deployment and enforces health checks. In addition, openshift-maven-plugin generates the rest of the deployment configuration based on the Maven pom.xml structure. For example, it generates employee-service-deploymentconfig.yml, employee-service-route.yml, and employee-service-service.yml for the employee-service application.

<plugin>
   <groupId>org.eclipse.jkube</groupId>
   <artifactId>openshift-maven-plugin</artifactId>
   <version>1.0.2</version>
   <executions>
      <execution>
         <id>jkube</id>
         <goals>
            <goal>resource</goal>
            <goal>build</goal>
         </goals>
      </execution>
   </executions>
   <configuration>
      <resources>
         <replicas>2</replicas>
      </resources>
      <enricher>
         <config>
            <jkube-healthcheck-wildfly-jar>
               <enforceProbes>true</enforceProbes>
            </jkube-healthcheck-wildfly-jar>
         </config>
      </enricher>
   </configuration>
</plugin>

In order to deploy the application on OpenShift we need to run the following command.

$ mvn oc:deploy -P bootable-jar-openshift

Since the enforceProbes property has been enabled, openshift-maven-plugin adds liveness and readiness probes to the DeploymentConfig. Therefore, we need to implement both endpoints in our MicroProfile applications. MicroProfile provides a simple mechanism for creating liveness and readiness health checks: we just need to annotate a class with @Liveness or @Readiness and implement the HealthCheck interface. Here's an example implementation of the liveness endpoint.

@Liveness
@ApplicationScoped
public class LivenessEndpoint implements HealthCheck {
   @Override
   public HealthCheckResponse call() {
      return HealthCheckResponse.up("Server up");
   }
}

On the other hand, the implementation of the readiness probe also verifies the status of the repository bean. Of course, it is just a simple example.

@Readiness
@ApplicationScoped
public class ReadinessEndpoint implements HealthCheck {
   @Inject
   DepartmentRepository repository;

   @Override
   public HealthCheckResponse call() {
      HealthCheckResponseBuilder responseBuilder = HealthCheckResponse
         .named("Repository up");
      // the repository has already been dereferenced above, so check the result instead
      List<Department> departments = repository.findAll();
      if (departments != null && !departments.isEmpty())
         responseBuilder.up();
      else
         responseBuilder.down();
      return responseBuilder.build();
   }
}

After deploying both the employee-service and department-service applications, we may verify the list of DeploymentConfigs.

We can also navigate to the OpenShift console. Let’s take a look at a list of running pods. There are two instances of the employee-service and a single instance of department-service.

[Figure: the list of running pods in the OpenShift console]

8. MicroProfile OpenTracing with Jaeger

Tracing is another important pattern in a microservices architecture. The OpenTracing module is part of the MicroProfile specification. Besides the microprofile-opentracing-api library, we also need to include the opentracing-api module.

<dependency>
   <groupId>org.eclipse.microprofile.opentracing</groupId>
   <artifactId>microprofile-opentracing-api</artifactId>
   <scope>provided</scope>
</dependency>
<dependency>
   <groupId>io.opentracing</groupId>
   <artifactId>opentracing-api</artifactId>
   <version>0.31.0</version>
</dependency>

By default, MicroProfile OpenTracing integrates the application with Jaeger. If you are testing the sample microservices on OpenShift, you may install Jaeger using an operator. Otherwise, we may just start it in a Docker container. The Jaeger UI is available at http://localhost:16686.

$ docker run -d --name jaeger \
-p 6831:6831/udp \
-p 16686:16686 \
jaegertracing/all-in-one:1.16.0

We don't have to do anything more than add the required dependencies to enable tracing. However, it is worth overriding the names of recorded operations. We may do that by annotating a particular method with @Traced and setting its operationName parameter. The implementation of the findByOrganizationWithEmployees method in the department-service is visible below.

public class DepartmentController {

   @Inject
   DepartmentRepository repository;
   @Inject
   @RestClient
   EmployeeClient employeeClient;

   ...

   @GET
   @Path("/organization/{organizationId}/with-employees")
   @Timeout(500)
   @Retry(retryOn = TimeoutException.class)
   @Traced(operationName = "findByOrganizationWithEmployees")
   public List<Department> findByOrganizationWithEmployees(@PathParam("organizationId") Long organizationId) {
      List<Department> departments = repository.findByOrganization(organizationId);
      departments.forEach(d -> d.setEmployees(employeeClient.findByDepartment(d.getId())));
      return departments;
   }
   
}

We can also take a look at a fragment of the EmployeeController implementation.

public class EmployeeController {

   @Inject
   EmployeeRepository repository;

   ...
   
   @GET
   @Traced(operationName = "findAll")
   public List<Employee> findAll() {
      return repository.findAll();
   }

   @GET
   @Path("/department/{departmentId}")
   @Traced(operationName = "findByDepartment")
   public List<Employee> findByDepartment(@PathParam("departmentId") Long departmentId) {
      return repository.findByDepartment(departmentId);
   }
   
}

Before running the applications, we should at least set the JAEGER_SERVICE_NAME environment variable, which configures the application name visible in Jaeger. For example, before starting the employee-service application we should set JAEGER_SERVICE_NAME=employee-service. Finally, let's send some test requests to the department-service endpoint GET /departments/organization/{organizationId}/with-employees.

$ curl http://localhost:8090/departments/organization/1/with-employees
$ curl http://localhost:8090/departments/organization/2/with-employees

After sending some test requests we may go to the Jaeger UI. The picture visible below shows the history of requests processed by the method findByOrganizationWithEmployees inside department-service.

As you probably remember, this method calls a method from the employee-service and configures a timeout and retries in case of failure. The picture below shows the details of a single request processed by findByOrganizationWithEmployees. To clarify, it has been retried once.

[Figure: details of a single retried request in the Jaeger UI]

Conclusion

This article guides you through the most important steps of building Java microservices with MicroProfile. You have learned how to implement tracing, health checks, OpenAPI, and inter-service communication with a REST client. After reading, you are able to run your MicroProfile Java microservices locally on WildFly and, moreover, deploy them on OpenShift using a single Maven command. Enjoy 🙂

Apache Camel K and Quarkus on Kubernetes
Tue, 08 Dec 2020
https://piotrminkowski.com/2020/12/08/apache-camel-k-and-quarkus-on-kubernetes/

Apache Camel K and Quarkus may simplify our development on Kubernetes. They are both relatively new products. Apache Camel K is a lightweight integration framework that runs natively on Kubernetes. It allows us to run code written in Camel DSL on the cloud. We may easily integrate it with the Quarkus framework. As a result, we would have a powerful solution that helps us in building serverless or microservices applications.

This is my first article about Apache Camel K. However, you will find many interesting posts about Quarkus on my blog. It is worth reading Guide to Quarkus on Kubernetes before proceeding with this article. If you would like to know more about microservices with Quarkus, you should also refer to Quick Guide to Microservices with Quarkus on OpenShift.

In this article, I will show you how to integrate Quarkus with Apache Camel using the Camel Quarkus project, which provides Quarkus extensions for many of the Camel components. Finally, you will also learn how to install Apache Camel K on Kubernetes and then use it to deploy our Quarkus Camel application there. Ok, let's do this!

Source code

If you would like to try it yourself, you may always take a look at my source code. In order to do that, clone my repository sample-camel-quarkus. Then go to the account-service directory and just follow my instructions 🙂 If you are interested in more details about Apache Camel, you should read its documentation.

Enable integration between Apache Camel and Quarkus

We are going to create a simple application that exposes a REST API and uses an in-memory repository. There are many camel-quarkus-* extensions that integrate Camel components with Quarkus. In order to build a REST-based application, we need to include three of them: Rest, Jackson, and Platform HTTP. I have also included the well-known Lombok library.

<dependency>
   <groupId>org.apache.camel.quarkus</groupId>
   <artifactId>camel-quarkus-platform-http</artifactId>
</dependency>
<dependency>
   <groupId>org.apache.camel.quarkus</groupId>
   <artifactId>camel-quarkus-rest</artifactId>
</dependency>
<dependency>
   <groupId>org.apache.camel.quarkus</groupId>
   <artifactId>camel-quarkus-jackson</artifactId>
</dependency>
<dependency>
   <groupId>org.projectlombok</groupId>
   <artifactId>lombok</artifactId>
   <version>1.18.16</version>
</dependency>

Then we should add a dependencyManagement section with the Camel Quarkus BOM. Apache Camel K ignores this section during deployment on Kubernetes; however, it is useful for a local run with mvn compile quarkus:dev.

<dependencyManagement>
   <dependencies>
      <dependency>
         <groupId>org.apache.camel.quarkus</groupId>
         <artifactId>camel-quarkus-bom</artifactId>
         <version>${camel-quarkus.version}</version>
         <type>pom</type>
         <scope>import</scope>
      </dependency>
   </dependencies>
</dependencyManagement>

Create Quarkus application using Apache Camel DSL

The most convenient way to deploy an application on Kubernetes with Apache Camel K is by passing a single file name to the run command. So, we should have the whole logic contained in a single source file. Although this is standard for serverless applications, it is not very convenient for microservices. Let's take a look at the model class. Both the model and repository classes will be nested inside the single AccountRoute class.

@Getter
@Setter
@AllArgsConstructor
@NoArgsConstructor
public class Account {
   private Integer id;
   private String number;
   private int amount;
   private Integer customerId;
}

Here’s our in-memory repository implementation.

public class AccountService {

   private List<Account> accounts = new ArrayList<>();

   AccountService() {
      accounts.add(new Account(1, "1234567890", 5000, 1));
      accounts.add(new Account(2, "1234567891", 12000, 1));
      accounts.add(new Account(3, "1234567892", 30000, 2));
   }

   public Optional<Account> findById(Integer id) {
      return accounts.stream()
            .filter(it -> it.getId().equals(id))
            .findFirst();
   }

   public List<Account> findByCustomerId(Integer customerId) {
      return accounts.stream()
            .filter(it -> it.getCustomerId().equals(customerId))
            .collect(Collectors.toList());
   }

   public List<Account> findAll() {
      return accounts;
   }

   public Account add(Account account) {
      account.setId(accounts.size() + 1);
      accounts.add(account);
      return account;
   }

}

Finally, we may proceed to the AccountRoute implementation. It extends the Camel RouteBuilder base class and overrides the configure method to define Camel routes using the REST component. First, we set a global JSON binding mode for all the routes. Then we define REST endpoints using the rest() DSL method. There are four HTTP endpoints, as shown below:

  • GET /accounts/{id} – find a single account object by its id.
  • GET /accounts/customer/{customerId} – find a list of account objects by the customer id.
  • POST /accounts – add a new account.
  • GET /accounts – list all available accounts.

@ApplicationScoped
public class AccountRoute extends RouteBuilder {

   AccountService accountService = new AccountService();

   @Override
   public void configure() throws Exception {
      restConfiguration().bindingMode(RestBindingMode.json);

      rest("/accounts")
            .get("/{id}")
               .route().bean(accountService, "findById(${header.id})").endRest()
            .get("/customer/{customerId}")
               .route().bean(accountService, "findByCustomerId(${header.customerId})").endRest()
            .get().route().bean(accountService, "findAll").endRest()
            .post("/")
               .consumes("application/json").type(Account.class)
               .route().bean(accountService, "add(${body})").endRest();
   }

   // model and repository implementations ...
}

Install Apache Camel K on Kubernetes

Our Quarkus application is ready, so now we need to deploy it on Kubernetes. And here comes Apache Camel K. With this solution, we can deploy Camel routes directly from the source code by executing the command kamel run $SOURCE_FILE_LOCATION. After that, the integration code immediately runs in the cloud. Of course, to achieve that we first need to install Apache Camel K on Kubernetes. Let's take a closer look at the installation process.

The first piece of news is not very positive: out of the box, Apache Camel K supports only the default registry of OpenShift (including local Minishift or CRC) and Minikube. I'm using Kubernetes on Docker Desktop… Ok, it doesn't put me off. We just need to add some additional parameters to the installation command to set the docker.io registry. Of course, you need to have an account on Docker Hub, as shown below.

$ kamel install --registry docker.io --organization your-user-id --registry-auth-username your-user-id --registry-auth-password your-password

This command creates custom resource definitions on the cluster and installs the operator in the current namespace. Let's verify it.

Deploy Quarkus on Kubernetes using Apache Camel K

Then, we should go to the account-service directory. In order to deploy the Quarkus application with Apache Camel K, we need to execute the following command.

$ kamel run --name account --dev \
   src/main/java/pl/piomin/samples/quarkus/account/route/AccountRoute.java \
   --save

Unfortunately, Apache Camel K is not able to detect all the dependencies declared in the Maven pom.xml. Therefore, we declare them inside comments on the AccountRoute class, preceded by the camel-k keyword as shown below.

// camel-k: dependency=mvn:org.apache.camel.quarkus:camel-quarkus-jackson
// camel-k: dependency=mvn:org.projectlombok:lombok:1.18.16

@ApplicationScoped
public class AccountRoute extends RouteBuilder {
   ...
}

If everything goes fine, our Quarkus Camel application successfully starts on Kubernetes, and you should see logs similar to those below after executing the kamel run command.

[Figure: Quarkus startup logs after executing the kamel run command]

Then, we may take a look at the list of deployments once again. Apache Camel K has created a new deployment with the name account. We set that name in the kamel run command using the --name parameter.

In the background, Apache Camel K creates an Integration custom resource. So, if you are interested in how it works, it is worth printing the details of the Integration object.

$ kubectl get integration account -o yaml

Final testing

Since our application is running on Kubernetes, we may proceed to the tests. Firstly, let's see the list of Kubernetes services. Fortunately, Apache Camel K automatically creates a service of the NodePort type. In my case, the account service is exposed on port 30860.

(Screenshot: the Kubernetes service created for the account integration)

Finally, let’s just send some test requests.

$ curl http://localhost:30860/accounts
[{"id":1,"number":"1234567890","amount":5000,"customerId":1},
 {"id":2,"number":"1234567891","amount":12000,"customerId":1},
 {"id":3,"number":"1234567892","amount":30000,"customerId":2}]
$ curl http://localhost:30860/accounts/1
{"id":1,"number":"1234567890","amount":5000,"customerId":1}
$ curl http://localhost:30860/accounts/2
{"id":2,"number":"1234567891","amount":12000,"customerId":1}
$ curl http://localhost:30860/accounts/customer/1
[{"id":1,"number":"1234567890","amount":5000,"customerId":1},
 {"id":2,"number":"1234567891","amount":12000,"customerId":1}]

And one last cool feature at the end. Let's assume we make some changes in the AccountRoute.java file. Since kamel run was started in dev mode, it is still running and watching for changes in the main Camel file, so the latest version of the application is immediately redeployed on Kubernetes.

Conclusion

Apache Camel K fits serverless applications perfectly, where the whole code is encapsulated in a single file. However, we may also use it to deploy microservices on Kubernetes and take advantage of the built-in integration between Apache Camel K and Quarkus. The only constraint is that all code not directly related to Camel routes has to live in a single file or in an external library. Other than that, Apache Camel K seems promising. Unfortunately, it doesn't have the same level of support for Spring Boot as for Quarkus. I hope that will improve in the near future. I'll definitely keep an eye on the development of this framework.

The post Apache Camel K and Quarkus on Kubernetes appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2020/12/08/apache-camel-k-and-quarkus-on-kubernetes/feed/ 0 9181
Asynchronous Microservices with Vertx https://piotrminkowski.com/2017/08/24/asynchronous-microservices-with-vert-x/ https://piotrminkowski.com/2017/08/24/asynchronous-microservices-with-vert-x/#respond Thu, 24 Aug 2017 10:57:02 +0000 https://piotrminkowski.wordpress.com/?p=5625 Preface I must admit that as soon as I saw Vertx documentation I liked this concept. This may have happened because I had previously used a very similar framework which I used to create simple and lightweight applications exposing REST APIs – Node.js. It is a really fine framework, but has one big disadvantage for […]

The post Asynchronous Microservices with Vertx appeared first on Piotr's TechBlog.

]]>
Preface

I must admit that as soon as I saw the Vertx documentation I liked the concept. This may be because I had previously used a very similar framework for creating simple and lightweight applications exposing REST APIs – Node.js. It is a really fine framework, but it has one big disadvantage for me – it is a JavaScript runtime. It is worth mentioning that Vertx is polyglot and asynchronous. It supports all the most popular JVM-based languages like Java, Scala, Groovy, Kotlin, and even JavaScript. These are not all of its advantages. It's lightweight, fast, and modular. I was pleasantly surprised that when I added the main Vertx dependencies to my pom.xml, not many other dependencies were downloaded, as is often the case when using the Spring Boot framework.

Well, I will not elaborate on the advantages and key concepts of this toolkit – you can read more about them in other articles. The most important thing for us is that using Vertx we can create high-performance, asynchronous microservices based on the Netty framework. In addition, we can use standard microservices mechanisms such as service discovery, a configuration server, or circuit breakers.

The sample application source code is available on GitHub. It consists of two modules: account-vertx-service and customer-vertx-service. The customer service retrieves data from the Consul registry and invokes the account service API. The architecture of the sample solution is shown in the figure below.

(Diagram: architecture of the sample solution)

Building Vertx asynchronous services

To be able to create an HTTP service exposing a REST API, we need to include the following dependency in pom.xml.


<dependency>
   <groupId>io.vertx</groupId>
   <artifactId>vertx-web</artifactId>
   <version>${vertx.version}</version>
</dependency>

Here’s the fragment from the account service where I defined all API methods. The first step (1) was to declare Router which is one of the core concepts of Vertx-Web. A router takes an HTTP request, finds the first matching route for that request, and passes the request to that route. The next step (2), (3) is to add some handlers, for example BodyHandler, which allows you to retrieve request bodies and has been added to the POST method. Then we can begin to define API methods (4), (5), (6), (7), (8). And finally (9) we are starting the HTTP server on the port retrieved from the configuration.

Router router = Router.router(vertx); // (1)
router.route("/account/*").handler(ResponseContentTypeHandler.create()); // (2)
router.route(HttpMethod.POST, "/account").handler(BodyHandler.create()); // (3)
router.get("/account/:id").produces("application/json").handler(rc -> { // (4)
   repository.findById(rc.request().getParam("id"), res -> {
      Account account = res.result();
      LOGGER.info("Found: {}", account);
      rc.response().end(account.toString());
   });
});
router.get("/account/customer/:customer").produces("application/json").handler(rc -> { // (5)
   repository.findByCustomer(rc.request().getParam("customer"), res -> {
      List<Account> accounts = res.result();
      LOGGER.info("Found: {}", accounts);
      rc.response().end(Json.encodePrettily(accounts));
   });
});
router.get("/account").produces("application/json").handler(rc -> { // (6)
   repository.findAll(res -> {
      List<Account> accounts = res.result();
      LOGGER.info("Found all: {}", accounts);
      rc.response().end(Json.encodePrettily(accounts));
   });
});
router.post("/account").produces("application/json").handler(rc -> { // (7)
   Account a = Json.decodeValue(rc.getBodyAsString(), Account.class);
   repository.save(a, res -> {
      Account account = res.result();
      LOGGER.info("Created: {}", account);
      rc.response().end(account.toString());
   });
});
router.delete("/account/:id").handler(rc -> { // (8)
   repository.remove(rc.request().getParam("id"), res -> {
      LOGGER.info("Removed: {}", rc.request().getParam("id"));
      rc.response().setStatusCode(200).end(); // end() is required, otherwise the response is never sent
   });
});
...
vertx.createHttpServer().requestHandler(router::accept).listen(conf.result().getInteger("port")); // (9)

All API methods use a repository object to communicate with the data source. In this case, I decided to use Mongo. Vertx has a module for interacting with that database, which we need to include as a new dependency.


<dependency>
   <groupId>io.vertx</groupId>
   <artifactId>vertx-mongo-client</artifactId>
   <version>${vertx.version}</version>
</dependency>

The Mongo client, like all other Vertx modules, works asynchronously. That's why we need to use an AsyncResult handler to pass results from the repository object. To be able to pass a custom object as an AsyncResult, we have to annotate it with @DataObject and add a toJson method.

public AccountRepositoryImpl(final MongoClient client) {
   this.client = client;
}

@Override
public AccountRepository save(Account account, Handler<AsyncResult<Account>> resultHandler) {
   JsonObject json = JsonObject.mapFrom(account);
   client.save(Account.DB_TABLE, json, res -> {
      if (res.succeeded()) {
         LOGGER.info("Account created: {}", res.result());
         account.setId(res.result());
         resultHandler.handle(Future.succeededFuture(account));
      } else {
         LOGGER.error("Account not created", res.cause());
         resultHandler.handle(Future.failedFuture(res.cause()));
      }
   });
   return this;
}

@Override
public AccountRepository findAll(Handler<AsyncResult<List<Account>>> resultHandler) {
   client.find(Account.DB_TABLE, new JsonObject(), res -> {
      if (res.succeeded()) {
         List<Account> accounts = res.result().stream().map(it -> new Account(it.getString("_id"), it.getString("number"), it.getInteger("balance"), it.getString("customerId"))).collect(Collectors.toList());
         resultHandler.handle(Future.succeededFuture(accounts));
      } else {
         LOGGER.error("Account not found", res.cause());
         resultHandler.handle(Future.failedFuture(res.cause()));
      }
   });
   return this;
}
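The callback style used above is the core of programming with Vertx. To make it easier to follow, here is a plain-Java sketch of the pattern: the AsyncResult interface and helper factories below are simplified stand-ins I wrote for illustration, not the real Vertx API. The point is that a method never returns its value directly; it hands a success-or-failure wrapper to a single callback.

```java
import java.util.function.Consumer;

public class AsyncResultSketch {

    // Simplified stand-in for io.vertx.core.AsyncResult (illustration only)
    interface AsyncResult<T> {
        T result();
        Throwable cause();
        boolean succeeded();
    }

    static <T> AsyncResult<T> succeeded(T value) {
        return new AsyncResult<T>() {
            public T result() { return value; }
            public Throwable cause() { return null; }
            public boolean succeeded() { return true; }
        };
    }

    static <T> AsyncResult<T> failed(Throwable t) {
        return new AsyncResult<T>() {
            public T result() { return null; }
            public Throwable cause() { return t; }
            public boolean succeeded() { return false; }
        };
    }

    // A repository method in this style passes an AsyncResult to the
    // supplied handler when the work is done instead of returning a value.
    static void findById(String id, Consumer<AsyncResult<String>> handler) {
        if ("1".equals(id)) {
            handler.accept(succeeded("account-1"));
        } else {
            handler.accept(failed(new IllegalArgumentException("not found: " + id)));
        }
    }

    public static void main(String[] args) {
        findById("1", res -> {
            if (res.succeeded()) {
                System.out.println("Found: " + res.result());
            } else {
                System.out.println("Error: " + res.cause().getMessage());
            }
        });
        findById("2", res ->
            System.out.println(res.succeeded() ? "Found: " + res.result()
                                               : "Error: " + res.cause().getMessage()));
    }
}
```

The real Vertx Handler<AsyncResult<T>> signatures used in the repository above follow exactly this shape, with the failure branch carrying the cause instead of throwing it.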

Here’s Account model class.

@DataObject
public class Account {

   public static final String DB_TABLE = "account";

   private String id;
   private String number;
   private int balance;
   private String customerId;

   public Account() {

   }

   public Account(String id, String number, int balance, String customerId) {
      this.id = id;
      this.number = number;
      this.balance = balance;
      this.customerId = customerId;
   }

   public Account(JsonObject json) {
      this.id = json.getString("id");
      this.number = json.getString("number");
      this.balance = json.getInteger("balance");
      this.customerId = json.getString("customerId");
   }

   public String getId() {
      return id;
   }

   public void setId(String id) {
      this.id = id;
   }

   public String getNumber() {
      return number;
   }

   public void setNumber(String number) {
      this.number = number;
   }

   public int getBalance() {
      return balance;
   }

   public void setBalance(int balance) {
      this.balance = balance;
   }

   public String getCustomerId() {
      return customerId;
   }

   public void setCustomerId(String customerId) {
      this.customerId = customerId;
   }

   public JsonObject toJson() {
      return JsonObject.mapFrom(this);
   }

   @Override
   public String toString() {
      return Json.encodePrettily(this);
   }

}

Verticles

It is worth saying a few words about running an application written in Vertx. It is based on verticles. Verticles are chunks of code that get deployed and run by Vertx. By default, a Vertx instance maintains a number of event loop threads equal to twice the number of available CPU cores. When creating a verticle we have to extend the abstract class AbstractVerticle.


public class AccountServer extends AbstractVerticle {

   @Override
   public void start() throws Exception {
      ...
   }
}

I created two verticles per microservice: the first for the HTTP server and the second for communication with Mongo. Here's the main application method where I deploy the verticles.

public static void main(String[] args) throws Exception {
   Vertx vertx = Vertx.vertx();
   vertx.deployVerticle(new MongoVerticle());
   vertx.deployVerticle(new AccountServer());
}
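Conceptually, an event loop is a single thread draining a queue of handlers, which is why handler code must never block. The sketch below is only an analogy built on a plain single-threaded executor from the JDK, not Vertx internals, but it shows why two handlers deployed on the same loop run in order on one thread.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class EventLoopSketch {

    public static void main(String[] args) throws InterruptedException {
        // One thread processing queued tasks in order = a (very) simplified event loop
        ExecutorService eventLoop = Executors.newSingleThreadExecutor();

        eventLoop.submit(() -> System.out.println("handling request 1"));
        eventLoop.submit(() -> System.out.println("handling request 2"));
        // A blocking task submitted here would stall every handler queued
        // behind it - exactly what the "don't block the event loop" rule forbids.

        eventLoop.shutdown();
        eventLoop.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("done");
    }
}
```

In real Vertx, blocking work is instead handed off via executeBlocking or a worker verticle so the event loop stays responsive.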

Well, now we need to obtain, inside the AccountServer verticle, a reference to the service running on MongoVerticle. To achieve this we have to generate proxy classes using the vertx-codegen module.

<dependency>
   <groupId>io.vertx</groupId>
   <artifactId>vertx-service-proxy</artifactId>
   <version>${vertx.version}</version>
</dependency>
<dependency>
   <groupId>io.vertx</groupId>
   <artifactId>vertx-codegen</artifactId>
   <version>${vertx.version}</version>
   <scope>provided</scope>
</dependency>

First, annotate the repository interface with @ProxyGen and all public methods with @Fluent.

@ProxyGen
public interface AccountRepository {

   @Fluent
   AccountRepository save(Account account, Handler<AsyncResult<Account>> resultHandler);

   @Fluent
   AccountRepository findAll(Handler<AsyncResult<List<Account>>> resultHandler);

   @Fluent
   AccountRepository findById(String id, Handler<AsyncResult<Account>> resultHandler);

   @Fluent
   AccountRepository findByCustomer(String customerId, Handler<AsyncResult<List<Account>>> resultHandler);

   @Fluent
   AccountRepository remove(String id, Handler<AsyncResult<Void>> resultHandler);

   static AccountRepository createProxy(Vertx vertx, String address) {
      return new AccountRepositoryVertxEBProxy(vertx, address);
   }

   static AccountRepository create(MongoClient client) {
      return new AccountRepositoryImpl(client);
   }

}

The generator needs additional configuration inside the pom.xml file. After running mvn clean install on the parent project, all generated classes should be available under the src/main/generated directory for every microservice module.

<plugin>
   <groupId>org.apache.maven.plugins</groupId>
   <artifactId>maven-compiler-plugin</artifactId>
   <version>3.6.2</version>
   <configuration>
      <encoding>${project.build.sourceEncoding}</encoding>
      <source>${java.version}</source>
      <target>${java.version}</target>
      <useIncrementalCompilation>false</useIncrementalCompilation>
      <annotationProcessors>      
         <annotationProcessor>io.vertx.codegen.CodeGenProcessor</annotationProcessor>
      </annotationProcessors>
      <generatedSourcesDirectory>${project.basedir}/src/main/generated</generatedSourcesDirectory>
      <compilerArgs>
         <arg>-AoutputDirectory=${project.basedir}/src/main</arg>
      </compilerArgs>
   </configuration>
</plugin>

Now we are able to obtain an AccountRepository reference by calling createProxy with the account-service address.


AccountRepository repository = AccountRepository.createProxy(vertx, "account-service");
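The generated AccountRepositoryVertxEBProxy works on a principle similar to JDK dynamic proxies: calls on the interface are intercepted and turned into messages sent to an event-bus address. The snippet below uses the standard java.lang.reflect.Proxy to illustrate only that interception idea; the code actually generated by vertx-codegen is different, and the "reply" here is faked in the handler.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class ProxySketch {

    interface AccountRepository {
        String findById(String id);
    }

    public static void main(String[] args) {
        String address = "account-service"; // the event-bus address in the Vertx analogy

        // Every interface call lands in this handler, which could serialize
        // the method name and arguments into an event-bus message.
        InvocationHandler handler = (proxy, method, methodArgs) -> {
            System.out.println("send to '" + address + "': "
                    + method.getName() + "(" + methodArgs[0] + ")");
            return "account-" + methodArgs[0]; // pretend reply from the service
        };

        AccountRepository repository = (AccountRepository) Proxy.newProxyInstance(
                ProxySketch.class.getClassLoader(),
                new Class<?>[]{AccountRepository.class},
                handler);

        System.out.println("result: " + repository.findById("1"));
    }
}
```

The caller sees an ordinary AccountRepository, exactly as in the createProxy call above, while every invocation is really a message exchange underneath.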

Service Discovery with Consul

To use Vertx service discovery, we have to add the following dependencies to pom.xml. The first of them provides the mechanisms for built-in Vertx discovery, which is of limited use if we would like to invoke microservices running on different hosts. Fortunately, some additional bridges are also available, for example a Consul bridge.

<dependency>
   <groupId>io.vertx</groupId>
   <artifactId>vertx-service-discovery</artifactId>
   <version>${vertx.version}</version>
</dependency>
<dependency>
   <groupId>io.vertx</groupId>
   <artifactId>vertx-service-discovery-bridge-consul</artifactId>
   <version>${vertx.version}</version>
</dependency>

Great, we only have to declare the service discovery and register the service importer. Now we can retrieve configuration from Consul, but I assume we would also like to register our service. Unfortunately, problems start here… As the toolkit authors say, it (Vert.x) "does not export to Consul and does not support service modification". Maybe somebody can explain why this library cannot also export data to Consul – I just do not understand it. I had the same problem with Apache Camel some months ago, and I will use the same solution I developed at that time. Fortunately, Consul has a simple API for service registration and deregistration. To use it in our application we need to include the Vertx asynchronous HTTP client in our dependencies.

<dependency>
   <groupId>io.vertx</groupId>
   <artifactId>vertx-web-client</artifactId>
   <version>${vertx.version}</version>
</dependency>

Then using declared WebClient while starting the application we can register service by invoking the Consul PUT method.


WebClient client = WebClient.create(vertx);
...
JsonObject json = new JsonObject()
   .put("ID", "account-service-1")
   .put("Name", "account-service")
   .put("Address", "127.0.0.1")
   .put("Port", 2222)
   .put("Tags", new JsonArray().add("http-endpoint"));
client.put(discoveryConfig.getInteger("port"), discoveryConfig.getString("host"), "/v1/agent/service/register").sendJsonObject(json, res -> {
   LOGGER.info("Consul registration status: {}", res.result().statusCode());
});
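Since this is just a plain HTTP PUT against the Consul agent API, the same registration could be done with any client. The sketch below builds the /v1/agent/service/register payload by hand and prepares (but does not send) the request with the JDK 11 HttpClient; the host, port, and manual JSON string are placeholders for illustration, not the Vertx WebClient approach used in the post.

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class ConsulRegistrationSketch {

    // Build the same JSON document the Vertx example sends to Consul
    static String registrationJson(String id, String name, String address, int port) {
        return "{\"ID\":\"" + id + "\",\"Name\":\"" + name + "\",\"Address\":\"" + address
                + "\",\"Port\":" + port + ",\"Tags\":[\"http-endpoint\"]}";
    }

    // Not invoked below: would PUT the payload to a running Consul agent
    static HttpRequest registrationRequest(String consulHost, int consulPort, String json) {
        return HttpRequest.newBuilder()
                .uri(URI.create("http://" + consulHost + ":" + consulPort
                        + "/v1/agent/service/register"))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString(json))
                .build();
    }

    public static void main(String[] args) {
        String json = registrationJson("account-service-1", "account-service", "127.0.0.1", 2222);
        System.out.println(json);
        // With a Consul agent running:
        // HttpClient.newHttpClient().send(registrationRequest("localhost", 8500, json),
        //         HttpResponse.BodyHandlers.ofString());
    }
}
```

Deregistration is symmetric: a PUT to /v1/agent/service/deregister/{serviceId} with no body.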

Once account-service has registered itself on the discovery server, we can invoke it from another microservice – in this case from customer-service. We only have to create a ServiceDiscovery object and register the Consul service importer.


ServiceDiscovery discovery = ServiceDiscovery.create(vertx);
...
discovery.registerServiceImporter(new ConsulServiceImporter(), new JsonObject().put("host", discoveryConfig.getString("host")).put("port", discoveryConfig.getInteger("port")).put("scan-period", 2000));

Here’s AccountClient fragment, which is responsile for invoking GET /account/customer/{customerId} from account-service. It obtains service reference from discovery object and cast it to WebClient instance. I don’t know if you have noticed that apart from the standard fields such as ID, Name or Port, I also set the Tags field to the value of the type of service that we register. In this case it will be an http-endpoint. Whenever Vert.x reads data from Consul, it will be able to automatically assign a service reference to WebClient object.

public AccountClient findCustomerAccounts(String customerId, Handler<AsyncResult<List<Account>>> resultHandler) {
   discovery.getRecord(r -> r.getName().equals("account-service"), res -> {
      LOGGER.info("Result: {}", res.result().getType());
      ServiceReference ref = discovery.getReference(res.result());
      WebClient client = ref.getAs(WebClient.class);
      client.get("/account/customer/" + customerId).send(res2 -> {
         LOGGER.info("Response: {}", res2.result().bodyAsString());
         List<Account> accounts = res2.result().bodyAsJsonArray().stream().map(it -> Json.decodeValue(it.toString(), Account.class)).collect(Collectors.toList());
         resultHandler.handle(Future.succeededFuture(accounts));
      });
   });
   return this;
}

Configuration

The Vert.x Config module is responsible for configuration management within the application.


<dependency>
   <groupId>io.vertx</groupId>
   <artifactId>vertx-config</artifactId>
   <version>${vertx.version}</version>
</dependency>

There are many configuration stores that can be used as the configuration data location:

  • File
  • Environment Variables
  • HTTP
  • Event Bus
  • Git
  • Redis
  • Consul
  • Kubernetes
  • Spring Cloud Config Server

I selected the simplest one – file. But it can easily be changed just by defining another type on the ConfigStoreOptions object. ConfigRetriever is responsible for loading configuration data from the store. It reads the configuration as a JsonObject.

ConfigStoreOptions file = new ConfigStoreOptions().setType("file").setConfig(new JsonObject().put("path", "application.json"));
ConfigRetriever retriever = ConfigRetriever.create(vertx, new ConfigRetrieverOptions().addStore(file));
retriever.getConfig(conf -> {
   JsonObject discoveryConfig = conf.result().getJsonObject("discovery");
   vertx.createHttpServer().requestHandler(router::accept).listen(conf.result().getInteger("port"));
   JsonObject json = new JsonObject().put("ID", "account-service-1").put("Name", "account-service").put("Address", "127.0.0.1").put("Port", 2222).put("Tags", new JsonArray().add("http-endpoint"));
   client.put(discoveryConfig.getInteger("port"), discoveryConfig.getString("host"), "/v1/agent/service/register").sendJsonObject(json, res -> {
      LOGGER.info("Consul registration status: {}", res.result().statusCode());
   });
});

The configuration file application.json is available under src/main/resources, and it contains the application port and the service discovery and datasource addresses.

{
   "port" : 2222,
   "discovery" : {
      "host" : "192.168.99.100",
      "port" : 8500
   },
   "datasource" : {
      "host" : "192.168.99.100",
      "port" : 27017,
      "db_name" : "test"
   }
}
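A nice property of ConfigRetriever is that several stores can be added to the same ConfigRetrieverOptions, and they are scanned in registration order with later stores overriding earlier ones. The JDK has no built-in JSON parser, so the sketch below illustrates just that override rule with plain maps; it is an analogy of the merging behavior, not the Vert.x implementation.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ConfigMergeSketch {

    // Later stores override earlier ones, key by key
    static Map<String, Object> merge(List<Map<String, Object>> stores) {
        Map<String, Object> merged = new LinkedHashMap<>();
        for (Map<String, Object> store : stores) {
            merged.putAll(store);
        }
        return merged;
    }

    public static void main(String[] args) {
        Map<String, Object> fileStore = Map.of("port", 2222, "db_name", "test");
        Map<String, Object> envStore = Map.of("port", 8080); // e.g. an environment override

        Map<String, Object> config = merge(List.of(fileStore, envStore));
        System.out.println("port=" + config.get("port") + ", db_name=" + config.get("db_name"));
    }
}
```

This is what makes it practical to keep defaults in application.json and override single values per environment from another store.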

Final thoughts

Vertx authors wouldn’t like to define their solution as a framework but as a tool-kit. They don’t tell you what is a correct way to write an application, but only give you a lot of useful bricks helping to create your app. With Vertx you can create fast and lightweight APIs basing on non-blocking, asynchronous I/O. It gives you a lot of possibilities, as you can see on the Config module example, where you can even use Spring Cloud Config Server as a configuration store. But it is also not free from drawbacks, as I showed on the service registration with the Consul example. Vertx also allows to create reactive microservices with RxJava, what seems to be interesting option, I hope to describe in the future.

The post Asynchronous Microservices with Vertx appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2017/08/24/asynchronous-microservices-with-vert-x/feed/ 0 5625