Micronaut Archives - Piotr's TechBlog https://piotrminkowski.com/tag/micronaut/ Java, Spring, Kotlin, microservices, Kubernetes, containers Tue, 23 Feb 2021 10:51:04 +0000

Microservices with Micronaut, KrakenD and Consul https://piotrminkowski.com/2021/02/23/microservices-with-micronaut-krakend-and-consul/ Tue, 23 Feb 2021 10:50:58 +0000

The post Microservices with Micronaut, KrakenD and Consul appeared first on Piotr's TechBlog.

In this article, you will learn how to use the KrakenD API gateway with Consul DNS and Micronaut to build microservices. Micronaut is a modern JVM framework for building microservice and serverless applications. It provides built-in support for the most popular discovery servers, one of which is HashiCorp's Consul. We can also easily integrate Micronaut with Zipkin to implement distributed tracing. The only thing missing here is an API gateway tool, especially if we compare Micronaut with Spring Boot, where we can run Spring Cloud Gateway. Is it a problem? Of course not, since we can include a third-party API gateway in our system.

We will use KrakenD. Why? Although it is not the most popular API gateway tool, it is very interesting. First of all, it is fast and lightweight. Also, we can easily integrate it with Zipkin and Consul – which is exactly our goal in this article.

Source Code

In this article, I will use the source code from my previous article Guide to Microservices with Micronaut and Consul. Since it was written two years ago, I had to update the version of the Micronaut framework. Fortunately, the only other thing I had to change was the groupId of the micronaut-openapi artifact. After that change, everything worked perfectly fine.

If you would like to try it yourself, you may always take a look at my source code. In order to do that, you need to clone my GitHub repository. Then you should just follow my instructions 🙂

Architecture

Firstly, let's take a look at the architecture of our sample system. We have three microservices: employee-service, department-service, and organization-service. All of them are simple REST applications. The organization-service calls the API exposed by the department-service, while the department-service calls the API of the employee-service. They use Consul discovery to locate the network addresses of target microservices. They also send traces to Zipkin. Each application may be started in multiple instances. At the edge of our system, there is an API gateway – KrakenD. KrakenD integrates with Consul discovery through DNS. It also sends traces to Zipkin. The architecture is visible in the picture below.

krakend-consul-micronaut-architecture

Running Consul, Zipkin and microservices

In the first step, we are going to run Consul and Zipkin in Docker containers. The simplest way to start Consul is to run it in development mode. To do that, you should execute the following command. It is important to expose two ports: 8500 and 8600. The first of them is responsible for discovery, while the second one for DNS.

$ docker run -d --name=consul \
   -p 8500:8500 -p 8600:8600/udp \
   -e CONSUL_BIND_INTERFACE=eth0 consul

Then, we need to run Zipkin. Don’t forget to expose port 9411.

$ docker run -d --name=zipkin -p 9411:9411 openzipkin/zipkin

Finally, we can run each of our applications. They register themselves in Consul on startup and listen on randomly generated port numbers. Here's the common configuration for every single Micronaut application.

micronaut:
  server:
    port: -1
  router:
    static-resources:
      swagger:
        paths: classpath:META-INF/swagger
        mapping: /swagger/**
endpoints:
  info:
    enabled: true
    sensitive: false
consul:
  client:
    registration:
      enabled: true
tracing:
  zipkin:
    enabled: true
    http:
      url: http://localhost:9411
    sampler:
      probability: 1

Consul acts as a configuration server for the applications. We use Micronaut Config Client for fetching property sources on startup.

micronaut:
  application:
    name: employee-service
  config-client:
    enabled: true
consul:
  client:
    defaultZone: "localhost:8500"
    config:
      format: YAML

Using Micronaut framework

In order to expose a REST API and integrate with Consul and Zipkin, we need to include the following dependencies.

<dependencies>
        <dependency>
            <groupId>io.micronaut</groupId>
            <artifactId>micronaut-http-server-netty</artifactId>
        </dependency>
        <dependency>
            <groupId>io.micronaut</groupId>
            <artifactId>micronaut-tracing</artifactId>
        </dependency>
        <dependency>
            <groupId>io.zipkin.brave</groupId>
            <artifactId>brave-instrumentation-http</artifactId>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>io.zipkin.reporter2</groupId>
            <artifactId>zipkin-reporter</artifactId>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>io.opentracing.brave</groupId>
            <artifactId>brave-opentracing</artifactId>
        </dependency>
        <dependency>
            <groupId>io.micronaut</groupId>
            <artifactId>micronaut-discovery-client</artifactId>
        </dependency>
</dependencies>

The tracing headers (spans) are propagated across applications. Here's the endpoint in department-service that calls the GET /employees/department/{departmentId} endpoint exposed by employee-service.

@Get("/organization/{organizationId}/with-employees")
@ContinueSpan
public List<Department> findByOrganizationWithEmployees(@SpanTag("organizationId") Long organizationId) {
   LOGGER.info("Department find: organizationId={}", organizationId);
   List<Department> departments = repository.findByOrganization(organizationId);
   departments.forEach(d -> d.setEmployees(employeeClient.findByDepartment(d.getId())));
   return departments;
}

In order to call the employee-service endpoint, we use a Micronaut declarative REST client.

@Client(id = "employee-service", path = "/employees")
public interface EmployeeClient {
   @Get("/department/{departmentId}")
   List<Employee> findByDepartment(Long departmentId);	
}

Here’s the implementation of the GET /employees/department/{departmentId} endpoint inside employee-service. Micronaut propagates tracing spans between subsequent requests using the @ContinueSpan annotation.

@Get("/department/{departmentId}")
@ContinueSpan
public List<Employee> findByDepartment(@SpanTag("departmentId") Long departmentId) {
    LOGGER.info("Employees find: departmentId={}", departmentId);
    return repository.findByDepartment(departmentId);
}

Configure KrakenD Gateway and Consul DNS

We can configure KrakenD using JSON notation. Firstly, we need to define endpoints. The integration with Consul discovery is configured in the backend section. The host has to be the same as the DNS name of a downstream service in Consul. We also set a target URL (url_pattern) and the service discovery type (sd). Let's take a look at the list of endpoints for department-service. We expose methods for searching by id (GET /department/{id}), adding a new department (POST /department), and searching for all departments with a list of employees within a single organization (GET /department-with-employees/{organizationId}).

    {
      "endpoint": "/department/{id}",
      "method": "GET",
      "backend": [
        {
          "url_pattern": "/departments/{id}",
          "sd": "dns",
          "host": [
            "department-service.service.consul"
          ],
          "disable_host_sanitize": true
        }
      ]
    },
    {
      "endpoint": "/department-with-employees/{organizationId}",
      "method": "GET",
      "backend": [
        {
          "url_pattern": "/departments/organization/{organizationId}/with-employees",
          "sd": "dns",
          "host": [
            "department-service.service.consul"
          ],
          "disable_host_sanitize": true
        }
      ]
    },
    {
      "endpoint": "/department",
      "method": "POST",
      "backend": [
        {
          "url_pattern": "/departments",
          "sd": "dns",
          "host": [
            "department-service.service.consul"
          ],
          "disable_host_sanitize": true
        }
      ]
    }

It is also worth mentioning that we cannot create conflicting routes with KrakenD. For example, I couldn't define the endpoint GET /department/organization/{organizationId}/with-employees, because it would conflict with the already existing endpoint GET /department/{id}. To avoid this, I created a new context path /department-with-employees for my endpoint.
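For context, endpoint objects like the ones above live inside the top-level endpoints array of krakend.json. A minimal skeleton of the surrounding file might look like this (the version field and the api-gateway name are assumptions based on the KrakenD v1 configuration format; only the listening port 8080 comes from this article):

```json
{
  "version": 2,
  "name": "api-gateway",
  "port": 8080,
  "endpoints": [
    { "comment": "endpoint objects from this article go here" }
  ]
}
```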

Similarly, I created the following configuration for employee-service endpoints.

    {
      "endpoint": "/employee/{id}",
      "method": "GET",
      "backend": [
        {
          "url_pattern": "/employees/{id}",
          "sd": "dns",
          "host": [
            "employee-service.service.consul"
          ],
          "disable_host_sanitize": true
        }
      ]
    },
    {
      "endpoint": "/employee",
      "method": "POST",
      "backend": [
        {
          "url_pattern": "/employees",
          "sd": "dns",
          "host": [
            "employee-service.service.consul"
          ],
          "disable_host_sanitize": true
        }
      ]
    }

In order to integrate KrakenD with Consul, we need to configure local DNS properly on our machine. It was quite a challenging task for me since I'm not very familiar with networking topics. By default, Consul listens on port 8600 for DNS queries in the consul domain, while standard DNS is served on port 53. Therefore, we need to configure DNS forwarding for Consul service discovery. There are several ways to do that, and you may read more about it here. I chose the dnsmasq tool. Following the guide, we need to create a file, e.g. /etc/dnsmasq.d/10-consul, with the following single line.

server=/consul/127.0.0.1#8600

Finally, we need to start the dnsmasq service and add 127.0.0.1 to the list of nameservers. Here's my configuration of DNS servers.
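The DNS-server screenshot is not reproduced in this text version. On a typical systemd-based Linux distribution, the remaining steps could look roughly like this (a sketch; exact commands and resolver configuration differ per distribution):

```shell
# Restart dnsmasq so it picks up /etc/dnsmasq.d/10-consul
sudo systemctl restart dnsmasq

# Make 127.0.0.1 the first nameserver, e.g. by adding this line
# at the top of /etc/resolv.conf (or via your network manager):
#   nameserver 127.0.0.1
```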

Testing Consul DNS

Firstly, let’s run all our sample microservices. They are registered in Consul under the following names.

krakend-consul-services

I ran two instances of employee-service. Of course, all the applications are listening on randomly generated ports.

krakend-consul-instances

Finally, if you run the dig command with the DNS name of a service, you should get a response similar to the following. It means we may proceed to the last part of our exercise!
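The dig output screenshot is not reproduced in this text version. A query against Consul's DNS interface, using the service names from this article, can be sketched like this (requires the running Consul container from the earlier step):

```shell
# Ask Consul's DNS endpoint (port 8600) directly for the SRV records
# of employee-service; with dnsmasq forwarding in place, you can drop
# the @127.0.0.1 -p 8600 part and query the system resolver instead.
dig @127.0.0.1 -p 8600 employee-service.service.consul SRV
```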

Running KrakenD API Gateway

Before we run the KrakenD API gateway, let's configure one additional thing – the integration with Zipkin. To do that, we need to create the extra_config section. Enabling Zipkin only requires us to add the zipkin exporter in the opencensus module. We need to provide the URL (including port and path) where Zipkin accepts spans, and a service name for KrakenD spans. I have also enabled metrics. Here's the described part of the KrakenD configuration.

  "extra_config": {
    "github_com/devopsfaith/krakend-opencensus": {
      "sample_rate": 100,
      "reporting_period": 1,
      "exporters": {
        "zipkin": {
          "collector_url": "http://localhost:9411/api/v2/spans",
          "service_name": "api-gateway"
        }
      }
    },
    "github_com/devopsfaith/krakend-metrics": {
      "collection_time": "30s",
      "proxy_disabled": false,
      "listen_address": ":8090"
    }
  }

Finally, we can run KrakenD. The only parameter we need to pass is the location of the krakend.json configuration file. You may find the full version of that file in my GitHub repository inside the config directory.

$ krakend run -c krakend.json -d

Testing KrakenD with Consul and Micronaut

Once we have started all our microservices, Consul, Zipkin, and KrakenD, we may proceed to the tests. First, let's add some employees and departments by sending requests through the API gateway. KrakenD is listening on port 8080.

$ curl http://localhost:8080/employee -d '{"name":"John Smith","age":30,"position":"Architect","departmentId":1,"organizationId":1}' -H "Content-Type: application/json"
{"age":30,"departmentId":1,"id":1,"name":"John Smith","organizationId":1,"position":"Architect"}

$ curl http://localhost:8080/employee -d '{"name":"Paul Walker","age":22,"position":"Developer","departmentId":1,"organizationId":1}' -H "Content-Type: application/json"
{"age":22,"departmentId":1,"id":2,"name":"Paul Walker","organizationId":1,"position":"Developer"}

$ curl http://localhost:8080/employee -d '{"name":"Anna Hamilton","age":26,"position":"Developer","departmentId":2,"organizationId":1}' -H "Content-Type: application/json"
{"age":26,"departmentId":2,"id":3,"name":"Anna Hamilton","organizationId":1,"position":"Developer"}

$ curl http://localhost:8080/department -d '{"name":"Test1","organizationId":1}' -H "Content-Type:application/json"
{"id":1,"name":"Test1","organizationId":1}

$ curl http://localhost:8080/department -d '{"name":"Test2","organizationId":1}' -H "Content-Type:application/json"
{"id":2,"name":"Test2","organizationId":1}

Then let's call a slightly more complex endpoint: GET /department-with-employees/{organizationId}. As you probably remember, it is exposed by department-service and calls employee-service to fetch all employees assigned to a particular department.

$ curl http://localhost:8080/department-with-employees/1

However, we received a response with the HTTP 500 error code. We can find more details in the KrakenD logs.

krakend-consul-logs

KrakenD is unable to parse the JSON array returned as a response by department-service. Therefore, we need to declare it explicitly with "is_collection": true so that KrakenD can convert it to an object for further manipulation. Here's our current configuration for that endpoint.

    {
      "endpoint": "/department-with-employees/{organizationId}",
      "method": "GET",
      "backend": [
        {
          "url_pattern": "/departments/organization/{organizationId}/with-employees",
          "sd": "dns",
          "host": [
            "department-service.service.consul"
          ],
          "disable_host_sanitize": true,
          "is_collection": true
        }
      ]
    }

Now, let’s call the same endpoint once again. It works perfectly fine!

$ curl http://localhost:8080/department-with-employees/1   
{"collection":[{"id":1,"name":"Test1","organizationId":1},{"employees":[{"age":26,"id":3,"name":"Anna Hamilton","position":"Developer"}],"id":2,"name":"Test2","organizationId":1}]}

The last thing we can do here is check out the traces collected by Zipkin. Thanks to Micronaut's support for span propagation (@ContinueSpan), all the subsequent requests are grouped together.

The picture visible below shows Zipkin timeline for the latest request.

Conclusion

If you are looking for a gateway for your microservices, KrakenD is an excellent choice. Moreover, if you use Consul as a discovery server and Zipkin (or Jaeger) as a tracing tool, it is easy to get started with KrakenD. It also offers support for service discovery with Netflix Eureka, but that is a little bit more complicated to configure. Of course, you may also run KrakenD on Kubernetes (and integrate it with its discovery), which is an absolute "must-have" for a modern API gateway.

Micronaut OAuth2 and security with Keycloak https://piotrminkowski.com/2020/09/21/micronaut-oauth2-and-security-with-keycloak/ Mon, 21 Sep 2020 12:47:41 +0000

The post Micronaut OAuth2 and security with Keycloak appeared first on Piotr's TechBlog.

The Micronaut OAuth2 module supports both the authorization code grant and the password credentials grant flows. In this article, you will learn how to integrate your Micronaut application with an OAuth2 authorization server like Keycloak. We will implement the password credentials grant scenario with Micronaut OAuth2.

Before starting with Micronaut Security, you should learn the basics. Therefore, I suggest reading the article Micronaut Tutorial: Beans and scopes. After that, you may read about building REST-based applications in the article Micronaut Tutorial: Server application.

Source code

If you would like to try it yourself, you may always take a look at my source code. In order to do that, you need to clone my repository sample-micronaut-security. Then go to the sample-micronaut-oauth2 directory and just follow my instructions 🙂 If you are interested in more details about Micronaut Security, you should read the documentation.

Introduction to OAuth2 with Micronaut

Micronaut supports authentication with OAuth 2.0 servers, including the OpenID Connect standard. You can choose between available providers like Okta, Auth0, AWS Cognito, Keycloak, or Google. By default, Micronaut provides a login handler, which you can access by calling the POST /login endpoint. In that case, the Micronaut application tries to obtain an access token from the OAuth2 provider. The only thing you need to implement by yourself is a bean responsible for mapping an access token to the user details. After that, the Micronaut application returns a token to the caller. To clarify, you can take a look at the picture below.

micronaut-oauth2-login-architecture

Include Micronaut Security dependencies

In the first step, we need to include the Micronaut modules for REST, security, and OAuth2. Since Keycloak generates JWT tokens, we should also add the micronaut-security-jwt dependency. Of course, our application uses some other modules, but those five are required.

<dependency>
   <groupId>io.micronaut.security</groupId>
   <artifactId>micronaut-security</artifactId>
</dependency>
<dependency>
   <groupId>io.micronaut.security</groupId>
   <artifactId>micronaut-security-oauth2</artifactId>
</dependency>
<dependency>
   <groupId>io.micronaut.security</groupId>
   <artifactId>micronaut-security-jwt</artifactId>
</dependency>
<dependency>
   <groupId>io.micronaut</groupId>
   <artifactId>micronaut-http-server-netty</artifactId>
</dependency>
<dependency>
   <groupId>io.micronaut</groupId>
   <artifactId>micronaut-http-client</artifactId>
</dependency>

That’s not all that we need to configure in Maven pom.xml. In the next step, we have to enable annotation processing for the Micronaut Security module.

<plugin>
   <groupId>org.apache.maven.plugins</groupId>
   <artifactId>maven-compiler-plugin</artifactId>
   <version>3.8.1</version>
   <configuration>
      <source>${java.version}</source>
      <target>${java.version}</target>
      <compilerArgs>
         <arg>-parameters</arg>
      </compilerArgs>
      <annotationProcessorPaths>
         <path>
            <groupId>io.micronaut</groupId>
            <artifactId>micronaut-inject-java</artifactId>
            <version>${micronaut.version}</version>
         </path>
         <path>
            <groupId>io.micronaut.security</groupId>
            <artifactId>micronaut-security</artifactId>
         </path>
         <path>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <version>1.18.12</version>
         </path>
      </annotationProcessorPaths>
   </configuration>
   <executions>
      <execution>
         <id>test-compile</id>
         <goals>
            <goal>testCompile</goal>
         </goals>
         <configuration>
            <compilerArgs>
               <arg>-parameters</arg>
            </compilerArgs>
            <annotationProcessorPaths>
               <path>
                  <groupId>io.micronaut</groupId>
                  <artifactId>micronaut-inject-java</artifactId>
                  <version>${micronaut.version}</version>
               </path>
               <path>
                  <groupId>io.micronaut.security</groupId>
                  <artifactId>micronaut-security</artifactId>
               </path>
               <path>
                  <groupId>org.projectlombok</groupId>
                  <artifactId>lombok</artifactId>
                  <version>1.18.12</version>
               </path>
            </annotationProcessorPaths>
         </configuration>
      </execution>
   </executions>
</plugin>

Using Micronaut OAuth2 for securing endpoints

Let's discuss a typical implementation of a REST controller with Micronaut. Micronaut Security provides a set of annotations for setting permissions. We may use the JSR-250 annotations @PermitAll, @DenyAll, or @RolesAllowed. In addition, Micronaut provides the @Secured annotation, which also allows you to limit access to controllers and their methods. For example, we may use @Secured(SecurityRule.IS_ANONYMOUS) as a replacement for @PermitAll.

The controller class is very simple. I'm using two roles: admin and viewer. The third endpoint is allowed for all users.

@Controller("/secure")
@Secured(SecurityRule.IS_AUTHENTICATED)
public class SampleController {

   @Get("/admin")
   @Secured({"admin"})
   public String admin() {
      return "You are admin!";
   }

   @Get("/view")
   @Secured({"viewer"})
   public String view() {
      return "You are viewer!";
   }

   @Get("/anonymous")
   @Secured(SecurityRule.IS_ANONYMOUS)
   public String anonymous() {
      return "You are anonymous!";
   }
}

Running Keycloak

We are running Keycloak in a Docker container. By default, Keycloak exposes its API and a web console on port 8080. However, that port number must be different from the Micronaut application port, so we are overriding it with 8888. We also need to set a username and password for the admin console.

$ docker run -d --name keycloak -p 8888:8080 -e KEYCLOAK_USER=micronaut -e KEYCLOAK_PASSWORD=micronaut123 jboss/keycloak

Create client on Keycloak

First, we need to create a client with a given name. Let's say this name is micronaut. The client credentials are used during the authorization process. It is important to choose confidential in the "Access Type" section and enable the "Direct Access Grants" option.

micronaut-oauth2-client-id

Then we may switch to the “Credentials” tab, and copy the client secret.

Integration between Micronaut OAuth2 and Keycloak

In the next steps, we will use three HTTP endpoints exposed by Keycloak. The first of them, token_endpoint, allows you to generate new access tokens. The second endpoint, introspection_endpoint, is used to retrieve the active state of a token. In other words, you can use it to validate an access or refresh token. The third endpoint, jwks, allows you to validate JWT signatures.
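For reference, a password-grant request to Keycloak's token endpoint can be sketched with curl. The URL and client id come from the configuration in this article, while the client secret is a placeholder, and the test user credentials are the ones created later in this post; a running Keycloak on port 8888 is assumed:

```shell
curl -X POST http://localhost:8888/auth/realms/master/protocol/openid-connect/token \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "grant_type=password" \
  -d "client_id=micronaut" \
  -d "client_secret=<client-secret>" \
  -d "username=test_viewer" \
  -d "password=123456"
```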

We need to provide several configuration properties. In the first step, we set the login handler implementation to idtoken. We also enable the login controller with the micronaut.security.endpoints.login.enabled property. Of course, we need to provide the client id, client secret, and token endpoint address. The grant-type property enables the password credentials grant flow. All these configuration settings are required during the login action. After login, Micronaut Security returns a cookie with the JWT access token. We will use that cookie in subsequent requests. Micronaut uses the Keycloak JWKS endpoint to validate each token.

micronaut:
  application:
    name: sample-micronaut-oauth2
  security:
    authentication: idtoken
    endpoints:
      login:
        enabled: true
    redirect:
      login-success: /secure/anonymous
    token:
      jwt:
        enabled: true
        signatures.jwks.keycloak:
          url: http://localhost:8888/auth/realms/master/protocol/openid-connect/certs
    oauth2.clients.keycloak:
      grant-type: password
      client-id: micronaut
      client-secret: 7dd4d516-e06d-4d81-b5e7-3a15debacebf
      authorization:
        url: http://localhost:8888/auth/realms/master/protocol/openid-connect/auth
      token:
        url: http://localhost:8888/auth/realms/master/protocol/openid-connect/token
        auth-method: client-secret-post

With Micronaut Security, we need to provide an implementation of OauthUserDetailsMapper. It is responsible for transforming the TokenResponse into a UserDetails object. Our implementation of OauthUserDetailsMapper uses the Keycloak introspection endpoint. It validates an access token and returns information about the user. We need the username and roles.

@Getter
@Setter
public class KeycloakUser {
   private String email;
   private String username;
   private List<String> roles;
}

I'm using the Micronaut low-level HTTP client for communication with Keycloak. It needs to send the client credentials in the Authorization header. Keycloak validates the input token. The KeycloakUserDetailsMapper returns a UserDetails object that contains the username, the list of roles, and the token. The token should be set as the openIdToken attribute.

@Named("keycloak")
@Singleton
@Slf4j
public class KeycloakUserDetailsMapper implements OauthUserDetailsMapper {

   @Property(name = "micronaut.security.oauth2.clients.keycloak.client-id")
   private String clientId;
   @Property(name = "micronaut.security.oauth2.clients.keycloak.client-secret")
   private String clientSecret;

   @Client("http://localhost:8888")
   @Inject
   private RxHttpClient client;

   @Override
   public Publisher<UserDetails> createUserDetails(TokenResponse tokenResponse) {
      return Publishers.just(new UnsupportedOperationException());
   }

   @Override
   public Publisher<AuthenticationResponse> createAuthenticationResponse(
         TokenResponse tokenResponse, @Nullable State state) {
      Flowable<HttpResponse<KeycloakUser>> res = client
            .exchange(HttpRequest.POST("/auth/realms/master/protocol/openid-connect/token/introspect",
            "token=" + tokenResponse.getAccessToken())
            .contentType(MediaType.APPLICATION_FORM_URLENCODED)
            .basicAuth(clientId, clientSecret), KeycloakUser.class);
      return res.map(user -> {
         log.info("User: {}", user.body());
         Map<String, Object> attrs = new HashMap<>();
         attrs.put("openIdToken", tokenResponse.getAccessToken());
         return new UserDetails(user.body().getUsername(), user.body().getRoles(), attrs);
      });
   }

}

Creating users and roles on Keycloak

Our application uses two roles: viewer and admin. Therefore, we will create two test users on Keycloak. Each of them has a single role assigned. Here’s the full list of test users.

micronaut-oauth2-users

Of course, we also need to define roles. In the picture below, I highlighted the roles used by our application.

Before proceeding to the tests, we need to do one more thing. We have to edit the client scope responsible for displaying the list of roles. To do that, go to the "Client Scopes" section, and then find the roles scope. After editing it, switch to the "Mappers" tab. Finally, find and edit the "realm roles" entry. I highlighted it in the picture below. In the next section, I'll show you how Micronaut OAuth2 retrieves roles from the introspection endpoint.

keycloak-clientclaim

Testing Micronaut OAuth2 process

After starting the Micronaut application, we can call the POST /login endpoint. It expects a request in JSON format containing the username and password. Our test user is test_viewer with the password 123456. The Micronaut application sends a redirect to the site configured with the micronaut.security.redirect.login-success parameter. It also returns the JWT access token in the Set-Cookie header.

curl -v http://localhost:8080/login -H "Content-Type: application/json" -d "{\"username\":\"test_viewer\",\"password\": \"123456\"}"
*   Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 8080 (#0)
> POST /login HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.55.1
> Accept: */*
> Content-Type: application/json
> Content-Length: 47
>
* upload completely sent off: 47 out of 47 bytes
< HTTP/1.1 303 See Other
< Location: /secure/anonymous
< set-cookie: JWT=eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJBOUIweGhFckUtbk1nTmMxVUg5ZnU0ellNcFZncDRBc1dQNFgyVnk2ZnNjIn0.eyJleHAiOjE2MDA2OTE2ND
UsImlhdCI6MTYwMDY4OTg0NSwianRpIjoiNmQzMmJkMjMtMjIwOC00NDBjLTlmZTYtNGQ4NTBlOTdmMjQ1IiwiaXNzIjoiaHR0cDovL2xvY2FsaG9zdDo4ODg4L2F1dGgvcmVhbG1zL21hc3RlciIsIm
F1ZCI6ImFjY291bnQiLCJzdWIiOiJmNDE4MjhmNi1kNTk3LTQxY2ItOTA4MS00NmMyZDdhNGQ3NmIiLCJ0eXAiOiJCZWFyZXIiLCJhenAiOiJtaWNyb25hdXQiLCJzZXNzaW9uX3N0YXRlIjoiM2JjNz
c0YWMtZjk3OC00MzhhLTk3NDktMDY2ZTcwMmIyMzMzIiwiYWNyIjoiMSIsInJlc291cmNlX2FjY2VzcyI6eyJhY2NvdW50Ijp7InJvbGVzIjpbIm1hbmFnZS1hY2NvdW50IiwibWFuYWdlLWFjY291bn
QtbGlua3MiLCJ2aWV3LXByb2ZpbGUiXX19LCJzY29wZSI6ImVtYWlsIHByb2ZpbGUiLCJlbWFpbF92ZXJpZmllZCI6dHJ1ZSwicm9sZXMiOlsidmlld2VyIiwib2ZmbGluZV9hY2Nlc3MiLCJ1bWFfYX
V0aG9yaXphdGlvbiJdLCJwcmVmZXJyZWRfdXNlcm5hbWUiOiJ0ZXN0X3ZpZXdlciIsImVtYWlsIjoidGVzdF92aWV3ZXJAZXhhbXBsZS5jb20ifQ.bb5uiGe8jp5eaEs3ql_k_A56xBKzBaSduBbG0_s
olj82BGQ3d8wJp0LMqPe86gj4RvOEPQD31CetGM5T2c6AluvPkBw_5Bh_5ZyD28Ueh-TvmY76yoBYF2r__zCJh8yKKN78xTx0Qp_qRM6M6T57Ke9lOE0O87CmlWR8tUSzTE4azSOksxyX_PRW2jtE8GV
Un8SlJMyjgA5iYOhmbTsINSiMTtMEWk3ofAoYJquk6vis_ZG4_vTRYsKD1GQ-7Kk0Y7d1_l1YLhfOajgxrKMQm-QIovNS0aThgvijto4ibjHBm3HRigQAi3fbOJo9Yj8F9uXs-tdaKe6JZGGV_G0eCA;
 Max-Age=1799; Expires=Mon, 21 Sep 2020 12:34:04 GMT; Path=/; HTTPOnly
< Date: Mon, 21 Sep 2020 12:04:05 GMT
< connection: keep-alive
< transfer-encoding: chunked
<
* Connection #0 to host localhost left intact
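As a side note (not from the original post), the roles claim visible in such a token can be inspected by base64url-decoding the second, dot-separated segment of the JWT. A generic sketch with a synthetic, unsigned token (GNU coreutils base64 assumed):

```shell
# Decode the payload (second dot-separated segment) of a JWT.
# base64url uses '-' and '_' instead of '+' and '/' and may omit padding,
# hence the tr substitution and the padding loop below.
decode_jwt_part() {
  part=$(printf '%s' "$1" | tr '_-' '/+')
  while [ $(( ${#part} % 4 )) -ne 0 ]; do part="${part}="; done
  printf '%s' "$part" | base64 -d
}

# Build a synthetic (unsigned) token just to demonstrate the format.
header=$(printf '{"alg":"none"}' | base64 | tr -d '=\n')
payload=$(printf '{"roles":["viewer"]}' | base64 | tr -d '=\n')
JWT="$header.$payload.signature"

decode_jwt_part "$(printf '%s' "$JWT" | cut -d. -f2)"
# prints {"roles":["viewer"]}
```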

After receiving the login request, Micronaut OAuth2 calls the token endpoint on Keycloak. Then, after receiving the token from Keycloak, it invokes the KeycloakUserDetailsMapper bean. The mapper calls another Keycloak endpoint – this time the introspection endpoint. You can verify the further steps by looking at the application logs.

Once we have received the response with the access token, we can set it in the Cookie header. The token is valid for 1800 seconds. Here's the request to the GET /secure/view endpoint.

curl -v http://localhost:8080/secure/view -H "Cookie: JWT=..."
*   Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 8080 (#0)
> GET /secure/view HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.55.1
> Accept: */*
> Cookie: JWT=...
>
< HTTP/1.1 200 OK
< Date: Mon, 21 Sep 2020 12:25:14 GMT
< content-type: application/json
< content-length: 15
< connection: keep-alive
<
You are viewer!* 

We can also call the GET /secure/admin endpoint. Since it is not allowed for the test_viewer user, you will receive the HTTP 403 Forbidden response.

curl -v http://localhost:8080/secure/admin -H "Cookie: JWT=..."
*   Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 8080 (#0)
> GET /secure/admin HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.55.1
> Accept: */*
> Cookie: JWT=...
>
< HTTP/1.1 403 Forbidden
< connection: keep-alive
< transfer-encoding: chunked
 

The post Micronaut OAuth2 and security with Keycloak appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2020/09/21/micronaut-oauth2-and-security-with-keycloak/feed/ 2 8848
Local Java Development on Kubernetes https://piotrminkowski.com/2020/02/14/local-java-development-on-kubernetes/ https://piotrminkowski.com/2020/02/14/local-java-development-on-kubernetes/#comments Fri, 14 Feb 2020 10:06:59 +0000 http://piotrminkowski.com/?p=7706 There are many tools, which may simplify your local Java development on Kubernetes. For Java applications you may also take an advantage of integration between popular runtime frameworks and Kubernetes. In this article I’m going to present some of the available solutions. Skaffold Skaffold is a simple command-line tool that is able to handle the […]

The post Local Java Development on Kubernetes appeared first on Piotr's TechBlog.

]]>
There are many tools that may simplify your local Java development on Kubernetes. For Java applications you may also take advantage of the integration between popular runtime frameworks and Kubernetes. In this article I’m going to present some of the available solutions.

Skaffold

Skaffold is a simple command-line tool that handles the workflow of building, pushing and deploying your Java application on Kubernetes. It saves a lot of developer time by automating most of the work from source code to deployment, and it natively supports the most common image-building and application deployment strategies. Skaffold is an open-source project from Google.

It is not the only interesting tool from Google that helps with local development on Kubernetes. Another one, Jib, is dedicated to Java applications. It allows you to build optimized Docker and OCI images for your Java applications without a Docker daemon. It is available as a Maven or Gradle plugin, or just as a Java library. With Jib you do not need to maintain a Dockerfile or even run a Docker daemon, and it is able to take advantage of image layering and registry caching to achieve fast, incremental builds. To use Jib during the application build we just need to include it in the Maven pom.xml. We may easily customize the behaviour of the Jib Maven Plugin by using properties inside the configuration section, but for a standard Java application you can most likely use the default settings as shown below.

<plugin>
   <groupId>com.google.cloud.tools</groupId>
   <artifactId>jib-maven-plugin</artifactId>
   <version>1.8.0</version>
</plugin>

By default, Skaffold uses a Dockerfile when building the image with our application. We may customize this behaviour to use the Jib Maven Plugin instead. We do it by changing the Skaffold configuration file available in the project root directory – skaffold.yaml. We should also define the name of the generated Docker image and its tagging policy there.

apiVersion: skaffold/v2alpha1
kind: Config
build:
  artifacts:
    - image: piomin/department
      jib: {}
  tagPolicy:
    gitCommit: {}

If your Kubernetes deployment manifest is located inside the k8s directory and its name is deployment.yaml, you don’t have to provide any additional configuration. Here’s the structure of a sample project that fulfills Skaffold’s requirements.

local-java-development-kubernetes-skaffold
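Since the screenshot may not render here, the layout it depicts would look roughly like this – directory and file names follow the conventions described above, and the exact module name is an assumption:

```
department/
├── pom.xml
├── skaffold.yaml
├── k8s/
│   └── deployment.yaml
└── src/
    └── main/java/...
```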

Assuming you have successfully started a Minikube instance on your local machine, you just need to run the command skaffold dev in the project root directory. This command builds a Docker image with your application and deploys it on Minikube. After that it watches your source code and triggers a new build after every change in the filesystem. There are some parameters that may be used for customization. The option --port-forward runs kubectl port-forward for all the ports exposed outside the container. We may also disable the auto-build triggered after a file change and enable manual mode, which triggers a build on demand. That may be especially useful if you are using an autosave mode in an IDE like IntelliJ. The last option used in the command below, --no-prune, disables the removal of images, containers and deployments created by Skaffold.

$ skaffold dev --port-forward --trigger=manual --no-prune

Another useful Skaffold command during development is skaffold debug. It is very similar to skaffold dev, but configures the pipeline for debugging. For Java applications it runs a JDWP agent exposed outside the container on port 5005. Then you may easily connect to the agent, for example using your IDE.

$ skaffold debug --port-forward --no-prune

I think the best way to show Skaffold in action is on video. Here’s a 9-minute video that shows how to use Skaffold for local Java development, running and debugging a Spring Boot application on Kubernetes.

[wpvideo 3Op96XNi]

Cloud Code

Not every developer likes command-line tools. Here Google comes with GUI tools that may be easily installed as plugins in your IDE: IntelliJ or Visual Studio Code. This set of tools, called Cloud Code, helps you write, run, and debug cloud-native applications quickly and easily. Cloud Code uses Skaffold in the background, but hides it behind two buttons available in your Run Configurations (IntelliJ): Develop on Kubernetes and Run on Kubernetes.
Develop on Kubernetes runs Skaffold in the default notify mode, which triggers a build after every file change inside your project.

prez-3

Run on Kubernetes runs Skaffold in manual mode, which starts the build on demand after you click that button.

prez-2

Cloud Code offers some other useful features. It provides syntax auto-completion inside Kubernetes YAML manifests.

local-java-development-kubernetes-cloud-code

You may also display a graphical representation of your Kubernetes cluster as shown below.

prez-1

Dekorate

We have already discussed some interesting tools for automating the deployment process, from a change in the source code to a running application on a Kubernetes cluster (Minikube). Beginning with this section we will discuss libraries and extensions to popular JVM frameworks that help you speed up your Java development on Kubernetes. The first of them is Dekorate. Dekorate is a collection of compile-time generators and decorators of Kubernetes manifests. It makes generating and decorating Kubernetes manifests as simple as adding a dependency to your project, and it allows you to use the well-known Java annotation style to define Kubernetes resources used by your application. It provides integration with the Spring Boot and Quarkus frameworks.
To enable the integration for your Spring Boot application you just need to include the following dependency in your Maven pom.xml.


<dependency>
  <groupId>io.dekorate</groupId>
  <artifactId>kubernetes-spring-starter</artifactId>
  <version>0.10.10</version>
</dependency>

Now, if you build your application using the Maven command below, Dekorate analyzes your source code and generates Kubernetes manifests based on it.


$ mvn clean install -Ddekorate.build=true -Ddekorate.deploy=true

Besides analyzing the source code, Dekorate allows you to define Kubernetes resources using configuration files or annotations. The following code snippet shows how to set 2 replicas of your application, expose it outside the cluster as a route and refer to an existing ConfigMap on your OpenShift instance. You may also use @JvmOptions to set JVM parameters like maximum heap size.

@SpringBootApplication
@OpenshiftApplication(replicas = 2, expose = true, envVars = {
        @Env(name="sample-app-config", configmap = "sample-app-config")
})
@JvmOptions(xms = 128, xmx = 256, heapDumpOnOutOfMemoryError = true)
@EnableSwagger2
public class SampleApp {

    public static void main(String[] args) {
        SpringApplication.run(SampleApp.class, args);
    }
   
}

Of course, I presented only a small subset of the options offered by Dekorate. You can also define Kubernetes labels, annotations, secrets, volumes and many more. For more details about using Dekorate with OpenShift you may refer to one of my previous articles, Deploying Spring Boot Application on OpenShift with Dekorate.

Spring Cloud Kubernetes

If you are building your web applications on top of Spring Boot, you should consider using Spring Cloud Kubernetes for integration with Kubernetes. It provides Spring Cloud common interface implementations that consume Kubernetes native services via the master API. The main features of that project are:

  • Kubernetes PropertySource implementation including auto-reload of configuration after ConfigMap or Secret change
  • Kubernetes native discovery with DiscoveryClient implementation including multi-namespace discovery
  • Client side load balancing with Spring Cloud Netflix Ribbon
  • Pod health indicator

If you would like to use both the Spring Cloud Kubernetes Discovery and Config modules, you should include the following dependencies in your Maven pom.xml.

<dependency>
   <groupId>org.springframework.cloud</groupId>
   <artifactId>spring-cloud-starter-kubernetes-all</artifactId>
</dependency>   
<dependencyManagement>
   <dependencies>
      <dependency>
         <groupId>org.springframework.cloud</groupId>
         <artifactId>spring-cloud-dependencies</artifactId>
         <version>Hoxton.RELEASE</version>
         <type>pom</type>
         <scope>import</scope>
      </dependency>
   </dependencies>
</dependencyManagement>

After that, discovery and ConfigMap-based configuration are enabled. If you would also like to use Secret as a property source for the application, you need to enable it in bootstrap.yml.

spring:
  application:
    name: department
  cloud:
    kubernetes:
      secrets:
        enableApi: true

The name of the ConfigMap or Secret (the metadata.name field) should be the same as the application name to use them without any configuration customization. Here’s a sample ConfigMap for the department application. It is managed by Spring Cloud Kubernetes without the necessity to inject it into the Deployment manifest.

apiVersion: v1
kind: ConfigMap
metadata:
  name: department
data:
  application.yml: |-
    spring:
      cloud:
        kubernetes:
          discovery:
            all-namespaces: true
      data:
        mongodb:
          database: admin
          host: mongodb

The Spring Cloud Kubernetes Discovery and Ribbon integration allows you to use any Spring REST client to communicate with other services by name. Here’s an example of Spring Cloud OpenFeign usage.

@FeignClient(name = "employee")
public interface EmployeeClient {

   @GetMapping("/department/{departmentId}")
   List<Employee> findByDepartment(@PathVariable("departmentId") String departmentId);
   
}

Another useful Spring Cloud Kubernetes feature is the ability to reload configuration after a change in a ConfigMap or Secret. That’s a pretty amazing thing for a developer, because it is possible to refresh some beans without restarting the whole pod with the application. However, you should keep in mind that only configuration beans annotated with @ConfigurationProperties or @RefreshScope are reloaded. By default, this feature is disabled. To enable it you should use the following property.

spring:
  cloud:
    kubernetes:
      reload:
        enabled: true

For more details about Spring Cloud Kubernetes including source code examples you may refer to my previous article Microservices with Spring Cloud Kubernetes.

Micronaut

Similar to Spring Boot, Micronaut provides a library for integration with Kubernetes. In comparison to Spring Cloud Kubernetes, it additionally allows reading ConfigMaps and Secrets from mounted volumes and filtering services by name during discovery. To enable Kubernetes discovery for Micronaut applications we first need to include the following library in our Maven pom.xml.

<dependency>
    <groupId>io.micronaut.kubernetes</groupId>
    <artifactId>micronaut-kubernetes-discovery-client</artifactId>
</dependency>

This module also allows us to use Micronaut HTTP Client with discovery by service name.

@Client(id = "employee", path = "/employees")
public interface EmployeeClient {
 
    @Get("/department/{departmentId}")
    List<Employee> findByDepartment(Long departmentId);
 
}

You don’t have to include any additional library to enable the integration with the Kubernetes PropertySource, since it is provided in the Micronaut Config Client core library. You just need to enable it in the application’s bootstrap.yml. Unlike Spring Boot, Micronaut uses labels instead of metadata.name to match a ConfigMap or Secret with the application. After enabling the Kubernetes config client, the configuration auto-reload feature is also enabled. Here’s our bootstrap.yml file.

micronaut:
  application:
    name: department
  config-client:
    enabled: true
kubernetes:
  client:
    config-maps:
      labels:
        - app: department
    secrets:
      enabled: true
      labels:
        - app: department

Now, our ConfigMap also needs to be labeled with app=department.

apiVersion: v1
kind: ConfigMap
metadata:
  name: department
  labels:
    app: department
data:
  application.yaml: |-
    mongodb:
      collection: department
      database: admin
    kubernetes:
      client:
        discovery:
          includes:
            - employee

For more details about integration between Micronaut and Kubernetes you may refer to my article Guide to Micronaut Kubernetes.

The post Local Java Development on Kubernetes appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2020/02/14/local-java-development-on-kubernetes/feed/ 14 7706
Guide To Micronaut Kubernetes https://piotrminkowski.com/2020/01/07/guide-to-micronaut-kubernetes/ https://piotrminkowski.com/2020/01/07/guide-to-micronaut-kubernetes/#respond Tue, 07 Jan 2020 10:24:11 +0000 http://piotrminkowski.com/?p=7597 Micronaut provides a library that eases the development of applications deployed on Kubernetes or on a local single-node cluster like Minikube. The project Micronaut Kubernetes is relatively new in the Micronaut family, its current release version is 1.0.3. It allows you to integrate a Micronaut application with Kubernetes discovery, and use Micronaut Configuration Client to […]

The post Guide To Micronaut Kubernetes appeared first on Piotr's TechBlog.

]]>
Micronaut provides a library that eases the development of applications deployed on Kubernetes or on a local single-node cluster like Minikube. The project Micronaut Kubernetes is relatively new in the Micronaut family; its current release version is 1.0.3. It allows you to integrate a Micronaut application with Kubernetes discovery, and use the Micronaut Configuration Client to read Kubernetes ConfigMaps and Secrets as property sources. Additionally, it provides a health check indicator based on communication with the Kubernetes API.
Thanks to that module you can simplify and speed up your Micronaut application deployment on Kubernetes during development. In this article I’m going to show how to use Micronaut Kubernetes together with some other interesting tools to simplify local development with Minikube. The topics covered in this article are:

  • Using Skaffold together with the Jib Maven Plugin to automatically publish the application to Minikube after a source code change
  • Providing communication between applications using Micronaut HTTP Client based on the Kubernetes Endpoints name
  • Enabling Kubernetes ConfigMap and Secret as Micronaut Property Sources
  • Using application health check
  • Integrating application with MongoDB running on Minikube

Micronaut Kubernetes example on GitHub

The source code for the Micronaut Kubernetes example is, as usual, available on GitHub: https://github.com/piomin/sample-micronaut-kubernetes.git. Here’s the architecture of our example system, which consists of three microservices built on top of the Micronaut Framework.

guide-to-micronaut-kubernetes-architecture.png

Using Skaffold and Jib

Development with Minikube may be a little more complicated than the standard approach of testing an application locally without running it on the platform. First you need to build your application from source code, then build its Docker image and finally redeploy the application on Kubernetes using the newest image. Skaffold performs all these steps automatically for you. The only thing you need to do is install it on your machine and enable it for your Maven project using the command skaffold init, which creates a skaffold.yaml file in the root of the project. Of course, you can create such a manifest by yourself, especially if you would like to use Skaffold together with Jib. Here’s my skaffold.yaml manifest. We set the name of the Docker image, a tagging policy based on the Git commit id, and enable Jib.

apiVersion: skaffold/v2alpha1
kind: Config
build:
  artifacts:
    - image: piomin/employee
      jib: {}
  tagPolicy:
    gitCommit: {}

Why do we need Jib? By default, Skaffold is based on a Dockerfile, so each change would be published to Kubernetes only after the JAR file changes. With Jib, Skaffold watches for changes in the source code and automatically rebuilds your Maven project first.

<plugin>
   <groupId>com.google.cloud.tools</groupId>
   <artifactId>jib-maven-plugin</artifactId>
   <version>1.8.0</version>
</plugin>

Now you just need to run the command skaffold dev in the selected Maven project, and your application will be automatically deployed to Kubernetes on every change in the source code. Additionally, Skaffold applies the Kubernetes manifest files located in the k8s directory.

k8s

Implementation of Micronaut Kubernetes example

Let’s begin with the implementation. Each of our applications uses MongoDB as a backend store, and we are using the synchronous Java client for integration with it. Micronaut comes with the micronaut-mongo-reactive project that provides auto-configuration for both reactive and non-reactive drivers.

<dependency>
   <groupId>io.micronaut.configuration</groupId>
   <artifactId>micronaut-mongo-reactive</artifactId>
</dependency>
<dependency>
   <groupId>org.mongodb</groupId>
   <artifactId>mongo-java-driver</artifactId>
</dependency>

It is based on the mongodb.uri property and allows you to inject a preconfigured MongoClient bean. Then we use MongoClient for save and find operations. When using it we first need to set the current database and collection name. All required parameters – uri, database and collection – are taken from the external configuration.
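To make clear what that single mongodb.uri property carries, here is a plain-Java illustration splitting the sample connection string used later in this article into its parts (java.net.URI handles the mongodb:// scheme like any hierarchical URI; the class name is illustrative):

```java
import java.net.URI;

public class MongoUriParts {
    public static void main(String[] args) {
        // Sample connection string (the same one stored in the article's Secret)
        URI uri = URI.create("mongodb://micronaut:micronaut_123@mongodb:27017/admin");
        System.out.println(uri.getUserInfo()); // micronaut:micronaut_123 (credentials)
        System.out.println(uri.getHost());     // mongodb (the Kubernetes Service name)
        System.out.println(uri.getPort());     // 27017
        System.out.println(uri.getPath());     // /admin (the database)
    }
}
```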

@Singleton
public class EmployeeRepository {

   private MongoClient mongoClient;

   @Property(name = "mongodb.database")
   private String mongodbDatabase;
   @Property(name = "mongodb.collection")
   private String mongodbCollection;

   EmployeeRepository(MongoClient mongoClient) {
      this.mongoClient = mongoClient;
   }

   public Employee add(Employee employee) {
      employee.setId(repository().countDocuments() + 1);
      repository().insertOne(employee);
      return employee;
   }

   public Employee findById(Long id) {
      // Filter by id – the POJO codec maps the Employee id field to Mongo's _id
      return repository().find(Filters.eq("_id", id)).first();
   }

   public List<Employee> findAll() {
      final List<Employee> employees = new ArrayList<>();
      repository()
            .find()
            .iterator()
            .forEachRemaining(employees::add);
      return employees;
   }

   public List<Employee> findByDepartment(Long departmentId) {
      final List<Employee> employees = new ArrayList<>();
      repository()
            .find(Filters.eq("departmentId", departmentId))
            .iterator()
            .forEachRemaining(employees::add);
      return employees;
   }

   public List<Employee> findByOrganization(Long organizationId) {
      final List<Employee> employees = new ArrayList<>();
      repository()
            .find(Filters.eq("organizationId", organizationId))
            .iterator()
            .forEachRemaining(employees::add);
      return employees;
   }

   private MongoCollection<Employee> repository() {
      return mongoClient.getDatabase(mongodbDatabase).getCollection(mongodbCollection, Employee.class);
   }

}

Each application exposes REST endpoints for CRUD operations. Here’s controller implementation for employee-service.

@Controller("/employees")
public class EmployeeController {

   private static final Logger LOGGER = LoggerFactory.getLogger(EmployeeController.class);

   @Inject
   EmployeeRepository repository;

   @Post
   public Employee add(@Body Employee employee) {
      LOGGER.info("Employee add: {}", employee);
      return repository.add(employee);
   }

   @Get("/{id}")
   public Employee findById(Long id) {
      LOGGER.info("Employee find: id={}", id);
      return repository.findById(id);
   }

   @Get
   public List<Employee> findAll() {
      LOGGER.info("Employees find");
      return repository.findAll();
   }

   @Get("/department/{departmentId}")
   public List<Employee> findByDepartment(Long departmentId) {
      LOGGER.info("Employees find: departmentId={}", departmentId);
      return repository.findByDepartment(departmentId);
   }

   @Get("/organization/{organizationId}")
   public List<Employee> findByOrganization(Long organizationId) {
      LOGGER.info("Employees find: organizationId={}", organizationId);
      return repository.findByOrganization(organizationId);
   }

}

We may use Micronaut declarative HTTP client for communication with REST endpoints. We just need to create an interface annotated with @Client that declares calling methods.

@Client(id = "employee", path = "/employees")
public interface EmployeeClient {

   @Get("/department/{departmentId}")
   List<Employee> findByDepartment(Long departmentId);

}

Micronaut Kubernetes allows you to integrate Micronaut HTTP clients with Kubernetes discovery in order to use the name of a Kubernetes Endpoints object as the service id. The client is then injected into a controller. In the following code you may see the implementation of a controller in department-service that uses EmployeeClient.

@Controller("/departments")
public class DepartmentController {

   private static final Logger LOGGER = LoggerFactory.getLogger(DepartmentController.class);

   private DepartmentRepository repository;
   private EmployeeClient employeeClient;

   DepartmentController(DepartmentRepository repository, EmployeeClient employeeClient) {
      this.repository = repository;
      this.employeeClient = employeeClient;
   }

   @Post
   public Department add(@Body Department department) {
      LOGGER.info("Department add: {}", department);
      return repository.add(department);
   }

   @Get("/{id}")
   public Department findById(Long id) {
      LOGGER.info("Department find: id={}", id);
      return repository.findById(id);
   }

   @Get
   public List<Department> findAll() {
      LOGGER.info("Department find");
      return repository.findAll();
   }

   @Get("/organization/{organizationId}")
   public List<Department> findByOrganization(Long organizationId) {
      LOGGER.info("Department find: organizationId={}", organizationId);
      return repository.findByOrganization(organizationId);
   }

   @Get("/organization/{organizationId}/with-employees")
   public List<Department> findByOrganizationWithEmployees(Long organizationId) {
      LOGGER.info("Department find: organizationId={}", organizationId);
      List<Department> departments = repository.findByOrganization(organizationId);
      departments.forEach(d -> d.setEmployees(employeeClient.findByDepartment(d.getId())));
      return departments;
   }

}

Discovery with Micronaut Kubernetes

Using a serviceId for communication with the Micronaut HTTP Client requires integration with service discovery. Since we are running our applications on Kubernetes, we are going to use its built-in service registry. Here comes Micronaut Kubernetes: it integrates a Micronaut application with Kubernetes discovery via the Endpoints object. First, let’s add the required dependency.

<dependency>
   <groupId>io.micronaut.kubernetes</groupId>
   <artifactId>micronaut-kubernetes-discovery-client</artifactId>
</dependency>

In fact, we don’t have to do anything else, because adding this dependency is enough to enable the integration with Kubernetes discovery. We may proceed to the deployment. In the Kubernetes Service definition, the field metadata.name should be the same as the id field inside the @Client annotation.


apiVersion: v1
kind: Service
metadata:
  name: employee
  labels:
    app: employee
spec:
  ports:
    - port: 8080
      protocol: TCP
  selector:
    app: employee
  type: NodePort

Here’s the YAML deployment manifest for the employee service. The container is exposed on port 8080 and uses the latest tag of the piomin/employee image, which is set in the Skaffold manifest.


apiVersion: apps/v1
kind: Deployment
metadata:
  name: employee
  labels:
    app: employee
spec:
  replicas: 1
  selector:
    matchLabels:
      app: employee
  template:
    metadata:
      labels:
        app: employee
    spec:
      containers:
        - name: employee
          image: piomin/employee
          ports:
            - containerPort: 8080

We can also increase the log level to DEBUG for Kubernetes API client calls and for the whole Micronaut Kubernetes project. Here’s the relevant fragment of our logback.xml.

<logger name="io.micronaut.http.client" level="DEBUG"/>
<logger name="io.micronaut.kubernetes" level="DEBUG"/>

Micronaut Kubernetes Discovery additionally allows us to filter the list of registered services. We may define the list of included or excluded services using the property kubernetes.client.discovery.includes or kubernetes.client.discovery.excludes. This feature is useful when many services are registered in the same namespace. Here’s the list of services registered in the default namespace after deploying all our sample microservices and MongoDB.

guide-to-micronaut-kubernetes-services

Since department-service communicates only with employee-service, we may reduce the list of discovered services to employee only.


kubernetes:
  client:
    discovery:
      includes:
        - employee

Configuration Client

The Configuration Client reads Kubernetes ConfigMaps and Secrets and makes them available as PropertySources for your application. Since configuration parsing happens in the bootstrap phase, we need to define the following property in bootstrap.yml in order to enable the distributed configuration client.


micronaut:
  application:
    name: employee
  config-client:
    enabled: true

By default, the configuration client reads all the ConfigMaps and Secrets in the configured namespace. You can filter the list of ConfigMap names by defining kubernetes.client.config-maps.includes or kubernetes.client.config-maps.excludes. Alternatively, we may use Kubernetes labels, which give us more flexibility. This configuration also needs to be provided in the bootstrap phase. Reading Secrets is disabled by default, so we need to enable it as well. Here’s the configuration for department-service, which is similar for all the other apps.


kubernetes:
  client:
    config-maps:
      labels:
        - app: department
    secrets:
      enabled: true
      labels:
        - app: department

Kubernetes ConfigMap and Secret also need to be labeled with app=department.


apiVersion: v1
kind: ConfigMap
metadata:
  name: department
  labels:
    app: department
data:
  application.yaml: |-
    mongodb:
      collection: department
      database: admin
    kubernetes:
      client:
        discovery:
          includes:
            - employee

Here’s the Secret definition for department-service. We configure the mongodb.uri property there, since it contains sensitive data like the username and password. It is used by MongoClient for establishing a connection with the server.


apiVersion: v1
kind: Secret
metadata:
  name: department
  labels:
    app: department
type: Opaque
data:
  mongodb.uri: bW9uZ29kYjovL21pY3JvbmF1dDptaWNyb25hdXRfMTIzQG1vbmdvZGI6MjcwMTcvYWRtaW4=
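Remember that the data values in a Secret are base64-encoded, not encrypted. The value above decodes to the MongoDB connection string with credentials, which you can verify with a few lines of plain Java (the class name is illustrative):

```java
import java.util.Base64;
import java.nio.charset.StandardCharsets;

public class SecretValue {
    public static void main(String[] args) {
        // The base64 value from the Secret manifest above
        String encoded = "bW9uZ29kYjovL21pY3JvbmF1dDptaWNyb25hdXRfMTIzQG1vbmdvZGI6MjcwMTcvYWRtaW4=";
        String decoded = new String(Base64.getDecoder().decode(encoded), StandardCharsets.UTF_8);
        System.out.println(decoded); // mongodb://micronaut:micronaut_123@mongodb:27017/admin
    }
}
```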

Running sample applications

Before running any application in the default namespace we need to set the appropriate permissions. Micronaut Kubernetes requires read access to pods, endpoints, secrets, services and config maps. For development needs we may grant the highest level of permissions by creating a ClusterRoleBinding pointing to the cluster-admin role.

$ kubectl create clusterrolebinding admin --clusterrole=cluster-admin --serviceaccount=default:default
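The cluster-admin binding above is the quickest option for development. For anything longer-lived, a least-privilege alternative could grant only the read access listed above – the names below are illustrative and the resource list may need tuning for your cluster:

```
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: micronaut-reader
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["pods", "endpoints", "services", "configmaps", "secrets"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: micronaut-reader-binding
  namespace: default
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
roleRef:
  kind: Role
  name: micronaut-reader
  apiGroup: rbac.authorization.k8s.io
```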

One of the useful Skaffold features is the ability to print the standard output of a started container to the console. Thanks to that you don’t have to execute the command kubectl logs on a pod. Let’s take a closer look at the logs during application startup. After increasing the level of logging we may find some interesting information there, for example the client calls to the Kubernetes API. As you can see on the screen below, our application tries to find a ConfigMap and Secret with the label department, following the configuration provided in bootstrap.yml.

guide-to-micronaut-kubernetes-config.PNG

Let’s add some test data to our database by calling the endpoints exposed by our applications running on Kubernetes. Each of them is exposed outside the node thanks to the NodePort service type.

$ curl http://192.168.99.100:32356/employees -d '{"name":"John Smith","age":30,"position":"director","departmentId":2,"organizationId":2}' -H "Content-Type: application/json"
{"id":1,"organizationId":2,"departmentId":2,"name":"John Smith","age":30,"position":"director"}
$ curl http://192.168.99.100:32356/employees -d '{"name":"Paul Walker","age":50,"position":"director","departmentId":2,"organizationId":2}' -H "Content-Type: application/json"
{"id":2,"organizationId":2,"departmentId":2,"name":"Paul Walker","age":50,"position":"director"}
$ curl http://192.168.99.100:31144/departments -d '{"name":"Test2","organizationId":2}' -H "Content-Type: application/json"
{"id":2,"organizationId":2,"name":"Test2"}

Now we can test the HTTP communication between department-service and employee-service by calling GET /organization/{organizationId}/with-employees, which finds all departments with employees belonging to a given organization.

$ curl http://192.168.99.100:31144/departments/organization/2/with-employees
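Based on the test data inserted above, the response should look roughly like this – the field ordering and exact formatting may differ:

```
[{"id":2,"organizationId":2,"name":"Test2","employees":[
  {"id":1,"organizationId":2,"departmentId":2,"name":"John Smith","age":30,"position":"director"},
  {"id":2,"organizationId":2,"departmentId":2,"name":"Paul Walker","age":50,"position":"director"}
]}]
```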

Here’s the current list of endpoints registered in the default namespace.

guide-to-micronaut-kubernetes-endpoints

Let’s take a look at the Micronaut HTTP Client logs from department-service. As you can see below, when it calls the endpoint GET /employees/department/{departmentId} it finds the container under the IP 172.17.0.11.

guide-to-micronaut-kubernetes-client

Health checks

To enable health checks for Micronaut applications we first need to add the following dependency to Maven pom.xml.

<dependency>
    <groupId>io.micronaut</groupId>
    <artifactId>micronaut-management</artifactId>
</dependency>

The Micronaut management module provides a health check that probes communication with the Kubernetes API and shows some information about the pod and application. To enable the detailed view for unauthenticated users we need to set the following property.


endpoints:
  health:
    details-visible: ANONYMOUS

After that we get quite detailed information about the application, including the MongoDB connection status and the HTTP client status, as shown below. By default, the health check is available under the path /health.

guide-to-micronaut-kubernetes-health

Conclusion

Our Micronaut Kubernetes example integrates with the Kubernetes API in order to allow applications to read the components responsible for discovery and configuration. The integration between Micronaut HTTP Client and Kubernetes Endpoints, and between Micronaut Configuration Client and Kubernetes ConfigMaps or Secrets, is a useful feature. I’m looking forward to other interesting features that may be included in Micronaut Kubernetes, since it is a relatively new project within Micronaut. Before starting with this Micronaut Kubernetes example you should learn about Micronaut basics: Micronaut Tutorial – Beans and Scopes.

The post Guide To Micronaut Kubernetes appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2020/01/07/guide-to-micronaut-kubernetes/feed/ 0 7597
Micronaut Tutorial: Reactive https://piotrminkowski.com/2019/11/12/micronaut-tutorial-reactive/ https://piotrminkowski.com/2019/11/12/micronaut-tutorial-reactive/#respond Tue, 12 Nov 2019 10:17:00 +0000 https://piotrminkowski.wordpress.com/?p=7451 This is the fourth part of my tutorial to Micronaut Framework – created after a longer period of time. In this article I’m going to show you some examples of reactive programming on the server and client side. By default, Micronaut support to reactive APIs and streams is built on top of RxJava. If you […]

The post Micronaut Tutorial: Reactive appeared first on Piotr's TechBlog.

]]>
This is the fourth part of my tutorial on the Micronaut Framework, created after a longer break. In this article I’m going to show you some examples of reactive programming on both the server and the client side. By default, Micronaut’s support for reactive APIs and streams is built on top of RxJava. If you would like to read the previous parts of this tutorial before starting with this one, you can learn about the basics, security and server-side applications here:

Reactive programming has become increasingly popular, so newly created web frameworks tend to support it by default, and Micronaut is no exception. In this part of the tutorial you will learn how to:

  • Use the RxJava framework with Micronaut on the server and client side
  • Stream JSON over HTTP
  • Use the low-level HTTP client and the declarative HTTP client to retrieve reactive streams
  • Regulate back pressure on the server and client side
  • Test reactive APIs with JUnit

1. Dependencies

Support for reactive programming with RxJava is enabled by default in Micronaut. The io.reactivex.rxjava2:rxjava dependency is pulled in by core Micronaut libraries like micronaut-runtime and micronaut-http-server-netty on the server side, and by micronaut-http-client on the client side. So, the set of dependencies is the same as in the example from Part 2 of my tutorial, which described an approach to building a classic web application with the Micronaut Framework. Just to recap, here’s the list of the most important dependencies used in this tutorial:

<dependency>
   <groupId>io.micronaut</groupId>
   <artifactId>micronaut-inject</artifactId>
</dependency>
<dependency>
   <groupId>io.micronaut</groupId>
   <artifactId>micronaut-runtime</artifactId>
</dependency>
<dependency>
   <groupId>io.micronaut</groupId>
   <artifactId>micronaut-http-server-netty</artifactId>
</dependency>
<dependency>
   <groupId>io.micronaut</groupId>
   <artifactId>micronaut-management</artifactId>
</dependency>
<dependency>
   <groupId>io.micronaut</groupId>
   <artifactId>micronaut-http-client</artifactId>
   <scope>test</scope>
</dependency>
<dependency>
   <groupId>io.micronaut.test</groupId>
   <artifactId>micronaut-test-junit5</artifactId>
   <scope>test</scope>
</dependency>

2. Controller

Let’s begin with the server-side application and a controller implementation. There are some optional annotations for enabling validation (@Validated) and disabling security constraints (@Secured(SecurityRule.IS_ANONYMOUS)), both described in previous parts of this tutorial. However, the most important things in the controller implementation below are the RxJava types used in the return statements: Single, Maybe and Flowable. The rest of the implementation is pretty similar to a standard REST controller.

@Controller("/persons/reactive")
@Secured(SecurityRule.IS_ANONYMOUS)
@Validated
public class PersonReactiveController {

    private static final Logger LOGGER = LoggerFactory.getLogger(PersonReactiveController.class);

    List<Person> persons = new ArrayList<>();

    @Post
    public Single<Person> add(@Body @Valid Person person) {
        person.setId(persons.size() + 1);
        persons.add(person);
        return Single.just(person);
    }

    @Get("/{id}")
    public Maybe<Person> findById(@PathVariable Integer id) {
        // return an empty Maybe instead of throwing when the id is unknown
        return persons.stream()
                .filter(person -> person.getId().equals(id))
                .findAny()
                .map(Maybe::just)
                .orElseGet(Maybe::empty);
    }

    @Get(value = "/stream", produces = MediaType.APPLICATION_JSON_STREAM)
    public Flowable<Person> findAllStream() {
        return Flowable.fromIterable(persons).doOnNext(person -> LOGGER.info("Server: {}", person));
    }

}

In the implementation of controller visible above I used 3 of 5 available RxJava2 types that can be observed:

  • Single – an observable that emits exactly one item and then completes. Since it guarantees a single emission, it is meant for non-empty results.
  • Maybe – works very similarly to Single, with one important difference: it can complete without emitting a value. This is useful for optional results.
  • Flowable – emits a stream of elements and supports the back pressure mechanism.

Now, if you understand the meaning of the basic observable types in RxJava, the example controller should be pretty easy for you. We have three methods: add for adding a new element to the list, findById for searching by id, which may not return any element, and findAllStream, which emits all elements as a stream. The last method has to produce application/x-json-stream in order to take advantage of reactive streams on the client side as well. With that content type, events are retrieved continuously by the HTTP client thanks to the Flowable type.
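To see what that content type means on the wire, here’s a plain-Java sketch contrasting a regular JSON array response with a stream of standalone JSON documents. The newline separation and the helper names are my own illustration, not Micronaut code; the exact framing is an implementation detail of the server.

```java
import java.util.List;
import java.util.stream.Collectors;

public class JsonStreamShape {

    // A regular application/json response must be one complete document,
    // so the client has to wait for the closing ']' before parsing.
    static String asJsonArray(List<String> jsonObjects) {
        return "[" + String.join(",", jsonObjects) + "]";
    }

    // An application/x-json-stream response concatenates standalone JSON
    // documents (commonly newline-separated), so each one is parseable
    // the moment it arrives.
    static String asJsonStream(List<String> jsonObjects) {
        return jsonObjects.stream().collect(Collectors.joining("\n"));
    }

    public static void main(String[] args) {
        List<String> people = List.of(
                "{\"id\":1,\"name\":\"Name1\"}",
                "{\"id\":2,\"name\":\"Name2\"}");
        System.out.println(asJsonArray(people));
        System.out.println(asJsonStream(people));
    }
}
```

This is why jsonStream on the client side can hand each Person to the subscriber before the server finishes the whole response.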

3. Using Low-level Reactive HTTP Client

Micronaut offers two types of clients for accessing HTTP APIs: a low-level client and a declarative client. We can choose between HttpClient, RxHttpClient and RxStreamingHttpClient, the last one with support for streaming data over HTTP. The recommended way of obtaining a client reference is to inject it with the @Client annotation. However, the most suitable way inside a JUnit test is the static create method of RxHttpClient, since we may set the port number dynamically.
Here’s the implementation of JUnit tests with the low-level HTTP client. To read a Single or Maybe we use the retrieve method of RxHttpClient. After subscribing to an observable, I’m using the ConcurrentUnit library for handling asynchronous results in the test. To access the Flowable returned as application/x-json-stream on the server side, we need to use RxStreamingHttpClient. It provides the jsonStream method, which is dedicated to reading a non-blocking stream of JSON objects.

@MicronautTest
public class PersonReactiveControllerTests {

    private static final Logger LOGGER = LoggerFactory.getLogger(PersonReactiveControllerTests.class);

    @Inject
    EmbeddedServer server;

    @Test
    public void testAdd() throws MalformedURLException, TimeoutException, InterruptedException {
        final Waiter waiter = new Waiter();
        final Person person = new Person(null, "Name100", "Surname100", 22, Gender.MALE);
        RxHttpClient client = RxHttpClient.create(new URL("http://" + server.getHost() + ":" + server.getPort()));
        Single<Person> s = client.retrieve(HttpRequest.POST("/persons/reactive", person), Person.class).firstOrError();
        s.subscribe(person1 -> {
            LOGGER.info("Retrieved: {}", person1);
            waiter.assertNotNull(person1);
            waiter.assertNotNull(person1.getId());
            waiter.resume();
        });
        waiter.await(3000, TimeUnit.MILLISECONDS);
    }

    @Test
    public void testFindById() throws MalformedURLException, TimeoutException, InterruptedException {
        final Waiter waiter = new Waiter();
        RxHttpClient client = RxHttpClient.create(new URL("http://" + server.getHost() + ":" + server.getPort()));
        Maybe<Person> s = client.retrieve(HttpRequest.GET("/persons/reactive/1"), Person.class).firstElement();
        s.subscribe(person1 -> {
            LOGGER.info("Retrieved: {}", person1);
            waiter.assertNotNull(person1);
            waiter.assertEquals(1, person1.getId());
            waiter.resume();
        });
        waiter.await(3000, TimeUnit.MILLISECONDS);
    }

    @Test
    public void testFindAllStream() throws MalformedURLException, TimeoutException, InterruptedException {
        final Waiter waiter = new Waiter();
        RxStreamingHttpClient client = RxStreamingHttpClient.create(new URL("http://" + server.getHost() + ":" + server.getPort()));
        client.jsonStream(HttpRequest.GET("/persons/reactive/stream"), Person.class)
                .subscribe(s -> {
                    LOGGER.info("Client: {}", s);
                    waiter.assertNotNull(s);
                    waiter.resume();
                });
        waiter.await(3000, TimeUnit.MILLISECONDS, 9);
    }
   
}

4. Back Pressure

One of the main terms related to reactive programming is back pressure: resistance or force opposing the desired flow of data through software. In simple words, if a producer sends more events than a consumer is able to handle in a given period of time, the consumer should be able to regulate the frequency with which the producer emits events. Of course, Micronaut supports back pressure. Let’s analyze how we may control it in a Micronaut application with some simple examples.
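Before wiring this into Micronaut, the request(n) protocol itself can be illustrated with the JDK’s built-in Reactive Streams API (java.util.concurrent.Flow, Java 9+), which follows the same specification that RxJava’s Flowable implements. This is an illustrative sketch, not Micronaut code: the subscriber pulls exactly one element at a time, which is the same pattern the custom Subscriber later in this article uses.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;
import java.util.concurrent.TimeUnit;

public class BackPressureDemo {

    // Consumes a list through a Reactive Streams pipeline, requesting a
    // single element at a time: the publisher may only emit what was
    // requested, which is exactly how back pressure is expressed.
    public static List<Integer> consumeOneByOne(List<Integer> items) throws InterruptedException {
        List<Integer> received = new ArrayList<>();
        CountDownLatch done = new CountDownLatch(1);
        try (SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<>() {
                private Flow.Subscription subscription;

                @Override
                public void onSubscribe(Flow.Subscription s) {
                    subscription = s;
                    s.request(1); // ask for exactly one element to start
                }

                @Override
                public void onNext(Integer item) {
                    received.add(item);
                    subscription.request(1); // pull the next one only after processing
                }

                @Override
                public void onError(Throwable t) {
                    done.countDown();
                }

                @Override
                public void onComplete() {
                    done.countDown();
                }
            });
            items.forEach(publisher::submit);
        } // closing the publisher triggers onComplete
        done.await(3, TimeUnit.SECONDS);
        return received;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(consumeOneByOne(List.of(1, 2, 3, 4, 5))); // prints [1, 2, 3, 4, 5]
    }
}
```

The SubmissionPublisher buffers elements that have been submitted but not yet requested, which is the in-process analogue of the buffering the HTTP client performs in the Micronaut tests below.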
We may “somehow” control back pressure on the server side and also on the client side. However, the client is not able to regulate emission parameters on the server side, because of the TCP transport layer. Ok, now let’s implement additional methods in our sample controller. I’m using fromCallable for emitting elements within a Flowable stream. We produce 9 elements by calling the repeat method. The second newly created method is findAllStreamWithCallableDelayed, which additionally delays each element in the stream by 100 milliseconds. In this way, we can control back pressure on the server side.

@Controller("/persons/reactive")
@Secured(SecurityRule.IS_ANONYMOUS)
@Validated
public class PersonReactiveController {

    private static final Logger LOGGER = LoggerFactory.getLogger(PersonReactiveController.class);

    List<Person> persons = new ArrayList<>();
   
    @Get(value = "/stream/callable", produces = MediaType.APPLICATION_JSON_STREAM)
    public Flowable<Person> findAllStreamWithCallable() {
        return Flowable.fromCallable(() -> {
            int r = new Random().nextInt(100);
            Person p = new Person(r, "Name"+r, "Surname"+r, r, Gender.MALE);
            return p;
        }).doOnNext(person -> LOGGER.info("Server: {}", person))
                .repeat(9);
    }

    @Get(value = "/stream/callable/delayed", produces = MediaType.APPLICATION_JSON_STREAM)
    public Flowable<Person> findAllStreamWithCallableDelayed() {
        return Flowable.fromCallable(() -> {
            int r = new Random().nextInt(100);
            Person p = new Person(r, "Name"+r, "Surname"+r, r, Gender.MALE);
            return p;
        }).doOnNext(person -> LOGGER.info("Server: {}", person))
                .repeat(9).delay(100, TimeUnit.MILLISECONDS);
    }
   
}

We can also control back pressure on the client side. However, as I mentioned before, it does not have any effect on the emission on the server side. Now, let’s consider the following test, which verifies the delayed stream on the server side.

@Test
public void testFindAllStreamDelayed() throws MalformedURLException, TimeoutException, InterruptedException {
   final Waiter waiter = new Waiter();
   RxStreamingHttpClient client = RxStreamingHttpClient.create(new URL("http://" + server.getHost() + ":" + server.getPort()));
   client.jsonStream(HttpRequest.GET("/persons/reactive/stream/callable/delayed"), Person.class)
      .subscribe(s -> {
         LOGGER.info("Client: {}", s);
         waiter.assertNotNull(s);
         waiter.resume();
      });
   waiter.await(3000, TimeUnit.MILLISECONDS, 9);
}

Here’s the result of the test. Since the server delays every element by 100 milliseconds, the client receives and prints each element just after its emission.

micronaut-tut-4-1

For comparison, let’s consider the following test. This time we implement our own custom Subscriber, which requests a single element just after processing the previous one. The following test verifies the non-delayed stream.

@Test
public void testFindAllStreamWithCallable() throws MalformedURLException, TimeoutException, InterruptedException {
   final Waiter waiter = new Waiter();
   RxStreamingHttpClient client = RxStreamingHttpClient.create(new URL("http://" + server.getHost() + ":" + server.getPort()));
   client.jsonStream(HttpRequest.GET("/persons/reactive/stream/callable"), Person.class)
      .subscribe(new Subscriber<Person>() {

         Subscription s;

         @Override
         public void onSubscribe(Subscription subscription) {
            subscription.request(1);
            s = subscription;
         }

         @Override
         public void onNext(Person person) {
            LOGGER.info("Client: {}", person);
            waiter.assertNotNull(person);
            waiter.resume();
            s.request(1);
         }

         @Override
         public void onError(Throwable throwable) {

         }

         @Override
         public void onComplete() {

         }
      });
   waiter.await(3000, TimeUnit.MILLISECONDS, 9);
}

Here’s the result of our test. It finishes successfully, so the subscriber works properly. However, as you can see, the back pressure is controlled just on the client side: the server emits all the elements, and then the client retrieves them.

micronaut-tut-4-2

5. Declarative HTTP Client

Besides the low-level HTTP client, Micronaut allows you to create declarative clients via the @Client annotation. If you are familiar with, for example, the OpenFeign project, or you have read the second part of my Micronaut tutorial, you probably understand the concept of a declarative client. In fact, we just need to create an interface with methods similar to those defined inside the controller and annotate them properly. The client interface should be annotated with @Client as shown below. The interface is placed inside src/test/java since it is used only in JUnit tests.

@Client("/persons/reactive")
public interface PersonReactiveClient {

    @Post
    Single<Person> add(@Body Person person);

    @Get("/{id}")
    Maybe<Person> findById(@PathVariable Integer id);

    @Get(value = "/stream", produces = MediaType.APPLICATION_JSON_STREAM)
    Flowable<Person> findAllStream();

}

The declarative client needs to be injected into the test. Here are the same test methods as implemented for the low-level client, but now using the declarative client.

@MicronautTest
public class PersonReactiveControllerTests {

    private static final Logger LOGGER = LoggerFactory.getLogger(PersonReactiveControllerTests.class);

    @Inject
    EmbeddedServer server;
    @Inject
    PersonReactiveClient client;

    @Test
    public void testAddDeclarative() throws TimeoutException, InterruptedException {
        final Waiter waiter = new Waiter();
        final Person person = new Person(null, "Name100", "Surname100", 22, Gender.MALE);
        Single<Person> s = client.add(person);
        s.subscribe(person1 -> {
            LOGGER.info("Retrieved: {}", person1);
            waiter.assertNotNull(person1);
            waiter.assertNotNull(person1.getId());
            waiter.resume();
        });
        waiter.await(3000, TimeUnit.MILLISECONDS);
    }

    @Test
    public void testFindByIdDeclarative() throws TimeoutException, InterruptedException {
        final Waiter waiter = new Waiter();
        Maybe<Person> s = client.findById(1);
        s.subscribe(person1 -> {
            LOGGER.info("Retrieved: {}", person1);
            waiter.assertNotNull(person1);
            waiter.assertEquals(1, person1.getId());
            waiter.resume();
        });
        waiter.await(3000, TimeUnit.MILLISECONDS);
    }

    @Test
    public void testFindAllStreamDeclarative() throws MalformedURLException, TimeoutException, InterruptedException {
        final Waiter waiter = new Waiter();
        Flowable<Person> persons = client.findAllStream();
        persons.subscribe(s -> {
            LOGGER.info("Client: {}", s);
            waiter.assertNotNull(s);
            waiter.resume();
        });
        waiter.await(3000, TimeUnit.MILLISECONDS, 9);
    }

}

Source Code

We use the same repository as in the two previous parts of my Micronaut tutorial: https://github.com/piomin/sample-micronaut-applications.git.

The post Micronaut Tutorial: Reactive appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2019/11/12/micronaut-tutorial-reactive/feed/ 0 7451
Part 1: Testing Kafka Microservices With Micronaut https://piotrminkowski.com/2019/10/09/part-1-testing-kafka-microservices-with-micronaut/ https://piotrminkowski.com/2019/10/09/part-1-testing-kafka-microservices-with-micronaut/#respond Wed, 09 Oct 2019 09:08:26 +0000 https://piotrminkowski.wordpress.com/?p=7305 I have already described how to build microservices architecture entirely based on message-driven communication through Apache Kafka in one of my previous articles Kafka In Microservices With Micronaut. As you can see in the article title the sample applications and integration with Kafka has been built on top of Micronaut Framework. I described some interesting […]

The post Part 1: Testing Kafka Microservices With Micronaut appeared first on Piotr's TechBlog.

]]>
I have already described how to build a microservices architecture entirely based on message-driven communication through Apache Kafka in one of my previous articles, Kafka In Microservices With Micronaut. As the article title suggests, the sample applications and the integration with Kafka have been built on top of the Micronaut Framework. I described some interesting features of Micronaut that can be used for building message-driven microservices, but I didn’t write anything specifically about testing. In this article I’m going to show you examples of testing your Kafka microservices using Micronaut Test core features (component tests), Testcontainers (integration tests) and Pact (contract tests).

Generally, automated testing is one of the biggest challenges related to the microservices architecture. Therefore, the most popular microservice frameworks like Micronaut or Spring Boot provide some useful features for it. There are also dedicated tools that help you use Docker containers in your tests or provide mechanisms for verifying the contracts between different applications. For the demo applications in the current article I’m using the same repository as in the previous one: https://github.com/piomin/sample-kafka-micronaut-microservices.git.

Sample Architecture

The architecture of the sample applications was described in the previous article, but let me do a quick recap. We have 4 microservices: order-service, trip-service, driver-service and passenger-service. The implementation of these applications is very simple. All of them have in-memory storage and connect to the same Kafka instance.
The primary goal of our system is to arrange a trip for customers. The order-service application also acts as a gateway: it receives requests from customers, saves the history and sends events to the orders topic. All the other microservices listen on this topic and process orders sent by order-service. Each microservice also has its own dedicated topic, where it sends events with information about changes. These events are received by some of the other microservices. The architecture is presented in the picture below.

micronaut-kafka-1

Embedded Kafka – Component Testing with Micronaut

After this short description of the architecture we may proceed to the key point of this article – testing. Micronaut allows you to start an embedded Kafka instance for testing purposes. To do that, you should first add the following dependencies to your Maven pom.xml:

<dependency>
   <groupId>org.apache.kafka</groupId>
   <artifactId>kafka-clients</artifactId>
   <version>2.3.0</version>
   <classifier>test</classifier>
</dependency>
<dependency>
   <groupId>org.apache.kafka</groupId>
   <artifactId>kafka_2.12</artifactId>
   <version>2.3.0</version>
</dependency>
<dependency>
   <groupId>org.apache.kafka</groupId>
   <artifactId>kafka_2.12</artifactId>
   <version>2.3.0</version>
   <classifier>test</classifier>
</dependency>

To enable embedded Kafka for a test class we have to set the property kafka.embedded.enabled to true. Because I normally run Kafka in a Docker container, available by default at the address 192.168.99.100, I also need to dynamically change the value of the property kafka.bootstrap.servers to localhost:9092 for the given test. The test class uses embedded Kafka to verify three basic scenarios for order-service: sending orders with a new trip, and receiving orders for trip cancellation and completion from other microservices. Here’s the full code of my OrderKafkaEmbeddedTest.

@MicronautTest
@Property(name = "kafka.embedded.enabled", value = "true")
@Property(name = "kafka.bootstrap.servers", value = "localhost:9092")
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
@TestInstance(TestInstance.Lifecycle.PER_CLASS)
public class OrderKafkaEmbeddedTest {

    private static final Logger LOGGER = LoggerFactory.getLogger(OrderKafkaEmbeddedTest.class);

    @Inject
    OrderClient client;
    @Inject
    OrderInMemoryRepository repository;
    @Inject
    OrderHolder orderHolder;
    @Inject
    KafkaEmbedded kafkaEmbedded;

    @BeforeAll
    public void init() {
        LOGGER.info("Topics: {}", kafkaEmbedded.getKafkaServer().get().zkClient().getAllTopicsInCluster());
    }

    @Test
    @org.junit.jupiter.api.Order(1)
    public void testAddNewTripOrder() throws InterruptedException {
        Order order = new Order(OrderType.NEW_TRIP, 1L, 50, 30);
        order = repository.add(order);
        client.send(order);
        Order orderSent = waitForOrder();
        Assertions.assertNotNull(orderSent);
        Assertions.assertEquals(order.getId(), orderSent.getId());
    }

    @Test
    @org.junit.jupiter.api.Order(2)
    public void testCancelTripOrder() throws InterruptedException {
        Order order = new Order(OrderType.CANCEL_TRIP, 1L, 50, 30);
        client.send(order);
        Order orderReceived = waitForOrder();
        Optional<Order> oo = repository.findById(1L);
        Assertions.assertTrue(oo.isPresent());
        Assertions.assertEquals(OrderStatus.REJECTED, oo.get().getStatus());
    }

    @Test
    @org.junit.jupiter.api.Order(3)
    public void testPaymentTripOrder() throws InterruptedException {
        Order order = new Order(OrderType.PAYMENT_PROCESSED, 1L, 50, 30);
        order.setTripId(1L);
        order = repository.add(order);
        client.send(order);
        Order orderSent = waitForOrder();
        Optional<Order> oo = repository.findById(order.getId());
        Assertions.assertTrue(oo.isPresent());
        Assertions.assertEquals(OrderStatus.COMPLETED, oo.get().getStatus());
    }

    private Order waitForOrder() throws InterruptedException {
        Order orderSent = null;
        for (int i = 0; i < 10; i++) {
            orderSent = orderHolder.getCurrentOrder();
            if (orderSent != null)
                break;
            Thread.sleep(1000);
        }
        orderHolder.setCurrentOrder(null);
        return orderSent;
    }

}
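As an alternative to the @Property annotations shown above, the same settings can be placed in src/test/resources/application-test.yml, which Micronaut picks up because @MicronautTest activates the test environment. Note this is a sketch, and it applies the embedded broker to every test in the module rather than a single class:

```yaml
# Test-environment configuration (src/test/resources/application-test.yml):
# starts the embedded Kafka broker and points clients at it.
kafka:
  embedded:
    enabled: true
  bootstrap:
    servers: localhost:9092
```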

At this stage some things require clarification – especially the mechanism for verifying that messages are sent and received. I’ll describe it using the example of driver-service. When a message arrives on the orders topic, it is received by OrderListener, which is annotated with @KafkaListener as shown below. It checks the order type and forwards NEW_TRIP orders to the DriverService bean.

@KafkaListener(groupId = "driver")
public class OrderListener {

    private static final Logger LOGGER = LoggerFactory.getLogger(OrderListener.class);

    private DriverService service;

    public OrderListener(DriverService service) {
        this.service = service;
    }

    @Topic("orders")
    public void receive(@Body Order order) {
        LOGGER.info("Received: {}", order);
        switch (order.getType()) {
            case NEW_TRIP -> service.processNewTripOrder(order);
        }
    }
}

The DriverService processes the order. It tries to find the driver located closest to the customer, changes the found driver’s status to unavailable, and sends an event with the current driver state.

@Singleton
public class DriverService {

    private static final Logger LOGGER = LoggerFactory.getLogger(DriverService.class);

    private DriverClient client;
    private OrderClient orderClient;
    private DriverInMemoryRepository repository;

    public DriverService(DriverClient client, OrderClient orderClient, DriverInMemoryRepository repository) {
        this.client = client;
        this.orderClient = orderClient;
        this.repository = repository;
    }

    public void processNewTripOrder(Order order) {
        LOGGER.info("Processing: {}", order);
        Optional<Driver> driver = repository.findNearestDriver(order.getCurrentLocationX(), order.getCurrentLocationY());
        if (driver.isPresent()) {
            Driver driverLocal = driver.get();
            driverLocal.setStatus(DriverStatus.UNAVAILABLE);
            repository.updateDriver(driverLocal);
            client.send(driverLocal, String.valueOf(order.getId()));
            LOGGER.info("Message sent: {}", driverLocal);
        }
    }
   
   // OTHER METHODS ...
}

To verify that the final message with the change notification has been sent to the drivers topic, we have to create our own listener for test purposes. It receives the message and writes it to a @Singleton holder class, which is then accessed by the single-threaded test class. The described process is visualized in the picture below.
kafka-micronaut-testing-1.png
Here’s the implementation of the test listener, which is responsible just for receiving the message sent to the drivers topic and writing it to the DriverHolder bean.

@KafkaListener(groupId = "driverTest")
public class DriverConfirmListener {

   private static final Logger LOGGER = LoggerFactory.getLogger(DriverConfirmListener.class);

   @Inject
   DriverHolder driverHolder;

   @Topic("drivers")
   public void receive(@Body Driver driver) {
      LOGGER.info("Confirmed: {}", driver);
      driverHolder.setCurrentDriver(driver);
   }

}

Here’s the implementation of DriverHolder class.

@Singleton
public class DriverHolder {

   private Driver currentDriver;

   public Driver getCurrentDriver() {
      return currentDriver;
   }

   public void setCurrentDriver(Driver currentDriver) {
      this.currentDriver = currentDriver;
   }

}

No matter whether you are using embedded Kafka, Testcontainers or a manually started Docker container, you can use the verification mechanism described above.
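The waiting loops used in these tests all follow one pattern: poll a holder until it yields a non-null value or give up. A generic sketch of that helper is shown below; the names are my own, not taken from the repository.

```java
import java.util.function.Supplier;

public class PollingHelper {

    // Repeatedly reads from the source until it returns a non-null value,
    // or returns null after the given number of attempts. This mirrors
    // the waitForOrder/driverHolder loops in the test classes.
    static <T> T awaitNonNull(Supplier<T> source, int attempts, long sleepMillis)
            throws InterruptedException {
        for (int i = 0; i < attempts; i++) {
            T value = source.get();
            if (value != null) {
                return value;
            }
            Thread.sleep(sleepMillis);
        }
        return null;
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulated holder that becomes non-null after roughly 50 ms.
        long start = System.currentTimeMillis();
        String result = awaitNonNull(
                () -> System.currentTimeMillis() - start > 50 ? "order-1" : null,
                10, 20);
        System.out.println(result); // prints order-1
    }
}
```

In a real test suite a library such as Awaitility serves the same purpose, but a ten-line loop keeps the examples dependency-free.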

Kafka with Testcontainers

We will use the Testcontainers framework to run Docker containers with Zookeeper and Kafka during JUnit tests. Testcontainers is a Java library that provides lightweight, throwaway instances of common databases, Selenium web browsers, or anything else that can run in a Docker container. To use it in your project together with JUnit 5, which is already used in our sample Micronaut application, you have to add the following dependencies to your Maven pom.xml:

<dependency>
   <groupId>org.testcontainers</groupId>
   <artifactId>kafka</artifactId>
   <version>1.12.2</version>
   <scope>test</scope>
</dependency>
<dependency>
   <groupId>org.testcontainers</groupId>
   <artifactId>junit-jupiter</artifactId>
   <version>1.12.2</version>
   <scope>test</scope>
</dependency>

The declared library org.testcontainers:kafka:1.12.2 provides the KafkaContainer class, which allows you to define and start a Kafka container with embedded Zookeeper in your tests. However, I decided to use the GenericContainer class and run two containers: wurstmeister/zookeeper and wurstmeister/kafka. Because Kafka needs to communicate with Zookeeper, both containers should run in the same network. We also have to override the Zookeeper container’s name and hostname to allow Kafka to call it by hostname.
When running the Kafka container we need to set some important environment variables. The variable KAFKA_ADVERTISED_HOST_NAME sets the hostname under which Kafka is visible to external clients, and KAFKA_ZOOKEEPER_CONNECT sets the Zookeeper lookup address. Although it is not recommended, we disable dynamic port exposure by binding the static port number 9092 to the container’s port 9092. That helps us avoid some problems with setting the Kafka advertised port and injecting it into the Micronaut configuration.

@MicronautTest
@Testcontainers
@TestInstance(TestInstance.Lifecycle.PER_CLASS)
public class OrderKafkaContainerTest {

    private static final Logger LOGGER = LoggerFactory.getLogger(OrderKafkaContainerTest.class);

    static Network network = Network.newNetwork();

   @Container
   public static final GenericContainer ZOOKEEPER = new GenericContainer("wurstmeister/zookeeper")
      .withCreateContainerCmdModifier(it -> ((CreateContainerCmd) it).withName("zookeeper").withHostName("zookeeper"))
      .withExposedPorts(2181)
      .withNetworkAliases("zookeeper")
      .withNetwork(network);

   @Container
   public static final GenericContainer KAFKA_CONTAINER = new GenericContainer("wurstmeister/kafka")
      .withCreateContainerCmdModifier(it -> ((CreateContainerCmd) it).withName("kafka").withHostName("kafka")
         .withPortBindings(new PortBinding(Ports.Binding.bindPort(9092), new ExposedPort(9092))))
      .withExposedPorts(9092)
      .withNetworkAliases("kafka")
      .withEnv("KAFKA_ADVERTISED_HOST_NAME", "192.168.99.100")
      .withEnv("KAFKA_ZOOKEEPER_CONNECT", "zookeeper:2181")
      .withNetwork(network);
      
   // TESTS ...
   
}

The test scenarios may be the same as for embedded Kafka, or we may attempt to define some more advanced integration tests. To do that, we first create a Docker image of every microservice during the build. We can use io.fabric8:docker-maven-plugin for that. Here’s the example for driver-service.

<plugin>
   <groupId>io.fabric8</groupId>
   <artifactId>docker-maven-plugin</artifactId>
   <version>0.31.0</version>
   <configuration>
      <images>
         <image>
            <name>piomin/driver-service:${project.version}</name>
            <build>
               <dockerFile>${project.basedir}/Dockerfile</dockerFile>
               <tags>
                  <tag>latest</tag>
                  <tag>${project.version}</tag>
               </tags>
            </build>
         </image>
      </images>
   </configuration>
   <executions>
      <execution>
         <id>start</id>
         <phase>pre-integration-test</phase>
         <goals>
            <goal>build</goal>
            <goal>start</goal>
         </goals>
      </execution>
      <execution>
         <id>stop</id>
         <phase>post-integration-test</phase>
         <goals>
            <goal>stop</goal>
         </goals>
      </execution>
   </executions>
</plugin>

Once we have a Docker image of every microservice, we can easily run it with Testcontainers during our integration tests. In the fragment of the test class visible below, I’m running the container with driver-service in addition to the Kafka and Zookeeper containers. The test is implemented inside order-service. We build the same scenario as in the test with embedded Kafka – sending the NEW_TRIP order. But this time we verify whether the message has been received and processed by driver-service. This verification is performed by listening for notification events sent by driver-service (started in a Docker container) to the drivers topic. Normally, order-service does not listen for messages incoming on the drivers topic, but we created such an integration just for the purpose of the integration test.

@Container
public static final GenericContainer DRIVER_CONTAINER = new GenericContainer("piomin/driver-service")
   .withNetwork(network);

@Inject
OrderClient client;
@Inject
OrderInMemoryRepository repository;
@Inject
DriverHolder driverHolder;

@Test
@org.junit.jupiter.api.Order(1)
public void testNewTrip() throws InterruptedException {
   Order order = new Order(OrderType.NEW_TRIP, 1L, 50, 30);
   order = repository.add(order);
   client.send(order);
   Driver driverReceived = null;
   for (int i = 0; i < 10; i++) {
      driverReceived = driverHolder.getCurrentDriver();
      if (driverReceived != null)
         break;
      Thread.sleep(1000);
   }
   driverHolder.setCurrentDriver(null);
   Assertions.assertNotNull(driverReceived);
}
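The loop above polls driverHolder until a notification arrives or ten attempts pass. That wait-for-async-result pattern can be extracted into a small helper; below is a minimal, framework-free sketch (the AwaitUtil class and awaitNonNull name are illustrative, not part of the sample project):

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

public class AwaitUtil {

    // Polls the supplier until it returns a non-null value or the attempts run out.
    static <T> T awaitNonNull(Supplier<T> supplier, int maxAttempts, long sleepMillis)
            throws InterruptedException {
        for (int i = 0; i < maxAttempts; i++) {
            T value = supplier.get();
            if (value != null)
                return value;
            Thread.sleep(sleepMillis);
        }
        return null;
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicReference<String> holder = new AtomicReference<>();
        // Simulate an asynchronous consumer filling the holder after a short delay,
        // like the Kafka listener filling DriverHolder in the test above.
        new Thread(() -> {
            try { Thread.sleep(200); } catch (InterruptedException ignored) { }
            holder.set("driver-1");
        }).start();
        String received = awaitNonNull(holder::get, 10, 100);
        System.out.println(received);
    }
}
```

Replacing the hand-written loop with such a helper (or with a library like Awaitility) keeps the timeout logic in one place.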

Summary

In this article, I have described an approach to component testing of Kafka microservices with embedded Kafka and Micronaut, as well as integration testing with Docker and Testcontainers. This is the first part of the article; in the second part I'm going to show you how to build contract tests for Micronaut applications with Pact.

The post Part 1: Testing Kafka Microservices With Micronaut appeared first on Piotr's TechBlog.

Kafka In Microservices With Micronaut https://piotrminkowski.com/2019/08/06/kafka-in-microservices-with-micronaut/ Tue, 06 Aug 2019 07:14:19 +0000

Today we are going to build an example of microservices that communicate with each other asynchronously through Apache Kafka topics. We use the Micronaut Framework, which provides a dedicated library for integration with Kafka. Let's take a brief look at the architecture of our sample system. We have four microservices: order-service, trip-service, driver-service, and passenger-service. The implementation of these applications is very simple. All of them have in-memory storage and connect to the same Kafka instance.

The primary goal of our system is to arrange trips for customers. The order-service application also acts as a gateway: it receives requests from customers, saves the order history, and sends events to the orders topic. All the other microservices listen on this topic and process orders sent by order-service. Each microservice also has its own dedicated topic, where it sends events with information about changes. Such events are received by some of the other microservices. The architecture is presented in the picture below.

(Figure: micronaut-kafka-1.png – architecture of the sample system)

Before reading this article it is worth familiarizing yourself with Micronaut Framework. You may read one of my previous articles describing a process of building microservices communicating via REST API: Quick Guide to Microservices with Micronaut Framework

1. Running Kafka

To run Apache Kafka on the local machine we may use its Docker image. The most up-to-date images seem to be those shared by https://hub.docker.com/u/wurstmeister. Before starting the Kafka container we have to start the ZooKeeper server, which is used by Kafka. If you run Docker on Windows, the default address of its virtual machine is 192.168.99.100, and it has to be passed to the Kafka container as an environment variable.
Both the Zookeeper and Kafka containers are started in the same network, kafka. Zookeeper is available under the name zookeeper and is exposed on port 2181. The Kafka container requires that address in the KAFKA_ZOOKEEPER_CONNECT environment variable.

$ docker network create kafka
$ docker run -d --name zookeeper --network kafka -p 2181:2181 wurstmeister/zookeeper
$ docker run -d --name kafka -p 9092:9092 --network kafka --env KAFKA_ADVERTISED_HOST_NAME=192.168.99.100 --env KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 wurstmeister/kafka

2. Including Micronaut Kafka

Micronaut applications built with Kafka can be started with or without an HTTP server. To enable Micronaut Kafka you need to include the micronaut-kafka library in your dependencies. In case you would like to expose an HTTP API, you should also include micronaut-http-server-netty:

<dependency>
   <groupId>io.micronaut.configuration</groupId>
   <artifactId>micronaut-kafka</artifactId>
</dependency>
<dependency>
   <groupId>io.micronaut</groupId>
   <artifactId>micronaut-http-server-netty</artifactId>
</dependency>
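Besides the dependency, each application has to know the broker address. With the Docker setup shown earlier, that would typically be configured in application.yml via the kafka.bootstrap.servers property (the address below assumes the Windows Docker VM from the previous section; adjust it to your environment):

```yaml
kafka:
  bootstrap:
    servers: 192.168.99.100:9092
```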

3. Building microservice order-service

The order-service application is the only one that starts an embedded HTTP server and exposes a REST API. That's why we may enable the built-in Micronaut health checks for Kafka there. To do that we should first include the micronaut-management dependency:

<dependency>
   <groupId>io.micronaut</groupId>
   <artifactId>micronaut-management</artifactId>
</dependency>

For convenience, we will enable all management endpoints and disable HTTP authentication for them by defining the following configuration inside application.yml:

endpoints:
  all:
    enabled: true
    sensitive: false

Now, a health check is available at http://localhost:8080/health. Our sample application will also expose a simple REST API for adding new orders and listing all previously created orders. Here's the Micronaut controller implementation responsible for exposing those endpoints:

@Controller("orders")
public class OrderController {

    @Inject
    OrderInMemoryRepository repository;
    @Inject
    OrderClient client;

    @Post
    public Order add(@Body Order order) {
        order = repository.add(order);
        client.send(order);
        return order;
    }

    @Get
    public Set<Order> findAll() {
        return repository.findAll();
    }

}

Each microservice uses an in-memory repository implementation. Here's the repository implementation inside order-service:

@Singleton
public class OrderInMemoryRepository {

    private Set<Order> orders = new HashSet<>();

    public Order add(Order order) {
        order.setId((long) (orders.size() + 1));
        orders.add(order);
        return order;
    }

    public void update(Order order) {
        orders.remove(order);
        orders.add(order);
    }

    public Optional<Order> findByTripIdAndType(Long tripId, OrderType type) {
        return orders.stream().filter(order -> order.getTripId().equals(tripId) && order.getType() == type).findAny();
    }

    public Optional<Order> findNewestByUserIdAndType(Long userId, OrderType type) {
        return orders.stream().filter(order -> order.getUserId().equals(userId) && order.getType() == type)
                .max(Comparator.comparing(Order::getId));
    }

    public Set<Order> findAll() {
        return orders;
    }

}
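Since add assigns sequentially increasing ids, findNewestByUserIdAndType can simply pick the matching order with the highest id. That selection logic can be tried in isolation; the trimmed-down Order record below is illustrative only, not the entity from the sample project:

```java
import java.util.Collection;
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

public class RepoDemo {

    enum OrderType { NEW_TRIP, CANCEL_TRIP }

    // Trimmed-down stand-in for the Order entity, with only the fields the query needs.
    record Order(Long id, OrderType type, Long userId) {}

    // In-memory equivalent of findNewestByUserIdAndType: the highest id wins.
    static Optional<Order> findNewestByUserIdAndType(Collection<Order> orders, Long userId, OrderType type) {
        return orders.stream()
                .filter(o -> o.userId().equals(userId) && o.type() == type)
                .max(Comparator.comparing(Order::id));
    }

    public static void main(String[] args) {
        List<Order> orders = List.of(
                new Order(1L, OrderType.NEW_TRIP, 1L),
                new Order(2L, OrderType.NEW_TRIP, 2L),
                new Order(3L, OrderType.NEW_TRIP, 1L));
        // User 1 has orders 1 and 3; the newest one (highest id) is 3.
        System.out.println(findNewestByUserIdAndType(orders, 1L, OrderType.NEW_TRIP)
                .map(Order::id).orElse(null));
    }
}
```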

The in-memory repository stores Order instances. The Order object is also sent to the Kafka topic named orders. Here's the implementation of the Order class:

public class Order {

    private Long id;
    private LocalDateTime createdAt;
    private OrderType type;
    private Long userId;
    private Long tripId;
    private float currentLocationX;
    private float currentLocationY;
    private OrderStatus status;
   
    // ... GETTERS AND SETTERS
}

4. Example of asynchronous communication with Kafka and Micronaut

Now, let’s consider one of the use cases possible to realize by our sample system – adding a new trip. In the first step (1) we are adding a new order of type OrderType.NEW_TRIP. After that order-service creates an order and send it to the orders topic. The order is received by three microservices: driver-service, passenger-service and order-service (2). A new order is processed by all these applications. The passenger-service application checks if there are sufficient funds on the passenger account. If not it cancels the trip, otherwise it does not do anything. The driver-service is looking for the nearest available driver, while trip-service creates and stores new trips. Both driver-service and trip-service sends events to their topics (drivers, trips) with information about changes (3) Every event can be accessed by other microservices, for example trip-service listen for event from driver-service in order to assign a new driver to the trip (4). The following picture illustrates the communication between our microservices when adding a new trip.

(Figure: micronaut-kafka-3.png – communication flow when adding a new trip)

Now, let’s proceed to the implementation details.

Step 1: Sending order

First we need to create a Kafka client responsible for sending messages to a topic. To achieve that we create an interface annotated with @KafkaClient and declare one or more methods for sending messages. Every method should have a target topic name set through the @Topic annotation. For method parameters we may use three annotations: @KafkaKey, @Body, or @Header. @KafkaKey is used for partitioning, which is not required by our sample applications, so in the client implementation visible below we just use the @Body annotation.

@KafkaClient
public interface OrderClient {

    @Topic("orders")
    void send(@Body Order order);

}

Step 2: Receiving order

Once an order has been sent by the client, it is received by all the other microservices listening on the orders topic. Here's a listener implementation in driver-service. A listener class should be annotated with @KafkaListener. We may declare a groupId as an annotation field to prevent the same message from being received by more than one instance of a single application. Then we declare a method for processing incoming messages. Just like the client method, it should be annotated with @Topic to set the name of the target topic. Because we are listening for Order objects, the parameter should be annotated with @Body – the same as in the corresponding client method.

@KafkaListener(groupId = "driver")
public class OrderListener {

    private static final Logger LOGGER = LoggerFactory.getLogger(OrderListener.class);

    private DriverService service;

    public OrderListener(DriverService service) {
        this.service = service;
    }

    @Topic("orders")
    public void receive(@Body Order order) {
        LOGGER.info("Received: {}", order);
        switch (order.getType()) {
            case NEW_TRIP -> service.processNewTripOrder(order);
        }
    }

}

Step 3: Sending to other Kafka topic

Now, let’s take a look on the processNewTripOrder method inside driver-service. DriverService injects two different Kafka client beans: OrderClient and DriverClient. When processing a new order it tries to find the available driver, which is the closest to the customer who sent the order. After finding him it changes the status to UNAVAILABLE and sends the message with Driver object to the drivers topic.

@Singleton
public class DriverService {

    private static final Logger LOGGER = LoggerFactory.getLogger(DriverService.class);

    private DriverClient client;
    private OrderClient orderClient;
    private DriverInMemoryRepository repository;

    public DriverService(DriverClient client, OrderClient orderClient, DriverInMemoryRepository repository) {
        this.client = client;
        this.orderClient = orderClient;
        this.repository = repository;
    }

    public void processNewTripOrder(Order order) {
        LOGGER.info("Processing: {}", order);
        Optional<Driver> driver = repository.findNearestDriver(order.getCurrentLocationX(), order.getCurrentLocationY());
        driver.ifPresent(driverLocal -> {
            driverLocal.setStatus(DriverStatus.UNAVAILABLE);
            repository.updateDriver(driverLocal);
            client.send(driverLocal, String.valueOf(order.getId()));
            LOGGER.info("Message sent: {}", driverLocal);
        });
    }
   
    // ...
}
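The findNearestDriver method of DriverInMemoryRepository is not shown above. A plausible sketch (illustrative only – the record and field names below are assumptions, not the project's actual code) picks the AVAILABLE driver with the smallest Euclidean distance to the order's location:

```java
import java.util.Collection;
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

public class NearestDriver {

    enum DriverStatus { AVAILABLE, UNAVAILABLE }

    // Stand-in for the Driver entity with just the fields needed for the search.
    record Driver(Long id, float locationX, float locationY, DriverStatus status) {}

    // Returns the available driver closest to the given point, if any.
    static Optional<Driver> findNearestDriver(Collection<Driver> drivers, float x, float y) {
        return drivers.stream()
                .filter(d -> d.status() == DriverStatus.AVAILABLE)
                .min(Comparator.comparingDouble(
                        (Driver d) -> Math.hypot(d.locationX() - x, d.locationY() - y)));
    }

    public static void main(String[] args) {
        List<Driver> drivers = List.of(
                new Driver(1L, 10, 10, DriverStatus.AVAILABLE),
                new Driver(2L, 45, 25, DriverStatus.AVAILABLE),
                new Driver(3L, 50, 30, DriverStatus.UNAVAILABLE));
        // The order is placed at (50, 30); driver 3 is closest but unavailable,
        // so driver 2 should be selected.
        System.out.println(findNearestDriver(drivers, 50, 30).map(Driver::id).orElse(null));
    }
}
```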

Here’s an implementation of Kafka client inside driver-service used for sending messages to the drivers topic. Because we need to link the instance of Driver with order we annotate orderId parameter with @Header. There is no sense to include it to Driver class just to assign it to the right trip on the listener side.

@KafkaClient
public interface DriverClient {

    @Topic("drivers")
    void send(@Body Driver driver, @Header("Order-Id") String orderId);

}

Step 4: Inter-service communication example with Micronaut Kafka

The message sent by DriverClient is received by a listener declared inside trip-service. It listens for messages incoming to the drivers topic. The signature of the receiving method is pretty similar to that of the client's sending method, as shown below:

@KafkaListener(groupId = "trip")
public class DriverListener {

    private static final Logger LOGGER = LoggerFactory.getLogger(DriverListener.class);

    private TripService service;

    public DriverListener(TripService service) {
        this.service = service;
    }

    @Topic("drivers")
    public void receive(@Body Driver driver, @Header("Order-Id") String orderId) {
        LOGGER.info("Received: driver->{}, header->{}", driver, orderId);
        service.processNewDriver(driver, orderId);
    }

}

A new driver with the given id is assigned to the trip found by orderId. That's the final step of our communication process when adding a new trip.

@Singleton
public class TripService {

    private static final Logger LOGGER = LoggerFactory.getLogger(TripService.class);

    private TripInMemoryRepository repository;
    private TripClient client;

    public TripService(TripInMemoryRepository repository, TripClient client) {
        this.repository = repository;
        this.client = client;
    }


    public void processNewDriver(Driver driver, String orderId) {
        LOGGER.info("Processing: {}", driver);
        Optional<Trip> trip = repository.findByOrderId(Long.valueOf(orderId));
        trip.ifPresent(tripLocal -> {
            tripLocal.setDriverId(driver.getId());
            repository.update(tripLocal);
        });
    }
   
   // ... OTHER METHODS

}

5. Tracing

We may easily enable distributed tracing with Micronaut Kafka. To do that, we need to enable and configure Micronaut Tracing by adding some dependencies:

<dependency>
    <groupId>io.micronaut</groupId>
    <artifactId>micronaut-tracing</artifactId>
</dependency>
<dependency>
    <groupId>io.zipkin.brave</groupId>
    <artifactId>brave-instrumentation-http</artifactId>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>io.zipkin.reporter2</groupId>
    <artifactId>zipkin-reporter</artifactId>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>io.opentracing.brave</groupId>
    <artifactId>brave-opentracing</artifactId>
</dependency>
<dependency>
    <groupId>io.opentracing.contrib</groupId>
    <artifactId>opentracing-kafka-client</artifactId>
    <version>0.0.16</version>
    <scope>runtime</scope>
</dependency>

We also need to configure some application settings inside application.yml, including the address of our tracing tool – in this case, Zipkin.

tracing:
  zipkin:
    enabled: true
    http:
      url: http://192.168.99.100:9411
    sampler:
      probability: 1

Before starting our application we have to run Zipkin container:

$ docker run -d --name zipkin -p 9411:9411 openzipkin/zipkin

Conclusion

In this article you were guided through the process of building a microservice architecture using asynchronous communication via Apache Kafka. I have shown you an example covering the most important features of the Micronaut Kafka library, which allows you to easily declare producers and consumers of Kafka topics, and to enable health checks and distributed tracing for your microservices. I have described the implementation of a single scenario for our system, which covers adding a new trip at the customer's request. In order to see the full implementation of the sample system described in this article, check out the source code available on GitHub: https://github.com/piomin/sample-kafka-micronaut-microservices.git.

JPA Data Access with Micronaut Data https://piotrminkowski.com/2019/07/25/jpa-data-access-with-micronaut-predator/ Thu, 25 Jul 2019 11:58:39 +0000

When I was writing some articles comparing the Spring and Micronaut frameworks recently, I took note of many comments about the lack of a built-in ORM and data repositories in Micronaut. Spring has provided this feature for a long time through the Spring Data project. The good news is that the Micronaut team is close to completing work on the first version of their project with ORM support. The project, called Micronaut Data (formerly Micronaut Predator, short for Precomputed Data Repositories), is still under active development, and currently only a snapshot version is available. However, the authors introduce it as more efficient, with reduced memory consumption, than competitive solutions like Spring Data or Grails GORM. In short, this is achieved thanks to Ahead-of-Time (AoT) compilation, which pre-computes queries for repository interfaces that are then executed by a thin, lightweight runtime layer, avoiding the use of reflection and runtime proxies.

Currently, Micronaut Predator provides runtime support for JPA (Hibernate) and SQL (JDBC). Other implementations are planned for the future. In this article I'm going to show you how to include Micronaut Data in your application and use its main features for JPA data access.

1. Dependencies

The snapshot dependency of Micronaut Predator is available at https://oss.sonatype.org/content/repositories/snapshots/, so first we need to include it in the repository list in our pom.xml, together with jcenter:

<repositories>
   <repository>
      <id>jcenter.bintray.com</id>
      <url>https://jcenter.bintray.com</url>
   </repository>
   <repository>
      <id>sonatype-snapshots</id>
      <url>https://oss.sonatype.org/content/repositories/snapshots/</url>
   </repository>
</repositories>

In addition to the standard libraries included for building a web application with Micronaut, we have to add the following dependencies: a database driver (we will use PostgreSQL for our sample application), a JDBC connection pool (micronaut-jdbc-tomcat), and micronaut-predator-hibernate-jpa.

<dependency>
   <groupId>io.micronaut.data</groupId>
   <artifactId>micronaut-predator-hibernate-jpa</artifactId>
   <version>${predator.version}</version>
   <scope>compile</scope>
</dependency>
<dependency>
   <groupId>io.micronaut.configuration</groupId>
   <artifactId>micronaut-jdbc-tomcat</artifactId>
   <scope>runtime</scope>
</dependency>    
<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <version>42.2.6</version>
</dependency> 

Some Micronaut libraries, including micronaut-predator-processor, have to be added to the annotation processor path. Such a configuration should be provided inside the Maven Compiler Plugin configuration:

<plugin>
   <groupId>org.apache.maven.plugins</groupId>
   <artifactId>maven-compiler-plugin</artifactId>
   <version>3.7.0</version>
   <configuration>
      <source>${jdk.version}</source>
      <target>${jdk.version}</target>
      <encoding>UTF-8</encoding>
      <compilerArgs>
         <arg>-parameters</arg>
      </compilerArgs>
      <annotationProcessorPaths>
         <path>
            <groupId>io.micronaut</groupId>
            <artifactId>micronaut-inject-java</artifactId>
            <version>${micronaut.version}</version>
         </path>
         <path>
            <groupId>io.micronaut.data</groupId>
            <artifactId>micronaut-predator-processor</artifactId>
            <version>${predator.version}</version>
         </path>
         <path>
            <groupId>io.micronaut</groupId>
            <artifactId>micronaut-validation</artifactId>
            <version>${micronaut.version}</version>
         </path>
      </annotationProcessorPaths>
   </configuration>
</plugin>

The current newest RC version of Micronaut is 1.2.0.RC2:

<dependencyManagement>
   <dependencies>
      <dependency>
         <groupId>io.micronaut</groupId>
         <artifactId>micronaut-bom</artifactId>
         <version>1.2.0.RC2</version>
         <type>pom</type>
         <scope>import</scope>
      </dependency>
   </dependencies>
</dependencyManagement>

2. Domain Model

Our database model consists of four tables, as shown below. The same database model has been used in some of my previous examples, including those on Spring Data usage. We have the employee table. Each employee is assigned to exactly one department and one organization, and each department is assigned to exactly one organization. There is also the employment table, which provides the history of employment for every single employee.

(Figure: micronaut-data-jpa-1 – database model)

Here is the implementation of the entity classes corresponding to the database model. Let's start with the Employee class:

@Entity
public class Employee {

   @Id
   @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "employee_id_seq")
   @SequenceGenerator(name = "employee_id_seq", sequenceName = "employee_id_seq", allocationSize = 1)
   private Long id;
   private String name;
   private int age;
   private String position;
   private int salary;
   @ManyToOne
   private Organization organization;
   @ManyToOne
   private Department department;
   @OneToMany
   private Set<Employment> employments;
   
   // ... GETTERS AND SETTERS
}

Here’s the implementation of Department class:


@Entity
public class Department {

   @Id
   @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "department_id_seq")
   @SequenceGenerator(name = "department_id_seq", sequenceName = "department_id_seq", allocationSize = 1)
   private Long id;
   private String name;
   @OneToMany
   private Set<Employee> employees;
   @ManyToOne
   private Organization organization;
   
   // ... GETTERS AND SETTERS
}

And here’s Organization entity:


@Entity
public class Organization {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "organization_id_seq")
    @SequenceGenerator(name = "organization_id_seq", sequenceName = "organization_id_seq", allocationSize = 1)
    private Long id;
    private String name;
    private String address;
    @OneToMany
    private Set<Department> departments;
    @OneToMany
    private Set<Employee> employees;
   
   // ... GETTERS AND SETTERS
}

And the last entity, Employment:

@Entity
public class Employment {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "employment_id_seq")
    @SequenceGenerator(name = "employment_id_seq", sequenceName = "employment_id_seq", allocationSize = 1)
    private Long id;
    @ManyToOne
    private Employee employee;
    @ManyToOne
    private Organization organization;
    @Temporal(TemporalType.DATE)
    private Date start;
    @Temporal(TemporalType.DATE)
    private Date end;
   
   // ... GETTERS AND SETTERS
}

3. Creating JPA repositories with Micronaut Data

If you are familiar with the Spring Data repositories pattern, you won't have any problems using Micronaut repositories. The approach to declaring repositories and building queries is the same as in Spring Data. You need to declare an interface (or an abstract class) annotated with @Repository that extends the CrudRepository interface. CrudRepository is not the only interface that can be extended. You can also use GenericRepository, AsyncCrudRepository for asynchronous operations, ReactiveStreamsCrudRepository for reactive CRUD execution, or PageableRepository, which adds methods for pagination. A typical repository declaration looks like the one shown below.

@Repository
public interface EmployeeRepository extends CrudRepository<Employee, Long> {

    Set<EmployeeDTO> findBySalaryGreaterThan(int salary);

    Set<EmployeeDTO> findByOrganization(Organization organization);

    int findAvgSalaryByAge(int age);

    int findAvgSalaryByOrganization(Organization organization);

}

I have declared some additional find methods there. The most common query prefix is find, but you can also use search, query, get, read, or retrieve. The first two queries return all employees with a salary greater than a given value and all employees assigned to a given organization. The Employee entity is in a many-to-one relation with Organization, so we may also use relational fields as query parameters. It is noteworthy that both queries return DTO objects inside the result collection. That's possible because Micronaut Predator supports reflection-free Data Transfer Object (DTO) projections if the return type is annotated with @Introspected. Here's the declaration of EmployeeDTO.

@Introspected
public class EmployeeDTO {

    private String name;
    private int age;
    private String position;
    private int salary;
   
    // ... GETTERS AND SETTERS
}
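Conceptually, a derived query such as findBySalaryGreaterThan filters the entities and projects each matching row onto the DTO. The plain-Java sketch below illustrates that behavior in memory (it is not the code Micronaut generates; the records are trimmed-down stand-ins for the entity and DTO classes):

```java
import java.util.Collection;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class ProjectionDemo {

    // Trimmed-down stand-ins for the Employee entity and the EmployeeDTO projection.
    record Employee(String name, int age, String position, int salary) {}
    record EmployeeDTO(String name, int age, String position, int salary) {}

    // In-memory equivalent of findBySalaryGreaterThan(int salary):
    // filter entities, then project each match onto the DTO.
    static Set<EmployeeDTO> findBySalaryGreaterThan(Collection<Employee> employees, int salary) {
        return employees.stream()
                .filter(e -> e.salary() > salary)
                .map(e -> new EmployeeDTO(e.name(), e.age(), e.position(), e.salary()))
                .collect(Collectors.toSet());
    }

    public static void main(String[] args) {
        List<Employee> employees = List.of(
                new Employee("Test1", 20, "Developer", 5000),
                new Employee("Test2", 30, "Analyst", 15000),
                new Employee("Test3", 40, "Manager", 25000));
        // Only the two employees earning more than 10000 survive the filter.
        System.out.println(findBySalaryGreaterThan(employees, 10000).size());
    }
}
```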

The EmployeeRepository contains two other methods using aggregation expressions. The method findAvgSalaryByAge counts the average salary for a given age of employees, while findAvgSalaryByOrganization counts the average salary for a given organization.
For comparison, let's take a look at another repository implementation, EmploymentRepository. We need two additional find methods. The first, findByEmployeeOrderByStartDesc, searches the employment history for a given employee, ordered by start date. The second method finds the employment without an end date set, which in fact means the employment for the current job.

@Repository
public interface EmploymentRepository extends CrudRepository<Employment, Long> {

    Set<EmploymentDTO> findByEmployeeOrderByStartDesc(Employee employee);

    Employment findByEmployeeAndEndIsNull(Employee employee);

}

Micronaut Predator is able to manage transactions automatically. You just need to annotate your method with @Transactional. In the source code fragment visible below you may see the method used when an employee changes jobs. We perform a bunch of save operations inside that method. First, we change the target department and organization for a given employee, then we create a new employment history record for the new job, and finally we set the end date on the previous employment entity (found using the repository method findByEmployeeAndEndIsNull).

@Inject
DepartmentRepository departmentRepository;
@Inject
EmployeeRepository employeeRepository;
@Inject
EmploymentRepository employmentRepository;

@Transactional
public void changeJob(Long employeeId, Long targetDepartmentId) {
   Optional<Employee> employee = employeeRepository.findById(employeeId);
   employee.ifPresent(employee1 -> {
      Optional<Department> department = departmentRepository.findById(targetDepartmentId);
      department.ifPresent(department1 -> {
         employee1.setDepartment(department1);
         employee1.setOrganization(department1.getOrganization());
         Employment employment = new Employment(employee1, department1.getOrganization(), new Date());
         employmentRepository.save(employment);
         Employment previousEmployment = employmentRepository.findByEmployeeAndEndIsNull(employee1);
         previousEmployment.setEnd(new Date());
         employmentRepository.save(previousEmployment);
      });
   });
}

Ok, now let’s move on to the last repository implementation discussed in this section – OrganizationRepository. Since Organization entity is in lazy load one-to-many relation with Employee and Department, we need to fetch data to present dependencies in the output. To achieve that we can use @Join annotation on the repository interface with specifying JOIN FETCH. Since the @Join annotation is repeatable it can be specified multiple times for different associations as shown below.

@Repository
public interface OrganizationRepository extends CrudRepository<Organization, Long> {

    @Join(value = "departments", type = Join.Type.LEFT_FETCH)
    @Join(value = "employees", type = Join.Type.LEFT_FETCH)
    Optional<Organization> findByName(String name);

}

4. Batch operations

Micronaut Predator repositories support batch operations. This can sometimes be useful, for example in automated tests. Here's my simple JUnit test that adds multiple employees to a single department inside an organization:

@Test
public void addMultiple() {
   List<Employee> employees = Arrays.asList(
      new Employee("Test1", 20, "Developer", 5000),
      new Employee("Test2", 30, "Analyst", 15000),
      new Employee("Test3", 40, "Manager", 25000),
      new Employee("Test4", 25, "Developer", 9000),
      new Employee("Test5", 23, "Analyst", 8000),
      new Employee("Test6", 50, "Developer", 12000),
      new Employee("Test7", 55, "Architect", 25000),
      new Employee("Test8", 43, "Manager", 15000)
   );

   Organization organization = new Organization("TestWithEmployees", "TestAddress");
   Organization organizationSaved = organizationRepository.save(organization);
   Assertions.assertNotNull(organization.getId());
   Department department = new Department("TestWithEmployees");
   department.setOrganization(organization);
   Department departmentSaved = departmentRepository.save(department);
   Assertions.assertNotNull(department.getId());
   employeeRepository.saveAll(employees.stream().map(employee -> {
      employee.setOrganization(organizationSaved);
      employee.setDepartment(departmentSaved);
      return employee;
   }).collect(Collectors.toList()));
}

5. Controllers

Finally, the last implementation step – building REST controllers. OrganizationController is pretty simple. It injects OrganizationRepository and uses it for saving entities and searching for them by name. Here's the implementation:

@Controller("organizations")
public class OrganizationController {

    @Inject
    OrganizationRepository repository;

    @Post("/organization")
    public Long addOrganization(@Body Organization organization) {
        Organization organization1 = repository.save(organization);
        return organization1.getId();
    }

    @Get("/organization/name/{name}")
    public Optional<Organization> findOrganization(@NotNull String name) {
        return repository.findByName(name);
    }

}

EmployeeController is a little bit more complicated. It exposes the four additional find methods defined in EmployeeRepository. There are also methods for adding a new employee and assigning them to a department, and for changing jobs, both implemented inside the SampleService bean.

@Controller("employees")
public class EmployeeController {

    @Inject
    EmployeeRepository repository;
    @Inject
    OrganizationRepository organizationRepository;
    @Inject
    SampleService service;

    @Get("/salary/{salary}")
    public Set<EmployeeDTO> findEmployeesBySalary(int salary) {
        return repository.findBySalaryGreaterThan(salary);
    }

    @Get("/organization/{organizationId}")
    public Set<EmployeeDTO> findEmployeesByOrganization(Long organizationId) {
        Optional<Organization> organization = organizationRepository.findById(organizationId);
        return repository.findByOrganization(organization.get());
    }

    @Get("/salary-avg/age/{age}")
    public int findAvgSalaryByAge(int age) {
        return repository.findAvgSalaryByAge(age);
    }

    @Get("/salary-avg/organization/{organizationId}")
    public int findAvgSalaryByOrganization(Long organizationId) {
        Optional<Organization> organization = organizationRepository.findById(organizationId);
        return repository.findAvgSalaryByOrganization(organization.get());
    }

    @Post("/{departmentId}")
    public void addNewEmployee(@Body Employee employee, Long departmentId) {
        service.hireEmployee(employee, departmentId);
    }

    @Put("/change-job")
    public void changeJob(@Body ChangeJobRequest request) {
        service.changeJob(request.getEmployeeId(), request.getTargetOrganizationId());
    }

}

6. Configuring database connection

As usual, we use a Docker image for running the database instance locally. Here's the command that runs a container with Postgres and exposes it on port 5432:

$ docker run -d --name postgres -p 5432:5432 -e POSTGRES_USER=predator -e POSTGRES_PASSWORD=predator123 -e POSTGRES_DB=predator postgres

After startup, my Postgres instance is available on the virtual address 192.168.99.100, so I have to set it in the Micronaut application.yml. Besides the database connection settings, we will also set some JPA properties that enable SQL logging and automatically apply model changes to the database schema. Here's the full configuration of our sample application inside application.yml:

micronaut:
  application:
    name: sample-micronaut-jpa

jackson:
  bean-introspection-module: true

datasources:
  default:
    url: jdbc:postgresql://192.168.99.100:5432/predator?ssl=false
    driverClassName: org.postgresql.Driver
    username: predator
    password: predator123

jpa:
  default:
    properties:
      hibernate:
        hbm2ddl:
          auto: update
        show_sql: true

Conclusion

Support for ORM was one of the most anticipated features of the Micronaut Framework. Not only will it be available in the release version soon, but it is almost 1.5x faster than Spring Data JPA, according to the article https://objectcomputing.com/news/2019/07/18/unleashing-predator-precomputed-data-repositories written by Graeme Rocher, the lead of the Micronaut project. In my opinion, the support for ORM via project Predator may be a reason for developers to choose Micronaut over Spring Boot.
In this article, I have demonstrated the most interesting features of Micronaut Data JPA. I think it will be continuously improved, and we will see some new useful features soon. The sample application source code is, as usual, available on GitHub: https://github.com/piomin/sample-micronaut-jpa.git. Before starting with Micronaut Data, it is worth reading about the basics: Micronaut Tutorial: Beans and Scopes.

The post JPA Data Access with Micronaut Data appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2019/07/25/jpa-data-access-with-micronaut-predator/feed/ 0 7196
Micronaut Tutorial: Security https://piotrminkowski.com/2019/04/25/micronaut-tutorial-security/ https://piotrminkowski.com/2019/04/25/micronaut-tutorial-security/#respond Thu, 25 Apr 2019 14:34:16 +0000 https://piotrminkowski.wordpress.com/?p=7122 This is the third part of my tutorial to Micronaut Framework. This time we will discuss the most interesting Micronaut security features. I have already described core mechanisms for IoC and dependency injection in the first part of my tutorial, and I have also created a guide to building a simple REST server-side application in […]

The post Micronaut Tutorial: Security appeared first on Piotr's TechBlog.

]]>
This is the third part of my tutorial on the Micronaut Framework. This time we will discuss the most interesting Micronaut security features. I have already described the core mechanisms of IoC and dependency injection in the first part of this tutorial, and I have also created a guide to building a simple REST server-side application in the second part.

For more details you may refer to:

Security is an essential part of every web application. Easily configurable, built-in web security mechanisms are something that every modern micro-framework must have. It is no different with Micronaut. In this part of my tutorial you will learn how to:

  • Build custom authentication provider
  • Configure and test basic authentication for your HTTP API
  • Secure your HTTP API using JSON Web Tokens
  • Enable communication over HTTPS

Enabling security

To enable security for a Micronaut application, you should first include the following dependency in your pom.xml:

<dependency>
   <groupId>io.micronaut</groupId>
   <artifactId>micronaut-security</artifactId>
</dependency>

The next step is to enable security feature through application properties:

micronaut:
  security:
    enabled: true

Setting the property micronaut.security.enabled to true enables security for all existing controllers. Because we already have the controller that was used as an example in the previous part of the tutorial, we should disable security for it. To do that, I have annotated it with @Secured(SecurityRule.IS_ANONYMOUS), which allows anonymous access to all endpoints implemented inside the controller.

@Controller("/persons")
@Secured(SecurityRule.IS_ANONYMOUS)
@Validated
public class PersonController { ... }

Basic Authentication Provider

Once you enable Micronaut security, Basic Auth is enabled by default. All you need to do is implement your custom authentication provider, which has to implement the AuthenticationProvider interface. In fact, you just need to verify the username and password, which are both passed inside the HTTP Authorization header. Our sample authentication provider uses configuration properties as a user repository. Here's the fragment of the application.yml file that contains the list of user passwords and assigned roles:

credentials:
  users:
    smith: smith123
    scott: scott123
    piomin: piomin123
    test: test123
  roles:
    smith: ADMIN
    scott: VIEW
    piomin: VIEW
    test: ADMIN

The configuration properties are injected into the UsersStore configuration bean, which is annotated with @ConfigurationProperties. User passwords are stored inside the users map, and roles inside the roles map. Both maps are annotated with @MapFormat and use the username as the key.

@ConfigurationProperties("credentials")
public class UsersStore {

   @MapFormat
   Map<String, String> users;
   @MapFormat
   Map<String, String> roles;

   public String getUserPassword(String username) {
      return users.get(username);
   }

   public String getUserRole(String username) {
      return roles.get(username);
   }
}

Finally, we may proceed to the authentication provider implementation. It injects the UsersStore bean that contains the list of users with passwords and roles. The overridden method should return a UserDetails object. The username and password are automatically decoded from the base64-encoded Authorization header and bound to the identity and secret fields of the AuthenticationRequest method parameter. If the input password matches the stored password, the provider returns a UserDetails object with the user's roles; otherwise it returns an AuthenticationFailed response.
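
What happens under the hood with the Authorization header can be illustrated in plain Java: the client sends `Basic ` followed by the base64-encoded `username:password` pair, and the server decodes it back. This is a standalone sketch of that mechanism, not Micronaut's actual code:

```java
import java.util.Base64;

public class BasicAuthDemo {

    // Decodes the value of a "Basic" Authorization header back into "username:password"
    static String decode(String authorizationHeader) {
        String encoded = authorizationHeader.substring("Basic ".length());
        return new String(Base64.getDecoder().decode(encoded));
    }

    public static void main(String[] args) {
        // The client encodes the credentials...
        String header = "Basic " + Base64.getEncoder()
                .encodeToString("smith:smith123".getBytes());
        // ...and the server decodes them into identity and secret
        String[] parts = decode(header).split(":", 2);
        System.out.println("identity=" + parts[0] + ", secret=" + parts[1]);
    }
}
```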

@Singleton
public class UserPasswordAuthProvider implements AuthenticationProvider {

    @Inject
    UsersStore store;

    @Override
    public Publisher<AuthenticationResponse> authenticate(AuthenticationRequest req) {
        String username = req.getIdentity().toString();
        String password = req.getSecret().toString();
        if (password.equals(store.getUserPassword(username))) {
            UserDetails details = new UserDetails(username, Collections.singletonList(store.getUserRole(username)));
            return Flowable.just(details);
        } else {
            return Flowable.just(new AuthenticationFailed());
        }
    }
}
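
One caveat worth noting: the provider above compares passwords with plain equals(), which is fine for a demo, but production code typically prefers a constant-time comparison to avoid leaking information through timing. A minimal sketch (my addition, not part of the sample application) using the JDK's MessageDigest.isEqual:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class ConstantTimeCompare {

    // Compares two secrets in time independent of where they first differ
    static boolean secretsMatch(String provided, String stored) {
        return MessageDigest.isEqual(
                provided.getBytes(StandardCharsets.UTF_8),
                stored.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        System.out.println(secretsMatch("smith123", "smith123")); // true
        System.out.println(secretsMatch("guess", "smith123"));    // false
    }
}
```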

Secured Controller

Now, we may create our sample secured REST controller. The following controller is just a copy of the previously described PersonController, but it also contains some Micronaut Security annotations. Thanks to @Secured(SecurityRule.IS_AUTHENTICATED) placed on the whole controller, it is available only to successfully authenticated users. This annotation may be overridden at the method level: the method for adding a new person is available only to users having the ADMIN role.

@Controller("/secure/persons")
@Secured(SecurityRule.IS_AUTHENTICATED)
public class SecurePersonController {

   List<Person> persons = new ArrayList<>();

   @Post
   @Secured("ADMIN")
   public Person add(@Body @Valid Person person) {
      person.setId(persons.size() + 1);
      persons.add(person);
      return person;
   }

   @Get("/{id:4}")
   public Optional<Person> findById(@NotNull Integer id) {
      return persons.stream()
            .filter(it -> it.getId().equals(id))
            .findFirst();
   }

   @Version("1")
   @Get("{?max,offset}")
   public List<Person> findAll(@Nullable Integer max, @Nullable Integer offset) {
      return persons.stream()
            .skip(offset == null ? 0 : offset)
            .limit(max == null ? 10000 : max)
            .collect(Collectors.toList());
   }

   @Version("2")
   @Get("{?max,offset}")
   public List<Person> findAllV2(@NotNull Integer max, @NotNull Integer offset) {
      return persons.stream()
            .skip(offset == null ? 0 : offset)
            .limit(max == null ? 10000 : max)
            .collect(Collectors.toList());
   }

}

To test the Micronaut security features used in our controller, we will create a JUnit test class containing three methods. All of them use the Micronaut HTTP client for calling the target endpoints. It provides the basicAuth method, which allows you to easily pass user credentials. The first test method, testAdd, verifies the positive scenario of adding a new person. The test user smith has the ADMIN role, which is required for calling this HTTP endpoint. In contrast, the method testAddFailed calls the same HTTP endpoint with a different user, scott, who has only the VIEW role. We expect that HTTP 403 (Forbidden) is returned by the endpoint. The same user scott has access to the GET endpoints, so we expect the test method testFindById to finish successfully.

@MicronautTest
public class SecurePersonControllerTests {

   @Inject
   EmbeddedServer server;

   @Test
   public void testAdd() throws MalformedURLException {
      HttpClient client = HttpClient.create(new URL("http://" + server.getHost() + ":" + server.getPort()));
      Person person = new Person();
      person.setFirstName("John");
      person.setLastName("Smith");
      person.setAge(33);
      person.setGender(Gender.MALE);
      person = client.toBlocking()
            .retrieve(HttpRequest.POST("/secure/persons", person).basicAuth("smith", "smith123"), Person.class);
      Assertions.assertNotNull(person);
      Assertions.assertEquals(Integer.valueOf(1), person.getId());
   }

   @Test
   public void testAddFailed() throws MalformedURLException {
      HttpClient client = HttpClient.create(new URL("http://" + server.getHost() + ":" + server.getPort()));
      Person person = new Person();
      person.setFirstName("John");
      person.setLastName("Smith");
      person.setAge(33);
      person.setGender(Gender.MALE);
      Assertions.assertThrows(HttpClientResponseException.class,
            () -> client.toBlocking().retrieve(HttpRequest.POST("/secure/persons", person).basicAuth("scott", "scott123"), Person.class),
            "Forbidden");
   }

   @Test
   public void testFindById() throws MalformedURLException {
      HttpClient client = HttpClient.create(new URL("http://" + server.getHost() + ":" + server.getPort()));
      Person person = client.toBlocking()
            .retrieve(HttpRequest.GET("/secure/persons/1").basicAuth("scott", "scott123"), Person.class);
      Assertions.assertNotNull(person);
   }
}

Enable HTTPS

Our controller is secured, but not the HTTP server. By default, Micronaut starts the server with SSL disabled. However, it supports HTTPS out of the box. To enable HTTPS support you should first set the property micronaut.ssl.enabled to true. By default, Micronaut with HTTPS enabled starts on port 8443, but you can override it using the property micronaut.ssl.port.
We will enable HTTPS only for a single JUnit test class. To do that, we first create the file src/test/resources/ssl.yml with the following configuration:

micronaut:
  ssl:
    enabled: true
    buildSelfSigned: true

Micronaut simplifies SSL configuration for test purposes. It turns out we don't have to generate any keystore or certificate if we use the property micronaut.ssl.buildSelfSigned. Otherwise, you would have to generate a keystore yourself. It is not difficult if you are creating a self-signed certificate; you may use openssl or keytool for that. Here's a keytool command for generating a keystore, although the Micronaut documentation recommends using openssl:

$ keytool -genkey -alias server -keystore server.jks

If you decide to generate a self-signed certificate yourself, you have to configure the keystore in the application properties:

micronaut:
  ssl:
    enabled: true
    keyStore:
      path: classpath:server.jks
      password: 123456
      type: JKS

The last step is to create a JUnit test that uses the configuration provided in the ssl.yml file.

@MicronautTest(propertySources = "classpath:ssl.yml")
public class SecureSSLPersonControllerTests {

   @Inject
   EmbeddedServer server;
   
   @Test
   public void testFindById() throws MalformedURLException {
      HttpClient client = HttpClient.create(new URL(server.getScheme() + "://" + server.getHost() + ":" + server.getPort()));
      Person person = client.toBlocking()
            .retrieve(HttpRequest.GET("/secure/persons/1").basicAuth("scott", "scott123"), Person.class);
      Assertions.assertNotNull(person);
   }
   
   // other tests ...

}

JWT Authentication

To enable JWT token based authentication we first need to include the following dependency into pom.xml:

<dependency>
   <groupId>io.micronaut</groupId>
   <artifactId>micronaut-security-jwt</artifactId>
</dependency>

Token authentication is enabled by default through TokenConfigurationProperties (micronaut.security.token.enabled). However, we should enable JWT-based authentication by setting the property micronaut.security.token.jwt.enabled to true. This change allows us to use JWT authentication in our sample application. We also need to be able to generate the authentication token used for authorization. To do that, we should enable the /login endpoint and set some configuration properties for the JWT token generator. In the following fragment of application.yml I set HMAC with SHA-256 as the signing algorithm for the JWT signature generator:

micronaut:
  security:
    enabled: true
    endpoints:
      login:
        enabled: true
    token:
      jwt:
        enabled: true
        signatures:
          secret:
            generator:
              secret: pleaseChangeThisSecretForANewOne
              jws-algorithm: HS256
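
The HS256 algorithm configured above means the token signature is an HMAC-SHA256 over the base64url-encoded header and payload, keyed with the configured secret. The following plain-Java sketch illustrates that computation (it reuses the sample secret from the configuration above; it is an illustration, not the framework's actual signing code):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class Hs256Demo {

    // Computes base64url(HMAC-SHA256(secret, signingInput)) without padding,
    // as used for the third segment of a JWS token
    static String sign(String signingInput, String secret) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
            byte[] digest = mac.doFinal(signingInput.getBytes(StandardCharsets.UTF_8));
            return Base64.getUrlEncoder().withoutPadding().encodeToString(digest);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        Base64.Encoder enc = Base64.getUrlEncoder().withoutPadding();
        String header = enc.encodeToString("{\"alg\":\"HS256\"}".getBytes(StandardCharsets.UTF_8));
        String payload = enc.encodeToString("{\"sub\":\"smith\"}".getBytes(StandardCharsets.UTF_8));
        String signature = sign(header + "." + payload, "pleaseChangeThisSecretForANewOne");
        System.out.println(header + "." + payload + "." + signature);
    }
}
```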

Now, we can call the POST /login endpoint with the username and password in the JSON body, as shown below:

$ curl -X "POST" "http://localhost:8100/login" -H 'Content-Type: application/json; charset=utf-8' -d '{"username":"smith","password":"smith123"}'
{
   "username": "smith",
   "roles": [
      "ADMIN"
   ],
   "access_token": "eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJzbWl0aCIsIm5iZiI6MTU1NjE5ODAyMCwicm9sZXMiOlsiQURNSU4iXSwiaXNzIjoic2FtcGxlLW1pY3JvbmF1dC1hcHBsaWNhdGlvbiIsImV4cCI6MTU1NjIwMTYyMCwiaWF0IjoxNTU2MTk4MDIwfQ.by0Dx73QIZeF4MDM4A5nHgw8xm4haPJjsu9z45psQrY",
   "refresh_token": "eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJzbWl0aCIsIm5iZiI6MTU1NjE5ODAyMCwicm9sZXMiOlsiQURNSU4iXSwiaXNzIjoic2FtcGxlLW1pY3JvbmF1dC1hcHBsaWNhdGlvbiIsImlhdCI6MTU1NjE5ODAyMH0.2BrdZzuvJNymZlOv56YpUPHYLDdnVAW5UXXNuz3a7xU",
   "token_type": "Bearer",
   "expires_in": 3600
}
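
A JWT like the one above is not opaque: it consists of three base64url-encoded segments (header, payload, signature) separated by dots, and the first two can be decoded without knowing the secret. A small plain-Java illustration (the token below is a shortened sample whose header segment matches the access_token above):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JwtInspector {

    // Decodes the header and payload segments of a JWT (the signature stays binary)
    static String[] decodeParts(String token) {
        String[] segments = token.split("\\.");
        String header = new String(Base64.getUrlDecoder().decode(segments[0]), StandardCharsets.UTF_8);
        String payload = new String(Base64.getUrlDecoder().decode(segments[1]), StandardCharsets.UTF_8);
        return new String[] { header, payload };
    }

    public static void main(String[] args) {
        String[] decoded = decodeParts(
                "eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJzbWl0aCJ9.c2lnbmF0dXJl");
        System.out.println(decoded[0]); // {"alg":"HS256"}
        System.out.println(decoded[1]); // {"sub":"smith"}
    }
}
```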

The value of the access_token field returned in the response should be passed as a bearer token in the Authorization header of requests sent to HTTP endpoints. We can call any endpoint, for example GET /persons:

$ curl -X "GET" "http://localhost:8100/persons" -H "Authorization: Bearer eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJzbWl0aCIsIm5iZiI6MTU1NjE5ODAyMCwicm9sZXMiOlsiQURNSU4iXSwiaXNzIjoic2FtcGxlLW1pY3JvbmF1dC1hcHBsaWNhdGlvbiIsImV4cCI6MTU1NjIwMTYyMCwiaWF0IjoxNTU2MTk4MDIwfQ.by0Dx73QIZeF4MDM4A5nHgw8xm4haPJjsu9z45psQrY"

We can easily automate testing of the scenario described above. I have created the UserCredentials and UserToken objects for serializing the request and deserializing the response from the /login endpoint. The token retrieved from the response is then passed as a bearer token by calling the bearerAuth method on the Micronaut HTTP client instance.

@MicronautTest
public class SecurePersonControllerTests {

   @Inject
   EmbeddedServer server;
   
   @Test
   public void testFindByIdUsingJWTToken() throws MalformedURLException {
      HttpClient client = HttpClient.create(new URL("http://" + server.getHost() + ":" + server.getPort()));
      UserToken token = client.toBlocking().retrieve(HttpRequest.POST("/login", new UserCredentials("scott", "scott123")), UserToken.class);
      Person person = client.toBlocking()
            .retrieve(HttpRequest.GET("/secure/persons/1").bearerAuth(token.getAccessToken()), Person.class);
      Assertions.assertNotNull(person);
   }
}

Source Code

We are using the same repository as in the two previous parts of my Micronaut tutorial: https://github.com/piomin/sample-micronaut-applications.git.

The post Micronaut Tutorial: Security appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2019/04/25/micronaut-tutorial-security/feed/ 0 7122
Micronaut Tutorial: Server Application https://piotrminkowski.com/2019/04/23/micronaut-tutorial-server-application/ https://piotrminkowski.com/2019/04/23/micronaut-tutorial-server-application/#comments Tue, 23 Apr 2019 07:30:34 +0000 https://piotrminkowski.wordpress.com/?p=7115 In this part of my tutorial to Micronaut framework we are going to create a simple HTTP server-side application running on Netty. We have already discussed the most interesting core features of Micronaut like beans, scopes or unit testing in the first part of that tutorial. For more details you may refer to my article […]

The post Micronaut Tutorial: Server Application appeared first on Piotr's TechBlog.

]]>
In this part of my tutorial on the Micronaut framework, we are going to create a simple HTTP server-side application running on Netty. We have already discussed the most interesting core features of Micronaut, such as beans, scopes, and unit testing, in the first part of this tutorial. For more details you may refer to my article Micronaut Tutorial: Beans and Scopes.

Assuming we have basic knowledge of the core mechanisms of Micronaut, we may proceed to the key part of the framework and discuss how to build a simple microservice application exposing a REST API over HTTP.

Embedded Micronaut HTTP Server

First, we need to include in our pom.xml the dependency responsible for running an embedded server during application startup. By default, Micronaut runs on a Netty server, so we only need to include the following dependency:

<dependency>
   <groupId>io.micronaut</groupId>
   <artifactId>micronaut-http-server-netty</artifactId>
</dependency>

Assuming we have the following main class defined, we only need to run it:

public class MainApp {

    public static void main(String[] args) {
        Micronaut.run(MainApp.class);
    }

}

By default, the Netty server runs on port 8080. You may force the server to run on a specific port by setting the following property in your application.yml or bootstrap.yml. You can also set the value of this property to -1 to run the server on a random port.

micronaut:
  server:
    port: 8100

Creating HTTP Web Application

If you are already familiar with Spring Boot, you should not have any problems with building a simple REST server-side application using Micronaut. The approach is almost identical. We just have to create a controller class and annotate it with @Controller. Micronaut supports all HTTP method types; you will most often use @Get, @Post, @Delete, @Put, or @Patch. Here's our sample controller class that implements methods for adding a new Person object and finding all persons or a single person by id:

@Controller("/persons")
public class PersonController {

    List<Person> persons = new ArrayList<>();

    @Post
    public Person add(Person person) {
        person.setId(persons.size() + 1);
        persons.add(person);
        return person;
    }

    @Get("/{id}")
    public Optional<Person> findById(Integer id) {
        return persons.stream()
                .filter(it -> it.getId().equals(id))
                .findFirst();
    }

    @Get
    public List<Person> findAll() {
        return persons;
    }

}

Request variables are resolved automatically and bound to the method argument with the same name. Micronaut populates method arguments from URI variables like /{variableName} and GET query parameters like ?paramName=paramValue. If the request contains a JSON body, you should annotate the corresponding argument with @Body. Our sample controller is very simple: it does not perform any input data validation. Let's change that.

Validation

To be able to validate HTTP requests, we should first include the following dependencies in our pom.xml:

<dependency>
   <groupId>io.micronaut</groupId>
   <artifactId>micronaut-validation</artifactId>
</dependency>
<dependency>
   <groupId>io.micronaut.configuration</groupId>
   <artifactId>micronaut-hibernate-validator</artifactId>
</dependency>

Validation in Micronaut is based on JSR-380, also known as Bean Validation 2.0. We can use javax.validation annotations such as @NotNull, @Min, or @Max. Micronaut uses the implementation provided by Hibernate Validator, so even if you don't use any JPA in your project, you have to include micronaut-hibernate-validator in your dependencies. After that, we may add validation to our model class using some javax.validation annotations. Here's the Person model class with validation. All the fields are required: firstName and lastName cannot be blank, id cannot be greater than 10000, and age cannot be lower than 0.

public class Person {

    @Max(10000)
    private Integer id;
    @NotBlank
    private String firstName;
    @NotBlank
    private String lastName;
    @PositiveOrZero
    private int age;
    @NotNull
    private Gender gender;

    public Integer getId() {
        return id;
    }

    public void setId(Integer id) {
        this.id = id;
    }

    public String getFirstName() {
        return firstName;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    public String getLastName() {
        return lastName;
    }

    public void setLastName(String lastName) {
        this.lastName = lastName;
    }

    public int getAge() {
        return age;
    }

    public void setAge(int age) {
        this.age = age;
    }

    public Gender getGender() {
        return gender;
    }

    public void setGender(Gender gender) {
        this.gender = gender;
    }
   
}

Now, we need to modify the code of our controller. First, it needs to be annotated with @Validated. Also, the @Body parameter of the POST method should be annotated with @Valid. REST method arguments may also be validated using JSR-380 annotations. Alternatively, we may configure validation using URI templates. The annotation @Get("/{id:4}") indicates that the variable can contain at most 4 characters (i.e. the id is lower than 10000), while @Get("{?max,offset}") declares max and offset as optional query parameters.
Here's the current implementation of our controller. Besides validation, I have also implemented pagination for findAll based on the optional max and offset parameters:

@Controller("/persons")
@Validated
public class PersonController {

    List<Person> persons = new ArrayList<>();

    @Post
    public Person add(@Body @Valid Person person) {
        person.setId(persons.size() + 1);
        persons.add(person);
        return person;
    }

    @Get("/{id:4}")
    public Optional<Person> findById(@NotNull Integer id) {
        return persons.stream()
                .filter(it -> it.getId().equals(id))
                .findFirst();
    }

    @Get("{?max,offset}")
    public List<Person> findAll(@Nullable Integer max, @Nullable Integer offset) {
        return persons.stream()
                .skip(offset == null ? 0 : offset)
                .limit(max == null ? 10000 : max)
                .collect(Collectors.toList());
    }

}
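
The skip/limit pagination inside findAll is plain java.util.stream logic and can be exercised on its own. Here's a standalone sketch mirroring the controller's max/offset handling:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class PaginationDemo {

    // Mirrors findAll: skip `offset` elements, then return at most `max` elements
    static List<Integer> page(List<Integer> all, Integer max, Integer offset) {
        return all.stream()
                .skip(offset == null ? 0 : offset)
                .limit(max == null ? 10000 : max)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Integer> ids = Stream.iterate(1, i -> i + 1).limit(10).collect(Collectors.toList());
        System.out.println(page(ids, 3, 2));       // [3, 4, 5]
        System.out.println(page(ids, null, null)); // all ten elements
    }
}
```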

Since we have finished the implementation of our controller, it is the right time to test it.

Testing Micronaut with embedded HTTP server

We have already discussed testing with Micronaut in the first part of my tutorial. The only difference compared to those tests is the need to run an embedded server and call the endpoints via HTTP. To do that, we have to include the dependency with the Micronaut HTTP client:

<dependency>
   <groupId>io.micronaut</groupId>
   <artifactId>micronaut-http-client</artifactId>
   <scope>test</scope>
</dependency>

We should also inject an instance of the embedded server in order to detect its address (for example, if the port number is generated automatically):

@MicronautTest
public class PersonControllerTests {

    @Inject
    EmbeddedServer server;
   
   // tests implementation ...
   
}

We build the Micronaut HTTP client programmatically by calling the static method create. It is also possible to obtain a reference to an HttpClient by injecting a field annotated with @Client.
The following test implementation is based on JUnit 5. I have provided a positive test for all the exposed endpoints and one negative scenario with invalid input data (the age field lower than zero). The Micronaut HTTP client can be used in both asynchronous non-blocking mode and synchronous blocking mode. In this case we force it to work in blocking mode.

@MicronautTest
public class PersonControllerTests {

    @Inject
    EmbeddedServer server;

    @Test
    public void testAdd() throws MalformedURLException {
        HttpClient client = HttpClient.create(new URL("http://" + server.getHost() + ":" + server.getPort()));
        Person person = new Person();
        person.setFirstName("John");
        person.setLastName("Smith");
        person.setAge(33);
        person.setGender(Gender.MALE);
        person = client.toBlocking().retrieve(HttpRequest.POST("/persons", person), Person.class);
        Assertions.assertNotNull(person);
        Assertions.assertEquals(Integer.valueOf(1), person.getId());
    }

    @Test
    public void testAddNotValid() throws MalformedURLException {
        HttpClient client = HttpClient.create(new URL("http://" + server.getHost() + ":" + server.getPort()));
        final Person person = new Person();
        person.setFirstName("John");
        person.setLastName("Smith");
        person.setAge(-1);
        person.setGender(Gender.MALE);

        Assertions.assertThrows(HttpClientResponseException.class,
                () -> client.toBlocking().retrieve(HttpRequest.POST("/persons", person), Person.class),
                "person.age: must be greater than or equal to 0");
    }

    @Test
    public void testFindById() throws MalformedURLException {
        HttpClient client = HttpClient.create(new URL("http://" + server.getHost() + ":" + server.getPort()));
        Person person = client.toBlocking().retrieve(HttpRequest.GET("/persons/1"), Person.class);
        Assertions.assertNotNull(person);
    }

    @Test
    public void testFindAll() throws MalformedURLException {
        HttpClient client = HttpClient.create(new URL("http://" + server.getHost() + ":" + server.getPort()));
        Person[] persons = client.toBlocking().retrieve(HttpRequest.GET("/persons"), Person[].class);
        Assertions.assertEquals(1, persons.length);
    }

}

We have already built a simple web application that exposes some methods over a REST API, validates input data, and includes JUnit API tests. Now, we may discuss some more advanced, interesting Micronaut features. The first of them is built-in support for API versioning.

API versioning

Since version 1.1, Micronaut has supported API versioning via the dedicated @Version annotation. To test this feature, we will add a new version of the findAll method to our controller class. The new version requires the max and offset input parameters to be set, whereas they were optional in the first version of the method.

@Version("1")
@Get("{?max,offset}")
public List<Person> findAll(@Nullable Integer max, @Nullable Integer offset) {
   return persons.stream()
         .skip(offset == null ? 0 : offset)
         .limit(max == null ? 10000 : max)
         .collect(Collectors.toList());
}

@Version("2")
@Get("{?max,offset}")
public List<Person> findAllV2(@NotNull Integer max, @NotNull Integer offset) {
   return persons.stream()
         .skip(offset == null ? 0 : offset)
         .limit(max == null ? 10000 : max)
         .collect(Collectors.toList());
}

The versioning feature is not enabled by default. To enable it, you need to set the property micronaut.router.versioning.enabled to true in application.yml. We will also set the default version to 1, which keeps compatibility with the tests created in the previous section that do not use the versioning feature:

micronaut:
  router:
    versioning:
      enabled: true
      default-version: 1

Micronaut's automatic versioning is also supported by the declarative HTTP client. To create such a client, we need to define an interface that contains the signature of the target server-side method and is annotated with @Client. Here's a declarative client interface responsible only for communicating with version 2 of the findAll method:

@Client("/persons")
public interface PersonClient {

    @Version("2")
    @Get("{?max,offset}")
    List<Person> findAllV2(Integer max, Integer offset);

}

The PersonClient declared above may be injected into the test and used for calling the API method exposed by the server-side application:

@Inject
PersonClient client;

@Test
public void testFindAllV2() {
   List<Person> persons = client.findAllV2(10, 0);
   Assertions.assertEquals(1, persons.size());
}

API Documentation with Swagger

Micronaut provides built-in support for generating OpenAPI/Swagger YAML documentation at compile time. We can customize the produced documentation using standard Swagger annotations. To enable this support in our application, we should add the following swagger-annotations dependency to pom.xml and enable annotation processing for the micronaut-openapi module inside the Maven compiler plugin configuration:

<dependency>
   <groupId>io.swagger.core.v3</groupId>
   <artifactId>swagger-annotations</artifactId>
</dependency>
...
<plugin>
   <groupId>org.apache.maven.plugins</groupId>
   <artifactId>maven-compiler-plugin</artifactId>
   <version>3.7.0</version>
   <configuration>
      <source>${jdk.version}</source>
      <target>${jdk.version}</target>
      <compilerArgs>
         <arg>-parameters</arg>
      </compilerArgs>
      <annotationProcessorPaths>
         <path>
            <groupId>io.micronaut</groupId>
            <artifactId>micronaut-inject-java</artifactId>
            <version>${micronaut.version}</version>
         </path>
         <path>
            <groupId>io.micronaut.configuration</groupId>
            <artifactId>micronaut-openapi</artifactId>
            <version>${micronaut.version}</version>
         </path>
      </annotationProcessorPaths>
   </configuration>
   ...
</plugin>

We have to include some basic information in the generated Swagger YAML, like the application name, description, version number, or author name, using the @OpenAPIDefinition annotation:

@OpenAPIDefinition(
   info = @Info(
      title = "Sample Application",
      version = "1.0",
      description = "Sample API",
      contact = @Contact(url = "https://piotrminkowski.wordpress.com", name = "Piotr Mińkowski", email = "piotr.minkowski@gmail.com")
   )
)
public class MainApp {

    public static void main(String[] args) {
        Micronaut.run(MainApp.class);
    }

}
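Besides the application-level metadata, we can also document single endpoints with standard Swagger annotations. The sketch below is illustrative only: the PersonController class, the findById method and the response descriptions are hypothetical names, not code taken from the sample repository.

```java
import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.responses.ApiResponse;

@Controller("/persons")
public class PersonController {

    // Swagger annotations are read at compile time by micronaut-openapi
    // and merged into the generated YAML definition
    @Operation(summary = "Find a person by id",
               description = "Returns a single person")
    @ApiResponse(responseCode = "200", description = "Person found")
    @ApiResponse(responseCode = "404", description = "Person not found")
    @Get("/{id}")
    public Person findById(Long id) {
        return repository.findById(id);
    }
}
```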

Micronaut generates the Swagger manifest based on the title and version fields inside the @Info annotation. In that case, our YAML definition file is available under the name sample-application-1.0.yml and is generated into the META-INF/swagger directory. We can expose it outside the application using an HTTP endpoint. Here's the appropriate configuration provided inside the application.yml file:

micronaut:
  static-resources:
    swagger:
      paths: classpath:META-INF/swagger
      mapping: /swagger/**

Assuming our application is running on port 8100, the Swagger definition is available under the path http://localhost:8100/swagger/sample-application-1.0.yml. You can call this endpoint and copy the response into any Swagger editor.
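The beginning of the generated file typically looks like the fragment below. This is a sketch of the expected output based on the @OpenAPIDefinition shown earlier; the exact structure depends on the micronaut-openapi version in use.

```yaml
openapi: 3.0.1
info:
  title: Sample Application
  description: Sample API
  contact:
    name: Piotr Mińkowski
    url: https://piotrminkowski.wordpress.com
    email: piotr.minkowski@gmail.com
  version: "1.0"
paths:
  # endpoint definitions generated from the controllers follow here
```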


Management and Monitoring Endpoints

Micronaut provides some built-in HTTP endpoints used for management and monitoring. To enable them for the application, we first need to include the following dependency:

<dependency>
   <groupId>io.micronaut</groupId>
   <artifactId>micronaut-management</artifactId>
</dependency>

No endpoints are exposed outside the application by default. If you would like to expose them all, you should set the property endpoints.all.enabled to true. Alternatively, you can enable or disable a single endpoint by using its id instead of all in the property name. Also, some of the built-in endpoints require authentication and some do not. You may enable or disable authentication for all endpoints using the property endpoints.all.sensitive. The following configuration inside application.yml enables all built-in endpoints except the stop endpoint used for graceful shutdown of the application, and disables authentication for all the enabled endpoints:

endpoints:
  all:
    enabled: true
    sensitive: false
  stop:
    enabled: false
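The same properties also work the other way around. If you prefer to keep the endpoints secured and expose only selected ones without authentication, you can override the sensitive flag per endpoint id. A hypothetical configuration that leaves authentication on for everything except the health endpoint:

```yaml
endpoints:
  all:
    enabled: true
    sensitive: true   # all endpoints require authentication...
  health:
    sensitive: false  # ...except /health, which stays publicly accessible
```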

You may use one of the following:

  • GET /beans – returns information about the loaded bean definitions
  • GET /info – returns static information about the state of the application
  • GET /health – exposes a health check
  • POST /refresh – refreshes the application state; all beans annotated with @Refreshable are reloaded
  • GET /routes – returns information about the URIs exposed by the application
  • GET /loggers – returns information about the available loggers
  • GET /caches – returns information about the caches
  • POST /stop – shuts down the application server
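For example, calling GET /health on a healthy, non-authenticated instance returns a simple status document similar to the one below; the exact content depends on the configured health indicators and the details-visibility settings.

```json
{
  "status": "UP"
}
```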

Summary

In this tutorial you have learned how to:

  • Build a simple application that exposes some HTTP endpoints
  • Validate input data inside controller
  • Test your controller with JUnit 5 on embedded Netty using Micronaut HTTP client
  • Use built-in API versioning
  • Generate Swagger API documentation automatically
  • Use built-in management and monitoring endpoints

The first part of my tutorial is available here: https://piotrminkowski.wordpress.com/2019/04/15/micronaut-tutorial-beans-and-scopes/. It uses the same repository as the current part: https://github.com/piomin/sample-micronaut-applications.git.

The post Micronaut Tutorial: Server Application appeared first on Piotr's TechBlog.
