Piotr's TechBlog: Java, Spring, Kotlin, microservices, Kubernetes, containers

Quarkus Microservices with Consul Discovery
https://piotrminkowski.com/2020/11/24/quarkus-microservices-with-consul-discovery/
Tue, 24 Nov 2020 07:59:51 +0000

In this article, I’ll show you how to run Quarkus microservices outside Kubernetes with Consul service discovery and a KV store. Firstly, we are going to create a custom integration with Consul discovery, since Quarkus does not offer it. However, we can take advantage of the built-in support for configuration properties from the Consul KV store. We will also learn how to customize the Quarkus REST client to integrate it with an external service discovery mechanism. The client will follow a load-balancing pattern based on a round-robin algorithm.

If you need to brush up on the Quarkus framework, visit the official guides site. For more advanced material, you may read the articles Guide to Quarkus with Kotlin and Guide to Quarkus on Kubernetes.

The Architecture

Before proceeding to the implementation, let’s take a look at the diagram of our system’s architecture. There are three microservices: employee-service, department-service, and organization-service. They communicate with each other through a REST API and use the Consul KV store as a distributed configuration backend. Every instance of a microservice registers itself in Consul. The load balancer is on the client side: it reads the list of registered instances of a target service from Consul and then chooses a single instance using a round-robin algorithm.

(image: quarkus-consul-arch)

Source code

If you would like to try it out yourself, you may always take a look at my source code. To do that, clone my repository sample-quarkus-microservices-consul and just follow my instructions. 🙂

Run the Consul instance

In order to run Consul on the local machine, we use its Docker image. By default, Consul exposes API and a web console on port 8500. We just need to expose that port outside the container.

$ docker run -d --name=consul \
   -e CONSUL_BIND_INTERFACE=eth0 \
   -p 8500:8500 \
   consul

Register Quarkus Microservice in Consul

Our application exposes a REST API over HTTP and connects to an in-memory H2 database. It also uses the Java Consul client to interact with the Consul API. Therefore, we need to include at least the following dependencies.

<dependencies>
   <dependency>
      <groupId>io.quarkus</groupId>
      <artifactId>quarkus-resteasy-jackson</artifactId>
   </dependency>
   <dependency>
      <groupId>io.quarkus</groupId>
      <artifactId>quarkus-hibernate-orm-panache</artifactId>
   </dependency>
   <dependency>
      <groupId>io.quarkus</groupId>
      <artifactId>quarkus-jdbc-h2</artifactId>
   </dependency>
   <dependency>
      <groupId>com.h2database</groupId>
      <artifactId>h2</artifactId>
      <scope>runtime</scope>
   </dependency>
   <dependency>
      <groupId>com.orbitz.consul</groupId>
      <artifactId>consul-client</artifactId>
      <version>${consul-client.version}</version>
   </dependency>
</dependencies>

Since we will run all our applications locally, it is worth enabling the random HTTP port feature. To do that, we set the property quarkus.http.port to 0.

quarkus.http.port=0

Then we create the Consul client bean. By default, it tries to connect to a server on localhost, port 8500, so we don’t need to provide any additional configuration.

@ApplicationScoped
public class EmployeeBeansProducer {

   @Produces
   Consul consulClient = Consul.builder().build();

}

Every instance of a Quarkus application should register itself in Consul just after startup. Consequently, it also needs to be able to deregister itself on shutdown. Therefore, we first implement a bean responsible for intercepting startup and shutdown events, which is not hard with Quarkus.

The bean responsible for catching the startup and shutdown events is annotated with @ApplicationScoped. It defines two methods, onStart and onStop, and injects the Consul client bean. Quarkus generates the HTTP listen port on startup and saves it in the quarkus.http.port property, so the startup task needs to wait a moment to ensure that the application is running. We run it 5 seconds after receiving the startup event. To register an application in Consul, we use the Consul agent client. Every instance of the application needs a unique id in Consul, so we retrieve the number of running instances and use that number as the id suffix. The name of the service is taken from the quarkus.application.name property. The instance should store its id in order to be able to deregister itself on shutdown.

@ApplicationScoped
public class EmployeeLifecycle {

   private static final Logger LOGGER = LoggerFactory
         .getLogger(EmployeeLifecycle.class);
   private String instanceId;

   @Inject
   Consul consulClient;
   @ConfigProperty(name = "quarkus.application.name")
   String appName;
   @ConfigProperty(name = "quarkus.application.version")
   String appVersion;

   void onStart(@Observes StartupEvent ev) {
      ScheduledExecutorService executorService = Executors
            .newSingleThreadScheduledExecutor();
      executorService.schedule(() -> {
         HealthClient healthClient = consulClient.healthClient();
         List<ServiceHealth> instances = healthClient
               .getHealthyServiceInstances(appName).getResponse();
         instanceId = appName + "-" + instances.size();
         ImmutableRegistration registration = ImmutableRegistration.builder()
               .id(instanceId)
               .name(appName)
               .address("127.0.0.1")
               .port(Integer.parseInt(System.getProperty("quarkus.http.port")))
               .putMeta("version", appVersion)
               .build();
         consulClient.agentClient().register(registration);
         LOGGER.info("Instance registered: id={}", registration.getId());
      }, 5000, TimeUnit.MILLISECONDS);
   }

   void onStop(@Observes ShutdownEvent ev) {
      consulClient.agentClient().deregister(instanceId);
      LOGGER.info("Instance de-registered: id={}", instanceId);
   }

}

Run Quarkus microservices locally

Thanks to the random HTTP port feature, we don’t have to worry about port conflicts between applications, so we can run as many instances as we need. To run a single instance of an application, we use the quarkus:dev Maven command.

$ mvn compile quarkus:dev

Let’s look at the logs after the employee-service startup. The application successfully called the Consul API using the Consul agent. After a 5-second delay, it sends the instance id and port number.

Let’s take a look at the list of services registered in Consul.

(image: quarkus-consul-services)

I ran two instances of every microservice. Let’s take a look at the list of instances registered, for example, by employee-service.

(image: quarkus-consul-instances)

Integrate Quarkus REST client with Consul discovery

Both department-service and organization-service use the Quarkus REST client module to communicate with other microservices.

<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-rest-client</artifactId>
</dependency>

Let’s take a look at the EmployeeClient interface inside department-service. We won’t use @RegisterRestClient on it. It is just annotated with @Path and contains a single @GET method.

@Path("/employees")
public interface EmployeeClient {

    @GET
    @Path("/department/{departmentId}")
    @Produces(MediaType.APPLICATION_JSON)
    List<Employee> findByDepartment(@PathParam("departmentId") Long departmentId);

}

We won’t provide the target address of the service, only its name as registered in the discovery server. The base URI is available in the application.properties file.

client.employee.uri=http://employee

The REST client uses a filter to fetch the list of running instances registered in Consul. The filter implements a round-robin load balancer: it replaces the service name in the target URI with the IP address and port of a particular instance.

public class LoadBalancedFilter implements ClientRequestFilter {

   private static final Logger LOGGER = LoggerFactory
         .getLogger(LoadBalancedFilter.class);

   private Consul consulClient;
   private AtomicInteger counter = new AtomicInteger();

   public LoadBalancedFilter(Consul consulClient) {
      this.consulClient = consulClient;
   }

   @Override
   public void filter(ClientRequestContext ctx) {
      URI uri = ctx.getUri();
      HealthClient healthClient = consulClient.healthClient();
      List<ServiceHealth> instances = healthClient
            .getHealthyServiceInstances(uri.getHost()).getResponse();
      instances.forEach(it ->
            LOGGER.info("Instance: uri={}:{}",
                  it.getService().getAddress(),
                  it.getService().getPort()));
      // wrap the counter so the index never runs past the end of the instance list
      ServiceHealth instance = instances
            .get(Math.floorMod(counter.getAndIncrement(), instances.size()));
      URI u = UriBuilder.fromUri(uri)
            .host(instance.getService().getAddress())
            .port(instance.getService().getPort())
            .build();
      ctx.setUri(u);
   }

}
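
Stripped of the Consul API calls, the selection step in the filter is just an atomic counter wrapped modulo the instance count. Here is a standalone sketch (the class and method names are mine); Math.floorMod keeps the index inside the list bounds even after the int counter overflows:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class RoundRobin {

    private final AtomicInteger counter = new AtomicInteger();

    // Returns the next instance in rotation. Math.floorMod keeps the index
    // inside the list bounds, even after the int counter overflows.
    public <T> T next(List<T> instances) {
        return instances.get(Math.floorMod(counter.getAndIncrement(), instances.size()));
    }

    public static void main(String[] args) {
        RoundRobin roundRobin = new RoundRobin();
        List<String> instances = List.of("127.0.0.1:8081", "127.0.0.1:8082");
        for (int i = 0; i < 4; i++) {
            System.out.println(roundRobin.next(instances));
        }
    }
}
```

Calling next repeatedly cycles through the instances in order and starts over from the first one after reaching the end.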

Finally, we need to register the filter bean in the REST client builder. After that, our Quarkus application is fully integrated with Consul discovery.

@ApplicationScoped
public class DepartmentBeansProducer {

   @ConfigProperty(name = "client.employee.uri")
   String employeeUri;
   @Produces
   Consul consulClient = Consul.builder().build();

   @Produces
   LoadBalancedFilter filter = new LoadBalancedFilter(consulClient);

   @Produces
   EmployeeClient employeeClient() throws URISyntaxException {
      URIBuilder builder = new URIBuilder(employeeUri);
      return RestClientBuilder.newBuilder()
            .baseUri(builder.build())
            .register(filter)
            .build(EmployeeClient.class);
   }

}

Read configuration properties from Consul

Although Quarkus does not provide built-in integration with Consul discovery, it is able to read configuration properties from Consul. Firstly, we need to add the Quarkus Consul Config module to the Maven dependencies.

<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-consul-config</artifactId>
</dependency>

Then, we enable the mechanism with the quarkus.consul-config.enabled property.

quarkus.application.name=employee
quarkus.consul-config.enabled=true
quarkus.consul-config.properties-value-keys=config/${quarkus.application.name}

The Quarkus Consul Config client reads properties from the KV store based on the location set in the quarkus.consul-config.properties-value-keys property. Let’s create the settings responsible for the database connection and for enabling the random HTTP port feature.

(image: quarkus-consul-config)
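
For example, the value stored under the config/employee key could contain properties like these (the exact datasource settings below are illustrative, not taken from the repository):

```properties
quarkus.http.port=0
quarkus.datasource.jdbc.url=jdbc:h2:mem:employee
quarkus.datasource.username=sa
```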

Finally, we can run the application. The effect is the same as if the properties were stored in the standard application.properties file. The configuration for department-service and organization-service looks pretty similar, but it also contains the URLs used by the HTTP clients to call other microservices. For some reason, the property quarkus.datasource.db-kind=h2 always needs to be set inside the application.properties file.

Testing Quarkus Consul discovery with gateway

All the applications listen on random HTTP ports. To simplify testing, we should run an API gateway that listens on a fixed port. Since Quarkus does not provide any implementation of an API gateway, we are going to use Spring Cloud Gateway. We can easily integrate it with Consul using the Spring Cloud discovery client.

<dependency>
   <groupId>org.springframework.cloud</groupId>
   <artifactId>spring-cloud-starter-gateway</artifactId>
</dependency>
<dependency>
   <groupId>org.springframework.cloud</groupId>
   <artifactId>spring-cloud-starter-loadbalancer</artifactId>
</dependency>
<dependency>
   <groupId>org.springframework.cloud</groupId>
   <artifactId>spring-cloud-starter-consul-discovery</artifactId>
</dependency>

The configuration of Spring Cloud Gateway contains a list of routes. We need to create three routes for all our sample applications.

spring:
  application:
    name: gateway-service
  cloud:
    gateway:
      discovery:
        locator:
          enabled: true
      routes:
        - id: employee-service
          uri: lb://employee
          predicates:
            - Path=/api/employees/**
          filters:
            - StripPrefix=1
        - id: department-service
          uri: lb://department
          predicates:
            - Path=/api/departments/**
          filters:
            - StripPrefix=1
        - id: organization-service
          uri: lb://organization
          predicates:
            - Path=/api/organizations/**
          filters:
            - StripPrefix=1
    loadbalancer:
      ribbon:
        enabled: false

Now, you can perform some test calls yourself. The API gateway is available on port 8080 and uses the prefix /api. Here are some curl commands to list all available employees, departments, and organizations.

$ curl http://localhost:8080/api/employees
$ curl http://localhost:8080/api/departments
$ curl http://localhost:8080/api/organizations

Conclusion

Although Quarkus is a Kubernetes-native framework, we can use it to run microservices outside Kubernetes. The only problem we may encounter is the lack of support for external discovery, and this article shows how to solve it. As a result, we created a microservices architecture based on our custom discovery mechanism and the built-in support for configuration properties in Consul. It is worth noting that Quarkus also provides integration with other third-party configuration solutions like Vault or Spring Cloud Config. If you are interested in a comparable solution based on Spring Boot and Spring Cloud, you should read the article Microservices with Spring Boot, Spring Cloud Gateway and Consul Cluster.

Using Spring Cloud Kubernetes External Library
https://piotrminkowski.com/2020/03/16/using-spring-cloud-kubernetes-external-library/
Mon, 16 Mar 2020 16:57:51 +0000

In this article I’m going to introduce my newest library for registering Spring Boot applications running outside a Kubernetes cluster. The motivation for creating this library has already been described in detail in my article Spring Cloud Kubernetes for Hybrid Microservices Architecture. Since Spring Cloud Kubernetes doesn’t implement registration in the service registry in any way, and just delegates it to the platform, it does not provide many benefits to applications running outside the Kubernetes cluster. To take advantage of Spring Cloud Kubernetes Discovery, you may just include the library spring-cloud-kubernetes-discovery-ext-client in your Spring Boot application running externally.
The current stable version of this library is 1.0.1.RELEASE.


<dependency>
  <groupId>com.github.piomin</groupId>
  <artifactId>spring-cloud-kubernetes-discovery-ext-client</artifactId>
  <version>1.0.1.RELEASE</version>
</dependency>

The registration feature is disabled by default, so we need to set the property spring.cloud.kubernetes.discovery.register to true.

spring:
  cloud:
    kubernetes:
      discovery:
        register: true

When running an application, you need to set the target Kubernetes namespace where it will be registered after startup. Here we use a mechanism provided by Spring Cloud Kubernetes, which allows you to set the default namespace for the Fabric8 Kubernetes Client via the KUBERNETES_NAMESPACE environment variable.

The registration mechanism is based on two Kubernetes objects: Service and Endpoints. It creates a Service with the name taken from the property spring.application.name. In the annotations field of the Service object, it puts the path of the health check endpoint, which by default is /actuator/health. A new Service object is created only if one does not already exist. The following screen shows the details of the Service created by the library for the application api-test.

(image: spring-cloud-kubernetes-external-library-service)

The path of the health check endpoint may be overridden using property spring.cloud.kubernetes.discovery.healthUrl.

spring:
  cloud:
    kubernetes:
      discovery:
        healthUrl: /actuator/liveness 

The next step is to create an Endpoints object. Normally, you don’t deal much with Endpoints directly, since it just tracks the IP addresses of the pods the service sends traffic to. The name of the Endpoints object is the same as the name of the Service. The IP address of the application is stored in the subsets section. To distinguish Endpoints created by the library for external applications from Endpoints registered automatically by the platform, each of them is labeled with the external flag set to true.

(image: spring-cloud-kubernetes-external-library-endpoints)

We can display the details of a selected Endpoints object with the kubectl describe command to see its structure.

(image: spring-cloud-kubernetes-external-library-endpoints-describe)
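
For illustration, an Endpoints object created this way could look roughly like the following manifest (the name, label value, IP, and port below are example values, not taken from the repository):

```yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: api-test
  labels:
    external: "true"
subsets:
  - addresses:
      - ip: 192.168.99.1
    ports:
      - port: 8080
```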

The IP address of the Spring Boot application is automatically detected by the library using the Java method InetAddress.getLocalHost().getHostAddress(). You may set a static IP address instead using the property spring.cloud.kubernetes.discovery.ipAddress, as shown below.

spring:
  cloud:
    kubernetes:
      discovery:
        ipAddress: 192.168.99.1
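
The automatic detection boils down to a single JDK call. Here is a standalone sketch (the class name is mine), which also falls back to the loopback address when the host name cannot be resolved; note that on multi-homed machines the detected address may not be the routable one, which is one reason the static ipAddress override can be useful:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class HostAddressResolver {

    // Resolves the local host name and returns its IP address as a string,
    // mirroring the detection performed by the library.
    public static String detect() {
        try {
            return InetAddress.getLocalHost().getHostAddress();
        } catch (UnknownHostException e) {
            // fall back to loopback when local resolution fails
            return "127.0.0.1";
        }
    }

    public static void main(String[] args) {
        System.out.println("Detected address: " + detect());
    }
}
```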

The library spring-cloud-kubernetes-discovery-ext-client is based on the Spring Cloud Kubernetes project and uses the Kubernetes API client provided by it. The Spring Cloud Release Train version used by the library is Hoxton.RELEASE.


<dependencies>
   <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-starter-kubernetes</artifactId>
   </dependency>
</dependencies>
<dependencyManagement>
   <dependencies>
      <dependency>
         <groupId>org.springframework.cloud</groupId>
         <artifactId>spring-cloud-dependencies</artifactId>
         <version>Hoxton.RELEASE</version>
         <type>pom</type>
         <scope>import</scope>
      </dependency>
   </dependencies>
</dependencyManagement>

Assuming you are running a local instance of a Kubernetes cluster, you should at least provide the address of the master API and, for development purposes only, set the property spring.cloud.kubernetes.client.trustCerts to true. Here’s the bootstrap.yml for my Spring Boot demo application.

spring:
  application:
    name: api-test
  cloud:
    kubernetes:
      discovery:
        register: true
      client:
        masterUrl: 192.168.99.100:8443
        trustCerts: true

If you shut down your Spring Boot application gracefully, spring-cloud-kubernetes-discovery-ext-client will unregister it from the Kubernetes API. However, we always have to consider situations like a forceful kill of the application or network problems, which may leave stale instances in the Kubernetes API. Such situations should be handled on the platform side. Since Kubernetes Discovery does not provide any built-in mechanism for that (like heartbeats for applications running outside the cluster), you may provide your own implementation as a Kubernetes Job, or you can just use my library spring-cloud-kubernetes-discovery-ext-watcher, which is responsible for detecting and removing inactive Endpoints.
The main idea behind that library is illustrated in the picture below. The module spring-cloud-kubernetes-discovery-ext-watcher is in fact a Spring Boot application that needs to run on Kubernetes. It periodically queries the Kubernetes API to fetch the current list of external Endpoints registered by applications using the spring-cloud-kubernetes-discovery-ext-client library. Then it tries to call the health endpoint registered for each application, using the IP address and port taken from the master API. If it receives no response, or receives a response with HTTP status 5XX several times in a row, it removes the IP address from the subsets section of the Endpoints object.

(image: spring-cloud-kubernetes-external-library-diagram)

By default, the spring-cloud-kubernetes-discovery-ext-watcher application checks endpoints registered in the same Kubernetes namespace as the application itself. This behaviour may be customized using configuration properties. We can set the target namespace with the property spring.cloud.kubernetes.watcher.targetNamespace, or enable watching Endpoints labeled with external=true across all namespaces by setting spring.cloud.kubernetes.watcher.allNamespaces to true. We can also override some default retry settings for calling application health endpoints, like the number of retries (3 by default) or the connect timeout (1000 ms by default). The configuration settings need to be delivered to the watcher application as an application.yml or bootstrap.yml file.

spring:
  cloud:
    kubernetes:
      watcher:
        targetNamespace: test
        retries: 5
        retryTimeout: 5000
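
The removal rule described above (no response, or a 5XX response, several times in a row) can be sketched as a small piece of state that is independent of the actual Kubernetes and HTTP calls. The class and method names below are mine, not the library’s API:

```java
import java.util.HashMap;
import java.util.Map;

public class FailureTracker {

    private final int maxRetries;
    private final Map<String, Integer> failuresInRow = new HashMap<>();

    public FailureTracker(int maxRetries) {
        this.maxRetries = maxRetries;
    }

    // Records the result of one health check. Returns true when the address
    // has failed maxRetries times in a row and should be removed from the
    // subsets section of the Endpoints object.
    public boolean record(String address, boolean healthy) {
        if (healthy) {
            failuresInRow.remove(address); // a success resets the streak
            return false;
        }
        return failuresInRow.merge(address, 1, Integer::sum) >= maxRetries;
    }
}
```

A single successful health check resets the counter, so only genuinely unreachable instances are removed.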

To deploy the spring-cloud-kubernetes-discovery-ext-watcher application on your Kubernetes cluster, you just need to apply the following Deployment definition using the kubectl apply command.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: spring-cloud-discovery-watcher
spec:
  selector:
    matchLabels:
      app: spring-cloud-discovery-watcher
  template:
    metadata:
      labels:
        app: spring-cloud-discovery-watcher
    spec:
      containers:
      - name: watcher
        image: piomin/spring-cloud-discovery-watcher
        ports:
        - containerPort: 8080

Since the application uses Spring Cloud Kubernetes to access the master API, you need to grant it privileges to read objects like Service, Endpoints, or ConfigMap. For development purposes, you can just assign the cluster-admin role to the default ServiceAccount in the target namespace.

$ kubectl create clusterrolebinding admin-external --clusterrole=cluster-admin --serviceaccount=external:default

If you would like to override some default configuration settings, you should define an application.yml file and place it inside a ConfigMap. Since spring-cloud-kubernetes-discovery-ext-watcher uses Spring Cloud Kubernetes Config, we may take advantage of its integration with ConfigMap. To do that, just create a ConfigMap with the same metadata.name as the application name.

apiVersion: v1
kind: ConfigMap
metadata:
  name: api-test
data:
  application.yaml: |-
    spring:
      cloud:
        kubernetes:
          watcher:
            allNamespaces: true
            retries: 5
            retryTimeout: 5000

Here’s our sample deployment in the external namespace.

(image: spring-boot-admin-on-kubernetes-watcher-deployment)

Now, let’s take a look at the logs generated by the spring-cloud-kubernetes-discovery-ext-watcher application. Before running it, I started my sample application, which uses spring-cloud-kubernetes-discovery-ext-client, outside Kubernetes. It was registered under the address 192.168.99.1:8080. As you can see in the following logs, at the beginning the watcher application was able to communicate with the sample application’s Actuator endpoint. Then I killed the sample application. Since the watcher was unable to call the Actuator endpoint of the previously checked application, it finally removed that address from the subsets section of the api-test Endpoints object.

(image: spring-cloud-kubernetes-external-library-watcher-logs)

Here’s the Endpoints object after removal of address 192.168.99.1:8080.

(image: spring-cloud-kubernetes-external-library-endpoint-after-remove)

Summary

The repository with the client library and the watcher application is available on GitHub: https://github.com/piomin/spring-cloud-kubernetes-discovery-ext.git. The library spring-cloud-kubernetes-discovery-ext-client is available in the Maven Central Repository. The Docker image with the watcher application is available on Docker Hub: https://hub.docker.com/repository/docker/piomin/spring-cloud-discovery-watcher. You can also run it directly from the source code using Skaffold.

Guide To Micronaut Kubernetes
https://piotrminkowski.com/2020/01/07/guide-to-micronaut-kubernetes/
Tue, 07 Jan 2020 10:24:11 +0000

Micronaut provides a library that eases the development of applications deployed on Kubernetes or on a local single-node cluster like Minikube. The project Micronaut Kubernetes is relatively new in the Micronaut family; its current release version is 1.0.3. It allows you to integrate a Micronaut application with Kubernetes discovery and use the Micronaut Configuration Client to read Kubernetes ConfigMap and Secret objects as property sources. Additionally, it provides a health check indicator based on communication with the Kubernetes API.
Thanks to that module, you can simplify and speed up your Micronaut application deployment on Kubernetes during development. In this article I’m going to show how to use Micronaut Kubernetes together with some other interesting tools that simplify local development with Minikube. The topics covered in this article are:

  • Using Skaffold together with Jib Maven Plugin to automatically publish application to Minikube after source code change
  • Providing communication between applications using the Micronaut HTTP Client, based on the Kubernetes Endpoints name
  • Enabling Kubernetes ConfigMap and Secret as Micronaut Property Sources
  • Using application health check
  • Integrating application with MongoDB running on Minikube

Micronaut Kubernetes example on GitHub

The source code with the Micronaut Kubernetes example is, as usual, available on GitHub: https://github.com/piomin/sample-micronaut-kubernetes.git. Here’s the architecture of our example system, consisting of three microservices built on top of the Micronaut framework.

(image: guide-to-micronaut-kubernetes-architecture)

Using Skaffold and Jib

Development with Minikube may be a little bit more complicated than the standard approach, where you test an application locally without running it on the platform. First you need to build your application from source code, then build its Docker image, and finally redeploy the application on Kubernetes using the newest image. Skaffold performs all these steps automatically for you. The only thing you need to do is install it on your machine and enable it for your Maven project using the command skaffold init. The command skaffold init just creates a file skaffold.yaml in the root of the project. Of course, you can create such a manifest by yourself, especially if you would like to use Skaffold together with Jib. Here’s my skaffold.yaml manifest. We set the name of the Docker image, the tagging policy to the Git commit id, and also enabled Jib.

apiVersion: skaffold/v2alpha1
kind: Config
build:
  artifacts:
    - image: piomin/employee
      jib: {}
  tagPolicy:
    gitCommit: {}

Why do we need to use Jib? By default, Skaffold is based on a Dockerfile, so each change will be published to Kubernetes only after the JAR file changes. With Jib, Skaffold watches for changes in the source code and automatically rebuilds your Maven project first.

<plugin>
   <groupId>com.google.cloud.tools</groupId>
   <artifactId>jib-maven-plugin</artifactId>
   <version>1.8.0</version>
</plugin>

Now you just need to run the command skaffold dev in a selected Maven project, and your application will be automatically deployed to Kubernetes on every change in the source code. Additionally, Skaffold may apply a Kubernetes manifest file if it is located in the k8s directory.

(image: k8s)

Implementation of Micronaut Kubernetes example

Let’s begin with the implementation. Each of our applications uses MongoDB as a backend store, with a synchronous Java client for integration. Micronaut comes with the micronaut-mongo-reactive project, which provides auto-configuration for both reactive and non-reactive drivers.

<dependency>
   <groupId>io.micronaut.configuration</groupId>
   <artifactId>micronaut-mongo-reactive</artifactId>
</dependency>
<dependency>
   <groupId>org.mongodb</groupId>
   <artifactId>mongo-java-driver</artifactId>
</dependency>

It is based on the mongodb.uri property and allows you to inject a preconfigured MongoClient bean. Then we use MongoClient for save and find operations. When using it, we first need to select the current database and collection. All required parameters (uri, database, and collection) are taken from external configuration.
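
For instance, these properties could be provided in application.yml like this (the URI, database, and collection values below are illustrative, not taken from the repository):

```yaml
mongodb:
  uri: mongodb://localhost:27017
  database: admin
  collection: employee
```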

@Singleton
public class EmployeeRepository {

   private MongoClient mongoClient;

   @Property(name = "mongodb.database")
   private String mongodbDatabase;
   @Property(name = "mongodb.collection")
   private String mongodbCollection;

   EmployeeRepository(MongoClient mongoClient) {
      this.mongoClient = mongoClient;
   }

   public Employee add(Employee employee) {
      employee.setId(repository().countDocuments() + 1);
      repository().insertOne(employee);
      return employee;
   }

   public Employee findById(Long id) {
      // filter by the identifier instead of always returning the first document
      return repository().find(Filters.eq("_id", id)).first();
   }

   public List<Employee> findAll() {
      final List<Employee> employees = new ArrayList<>();
      repository()
            .find()
            .iterator()
            .forEachRemaining(employees::add);
      return employees;
   }

   public List<Employee> findByDepartment(Long departmentId) {
      final List<Employee> employees = new ArrayList<>();
      repository()
            .find(Filters.eq("departmentId", departmentId))
            .iterator()
            .forEachRemaining(employees::add);
      return employees;
   }

   public List<Employee> findByOrganization(Long organizationId) {
      final List<Employee> employees = new ArrayList<>();
      repository()
            .find(Filters.eq("organizationId", organizationId))
            .iterator()
            .forEachRemaining(employees::add);
      return employees;
   }

   private MongoCollection<Employee> repository() {
      return mongoClient.getDatabase(mongodbDatabase).getCollection(mongodbCollection, Employee.class);
   }

}

Each application exposes REST endpoints for CRUD operations. Here's the controller implementation for employee-service.

@Controller("/employees")
public class EmployeeController {

   private static final Logger LOGGER = LoggerFactory.getLogger(EmployeeController.class);

   @Inject
   EmployeeRepository repository;

   @Post
   public Employee add(@Body Employee employee) {
      LOGGER.info("Employee add: {}", employee);
      return repository.add(employee);
   }

   @Get("/{id}")
   public Employee findById(Long id) {
      LOGGER.info("Employee find: id={}", id);
      return repository.findById(id);
   }

   @Get
   public List<Employee> findAll() {
      LOGGER.info("Employees find");
      return repository.findAll();
   }

   @Get("/department/{departmentId}")
   public List<Employee> findByDepartment(Long departmentId) {
      LOGGER.info("Employees find: departmentId={}", departmentId);
      return repository.findByDepartment(departmentId);
   }

   @Get("/organization/{organizationId}")
   public List<Employee> findByOrganization(Long organizationId) {
      LOGGER.info("Employees find: organizationId={}", organizationId);
      return repository.findByOrganization(organizationId);
   }

}

We may use the Micronaut declarative HTTP client for communication with REST endpoints. We just need to create an interface annotated with @Client that declares the calling methods.

@Client(id = "employee", path = "/employees")
public interface EmployeeClient {

   @Get("/department/{departmentId}")
   List<Employee> findByDepartment(Long departmentId);

}

Micronaut HTTP clients can be integrated with Kubernetes discovery so that the name of a Kubernetes Endpoints object is used as the service id. The client is then injected into the controller. In the following code you may see the implementation of a controller in the department-service that uses EmployeeClient.

@Controller("/departments")
public class DepartmentController {

   private static final Logger LOGGER = LoggerFactory.getLogger(DepartmentController.class);

   private DepartmentRepository repository;
   private EmployeeClient employeeClient;

   DepartmentController(DepartmentRepository repository, EmployeeClient employeeClient) {
      this.repository = repository;
      this.employeeClient = employeeClient;
   }

   @Post
   public Department add(@Body Department department) {
      LOGGER.info("Department add: {}", department);
      return repository.add(department);
   }

   @Get("/{id}")
   public Department findById(Long id) {
      LOGGER.info("Department find: id={}", id);
      return repository.findById(id);
   }

   @Get
   public List<Department> findAll() {
      LOGGER.info("Department find");
      return repository.findAll();
   }

   @Get("/organization/{organizationId}")
   public List<Department> findByOrganization(Long organizationId) {
      LOGGER.info("Department find: organizationId={}", organizationId);
      return repository.findByOrganization(organizationId);
   }

   @Get("/organization/{organizationId}/with-employees")
   public List<Department> findByOrganizationWithEmployees(Long organizationId) {
      LOGGER.info("Department find: organizationId={}", organizationId);
      List<Department> departments = repository.findByOrganization(organizationId);
      departments.forEach(d -> d.setEmployees(employeeClient.findByDepartment(d.getId())));
      return departments;
   }

}

Discovery with Micronaut Kubernetes

Using a serviceId for communication with the Micronaut HTTP client requires integration with service discovery. Since we are running our applications on Kubernetes, we are going to use its service registry. Here comes Micronaut Kubernetes, which integrates Micronaut applications with Kubernetes discovery via the Endpoints object. First, let's add the required dependency.

<dependency>
   <groupId>io.micronaut.kubernetes</groupId>
   <artifactId>micronaut-kubernetes-discovery-client</artifactId>
</dependency>

In fact, we don't have to do anything else: after adding this dependency, integration with Kubernetes discovery is enabled. We may proceed to the deployment. In the Kubernetes Service definition, the metadata.name field should be the same as the id field inside the @Client annotation.


apiVersion: v1
kind: Service
metadata:
  name: employee
  labels:
    app: employee
spec:
  ports:
    - port: 8080
      protocol: TCP
  selector:
    app: employee
  type: NodePort

Here's the YAML Deployment manifest for the employee application. The container is exposed on port 8080 and uses the latest tag of the piomin/employee image, which is set in the Skaffold manifest.


apiVersion: apps/v1
kind: Deployment
metadata:
  name: employee
  labels:
    app: employee
spec:
  replicas: 1
  selector:
    matchLabels:
      app: employee
  template:
    metadata:
      labels:
        app: employee
    spec:
      containers:
        - name: employee
          image: piomin/employee
          ports:
            - containerPort: 8080

We can also increase the log level for Kubernetes API client calls and for the whole Micronaut Kubernetes project to DEBUG. Here's the relevant fragment of our logback.xml.

<logger name="io.micronaut.http.client" level="DEBUG"/>
<logger name="io.micronaut.kubernetes" level="DEBUG"/>

Micronaut Kubernetes discovery additionally allows us to filter the list of registered services. We may define the list of included or excluded services using the properties kubernetes.client.discovery.includes or kubernetes.client.discovery.excludes. Assuming we have many services registered in the same namespace, this feature may be quite useful. Here's the list of services registered in the default namespace after deploying all our sample microservices and MongoDB.

[Screenshot: services registered in the default namespace]

Since one of our applications, department-service, communicates only with employee-service, we may reduce the list of discovered services to employee only.


kubernetes:
  client:
    discovery:
      includes:
        - employee

Configuration Client

The configuration client reads Kubernetes ConfigMaps and Secrets and makes them available as PropertySources for your application. Since configuration parsing happens in the bootstrap phase, we need to define the following property in bootstrap.yml in order to enable the distributed configuration client.


micronaut:
  application:
    name: employee
  config-client:
    enabled: true

By default, the configuration client reads all the ConfigMaps and Secrets in the configured namespace. You can filter the list of ConfigMap names by defining kubernetes.client.config-maps.includes or kubernetes.client.config-maps.excludes. Alternatively, we may use Kubernetes labels, which give us more flexibility. This configuration also needs to be provided in the bootstrap phase. Reading Secrets is disabled by default, so we also need to enable it. Here's the configuration for department-service, which is similar for all other apps.


kubernetes:
  client:
    config-maps:
      labels:
        - app: department
    secrets:
      enabled: true
      labels:
        - app: department

The Kubernetes ConfigMap and Secret also need to be labeled with app=department.


apiVersion: v1
kind: ConfigMap
metadata:
  name: department
  labels:
    app: department
data:
  application.yaml: |-
    mongodb:
      collection: department
      database: admin
    kubernetes:
      client:
        discovery:
          includes:
            - employee

Here's the Secret definition for department-service. We configure there the mongodb.uri property, which contains sensitive data like the username and password. It is used by MongoClient for establishing a connection with the server.


apiVersion: v1
kind: Secret
metadata:
  name: department
  labels:
    app: department
type: Opaque
data:
  mongodb.uri: bW9uZ29kYjovL21pY3JvbmF1dDptaWNyb25hdXRfMTIzQG1vbmdvZGI6MjcwMTcvYWRtaW4=
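The value of mongodb.uri in the Secret above is just the Base64-encoded connection string. You can reproduce or verify it with a few lines of plain Java:

```java
import java.util.Base64;

public class SecretValue {

    public static void main(String[] args) {
        String uri = "mongodb://micronaut:micronaut_123@mongodb:27017/admin";
        // encode the plain connection string, as required by the data section of a Secret
        String encoded = Base64.getEncoder().encodeToString(uri.getBytes());
        System.out.println(encoded);
        // decoding the Secret value gives back the plain URI
        System.out.println(new String(Base64.getDecoder().decode(encoded)));
    }

}
```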

Running sample applications

Before running any application in the default namespace we need to set the appropriate permissions. Micronaut Kubernetes requires read access to pods, endpoints, secrets, services and config maps. For development needs we may set the highest level of permissions by creating a ClusterRoleBinding pointing to the cluster-admin role.

$ kubectl create clusterrolebinding admin --clusterrole=cluster-admin --serviceaccount=default:default
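For anything beyond local development, a narrower Role limited to read access on the resources listed above would be a better fit. A sketch (the resource names come from the text; the Role name and namespace are just placeholders to adjust for your setup):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: micronaut-reader
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["pods", "endpoints", "services", "configmaps", "secrets"]
    verbs: ["get", "list", "watch"]
```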

One of the useful Skaffold features is the ability to print the standard output of a started container to the console. Thanks to that, you don't have to execute the kubectl logs command on a pod. Let's take a closer look at the logs during application startup. After increasing the level of logging we may find some interesting information there, for example client calls to the Kubernetes API. As you see on the screen below, our application tries to find a ConfigMap and Secret with the label department, following the configuration provided in bootstrap.yaml.

[Screenshot: configuration client calls to the Kubernetes API during startup]

Let’s add some test data to our database by calling endpoints exposed by our applications running on Kubernetes. Each of them is exposed outside the node thanks to NodePort service type.

$ curl http://192.168.99.100:32356/employees -d '{"name":"John Smith","age":30,"position":"director","departmentId":2,"organizationId":2}' -H "Content-Type: application/json"
{"id":1,"organizationId":2,"departmentId":2,"name":"John Smith","age":30,"position":"director"}
$ curl http://192.168.99.100:32356/employees -d '{"name":"Paul Walker","age":50,"position":"director","departmentId":2,"organizationId":2}' -H "Content-Type: application/json"
{"id":2,"organizationId":2,"departmentId":2,"name":"Paul Walker","age":50,"position":"director"}
$ curl http://192.168.99.100:31144/departments -d '{"name":"Test2","organizationId":2}' -H "Content-Type: application/json"
{"id":2,"organizationId":2,"name":"Test2"}

Now, we can test HTTP communication between department-service and employee-service by calling the method GET /organization/{organizationId}/with-employees, which finds all departments with employees belonging to a given organization.

$ curl http://192.168.99.100:31144/departments/organization/2/with-employees

Here’s the current list of endpoints registered in the namespace default.

[Screenshot: endpoints registered in the default namespace]

Let's take a look at the Micronaut HTTP client logs from department-service. As you see below, when it tries to call the endpoint GET /employees/department/{departmentId} it finds the container under IP 172.17.0.11.

[Screenshot: Micronaut HTTP client logs from department-service]

Health checks

To enable health checks for Micronaut applications we first need to add the following dependency to Maven pom.xml.

<dependency>
    <groupId>io.micronaut</groupId>
    <artifactId>micronaut-management</artifactId>
</dependency>

The Micronaut Kubernetes module provides a health check that probes communication with the Kubernetes API and shows some information about the pod and application. To enable a detailed view for unauthenticated users we need to set the following property.


endpoints:
  health:
    details-visible: ANONYMOUS

After that, we can take advantage of quite detailed information about the application, including the MongoDB connection status or HTTP client status, as shown below. By default, the health check is available under the path /health.

[Screenshot: detailed /health endpoint response]

Conclusion

Our Micronaut Kubernetes example integrates with the Kubernetes API in order to let applications read the components responsible for discovery and configuration. The integration between the Micronaut HTTP client and Kubernetes Endpoints, and between the Micronaut configuration client and Kubernetes ConfigMaps and Secrets, are useful features. I'm looking forward to other interesting features that may be included in Micronaut Kubernetes, since it is a relatively new project within Micronaut. Before starting with the Micronaut Kubernetes example you should learn about Micronaut basics: Micronaut Tutorial – Beans and Scopes.

The post Guide To Micronaut Kubernetes appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2020/01/07/guide-to-micronaut-kubernetes/feed/ 0 7597
Guide to Microservices with Micronaut and Consul https://piotrminkowski.com/2019/01/25/quick-guide-to-microservices-with-micronaut-framework/ https://piotrminkowski.com/2019/01/25/quick-guide-to-microservices-with-micronaut-framework/#comments Fri, 25 Jan 2019 08:21:22 +0000 https://piotrminkowski.wordpress.com/?p=6969 Micronaut framework has been introduced as an alternative to Spring Boot for building microservices using such tools as Consul. At first glance, it is very similar to Spring. It also implements such patterns as dependency injection and inversion of control based on annotations, however, it uses JSR-330 (java.inject) for doing it. It has been designed […]

The post Guide to Microservices with Micronaut and Consul appeared first on Piotr's TechBlog.

]]>
The Micronaut framework has been introduced as an alternative to Spring Boot for building microservices using such tools as Consul. At first glance, it is very similar to Spring. It also implements patterns such as dependency injection and inversion of control based on annotations; however, it uses JSR-330 (javax.inject) for that. It has been designed specifically for building serverless functions, Android applications, and low memory-footprint microservices. This means that it should offer faster startup time, lower memory usage, and easier unit testing than competing frameworks. However, today I don't want to focus on those characteristics of Micronaut. I'm going to show you how to build a simple microservices-based system using this framework. You can easily compare it with Spring Boot and Spring Cloud by reading my previous article on the same subject, Quick Guide to Microservices with Spring Boot 2.0, Eureka and Spring Cloud. Does Micronaut have a chance to gain the same popularity as Spring Boot? Let's find out.

Our sample system consists of three independent microservices that communicate with each other. All of them integrate with Consul in order to fetch shared configuration. After startup, every single service registers itself in Consul. The applications organization-service and department-service call endpoints exposed by other microservices using the Micronaut declarative HTTP client. Traces from the communication are sent to Zipkin. The source code of the sample applications is available on GitHub in the repository sample-micronaut-microservices.

[Diagram: architecture of the sample microservices system]

Step 1. Creating Micronaut application

We need to start by including some dependencies in our Maven pom.xml. First, let's define a BOM with the newest stable Micronaut version.

<properties>
   <exec.mainClass>pl.piomin.services.employee.EmployeeApplication</exec.mainClass>
   <micronaut.version>1.0.3</micronaut.version>
   <jdk.version>1.8</jdk.version>
</properties>
<dependencyManagement>
   <dependencies>
      <dependency>
         <groupId>io.micronaut</groupId>
         <artifactId>micronaut-bom</artifactId>
         <version>${micronaut.version}</version>
         <type>pom</type>
         <scope>import</scope>
      </dependency>
   </dependencies>
</dependencyManagement>

The list of required dependencies isn't very long. Not all of them are strictly required, but they will be useful in our demo. For example, micronaut-management needs to be included if we would like to expose some built-in management and monitoring endpoints.

<dependency>
   <groupId>io.micronaut</groupId>
   <artifactId>micronaut-http-server-netty</artifactId>
</dependency>
<dependency>
   <groupId>io.micronaut</groupId>
   <artifactId>micronaut-inject</artifactId>
</dependency>
<dependency>
   <groupId>io.micronaut</groupId>
   <artifactId>micronaut-runtime</artifactId>
</dependency>
<dependency>
   <groupId>io.micronaut</groupId>
   <artifactId>micronaut-management</artifactId>
</dependency>
<dependency>
   <groupId>io.micronaut</groupId>
   <artifactId>micronaut-inject-java</artifactId>
   <scope>provided</scope>
</dependency>

To build an application uber-jar we need to configure a plugin responsible for packaging a JAR file with dependencies, for example maven-shade-plugin. When building a new application it is also worth exposing basic information about it under the /info endpoint. As I have already mentioned, Micronaut adds support for monitoring your app via HTTP endpoints after including the micronaut-management artifact. Management endpoints are integrated with the Micronaut security module, which means that you need to authenticate yourself to be able to access them. To simplify things, we can disable authentication for the /info endpoint.

endpoints:
  info:
    enabled: true
    sensitive: false

We can customize the /info endpoint by adding some supported info sources. This mechanism is very similar to the Spring Boot Actuator approach. If a git.properties file is available on the classpath, all the values inside the file will be exposed by the /info endpoint. The same applies to the build-info.properties file, which needs to be placed inside the META-INF directory. However, in comparison with Spring Boot, we need to provide more configuration in pom.xml to generate those files and package them into the application JAR. The following Maven plugins are responsible for generating the required properties files.

<plugin>
   <groupId>pl.project13.maven</groupId>
   <artifactId>git-commit-id-plugin</artifactId>
   <version>2.2.6</version>
   <executions>
      <execution>
         <id>get-the-git-infos</id>
         <goals>
            <goal>revision</goal>
         </goals>
      </execution>
   </executions>
   <configuration>
      <verbose>true</verbose>
      <dotGitDirectory>${project.basedir}/.git</dotGitDirectory>
      <dateFormat>MM-dd-yyyy '@' HH:mm:ss Z</dateFormat>
      <generateGitPropertiesFile>true</generateGitPropertiesFile>
      <generateGitPropertiesFilename>src/main/resources/git.properties</generateGitPropertiesFilename>
      <failOnNoGitDirectory>true</failOnNoGitDirectory>
   </configuration>
</plugin>
<plugin>
   <groupId>com.rodiontsev.maven.plugins</groupId>
   <artifactId>build-info-maven-plugin</artifactId>
   <version>1.2</version>
   <configuration>
      <filename>classes/META-INF/build-info.properties</filename>
      <projectProperties>
         <projectProperty>project.groupId</projectProperty>
         <projectProperty>project.artifactId</projectProperty>
         <projectProperty>project.version</projectProperty>
      </projectProperties>
   </configuration>
   <executions>
      <execution>
         <phase>prepare-package</phase>
         <goals>
            <goal>extract</goal>
         </goals>
      </execution>
   </executions>
</plugin>

Now, our /info endpoint is able to print the most important information about our app including Maven artifact name, version, and last Git commit id.

[Screenshot: /info endpoint response with build and Git details]

Step 2. Exposing HTTP endpoints

Micronaut provides its own annotations for defining HTTP endpoints and methods. As I mentioned in the preface, it also uses JSR-330 (javax.inject) for dependency injection. Our controller class should be annotated with @Controller. We also have annotations for every HTTP method type. A path parameter is automatically mapped to the class method parameter by its name, which is a nice simplification in comparison to Spring MVC, where we need to use the @PathVariable annotation. The repository bean used for CRUD operations is injected into the controller using the @Inject annotation.

@Controller("/employees")
public class EmployeeController {

    private static final Logger LOGGER = LoggerFactory.getLogger(EmployeeController.class);

    @Inject
    EmployeeRepository repository;

    @Post
    public Employee add(@Body Employee employee) {
        LOGGER.info("Employee add: {}", employee);
        return repository.add(employee);
    }

    @Get("/{id}")
    public Employee findById(Long id) {
        LOGGER.info("Employee find: id={}", id);
        return repository.findById(id);
    }

    @Get
    public List<Employee> findAll() {
        LOGGER.info("Employees find");
        return repository.findAll();
    }

    @Get("/department/{departmentId}")
    @ContinueSpan
    public List<Employee> findByDepartment(@SpanTag("departmentId") Long departmentId) {
        LOGGER.info("Employees find: departmentId={}", departmentId);
        return repository.findByDepartment(departmentId);
    }

    @Get("/organization/{organizationId}")
    @ContinueSpan
    public List<Employee> findByOrganization(@SpanTag("organizationId") Long organizationId) {
        LOGGER.info("Employees find: organizationId={}", organizationId);
        return repository.findByOrganization(organizationId);
    }

}

Our repository bean is pretty simple. It just provides an in-memory store for Employee instances. We mark it with the @Singleton annotation.

@Singleton
public class EmployeeRepository {

   private List<Employee> employees = new ArrayList<>();
   
   public Employee add(Employee employee) {
      employee.setId((long) (employees.size()+1));
      employees.add(employee);
      return employee;
   }
   
   public Employee findById(Long id) {
      return employees.stream()
            .filter(a -> a.getId().equals(id))
            .findFirst()
            .orElse(null);
   }
   
   public List<Employee> findAll() {
      return employees;
   }
   
   public List<Employee> findByDepartment(Long departmentId) {
      return employees.stream().filter(a -> a.getDepartmentId().equals(departmentId)).collect(Collectors.toList());
   }
   
   public List<Employee> findByOrganization(Long organizationId) {
      return employees.stream().filter(a -> a.getOrganizationId().equals(organizationId)).collect(Collectors.toList());
   }
   
}

Micronaut is able to automatically generate a Swagger YAML definition from our controllers and methods based on annotations. To achieve this, we first need to include the following dependency in our pom.xml.

<dependency>
   <groupId>io.swagger.core.v3</groupId>
   <artifactId>swagger-annotations</artifactId>
</dependency>

Then we should annotate the application's main class with @OpenAPIDefinition and provide some basic information like the title or version number. Here's the main class of employee-service.

@OpenAPIDefinition(
    info = @Info(
        title = "Employees Management",
        version = "1.0",
        description = "Employee API",
        contact = @Contact(url = "https://piotrminkowski.wordpress.com", name = "Piotr Mińkowski", email = "piotr.minkowski@gmail.com")
    )
)
public class EmployeeApplication {

    public static void main(String[] args) {
        Micronaut.run(EmployeeApplication.class);
    }

}

Micronaut generates the Swagger file based on the title and version fields inside the @Info annotation. In that case our YAML definition file is available under the name employees-management-1.0.yml and will be generated into the META-INF/swagger directory. We can expose it outside the application using an HTTP endpoint. Here's the appropriate configuration provided inside the application.yml file.

micronaut:
  router:
    static-resources:
      swagger:
        paths: classpath:META-INF/swagger
        mapping: /swagger/**

Now, our file is available under the path http://localhost:8080/swagger/employees-management-1.0.yml if the application runs on the default 8080 port (it won't, as I'll describe in the next part of this article). In comparison to Spring Boot, there is no project like Swagger SpringFox for Micronaut, so we need to copy the content to an online editor in order to see the graphical representation of the Swagger YAML. Here it is.

[Screenshot: Swagger editor rendering of the generated YAML definition]

Ok, since we have finished the implementation of a single microservice, we may proceed to the cloud-native features provided by Micronaut.

Step 3. Distributed configuration with Consul

Micronaut comes with built-in APIs for distributed configuration. In fact, the only available solution for now is distributed configuration based on Micronaut's integration with HashiCorp's Consul. Micronaut's features for externalizing and adapting configuration to the environment are very similar to the Spring Boot approach. We also have application.yml and bootstrap.yml files, which can be used for application environment configuration. When using distributed configuration we first need to provide a bootstrap.yml file on the classpath. It should contain the address of the remote configuration server and the preferred configuration store format. Of course, we first need to enable the distributed configuration client by setting the property micronaut.config-client.enabled to true. Here's the bootstrap.yml file for department-service.

micronaut:
  application:
    name: department-service
  config-client:
    enabled: true
consul:
  client:
    defaultZone: "192.168.99.100:8500"
    config:
      format: YAML

We can choose between properties, JSON, YAML and FILES (git2consul) configuration formats. I decided to use YAML. To apply this configuration to Consul we first need to start it locally in development mode. Because I'm using Docker Toolbox, the default address of Consul is 192.168.99.100. The following Docker command will start a single-node Consul instance and expose it on port 8500.

$ docker run -d --name consul -p 8500:8500 consul

Now, you can navigate to the Key/Value tab in the Consul web console and create a new file in YAML format, /config/application.yml, as shown below. Besides the configuration for Swagger and the /info management endpoint, it also enables dynamic HTTP port generation on startup by setting the property micronaut.server.port to -1. Because the name of the file is application.yml, it is by default shared between all Micronaut microservices that use the Consul config client.

[Screenshot: shared application.yml stored in the Consul Key/Value store]
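Based on the description above, the shared /config/application.yml stored in Consul might look more or less like this (a sketch assembled from the settings shown earlier in this article; exact values may differ):

```yaml
micronaut:
  server:
    # -1 makes Micronaut pick a random free HTTP port on startup
    port: -1
  router:
    static-resources:
      swagger:
        paths: classpath:META-INF/swagger
        mapping: /swagger/**
endpoints:
  info:
    enabled: true
    sensitive: false
```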

Step 4. Service discovery with Consul

Micronaut gives you more options for service discovery than for distributed configuration. You can use Eureka, Consul, Kubernetes, or just manually configure a list of available services. However, I have observed that using the Eureka discovery client together with the Consul config client causes some errors on startup. In this example we will use Consul discovery for our Micronaut microservices. Because the Consul address has already been provided in bootstrap.yml for all Micronaut microservices, we just need to enable service discovery by adding the following lines to the application.yml stored in the Consul KV store.

consul:
  client:
    registration:
      enabled: true

We should also include the following dependency in the Maven pom.xml of every single application.

<dependency>
   <groupId>io.micronaut</groupId>
   <artifactId>micronaut-discovery-client</artifactId>
</dependency>

Finally, you can just run every microservice (you may run more than one instance locally, since the HTTP port is generated dynamically). Here's my list of running Micronaut microservices registered in Consul.

[Screenshot: Micronaut microservices registered in Consul]

I have run two instances of employee-service as shown below.

[Screenshot: two registered instances of employee-service]

Step 5. Inter-service communication

Micronaut uses its built-in HTTP client for load balancing between multiple instances of a single microservice. By default it leverages the round-robin algorithm. We may choose between the low-level HTTP client and the declarative HTTP client with @Client. The Micronaut declarative HTTP client concept is very similar to Spring Cloud OpenFeign. To use the built-in client we first need to include the following dependency in the project pom.xml.

<dependency>
   <groupId>io.micronaut</groupId>
   <artifactId>micronaut-http-client</artifactId>
</dependency>
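The round-robin selection performed by the client can be sketched in plain Java. This is only an illustration of the algorithm, not Micronaut's actual implementation; the class and the instance addresses are made up for the example:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative round-robin choice over a fixed list of service instances.
public class RoundRobinBalancer {

    private final List<String> instances;
    private final AtomicInteger index = new AtomicInteger();

    public RoundRobinBalancer(List<String> instances) {
        this.instances = instances;
    }

    public String next() {
        // Math.floorMod keeps the result non-negative even after int overflow
        return instances.get(Math.floorMod(index.getAndIncrement(), instances.size()));
    }

    public static void main(String[] args) {
        RoundRobinBalancer lb = new RoundRobinBalancer(List.of("10.0.0.1:8080", "10.0.0.2:8080"));
        System.out.println(lb.next()); // 10.0.0.1:8080
        System.out.println(lb.next()); // 10.0.0.2:8080
        System.out.println(lb.next()); // 10.0.0.1:8080 again
    }

}
```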

The declarative client automatically integrates with the discovery client. It tries to find the service registered in Consul under the same name as the value provided in the id field.

@Client(id = "employee-service", path = "/employees")
public interface EmployeeClient {

   @Get("/department/{departmentId}")
   List<Employee> findByDepartment(Long departmentId);
   
}

Now, the client bean needs to be injected into the controller.

@Controller("/departments")
public class DepartmentController {

   private static final Logger LOGGER = LoggerFactory.getLogger(DepartmentController.class);
   
   @Inject
   DepartmentRepository repository;
   @Inject
   EmployeeClient employeeClient;
   
   @Post
   public Department add(@Body Department department) {
      LOGGER.info("Department add: {}", department);
      return repository.add(department);
   }
   
   @Get("/{id}")
   public Department findById(Long id) {
      LOGGER.info("Department find: id={}", id);
      return repository.findById(id);
   }
   
   @Get
   public List<Department> findAll() {
      LOGGER.info("Department find");
      return repository.findAll();
   }
   
   @Get("/organization/{organizationId}")
   @ContinueSpan
   public List<Department> findByOrganization(@SpanTag("organizationId") Long organizationId) {
      LOGGER.info("Department find: organizationId={}", organizationId);
      return repository.findByOrganization(organizationId);
   }
   
   @Get("/organization/{organizationId}/with-employees")
   @ContinueSpan
   public List<Department> findByOrganizationWithEmployees(@SpanTag("organizationId") Long organizationId) {
      LOGGER.info("Department find: organizationId={}", organizationId);
      List<Department> departments = repository.findByOrganization(organizationId);
      departments.forEach(d -> d.setEmployees(employeeClient.findByDepartment(d.getId())));
      return departments;
   }
   
}

Step 6. Distributed tracing

Micronaut applications can be easily integrated with Zipkin to automatically send traces for HTTP traffic. To enable this feature we first need to include the following dependencies in pom.xml.

<dependency>
   <groupId>io.micronaut</groupId>
   <artifactId>micronaut-tracing</artifactId>
</dependency>
<dependency>
   <groupId>io.zipkin.brave</groupId>
   <artifactId>brave-instrumentation-http</artifactId>
   <scope>runtime</scope>
</dependency>
<dependency>
   <groupId>io.zipkin.reporter2</groupId>
   <artifactId>zipkin-reporter</artifactId>
   <scope>runtime</scope>
</dependency>
<dependency>
   <groupId>io.opentracing.brave</groupId>
   <artifactId>brave-opentracing</artifactId>
</dependency>

Then, we have to provide some configuration settings inside application.yml, including the Zipkin URL and sampler options. By setting the property tracing.zipkin.sampler.probability to 1 we are forcing Micronaut to send traces for every single request. Here's our final configuration.

[Screenshot: final application.yml with Zipkin tracing settings]
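Based on the description above, the tracing settings might look more or less like this. Only tracing.zipkin.sampler.probability is quoted in the text; the remaining keys are assumptions about the Micronaut tracing module, so verify them against its documentation:

```yaml
tracing:
  zipkin:
    enabled: true
    http:
      url: http://192.168.99.100:9411
    sampler:
      # 1 = report a trace for every single request
      probability: 1
```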

During the tests of my application I observed that using distributed configuration together with Zipkin tracing results in problems in communication between the microservices and Zipkin: the traces just do not appear in Zipkin. So, if you would like to test this feature now, you must provide application.yml on the classpath and disable Consul distributed configuration for all your applications.

We can add some tags to the spans by using @ContinueSpan or @NewSpan annotations on methods.

After making some test calls of the GET methods exposed by organization-service and department-service, we may take a look at the Zipkin web console, available at http://192.168.99.100:9411. The following picture shows the list of all the traces sent to Zipkin by our microservices within 1 hour.

[Screenshot: list of traces collected by Zipkin]

We can check out the details of every trace by clicking on an element from the list. The following picture illustrates the timeline for the HTTP method exposed by organization-service: GET /organizations/{id}/with-departments-and-employees. This method finds the organization in the in-memory repository and then calls the HTTP method exposed by department-service: GET /departments/organization/{organizationId}/with-employees. That method is responsible for finding all departments assigned to the given organization. It also needs to return the employees within each department, so it calls the method GET /employees/department/{departmentId} from employee-service.

micronaut-8

We can also take a look at the details of every single call from the timeline.

micronaut-9

Conclusion

In comparison to Spring Boot, Micronaut is still at an early stage of development. For example, I was not able to implement an application that could act as an API gateway for our system, which can easily be achieved with Spring using Spring Cloud Gateway or Spring Cloud Netflix Zuul. There are still some bugs that need to be fixed. But above all, Micronaut is now probably the most interesting micro-framework on the market. It implements the most popular microservice patterns, provides integration with several third-party solutions like Consul, Eureka, Zipkin, or Swagger, consumes less memory, and starts faster than comparable Spring Boot apps. I will definitely follow the progress of Micronaut development closely.

The post Guide to Microservices with Micronaut and Consul appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2019/01/25/quick-guide-to-microservices-with-micronaut-framework/feed/ 3 6969
Microservices with Spring Cloud Alibaba https://piotrminkowski.com/2018/11/15/microservices-with-spring-cloud-alibaba/ https://piotrminkowski.com/2018/11/15/microservices-with-spring-cloud-alibaba/#respond Thu, 15 Nov 2018 08:45:12 +0000 https://piotrminkowski.wordpress.com/?p=6890 Some days ago Spring Cloud announced a support for several Alibaba components used for building microservices-based architecture. The project is still under the incubation stage, but there is a plan for graduating it from incubation to officially join a Spring Cloud Release Train in 2019. The currently released version 0.0.2.RELEASE is compatible with Spring Boot […]

The post Microservices with Spring Cloud Alibaba appeared first on Piotr's TechBlog.

]]>
Some days ago Spring Cloud announced support for several Alibaba components used for building microservices-based architecture. The project is still in the incubation stage, but there is a plan for graduating it from incubation to officially join a Spring Cloud Release Train in 2019. The currently released version 0.0.2.RELEASE is compatible with Spring Boot 2, while the older version 0.0.1.RELEASE is compatible with Spring Boot 1.x. This project seems to be very interesting, and it is currently the most popular repository amongst the Spring Cloud Incubator repositories (around 1.5k stars on GitHub).

Currently, the most commonly used Spring Cloud project for building a microservices architecture is Spring Cloud Netflix. As you probably know, this project provides Netflix OSS integrations for Spring Boot apps, including service discovery (Eureka), circuit breaker (Hystrix), intelligent routing (Zuul), and client-side load balancing (Ribbon). The first question that came to my mind when I was reading about Spring Cloud Alibaba was: 'Can Spring Cloud Alibaba be an alternative to Spring Cloud Netflix?'. The answer is yes, but not entirely. Spring Cloud Alibaba still integrates with Ribbon, which is used for load balancing based on service discovery. The Netflix Eureka server is replaced in that case by Nacos.
Nacos (Dynamic Naming and Configuration Service) is an easy-to-use platform designed for dynamic service discovery, configuration, and service management. It helps you to build cloud native applications and microservices platforms easily. Following that definition, you can use Nacos for:

  • Service Discovery – you can register your microservice and discover other microservices via a DNS or HTTP interface. It also provides real-time health checks for registered services
  • Distributed Configuration – dynamic configuration service provided by Nacos allows you to manage configurations of all services in a centralized and dynamic manner across all environments. In fact, you can replace Spring Cloud Config Server using it
  • Dynamic DNS – it supports weighted routing, making it easier to implement mid-tier load balancing, flexible routing policies, flow control, and simple DNS resolution services

Spring Cloud supports another popular Alibaba component – Sentinel. Sentinel is responsible for flow control, concurrency, circuit breaking and load protection.

Our sample system, consisting of three microservices and an API gateway, is very similar to the architecture described in the article Quick Guide to Microservices with Spring Boot 2.0, Eureka and Spring Cloud. The only difference is in the tools used for configuration management and service discovery. Microservice organization-service calls some endpoints exposed by department-service, while department-service calls endpoints exposed by employee-service. Inter-service communication is realized using an OpenFeign client. The complexity of the whole system is hidden behind an API gateway implemented with Netflix Zuul.

spring-cloud-alibaba-example

1. Running Nacos server

You can run Nacos on both Windows and Linux systems. First, you should download the latest stable release provided on the site https://github.com/alibaba/nacos/releases. After unzipping it, run it in standalone mode by executing the following command.

$ cmd nacos/bin/startup.cmd -m standalone

By default, Nacos starts on port 8848. It provides an HTTP API under the context /nacos/v1, and an admin web console at http://localhost:8848/nacos. If you take a look at the logs, you will find out that it is just an application written using the Spring Framework.
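As a quick smoke test of the HTTP API, you can query the naming endpoint that lists instances registered under a given service name (the service name below is just an example from our system):

```shell
$ curl "http://localhost:8848/nacos/v1/ns/instance/list?serviceName=employee-service"
```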

2. Dependencies

As I have mentioned before, Spring Cloud Alibaba is still in the incubation stage, therefore it is not included in the Spring Cloud Release Train. That's why we need to include a special BOM for Alibaba inside the dependency management section in pom.xml. We will also use the newest stable version of Spring Cloud, which is now Finchley.SR2.

<dependencyManagement>
   <dependencies>
      <dependency>
         <groupId>org.springframework.cloud</groupId>
         <artifactId>spring-cloud-dependencies</artifactId>
         <version>Finchley.SR2</version>
         <type>pom</type>
         <scope>import</scope>
      </dependency>
      <dependency>
         <groupId>org.springframework.cloud</groupId>
         <artifactId>spring-cloud-alibaba-dependencies</artifactId>
         <version>0.2.0.RELEASE</version>
         <type>pom</type>
         <scope>import</scope>
      </dependency>
   </dependencies>
</dependencyManagement>

Spring Cloud Alibaba provides three starters for the currently supported components. These are spring-cloud-starter-alibaba-nacos-discovery for service discovery with Nacos, spring-cloud-starter-alibaba-nacos-config for distributed configuration with Nacos, and spring-cloud-starter-alibaba-sentinel for Sentinel dependencies.

<dependency>
   <groupId>org.springframework.cloud</groupId>
   <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId>
</dependency>
<dependency>
   <groupId>org.springframework.cloud</groupId>
   <artifactId>spring-cloud-starter-alibaba-nacos-config</artifactId>
</dependency>
<dependency>
   <groupId>org.springframework.cloud</groupId>
   <artifactId>spring-cloud-starter-alibaba-sentinel</artifactId>
</dependency>

3. Distributed configuration with Spring Cloud Alibaba Nacos

To enable configuration management with Nacos, we only need to include the starter spring-cloud-starter-alibaba-nacos-config. It does not provide a default address for the Nacos server, so we need to set it explicitly for the application inside the bootstrap.yml file.

spring:
  application:
    name: employee-service
  cloud:
    nacos:
      config:
        server-addr: localhost:8848

Our application tries to connect to Nacos and fetch the configuration provided inside a file with the same name as the value of the property spring.application.name. Currently, Spring Cloud Alibaba supports only .properties files, so we need to create the configuration inside the file employee-service.properties. Nacos comes with an elegant way of creating and managing configuration properties: we can use the web admin console for that. The Data ID field visible in the picture below is in fact the name of our configuration file. The list of configuration properties should be placed inside the Configuration Content field.

spring-cloud-alibaba-config-service

The good news related to Spring Cloud Alibaba is that it dynamically refreshes the application configuration after modifications in Nacos. The only thing you have to do in your application is to annotate the beans that should be refreshed with @RefreshScope or @ConfigurationProperties. Now, let's consider the following situation. We will modify our configuration a little to add some properties with test data, as shown below.

alibaba-4

Here’s the implementation of our repository bean. It injects all configuration properties with prefix repository.employees into the list of employees.

@Repository
@ConfigurationProperties(prefix = "repository")
public class EmployeeRepository {

   private List<Employee> employees = new ArrayList<>();
   
   public List<Employee> getEmployees() {
      return employees;
   }

   public void setEmployees(List<Employee> employees) {
      this.employees = employees;
   }
   
   public Employee add(Employee employee) {
      employee.setId((long) (employees.size()+1));
      employees.add(employee);
      return employee;
   }
   
   public Employee findById(Long id) {
      Optional<Employee> employee = employees.stream().filter(a -> a.getId().equals(id)).findFirst();
      if (employee.isPresent())
         return employee.get();
      else
         return null;
   }
   
   public List<Employee> findAll() {
      return employees;
   }
   
   public List<Employee> findByDepartment(Long departmentId) {
      return employees.stream().filter(a -> a.getDepartmentId().equals(departmentId)).collect(Collectors.toList());
   }
   
   public List<Employee> findByOrganization(Long organizationId) {
      return employees.stream().filter(a -> a.getOrganizationId().equals(organizationId)).collect(Collectors.toList());
   }

}
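For illustration, the Configuration Content that feeds the repository bean above could look like the following list of properties, relying on Spring's indexed-property binding for the employees list (the concrete employee data is made up for the example):

```properties
repository.employees[0].id=1
repository.employees[0].organizationId=1
repository.employees[0].departmentId=1
repository.employees[0].name=John Smith
repository.employees[0].age=30
repository.employees[0].position=Developer
repository.employees[0].salary=10000
repository.employees[1].id=2
repository.employees[1].organizationId=1
repository.employees[1].departmentId=1
repository.employees[1].name=Anna Walker
repository.employees[1].age=28
repository.employees[1].position=Analyst
repository.employees[1].salary=9000
```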

Now, you can change some values of the properties as shown in the picture below. Then, if you call employee-service, which is available on port 8090 (http://localhost:8090), you should see the full list of employees with the modified values.

alibaba-3

The same configuration properties should be created for our two other microservices, department-service and organization-service. Assuming you have already done that, you should have the following configuration entries in Nacos.

alibaba-5

4. Service discovery with Spring Cloud Alibaba Nacos

To enable service discovery with Nacos, you first need to include the starter spring-cloud-starter-alibaba-nacos-discovery. The same as for the configuration server, you also need to set the address of the Nacos server inside the bootstrap.yml file.

spring:
  application:
    name: employee-service
  cloud:
    nacos:
      discovery:
        server-addr: localhost:8848

The last step is to enable the discovery client for the application by annotating the main class with @EnableDiscoveryClient.

@SpringBootApplication
@EnableDiscoveryClient
@EnableSwagger2
public class EmployeeApplication {

   public static void main(String[] args) {
      SpringApplication.run(EmployeeApplication.class, args);
   }
   
}

If you provide the same implementation for all our microservices and run them, you will see the following list of registered applications in the Nacos web console.

spring-cloud-alibaba-service-discovery

5. Inter-service communication

Communication between microservices is realized using standard Spring Cloud components: RestTemplate or the OpenFeign client. By default, load balancing is handled by the Ribbon client. The only difference in comparison to Spring Cloud Netflix is the discovery server used as the service registry in the communication process. Here's the implementation of the Feign client in department-service responsible for integration with the endpoint GET /department/{departmentId} exposed by employee-service.

@FeignClient(name = "employee-service")
public interface EmployeeClient {

   @GetMapping("/department/{departmentId}")
   List<Employee> findByDepartment(@PathVariable("departmentId") Long departmentId);
   
}

Don’t forget to enable Feign clients for Spring Boot application.

@SpringBootApplication
@EnableDiscoveryClient
@EnableFeignClients
@EnableSwagger2
public class DepartmentApplication {

   public static void main(String[] args) {
      SpringApplication.run(DepartmentApplication.class, args);
   }
   
}

We should also run multiple instances of employee-service in order to test client-side load balancing. Before doing that, we can enable dynamic port generation by setting the property server.port to 0 inside the configuration stored in Nacos. Now we can run many instances of a single service using the same configuration settings, without the risk of port conflicts for a single microservice. Let's scale up the number of employee-service instances.

alibaba-8

If you would like to test inter-service communication, you can call the following methods that use the OpenFeign client for calling endpoints exposed by other microservices: GET /organization/{organizationId}/with-employees from department-service, and GET /{id}/with-departments, GET /{id}/with-departments-and-employees, and GET /{id}/with-employees from organization-service.

6. Running API Gateway

Now it is time to run the last component in our architecture: the API gateway. It is built on top of Spring Cloud Netflix Zuul and also uses Nacos as a discovery and configuration server.

<dependency>
   <groupId>org.springframework.cloud</groupId>
   <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId>
</dependency>
<dependency>
   <groupId>org.springframework.cloud</groupId>
   <artifactId>spring-cloud-starter-alibaba-nacos-config</artifactId>
</dependency>
<dependency>
   <groupId>org.springframework.cloud</groupId>
   <artifactId>spring-cloud-starter-netflix-zuul</artifactId>
</dependency>

After including required dependencies we need to enable Zuul proxy and discovery client for the application.

@SpringBootApplication
@EnableDiscoveryClient
@EnableZuulProxy
@EnableSwagger2
public class ProxyApplication {

   public static void main(String[] args) {
      SpringApplication.run(ProxyApplication.class, args);
   }
   
}

Here’s the configuration of Zuul routes defined for our three sample microservices.

zuul:
  routes:
    department:
      path: /department/**
      serviceId: department-service
    employee:
      path: /employee/**
      serviceId: employee-service
    organization:
      path: /organization/**
      serviceId: organization-service

After startup, the gateway exposes the Swagger2 specification for the APIs of all defined microservices. Assuming you have run it on port 8080, you can access it at http://localhost:8080/swagger-ui.html. Thanks to that, you can call all the methods from one single location.

spring-cloud-3

Conclusion

The sample applications' source code is available on GitHub in the repository sample-spring-microservices-new, in the branch alibaba: https://github.com/piomin/sample-spring-microservices-new/tree/alibaba. The main purpose of this article was to show you how to replace some popular Spring Cloud components with Alibaba Nacos for service discovery and configuration management. The Spring Cloud Alibaba project is at an early stage of development, so we can probably expect some interesting new features in the near future. You can find some other examples on the Spring Cloud Alibaba GitHub site here: https://github.com/spring-cloud-incubator/spring-cloud-alibaba/tree/master/spring-cloud-alibaba-examples.

The post Microservices with Spring Cloud Alibaba appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2018/11/15/microservices-with-spring-cloud-alibaba/feed/ 0 6890
GraphQL – The Future of Microservices? https://piotrminkowski.com/2018/08/16/graphql-the-future-of-microservices/ https://piotrminkowski.com/2018/08/16/graphql-the-future-of-microservices/#comments Thu, 16 Aug 2018 07:34:42 +0000 https://piotrminkowski.wordpress.com/?p=6783 Often, GraphQL is presented as a revolutionary way of designing web APIs in comparison to REST. However, if you would take a closer look at that technology you will see that there are so many differences between them. GraphQL is a relatively new solution that has been open-sourced by Facebook in 2015. Today, REST is […]

The post GraphQL – The Future of Microservices? appeared first on Piotr's TechBlog.

]]>
Often, GraphQL is presented as a revolutionary way of designing web APIs in comparison to REST. However, if you take a closer look at the technology, you will see just how many differences there are between them. GraphQL is a relatively new solution that was open-sourced by Facebook in 2015. Today, REST is still the most popular paradigm used for exposing APIs and for inter-service communication between microservices. Is GraphQL going to overtake REST in the future? Let's take a look at how to create microservices communicating through a GraphQL API using Spring Boot and the Apollo client.

Let's begin with the architecture of our sample system of Spring Boot GraphQL microservices. We have three microservices that communicate with each other using URLs taken from Eureka service discovery.

spring-boot-microservices-graphql-arch

1. Enabling Spring Boot support for GraphQL

We can easily enable support for GraphQL on the server side of a Spring Boot application just by including some starters. After including graphql-spring-boot-starter, the GraphQL servlet is automatically accessible under the path /graphql. We can override that default path by setting the property graphql.servlet.mapping in the application.yml file. We should also enable GraphiQL – an in-browser IDE for writing, validating, and testing GraphQL queries – and the GraphQL Java Tools library, which contains useful components for creating queries and mutations. Thanks to that library, any files on the classpath with the .graphqls extension will be used to provide the schema definition.

<dependency>
   <groupId>com.graphql-java</groupId>
   <artifactId>graphql-spring-boot-starter</artifactId>
   <version>5.0.2</version>
</dependency>
<dependency>
   <groupId>com.graphql-java</groupId>
   <artifactId>graphiql-spring-boot-starter</artifactId>
   <version>5.0.2</version>
</dependency>
<dependency>
   <groupId>com.graphql-java</groupId>
   <artifactId>graphql-java-tools</artifactId>
   <version>5.2.3</version>
</dependency>

2. Building GraphQL schema definition

Every schema definition contains data type declarations, relationships between them, and a set of operations, including queries for searching objects and mutations for creating, updating, or deleting data. Usually we start by creating the type declarations, which are responsible for the domain object definitions. You can specify that a field is required using the ! character, or that it is an array using [...]. The definition has to contain a type declaration or a reference to other types available in the specification.

type Employee {
  id: ID!
  organizationId: Int!
  departmentId: Int!
  name: String!
  age: Int!
  position: String!
  salary: Int!
}

Here's the Java class equivalent to the GraphQL definition visible above. The GraphQL type Int can also be mapped to Java Long. The ID scalar type represents a unique identifier – in this case it is also a Java Long.

public class Employee {

   private Long id;
   private Long organizationId;
   private Long departmentId;
   private String name;
   private int age;
   private String position;
   private int salary;
   
   // constructor
   
   // getters
   // setters
   
}

The next part of the schema definition contains the query and mutation declarations. Most of the queries return a list of objects, which is marked with [Employee]. Inside the EmployeeQueries type we have declared all the find methods, while the EmployeeMutations type contains methods for adding, updating, and removing employees. If you pass a whole object to a method, you need to declare it as an input type.

schema {
  query: EmployeeQueries
  mutation: EmployeeMutations
}

type EmployeeQueries {
  employees: [Employee]
  employee(id: ID!): Employee!
  employeesByOrganization(organizationId: Int!): [Employee]
  employeesByDepartment(departmentId: Int!): [Employee]
}

type EmployeeMutations {
  newEmployee(employee: EmployeeInput!): Employee
  deleteEmployee(id: ID!) : Boolean
  updateEmployee(id: ID!, employee: EmployeeInput!): Employee
}

input EmployeeInput {
  organizationId: Int
  departmentId: Int
  name: String
  age: Int
  position: String
  salary: Int
}

3. Queries and mutation implementation

Thanks to GraphQL Java Tools and Spring Boot GraphQL auto-configuration, we don't need to do much to implement queries and mutations in our application. The EmployeeQueries bean has to implement the GraphQLQueryResolver interface. Based on that, Spring is able to automatically detect and call the right method as a response to one of the GraphQL queries declared inside the schema. Here's the class containing the implementation of queries.

@Component
public class EmployeeQueries implements GraphQLQueryResolver {

   private static final Logger LOGGER = LoggerFactory.getLogger(EmployeeQueries.class);
   
   @Autowired
   EmployeeRepository repository;
   
   public List<Employee> employees() {
      LOGGER.info("Employees find");
      return repository.findAll();
   }
   
   public List<Employee> employeesByOrganization(Long organizationId) {
      LOGGER.info("Employees find: organizationId={}", organizationId);
      return repository.findByOrganization(organizationId);
   }

   public List<Employee> employeesByDepartment(Long departmentId) {
      LOGGER.info("Employees find: departmentId={}", departmentId);
      return repository.findByDepartment(departmentId);
   }
   
   public Employee employee(Long id) {
      LOGGER.info("Employee find: id={}", id);
      return repository.findById(id);
   }
   
}

If you would like to call, for example, the method employee(Long id), you should build the following query. You can easily test it in your application using the GraphiQL tool available under the path /graphiql.

graphql-1
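In plain text, a query calling that method might look like this (the id value and the selected fields are just an example):

```graphql
query {
  employee(id: 1) {
    id
    name
    position
    salary
  }
}
```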
The bean responsible for the implementation of mutation methods needs to implement GraphQLMutationResolver. Despite the declaration of EmployeeInput, we still use the same domain object as returned by queries – Employee.

@Component
public class EmployeeMutations implements GraphQLMutationResolver {

   private static final Logger LOGGER = LoggerFactory.getLogger(EmployeeMutations.class);
   
   @Autowired
   EmployeeRepository repository;
   
   public Employee newEmployee(Employee employee) {
      LOGGER.info("Employee add: employee={}", employee);
      return repository.add(employee);
   }
   
   public boolean deleteEmployee(Long id) {
      LOGGER.info("Employee delete: id={}", id);
      return repository.delete(id);
   }
   
   public Employee updateEmployee(Long id, Employee employee) {
      LOGGER.info("Employee update: id={}, employee={}", id, employee);
      return repository.update(id, employee);
   }
   
}

We can also use GraphiQL to test mutations. Here's the command that adds a new employee and receives a response with the employee's id and name.
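In plain text, such a mutation could be written as follows (the input values are made up for the example):

```graphql
mutation {
  newEmployee(employee: {
    organizationId: 1,
    departmentId: 1,
    name: "John Smith",
    age: 30,
    position: "Developer",
    salary: 10000
  }) {
    id
    name
  }
}
```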

graphql-2

4. Generating client-side classes

Ok, we have successfully created a server-side application and tested some queries using GraphiQL. But our main goal is to create some other microservices that communicate with the employee-service application through its GraphQL API. This is where most tutorials about Spring Boot and GraphQL end.
To be able to communicate with our first application through the GraphQL API, we have two choices. We can take a standard REST client and implement the GraphQL API calls ourselves with plain HTTP requests, or use one of the existing Java clients. Surprisingly, there are not many GraphQL Java client implementations available. The most serious choice is the Apollo GraphQL Client for Android. Of course, it is not designed only for Android devices, and you can successfully use it in a server-side Java microservice.
Before using the client, we need to generate classes from the schema and .graphql files. The recommended way to do it is through the Apollo Gradle Plugin. There are also some Maven plugins, but none of them provide the level of automation of the Gradle plugin; for example, it automatically downloads the Node.js runtime required for generating the client-side classes. So, the first step is to add the Apollo plugin and runtime to the project dependencies.

buildscript {
  repositories {
    jcenter()
    maven { url 'https://oss.sonatype.org/content/repositories/snapshots/' }
  }
  dependencies {
    classpath 'com.apollographql.apollo:apollo-gradle-plugin:1.0.1-SNAPSHOT'
  }
}

apply plugin: 'com.apollographql.android'

dependencies {
  compile 'com.apollographql.apollo:apollo-runtime:1.0.1-SNAPSHOT'
}

The GraphQL Gradle plugin tries to find files with the .graphql extension and schema.json inside the src/main/graphql directory. The GraphQL JSON schema can be obtained from your Spring Boot application by calling the resource /graphql/schema.json. The .graphql file contains the query definitions. The query employeesByOrganization will be called by organization-service, while employeesByDepartment will be called by both department-service and organization-service. Those two applications need slightly different sets of data in the response: department-service requires more detailed information about every employee than organization-service. GraphQL is an excellent solution in that case, because we can define the required set of data in the response on the client side. Here's the query definition of employeesByOrganization called by organization-service.

query EmployeesByOrganization($organizationId: Int!) {
  employeesByOrganization(organizationId: $organizationId) {
    id
    name
  }
}

Application organization-service would also call employeesByDepartment query.

query EmployeesByDepartment($departmentId: Int!) {
  employeesByDepartment(departmentId: $departmentId) {
    id
    name
  }
}

The query employeesByDepartment is also called by department-service, which requires not only the id and name fields, but also position and salary.

query EmployeesByDepartment($departmentId: Int!) {
  employeesByDepartment(departmentId: $departmentId) {
    id
    name
    position
    salary
  }
}

All the generated classes are available under build/generated/source/apollo directory.

5. Building Apollo client with discovery

After generating all the required classes and including them in the calling microservices, we may proceed to the client implementation. The Apollo client has two important features that will affect our development:

  • It provides only asynchronous methods based on callbacks
  • It does not integrate with service discovery based on Spring Cloud Netflix Eureka

Here's the implementation of the employee-service client inside department-service. I used EurekaClient directly (1). It gets all running instances registered as EMPLOYEE-SERVICE and then randomly selects one instance from the list of available instances (2). The port number of that instance is passed to ApolloClient (3). Before calling the asynchronous enqueue method provided by ApolloClient, we create a lock (4) that waits at most 5 seconds to be released (8). The enqueue method returns the response in the callback method onResponse (5). We map the response body from the GraphQL Employee object to the returned object (6) and then release the lock (7).

@Component
public class EmployeeClient {

   private static final Logger LOGGER = LoggerFactory.getLogger(EmployeeClient.class);
   private static final int TIMEOUT = 5000;
   private static final String SERVICE_NAME = "EMPLOYEE-SERVICE"; 
   private static final String SERVER_URL = "http://localhost:%d/graphql";
   
   Random r = new Random();
   
   @Autowired
   private EurekaClient discoveryClient; // (1)
   
   public List<Employee> findByDepartment(Long departmentId) throws InterruptedException {
      List<Employee> employees = new ArrayList<>();
      Application app = discoveryClient.getApplication(SERVICE_NAME); // (2)
      InstanceInfo ii = app.getInstances().get(r.nextInt(app.size()));
      ApolloClient client = ApolloClient.builder().serverUrl(String.format(SERVER_URL, ii.getPort())).build(); // (3)
      CountDownLatch lock = new CountDownLatch(1); // (4)
      client.query(EmployeesByDepartmentQuery.builder().departmentId(departmentId.intValue()).build()).enqueue(new Callback<EmployeesByDepartmentQuery.Data>() {

         @Override
         public void onFailure(ApolloException ex) {
            LOGGER.info("Err: {}", ex);
            lock.countDown();
         }

         @Override
         public void onResponse(Response<EmployeesByDepartmentQuery.Data> res) { // (5)
            LOGGER.info("Res: {}", res);
            employees.addAll(res.data().employeesByDepartment().stream().map(emp -> new Employee(Long.valueOf(emp.id()), emp.name(), emp.position(), emp.salary())).collect(Collectors.toList())); // (6)
            lock.countDown(); // (7)
         }

      });
      lock.await(TIMEOUT, TimeUnit.MILLISECONDS); // (8)
      return employees;
   }
   
}
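The latch-based bridge from the asynchronous callback to a synchronous return value is the core trick of the client above. It can be reduced to the following self-contained sketch (the class and method names are my own, and a plain thread stands in for Apollo's enqueue callback):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class LatchBridge {

    // Bridges an async callback API to a synchronous method: the caller
    // blocks on a latch until the callback fires or the timeout elapses.
    public static List<String> fetchSynchronously(long timeoutMillis) throws InterruptedException {
        List<String> results = new ArrayList<>();
        CountDownLatch lock = new CountDownLatch(1);
        // Stand-in for ApolloClient.enqueue(...): the callback runs on another thread
        Thread worker = new Thread(() -> {
            results.add("employee-1"); // simulate mapping the response body
            lock.countDown();          // release the waiting caller (step 7)
        });
        worker.start();
        // Wait for the callback, but never longer than the timeout (step 8)
        lock.await(timeoutMillis, TimeUnit.MILLISECONDS);
        return results;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(fetchSynchronously(5000));
    }
}
```

The latch also ensures a happens-before relationship, so the caller safely sees the elements added by the callback thread; the trade-off is that a timeout silently returns a partial (here: empty) list, just as in the client above.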

Finally, EmployeeClient is injected into the query resolver class – DepartmentQueries, and used inside query departmentsByOrganizationWithEmployees.

@Component
public class DepartmentQueries implements GraphQLQueryResolver {

   private static final Logger LOGGER = LoggerFactory.getLogger(DepartmentQueries.class);
   
   @Autowired
   EmployeeClient employeeClient;
   @Autowired
   DepartmentRepository repository;

   public List<Department> departmentsByOrganizationWithEmployees(Long organizationId) {
      LOGGER.info("Departments find: organizationId={}", organizationId);
      List<Department> departments = repository.findByOrganization(organizationId);
      departments.forEach(d -> {
         try {
            d.setEmployees(employeeClient.findByDepartment(d.getId()));
         } catch (InterruptedException e) {
            LOGGER.error("Error calling employee-service", e);
         }
      });
      return departments;
   }
   
   // other queries
   
}

Before calling the target query, we should take a look at the schema created for department-service. Every Department object can contain a list of assigned employees, so we also define the type Employee, referenced by the Department type.

schema {
  query: DepartmentQueries
  mutation: DepartmentMutations
}

type DepartmentQueries {
  departments: [Department]
  department(id: ID!): Department!
  departmentsByOrganization(organizationId: Int!): [Department]
  departmentsByOrganizationWithEmployees(organizationId: Int!): [Department]
}

type DepartmentMutations {
  newDepartment(department: DepartmentInput!): Department
  deleteDepartment(id: ID!) : Boolean
  updateDepartment(id: ID!, department: DepartmentInput!): Department
}

input DepartmentInput {
  organizationId: Int!
  name: String!
}

type Department {
  id: ID!
  organizationId: Int!
  name: String!
  employees: [Employee]
}

type Employee {
  id: ID!
  name: String!
  position: String!
  salary: Int!
}

Now, we can call our test query with a list of required fields using GraphiQL. The department-service application is available by default under port 8091, so we may call it using the address http://localhost:8091/graphiql.

graphql-3

Conclusion

GraphQL seems to be an interesting alternative to standard REST APIs. However, we should not consider it as a replacement for REST. There are some use cases where GraphQL is the better choice, and some where REST is. If your clients do not need the full set of fields returned by the server side, and moreover you have many clients with different requirements for a single endpoint, GraphQL is a good choice. When it comes to Spring Boot microservices, there are no Java-based solutions that allow you to use GraphQL together with service discovery, load balancing, or an API gateway out of the box. In this article, I have shown an example of the Apollo GraphQL client used together with Spring Cloud Eureka for inter-service communication. The sample applications' source code is available on GitHub: https://github.com/piomin/sample-graphql-microservices.git.

The post GraphQL – The Future of Microservices? appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2018/08/16/graphql-the-future-of-microservices/feed/ 13 6783
Secure Discovery with Spring Cloud Netflix Eureka https://piotrminkowski.com/2018/05/21/secure-discovery-with-spring-cloud-netflix-eureka/ https://piotrminkowski.com/2018/05/21/secure-discovery-with-spring-cloud-netflix-eureka/#comments Mon, 21 May 2018 08:47:54 +0000 https://piotrminkowski.wordpress.com/?p=6600 Building a standard, not secure discovery mechanism with Spring Cloud Netflix Eureka is rather an easy thing to do. The same solution built over secure SSL communication between discovery client and server maybe a slightly more advanced challenge. I haven’t found any complete example of such an application on the web. Let’s try to implement […]

The post Secure Discovery with Spring Cloud Netflix Eureka appeared first on Piotr's TechBlog.

]]>
Building a standard, non-secure discovery mechanism with Spring Cloud Netflix Eureka is rather an easy thing to do. The same solution built over secure SSL communication between the discovery client and server may be a slightly more advanced challenge. I haven't found any complete example of such an application on the web. Let's try to implement it, beginning with the server-side application.

1. Generate certificates

If you have been developing Java applications for some years, you have probably heard of keytool. This tool is available in your ${JAVA_HOME}\bin directory and is designed for managing keys and certificates. We begin by generating a keystore for the server-side Spring Boot application. Here's the appropriate keytool command that generates a certificate stored inside a JKS keystore file named eureka.jks.

[screenshot: keytool command generating the eureka.jks keystore]

2. Setting up a secure Spring Eureka server

Since the Eureka server is embedded in the Spring Boot application, we need to secure it using standard Spring Boot properties. I placed the generated keystore file eureka.jks on the application's classpath. Now, the only thing left to do is to prepare some configuration settings inside application.yml that point to the keystore location and define its type and access password.

server:
  port: 8761
  ssl:
    enabled: true
    key-store: classpath:eureka.jks
    key-store-password: 123456
    trust-store: classpath:eureka.jks
    trust-store-password: 123456
    key-alias: eureka

3. Setting up two-way SSL authentication

We will complicate our example a little. A standard SSL configuration assumes that only the client verifies the server's certificate. We will also force the client's certificate to be authenticated on the server side. It can be achieved by setting the property server.ssl.client-auth to need.

server:
  ssl:
    client-auth: need

That's not all, because we also have to add the client's certificate to the list of trusted certificates on the server side. So, first let's generate the client's keystore using the same keytool command as for the server's keystore.

[screenshot: keytool command generating the client's keystore]

Now, we need to export certificates from the generated keystores for both the client and server sides.

[screenshot: keytool commands exporting both certificates]

Finally, we import the client’s certificate to the server’s keystore and the server’s certificate to the client’s keystore.

[screenshot: keytool commands importing the certificates into the opposite keystores]

4. Running secure Spring Eureka server

The sample applications are available on GitHub in the repository sample-secure-eureka-discovery (https://github.com/piomin/sample-secure-eureka-discovery.git). After running the discovery-service application, Eureka is available at https://localhost:8761. If you try to visit its web dashboard, you get the following exception in your web browser. It means the Eureka server is secured.

[screenshot: connection error shown by the web browser]

Well, the Eureka dashboard is sometimes a useful tool, so let's import the client's keystore into our web browser to be able to access it. We have to convert the client's keystore from JKS to PKCS12 format. Here's the command that performs that operation.

$ keytool -importkeystore -srckeystore client.jks -destkeystore client.p12 -srcstoretype JKS -deststoretype PKCS12 -srcstorepass 123456 -deststorepass 123456 -srcalias client -destalias client -srckeypass 123456 -destkeypass 123456 -noprompt

5. Client’s secure application configuration

When implementing a secure connection on the client side, we generally need to do the same as in the previous step: import a keystore. However, it is not a very simple thing to do, because Spring Cloud does not provide any configuration property that would allow you to pass the location of the SSL keystore to a discovery client. It is worth mentioning that the Eureka client leverages the Jersey client to communicate with the server-side application. It may be a little surprising that it is not Spring's RestTemplate, but we should remember that Spring Cloud Eureka is built on top of the Netflix OSS Eureka client, which does not use Spring libraries.
HTTP basic authentication is automatically added to your Eureka client if you include security credentials in the connection URL, for example http://piotrm:12345@localhost:8761/eureka. For more advanced configuration, like passing an SSL keystore to the HTTP client, we need to provide a @Bean of type DiscoveryClientOptionalArgs.
The following fragment of code shows how to enable an SSL connection for the discovery client. First, we set the locations of the keystore and truststore files using javax.net.ssl.* Java system properties. Then, we provide a custom implementation of the Jersey client based on those Java SSL settings, and set it on the DiscoveryClientOptionalArgs bean.

@Bean
public DiscoveryClient.DiscoveryClientOptionalArgs discoveryClientOptionalArgs() throws NoSuchAlgorithmException {
   DiscoveryClient.DiscoveryClientOptionalArgs args = new DiscoveryClient.DiscoveryClientOptionalArgs();
   System.setProperty("javax.net.ssl.keyStore", "src/main/resources/client.jks");
   System.setProperty("javax.net.ssl.keyStorePassword", "123456");
   System.setProperty("javax.net.ssl.trustStore", "src/main/resources/client.jks");
   System.setProperty("javax.net.ssl.trustStorePassword", "123456");
   EurekaJerseyClientBuilder builder = new EurekaJerseyClientBuilder();
   builder.withClientName("account-client");
   builder.withSystemSSLConfiguration();
   builder.withMaxTotalConnections(10);
   builder.withMaxConnectionsPerHost(10);
   args.setEurekaJerseyClient(builder.build());
   return args;
}

6. Enabling HTTPS on the client side

The configuration provided in the previous step applies only to the communication between the discovery client and the Eureka server. What if we would also like to secure the HTTP endpoints exposed by the client-side application? The first step is much the same as for the discovery server: we need to generate a keystore and set it using Spring Boot properties inside application.yml.

server:
  port: ${PORT:8090}
  ssl:
    enabled: true
    key-store: classpath:client.jks
    key-store-password: 123456
    key-alias: client

During registration we need to “inform” the Eureka server that our application's endpoints are secured. To achieve it, we should set the property eureka.instance.securePortEnabled to true, and also disable the non-secure port, which is enabled by default, with the nonSecurePortEnabled property.

eureka:
  instance:
    nonSecurePortEnabled: false
    securePortEnabled: true
    securePort: ${server.port}
    statusPageUrl: https://localhost:${server.port}/info
    healthCheckUrl: https://localhost:${server.port}/health
    homePageUrl: https://localhost:${server.port}
  client:
    securePortEnabled: true
    serviceUrl:
      defaultZone: https://localhost:8761/eureka/

7. Running secure Spring client’s application

Finally, we can run the client-side application. After launching, the application should be visible in the Eureka dashboard.

[screenshot: Eureka dashboard with the registered application]

All the client application's endpoints are registered in Eureka under the HTTPS protocol. I have also overridden the default implementation of the actuator /info endpoint, as shown in the code fragment below.

@Component
public class SecureInfoContributor implements InfoContributor {

   @Override
   public void contribute(Builder builder) {
      builder.withDetail("hello", "I'm secure app!");
   }

}

Now, we can try to visit the /info endpoint one more time. You should see the same information as below.

[screenshot: response from the /info endpoint]

Alternatively, if you set a certificate on the client side that is not trusted by the server side, you will see the following exception while starting your client application.

[screenshot: SSL handshake exception on client startup]

Conclusion

Securing the connection between microservices and the Eureka server is only the first step in securing the whole system. We also need to think about securing the connection between microservices and the config server, and between the microservices themselves during inter-service communication with a @LoadBalanced RestTemplate or an OpenFeign client. You can find examples of such implementations and many more in my book “Mastering Spring Cloud” (https://www.packtpub.com/application-development/mastering-spring-cloud).

The post Secure Discovery with Spring Cloud Netflix Eureka appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2018/05/21/secure-discovery-with-spring-cloud-netflix-eureka/feed/ 9 6600
Envoy Proxy with Microservices https://piotrminkowski.com/2017/10/25/envoy-proxy-with-microservices/ https://piotrminkowski.com/2017/10/25/envoy-proxy-with-microservices/#comments Wed, 25 Oct 2017 08:48:27 +0000 https://piotrminkowski.wordpress.com/?p=6200 Introduction I came across Envoy Proxy for the first time a couple of weeks ago, when one of my blog readers suggested that I write an article about it. I had never heard about it before and my first thought was that it is not my area of experience. In fact, this tool is not […]

The post Envoy Proxy with Microservices appeared first on Piotr's TechBlog.

]]>
Introduction

I came across Envoy Proxy for the first time a couple of weeks ago, when one of my blog readers suggested that I write an article about it. I had never heard of it before, and my first thought was that it was not my area of expertise. In fact, this tool is not as popular as its competitors like Nginx or HAProxy, but it provides some interesting features, among which we can distinguish out-of-the-box support for MongoDB and Amazon RDS, flexibility around discovery and load balancing, and a lot of useful traffic statistics. OK, we know a little about its advantages, but what exactly is Envoy Proxy? ‘Envoy is an open-source edge and service proxy, designed for cloud-native applications’. It was originally developed by Lyft as a high-performance C++ distributed proxy designed for standalone services and applications, as well as for large microservices service meshes. It sounds really good so far. That's why I decided to take a closer look at it and prepare a sample of service discovery and distributed tracing realized with Envoy and microservices based on Spring Boot.

Envoy Proxy Configuration

In most of the previous samples based on Spring Cloud, we used Zuul as an edge proxy. Zuul is a popular Netflix OSS tool acting as an API gateway in a microservices architecture. As it turns out, it can be successfully replaced by Envoy Proxy. One of the things I really like about Envoy is the way its configuration is created. The default format is JSON, validated against a JSON schema. The JSON properties and schema are well documented and easy to understand. As you'd expect from a modern solution, the recommended way to get started is with the pre-built Docker images. So, in the beginning, we have to create a Dockerfile for building a Docker image with Envoy and provide a configuration file in JSON format. Here's my Dockerfile. The parameters service-cluster and service-node are optional and have to do with the configuration for service discovery, which I'll say more about in a minute.

FROM lyft/envoy:latest
RUN apt-get update
COPY envoy.json /etc/envoy.json
CMD /usr/local/bin/envoy -c /etc/envoy.json --service-cluster samplecluster --service-node sample1

I assume you have a basic knowledge about Docker and its commands, which is mandatory at this point. After providing envoy.json configuration file we can proceed with building a Docker image.

$ docker build -t envoy:v1 .

Then just run it using docker run command. Useful ports should be exposed outside.

$ docker run -d --name envoy -p 9901:9901 -p 10000:10000 envoy:v1

The first pretty helpful feature is the local HTTP administration server. It can be configured in the JSON file inside the admin property. For this example I selected port 9901 and, as you probably noticed, I also exposed that port outside the Envoy Docker container. Now, the admin console is available at http://192.168.99.100:9901/. If you invoke that address, it prints all available commands. For me the most helpful were stats, which prints all important statistics related to the proxy, and logging, where I could dynamically change the logging level for some of the defined categories. So, if you have any problems with Envoy, first try to change the logging level by calling /logging?name=level and watch the logs after running the docker logs envoy command.

"admin": {
  "access_log_path": "/tmp/admin_access.log",
  "address": "tcp://0.0.0.0:9901"
}

The next required configuration property is listeners. There we define routing settings and the address on which Envoy will listen for incoming TCP connections. The notation tcp://0.0.0.0:10000 is the wildcard match for any IPv4 address on port 10000. This port is also exposed outside the Envoy Docker container, so in this case it will be our API gateway, available at the http://192.168.99.100:10000/ address. We will come back to the proxy configuration details at a later stage; now let's take a closer look at the architecture of the presented example.

"listeners": [{
  "address": "tcp://0.0.0.0:10000",
  ...
}]

Architecture: Envoy proxy, Zipkin and Spring Boot

The architecture of the described solution is visible in the figure below. We have the Envoy proxy as an API gateway, which is the entry point to our system. Envoy integrates with Zipkin and sends tracing messages with information about incoming HTTP requests and the responses sent back. Two sample microservices, Person and Product, register themselves in the service discovery on startup and deregister on shutdown. They are hidden from external clients behind the API gateway. Envoy has to fetch the actual configuration with the addresses of registered services and route incoming HTTP requests properly. If there are multiple instances of a service available, it should perform load balancing.
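Round-robin selection itself is straightforward; the following sketch illustrates the strategy Envoy applies to the fetched instance list (this is illustrative Java, not Envoy's own C++ code, and the class name is made up for the example):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal round-robin selection over a list of service instances —
// an illustration of the lb_type "round_robin" strategy, not Envoy's code.
class RoundRobinBalancer {
    private final AtomicInteger counter = new AtomicInteger();

    <T> T choose(List<T> instances) {
        // floorMod keeps the index non-negative even after counter overflow
        int idx = Math.floorMod(counter.getAndIncrement(), instances.size());
        return instances.get(idx);
    }
}
```

Each call picks the next instance in order and wraps around, so with two registered instances every second request lands on the same one.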

[diagram: system architecture with Envoy, Zipkin, service discovery, and the Person and Product microservices]

As it turns out, Envoy does not support well-known discovery servers like Consul or ZooKeeper, but defines its own generic REST-based API, which needs to be implemented to enable fetching cluster members. The main method of this API is GET /v1/registration/:service, used for fetching the list of currently registered instances of a service. Lyft provides a default implementation in Python, but for the purpose of this example we develop our own solution using Java and Spring Boot. The sample application source code is available on GitHub. In addition to the service discovery implementation, you will also find two sample microservices there.

Service Discovery

Our custom discovery implementation does nothing more than expose a REST-based API with methods for registering, unregistering, and fetching a service's instances. The GET method needs to return a specific JSON structure which matches the following schema.

{
  "hosts": [{
    "ip_address": "...",
    "port": "...",
    ...
  }]
}

Here’s a REST controller class with discovery API implementation.

@RestController
public class EnvoyDiscoveryController {

   private static final Logger LOGGER = LoggerFactory.getLogger(EnvoyDiscoveryController.class);

   private Map<String, List<DiscoveryHost>> hosts = new HashMap<>();

   @GetMapping(value = "/v1/registration/{serviceName}")
   public DiscoveryHosts getHostsByServiceName(@PathVariable("serviceName") String serviceName) {
      LOGGER.info("getHostsByServiceName: service={}", serviceName);
      DiscoveryHosts hostsList = new DiscoveryHosts();
      hostsList.setHosts(hosts.get(serviceName));
      LOGGER.info("getHostsByServiceName: hosts={}", hostsList);
      return hostsList;
   }

   @PostMapping("/v1/registration/{serviceName}")
   public void addHost(@PathVariable("serviceName") String serviceName, @RequestBody DiscoveryHost host) {
      LOGGER.info("addHost: service={}, body={}", serviceName, host);
      List<DiscoveryHost> tmp = hosts.get(serviceName);
      if (tmp == null)
         tmp = new ArrayList<>();
      tmp.add(host);
      hosts.put(serviceName, tmp);
   }

   @DeleteMapping("/v1/registration/{serviceName}/{ipAddress}")
   public void deleteHost(@PathVariable("serviceName") String serviceName, @PathVariable("ipAddress") String ipAddress) {
      LOGGER.info("deleteHost: service={}, ip={}", serviceName, ipAddress);
      List<DiscoveryHost> tmp = hosts.get(serviceName);
      if (tmp != null) {
         Optional<DiscoveryHost> optHost = tmp.stream().filter(it -> it.getIpAddress().equals(ipAddress)).findFirst();
         if (optHost.isPresent())
            tmp.remove(optHost.get());
         hosts.put(serviceName, tmp);
      }
   }
}
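The controller above relies on two simple model classes that are not shown in the post. A minimal sketch might look like the following (the field names are assumptions inferred from the JSON structure; a real implementation would also need a @JsonProperty("ip_address") mapping, omitted here to keep the sketch dependency-free):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the model classes used by EnvoyDiscoveryController;
// field names are assumptions inferred from the /v1/registration JSON.
class DiscoveryHost {
    private String ipAddress; // serialized as "ip_address" in the Envoy API
    private int port;

    public String getIpAddress() { return ipAddress; }
    public void setIpAddress(String ipAddress) { this.ipAddress = ipAddress; }
    public int getPort() { return port; }
    public void setPort(int port) { this.port = port; }
}

class DiscoveryHosts {
    private List<DiscoveryHost> hosts = new ArrayList<>();

    public List<DiscoveryHost> getHosts() { return hosts; }
    public void setHosts(List<DiscoveryHost> hosts) { this.hosts = hosts; }
}
```

With Jackson on the classpath (as in any Spring Boot web application), these beans serialize directly into the structure Envoy expects once the snake_case mapping is added.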

Let's get back to the Envoy configuration settings. Assuming we have built an image from the Dockerfile visible below and run the container on the default port, we can invoke the discovery service at http://192.168.99.100:9200. That address should be placed in the envoy.json configuration file. The service discovery connection settings should be provided inside the Cluster Manager section.

FROM openjdk:alpine
MAINTAINER Piotr Minkowski <piotr.minkowski@gmail.com>
ADD target/envoy-discovery.jar envoy-discovery.jar
ENTRYPOINT ["java", "-jar", "/envoy-discovery.jar"]
EXPOSE 9200

Here's a fragment of the envoy.json file. The cluster for service discovery should be defined as a global SDS configuration, which must be specified inside the sds property (1). The most important thing is to provide a correct URL (2); on the basis of that, Envoy automatically tries to call the endpoint GET /v1/registration/{service_name}. The last interesting configuration field in that section is refresh_delay_ms (3), which sets the delay between fetches of the list of services registered in the discovery server. That's not all. We also have to define the cluster members. They are identified by name (4). Their type is sds (5), which means that the cluster uses the service discovery server to locate network addresses of the called microservice, with the name defined in the service_name property (6).

"cluster_manager": {
  "clusters": [{
    "name": "service1", // (4)
    "type": "sds", // (5)
    "connect_timeout_ms": 5000,
    "lb_type": "round_robin",
    "service_name": "person-service" // (6)
  }, {
    "name": "service2",
    "type": "sds",
    "connect_timeout_ms": 5000,
    "lb_type": "round_robin",
    "service_name": "product-service"
  }],
  "sds": { // (1)
    "cluster": {
      "name": "service_discovery",
      "type": "strict_dns",
      "connect_timeout_ms": 5000,
      "lb_type": "round_robin",
      "hosts": [{
        "url": "tcp://192.168.99.100:9200" // (2)
      }]
    },
    "refresh_delay_ms": 3000 // (3)
  }
}

Routing configuration is defined for every single listener inside the route_config property (1). The first route is configured for person-service, which is processed by cluster service1 (2), and the second for product-service, processed by the service2 cluster (3). So, our services are available at the http://192.168.99.100:10000/person and http://192.168.99.100:10000/product addresses.

{
  "name": "http_connection_manager",
  "config": {
    "codec_type": "auto",
    "stat_prefix": "ingress_http",
    "route_config": { // (1)
      "virtual_hosts": [{
        "name": "service",
        "domains": ["*"],
        "routes": [{
          "prefix": "/person", // (2)
          "cluster": "service1"
        }, {
          "prefix": "/product", // (3)
          "cluster": "service2"
        }]
      }]
    },
    "filters": [{
      "name": "router",
      "config": {}
    }]
  }
}

Building Microservices

The routing on the Envoy proxy has already been configured. We still don't have running microservices. Their implementation is based on the Spring Boot framework and does nothing more than expose a REST API providing simple operations on a list of objects, and register/unregister the service on the discovery server. Here's the @Service bean responsible for that registration. The onApplicationEvent method is fired after application startup, and the destroy method just before graceful shutdown.

@Service
public class PersonRegister implements ApplicationListener<ApplicationReadyEvent> {

   private static final Logger LOGGER = LoggerFactory.getLogger(PersonRegister.class);

   private String ip;
   @Value("${server.port}")
   private int port;
   @Value("${spring.application.name}")
   private String appName;
   @Value("${envoy.discovery.url}")
   private String discoveryUrl;
   
   @Autowired
   RestTemplate template;

   @Override
   public void onApplicationEvent(ApplicationReadyEvent event) {
      LOGGER.info("PersonRegistration.register");
      try {
         ip = InetAddress.getLocalHost().getHostAddress();
         DiscoveryHost host = new DiscoveryHost();
         host.setPort(port);
         host.setIpAddress(ip);
         template.postForObject(discoveryUrl + "/v1/registration/{service}", host, DiscoveryHosts.class, appName);
      } catch (Exception e) {
         LOGGER.error("Error during registration", e);
      }
   }

   @PreDestroy
   public void destroy() {
      try {
         template.delete(discoveryUrl + "/v1/registration/{service}/{ip}/", appName, ip);
         LOGGER.info("PersonRegister.unregistered: service={}, ip={}", appName, ip);
      } catch (Exception e) {
         LOGGER.error("Error during unregistration", e);
      }
   }

}

The best way to shut down a Spring Boot application gracefully is through its Actuator endpoint. To enable such endpoints for the service, include spring-boot-starter-actuator in your project dependencies. Shutdown is disabled by default, so we should add the following properties to application.yml to enable it and additionally disable the default security (endpoints.shutdown.sensitive=false). Now, just by calling POST /shutdown, we can stop our Spring Boot application and test the unregister method.

endpoints:
  shutdown:
    enabled: true
    sensitive: false

The same as before, we also build Docker images for the microservices. Here's the person-service Dockerfile, which allows you to override the default discovery server address with the DISCOVERY_URL environment variable.

FROM openjdk:alpine
MAINTAINER Piotr Minkowski <piotr.minkowski@gmail.com>
ADD target/person-service.jar person-service.jar
ENV DISCOVERY_URL http://192.168.99.100:9200
ENTRYPOINT ["java", "-jar", "/person-service.jar"]
EXPOSE 9300

To build an image and run a container of the service with a custom listen port, you need to execute the following Docker commands.

$ docker build -t piomin/person-service .
$ docker run -d --name person-service -p 9301:9300 piomin/person-service

Distributed Tracing

It is time for the last piece of the puzzle: Zipkin tracing. Statistics related to all incoming requests should be sent there. The first part of the configuration in the Envoy proxy is inside the tracing property, which specifies global settings for the HTTP tracer.

"tracing": {
  "http": {
    "driver": {
      "type": "zipkin",
      "config": {
        "collector_cluster": "zipkin",
        "collector_endpoint": "/api/v1/spans"
      }
    }
  }
}

Network location and settings for Zipkin connection should be defined as a cluster member.

"clusters": [{
  "name": "zipkin",
  "connect_timeout_ms": 5000,
  "type": "strict_dns",
  "lb_type": "round_robin",
  "hosts": [
    {
      "url": "tcp://192.168.99.100:9411"
    }
  ]
}]

We should also add a new tracing section (1) to the HTTP connection manager configuration. The operation_name field (2) is required and sets the span name; only the values ‘ingress’ and ‘egress’ are supported.


"listeners": [{
  "filters": [{
    "name": "http_connection_manager",
    "config": {
      "tracing": { // (1)
        "operation_name": "ingress" // (2)
      }
      // ...
    }
  }]
}]

Zipkin server can be started using its Docker image.

$ docker run -d --name zipkin -p 9411:9411 openzipkin/zipkin

Summary

Here's the list of Docker containers running for the test. As you probably remember, we have Zipkin, Envoy, the custom discovery service, two instances of person-service, and one instance of product-service. You can add some person objects by calling POST /person and then display a list of all persons by calling GET /person. The requests should be load balanced between the two instances based on the entries in the service discovery.

[screenshot: list of running Docker containers]

Information about every request is sent to Zipkin with the service name taken from the --service-cluster Envoy runtime parameter.

[screenshot: traces collected in Zipkin]

The post Envoy Proxy with Microservices appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2017/10/25/envoy-proxy-with-microservices/feed/ 18 6200