API Gateway Archives - Piotr's TechBlog
https://piotrminkowski.com/tag/api-gateway/
Java, Spring, Kotlin, microservices, Kubernetes, containers

Microservices with Micronaut, KrakenD and Consul
https://piotrminkowski.com/2021/02/23/microservices-with-micronaut-krakend-and-consul/
Tue, 23 Feb 2021

In this article, you will learn how to use the KrakenD API gateway with Consul DNS and Micronaut to build microservices. Micronaut is a modern JVM framework for building microservice and serverless applications. It provides built-in support for the most popular discovery servers, one of which is HashiCorp's Consul. We can also easily integrate Micronaut with Zipkin to implement distributed tracing. The only thing missing here is an API gateway tool, especially if we compare it with Spring Boot, where we can run Spring Cloud Gateway. Is that a problem? Of course not, since we can include a third-party API gateway in our system.

We will use KrakenD. Why? Although it is not the most popular API gateway tool, it is very interesting. First of all, it is fast and lightweight. We can also easily integrate it with Zipkin and Consul, which is exactly our goal in this article.

Source Code

In this article, I will use the source code from my previous article, Guide to Microservices with Micronaut and Consul. Since it was written two years ago, I had to update the version of the Micronaut framework. Fortunately, the only other thing I had to change was the groupId of the micronaut-openapi artifact. After that change, everything worked perfectly fine.

If you would like to try it yourself, you can always take a look at my source code. In order to do that, you need to clone my GitHub repository. Then you should just follow my instructions 🙂

Architecture

Firstly, let's take a look at the architecture of our sample system. We have three microservices: employee-service, department-service and organization-service. All of them are simple REST applications. The organization-service calls the API exposed by the department-service, while the department-service calls the API of the employee-service. They use Consul discovery to locate the network addresses of the target microservices, and they send traces to Zipkin. Each application may be started in multiple instances. At the edge of our system there is an API gateway: KrakenD. KrakenD integrates with Consul discovery through DNS and also sends traces to Zipkin. The architecture is shown in the picture below.

[Figure: KrakenD, Consul and Micronaut architecture]

Running Consul, Zipkin and microservices

In the first step, we are going to run Consul and Zipkin in Docker containers. The simplest way to start Consul is to run it in development mode. To do that, execute the following command. It is important to expose two ports: 8500 and 8600. The first of them is used for discovery (the HTTP API), while the second serves DNS.

$ docker run -d --name=consul \
   -p 8500:8500 -p 8600:8600/udp \
   -e CONSUL_BIND_INTERFACE=eth0 consul

Then, we need to run Zipkin. Don’t forget to expose port 9411.

$ docker run -d --name=zipkin -p 9411:9411 openzipkin/zipkin

Finally, we can run each of our applications. They register themselves in Consul on startup and listen on a randomly generated port. Here's the common configuration for every single Micronaut application.

micronaut:
  server:
    port: -1
  router:
    static-resources:
      swagger:
        paths: classpath:META-INF/swagger
        mapping: /swagger/**
endpoints:
  info:
    enabled: true
    sensitive: false
consul:
  client:
    registration:
      enabled: true
tracing:
  zipkin:
    enabled: true
    http:
      url: http://localhost:9411
    sampler:
      probability: 1

Consul also acts as a configuration server for the applications. We use the Micronaut Config Client to fetch property sources on startup.

micronaut:
  application:
    name: employee-service
  config-client:
    enabled: true
consul:
  client:
    defaultZone: "localhost:8500"
    config:
      format: YAML
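Since the format is set to YAML, each application expects its properties as a YAML document stored in Consul's KV store. Here's a minimal sketch of seeding such a property source with the Consul CLI (the key layout config/{application-name} is the Micronaut default; the property itself is just an example, not taken from the repository):

```
$ cat > employee-config.yml <<EOF
app:
  greeting: Hello from Consul
EOF
$ consul kv put config/employee-service @employee-config.yml
```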

Using Micronaut framework

In order to expose a REST API and integrate with Consul and Zipkin, we need to include the following dependencies.

<dependencies>
        <dependency>
            <groupId>io.micronaut</groupId>
            <artifactId>micronaut-http-server-netty</artifactId>
        </dependency>
        <dependency>
            <groupId>io.micronaut</groupId>
            <artifactId>micronaut-tracing</artifactId>
        </dependency>
        <dependency>
            <groupId>io.zipkin.brave</groupId>
            <artifactId>brave-instrumentation-http</artifactId>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>io.zipkin.reporter2</groupId>
            <artifactId>zipkin-reporter</artifactId>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>io.opentracing.brave</groupId>
            <artifactId>brave-opentracing</artifactId>
        </dependency>
        <dependency>
            <groupId>io.micronaut</groupId>
            <artifactId>micronaut-discovery-client</artifactId>
        </dependency>
</dependencies>

The tracing headers (spans) are propagated across the applications. Here's an endpoint in department-service. It calls the GET /employees/department/{departmentId} endpoint exposed by employee-service.

@Get("/organization/{organizationId}/with-employees")
@ContinueSpan
public List<Department> findByOrganizationWithEmployees(@SpanTag("organizationId") Long organizationId) {
   LOGGER.info("Department find: organizationId={}", organizationId);
   List<Department> departments = repository.findByOrganization(organizationId);
   departments.forEach(d -> d.setEmployees(employeeClient.findByDepartment(d.getId())));
   return departments;
}

In order to call the employee-service endpoint, we use the Micronaut declarative REST client.

@Client(id = "employee-service", path = "/employees")
public interface EmployeeClient {
   @Get("/department/{departmentId}")
   List<Employee> findByDepartment(Long departmentId);	
}

Here’s the implementation of the GET /employees/department/{departmentId} endpoint inside employee-service. Micronaut propagates tracing spans between subsequent requests using the @ContinueSpan annotation.

@Get("/department/{departmentId}")
@ContinueSpan
public List<Employee> findByDepartment(@SpanTag("departmentId") Long departmentId) {
    LOGGER.info("Employees find: departmentId={}", departmentId);
    return repository.findByDepartment(departmentId);
}

Configure KrakenD Gateway and Consul DNS

We can configure KrakenD using JSON notation. Firstly, we need to define the endpoints. The integration with Consul discovery is configured in the backend section. The host has to be the same as the DNS name of the downstream service in Consul. We also set a target URL (url_pattern) and the service discovery type (sd). Let's take a look at the list of endpoints for department-service. We expose methods for searching by id (GET /department/{id}), adding a new department (POST /department), and finding all departments with their lists of employees within a single organization (GET /department-with-employees/{organizationId}).

    {
      "endpoint": "/department/{id}",
      "method": "GET",
      "backend": [
        {
          "url_pattern": "/departments/{id}",
          "sd": "dns",
          "host": [
            "department-service.service.consul"
          ],
          "disable_host_sanitize": true
        }
      ]
    },
    {
      "endpoint": "/department-with-employees/{organizationId}",
      "method": "GET",
      "backend": [
        {
          "url_pattern": "/departments/organization/{organizationId}/with-employees",
          "sd": "dns",
          "host": [
            "department-service.service.consul"
          ],
          "disable_host_sanitize": true
        }
      ]
    },
    {
      "endpoint": "/department",
      "method": "POST",
      "backend": [
        {
          "url_pattern": "/departments",
          "sd": "dns",
          "host": [
            "department-service.service.consul"
          ],
          "disable_host_sanitize": true
        }
      ]
    }

It is also worth mentioning that we cannot create conflicting routes in KrakenD. For example, I couldn't define the endpoint GET /department/organization/{organizationId}/with-employees, because it would conflict with the already existing endpoint GET /department/{id}. To avoid this, I created a new context path /department-with-employees for my endpoint.

Similarly, I created the following configuration for the employee-service endpoints.

    {
      "endpoint": "/employee/{id}",
      "method": "GET",
      "backend": [
        {
          "url_pattern": "/employees/{id}",
          "sd": "dns",
          "host": [
            "employee-service.service.consul"
          ],
          "disable_host_sanitize": true
        }
      ]
    },
    {
      "endpoint": "/employee",
      "method": "POST",
      "backend": [
        {
          "url_pattern": "/employees",
          "sd": "dns",
          "host": [
            "employee-service.service.consul"
          ],
          "disable_host_sanitize": true
        }
      ]
    }
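The organization-service can be exposed in the same way. Here's a hedged sketch of such an entry (the backend path /organizations/{id} is an assumption based on the architecture described above, not taken from the repository):

```json
    {
      "endpoint": "/organization/{id}",
      "method": "GET",
      "backend": [
        {
          "url_pattern": "/organizations/{id}",
          "sd": "dns",
          "host": [
            "organization-service.service.consul"
          ],
          "disable_host_sanitize": true
        }
      ]
    }
```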

In order to integrate KrakenD with Consul, we need to configure local DNS properly on our machine. It was quite a challenging task for me, since I'm not very familiar with networking topics. By default, Consul listens on port 8600 for DNS queries in the consul domain, but DNS is served from port 53. Therefore, we need to configure DNS forwarding for Consul service discovery. There are several ways to do that, and you may read more about it in the Consul documentation. I chose the dnsmasq tool. Following the guide, we need to create a file, e.g. /etc/dnsmasq.d/10-consul, with the following single line.

server=/consul/127.0.0.1#8600
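As an alternative to dnsmasq, on distributions using systemd-resolved the same forwarding can be sketched with a drop-in file, e.g. /etc/systemd/resolved.conf.d/consul.conf (assumption: a systemd version that accepts a port in the DNS= setting):

```
[Resolve]
DNS=127.0.0.1:8600
DNSSEC=false
Domains=~consul
```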

Finally we need to start dnsmasq service and add 127.0.0.1 to the list of nameservers. Here’s my configuration of DNS servers.

Testing Consul DNS

Firstly, let’s run all our sample microservices. They are registered in Consul under the following names.

[Figure: microservices registered in Consul]

I run two instances of employee-service. Of course, all the applications are listening on randomly generated ports.

[Figure: two employee-service instances in Consul]

Finally, if you run the dig command with the DNS name of a service, you should receive a response similar to the following. It means we may proceed to the last part of our exercise!
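Here's a minimal sketch of such a query, assuming Consul answers DNS queries on localhost:8600; the ports and targets in the answer section are purely illustrative and will differ in your environment:

```
$ dig @127.0.0.1 -p 8600 employee-service.service.consul SRV

;; ANSWER SECTION:
employee-service.service.consul. 0 IN SRV 1 1 54925 localhost.node.dc1.consul.
employee-service.service.consul. 0 IN SRV 1 1 57604 localhost.node.dc1.consul.
```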

Running KrakenD API Gateway

Before we run the KrakenD API gateway, let's configure one additional thing: the integration with Zipkin. To do that, we need to create the extra_config section. Enabling Zipkin only requires us to add the zipkin exporter in the opencensus module. We need to set the URL (including port and path) where Zipkin accepts spans, and a service name for KrakenD spans. I have also enabled metrics. Here's the described part of the KrakenD configuration.

  "extra_config": {
    "github_com/devopsfaith/krakend-opencensus": {
      "sample_rate": 100,
      "reporting_period": 1,
      "exporters": {
        "zipkin": {
          "collector_url": "http://localhost:9411/api/v2/spans",
          "service_name": "api-gateway"
        }
      }
    },
    "github_com/devopsfaith/krakend-metrics": {
      "collection_time": "30s",
      "proxy_disabled": false,
      "listen_address": ":8090"
    }
  }

Finally, we can run KrakenD. The only parameter we need to pass is the location of the krakend.json configuration file. You may find a full version of that file in my GitHub repository inside the config directory.

$ krakend run -c krakend.json -d

Testing KrakenD with Consul and Micronaut

Once we have started all our microservices, Consul, Zipkin, and KrakenD, we may proceed to the tests. First, let's add some employees and departments by sending requests through the API gateway. KrakenD is listening on port 8080.

$ curl http://localhost:8080/employee -d '{"name":"John Smith","age":30,"position":"Architect","departmentId":1,"organizationId":1}' -H "Content-Type: application/json"
{"age":30,"departmentId":1,"id":1,"name":"John Smith","organizationId":1,"position":"Architect"}

$ curl http://localhost:8080/employee -d '{"name":"Paul Walker","age":22,"position":"Developer","departmentId":1,"organizationId":1}' -H "Content-Type: application/json"
{"age":22,"departmentId":1,"id":2,"name":"Paul Walker","organizationId":1,"position":"Developer"}

$ curl http://localhost:8080/employee -d '{"name":"Anna Hamilton","age":26,"position":"Developer","departmentId":2,"organizationId":1}' -H "Content-Type: application/json"
{"age":26,"departmentId":2,"id":3,"name":"Anna Hamilton","organizationId":1,"position":"Developer"}

$ curl http://localhost:8080/department -d '{"name":"Test1","organizationId":1}' -H "Content-Type:application/json"
{"id":1,"name":"Test1","organizationId":1}

$ curl http://localhost:8080/department -d '{"name":"Test2","organizationId":1}' -H "Content-Type:application/json"
{"id":2,"name":"Test2","organizationId":1}

Then let's call the slightly more complex endpoint GET /department-with-employees/{organizationId}. As you probably remember, it is exposed by department-service and calls employee-service to fetch all employees assigned to a particular department.

$ curl http://localhost:8080/department-with-employees/1

However, we received a response with the HTTP 500 error code. We can find more details in the KrakenD logs.

[Figure: KrakenD logs showing the parsing error]

KrakenD is unable to parse the JSON array returned as a response by department-service. Therefore, we need to declare it explicitly with "is_collection": true, so that KrakenD can convert it to an object for further manipulation. Here's our current configuration for that endpoint.

    {
      "endpoint": "/department-with-employees/{organizationId}",
      "method": "GET",
      "backend": [
        {
          "url_pattern": "/departments/organization/{organizationId}/with-employees",
          "sd": "dns",
          "host": [
            "department-service.service.consul"
          ],
          "disable_host_sanitize": true,
          "is_collection": true
        }
      ]
    }

Now, let’s call the same endpoint once again. It works perfectly fine!

$ curl http://localhost:8080/department-with-employees/1   
{"collection":[{"id":1,"name":"Test1","organizationId":1},{"employees":[{"age":26,"id":3,"name":"Anna Hamilton","position":"Developer"}],"id":2,"name":"Test2","organizationId":1}]}
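Note that KrakenD wraps the original JSON array in a collection field. If a client needs the plain array (or just some fields) back, it can unwrap it on its side; here's a small sketch using jq (assuming jq is installed) on the response shown above:

```shell
# The gateway response from the example above
response='{"collection":[{"id":1,"name":"Test1","organizationId":1},{"employees":[{"age":26,"id":3,"name":"Anna Hamilton","position":"Developer"}],"id":2,"name":"Test2","organizationId":1}]}'

# Unwrap the "collection" field back into a plain JSON array
departments=$(echo "$response" | jq -c '.collection')

# Extract the names of all employees across departments
# ([]? skips departments that have no "employees" field)
names=$(echo "$response" | jq -r '.collection[].employees[]?.name')

echo "$departments"
echo "$names"
```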

The last thing we can do here is check out the traces collected by Zipkin. Thanks to Micronaut's support for span propagation (@ContinueSpan), all the subsequent requests are grouped together.

The picture below shows the Zipkin timeline for the latest request.

Conclusion

If you are looking for a gateway for your microservices, KrakenD is an excellent choice. Moreover, if you use Consul as a discovery server and Zipkin (or Jaeger) as a tracing tool, it is easy to get started with KrakenD. It also offers support for service discovery with Netflix Eureka, although that is a little more complicated to configure. Of course, you may also run KrakenD on Kubernetes (and integrate it with Kubernetes discovery), which is an absolute "must-have" for a modern API gateway.

Envoy Proxy with Microservices
https://piotrminkowski.com/2017/10/25/envoy-proxy-with-microservices/
Wed, 25 Oct 2017

Introduction

I came across Envoy Proxy for the first time a couple of weeks ago, when one of my blog readers suggested that I write an article about it. I had never heard of it before, and my first thought was that it was not my area of expertise. In fact, this tool is not as popular as competitors like Nginx or HAProxy, but it provides some interesting features, among them out-of-the-box support for MongoDB and Amazon RDS, flexibility around discovery and load balancing, and a lot of useful traffic statistics. OK, we know a little about its advantages, but what exactly is Envoy Proxy? 'Envoy is an open-source edge and service proxy, designed for cloud-native applications'. It was originally developed by Lyft as a high-performance C++ distributed proxy for standalone services and applications, as well as for large microservice service meshes. That sounds really good. That's why I decided to take a closer look at it and prepare a sample of service discovery and distributed tracing realized with Envoy and microservices based on Spring Boot.

Envoy Proxy Configuration

In most of the previous samples based on Spring Cloud, we have used Zuul as the edge proxy. Zuul is a popular Netflix OSS tool acting as an API gateway in a microservices architecture. As it turns out, it can be successfully replaced by Envoy Proxy. One of the things I really like about Envoy is the way its configuration is created. The default format is JSON, validated against a JSON schema. The JSON properties and schema are well documented and easy to understand. As you'd expect from a modern solution, the recommended way to get started is with the pre-built Docker images. So, in the beginning, we have to create a Dockerfile for building a Docker image with Envoy and provide the configuration file in JSON format. Here's my Dockerfile. The parameters service-cluster and service-node are optional and relate to the configuration for service discovery, which I'll say more about in a minute.

FROM lyft/envoy:latest
RUN apt-get update
COPY envoy.json /etc/envoy.json
CMD /usr/local/bin/envoy -c /etc/envoy.json --service-cluster samplecluster --service-node sample1

I assume you have basic knowledge of Docker and its commands, which is mandatory at this point. After providing the envoy.json configuration file, we can proceed with building the Docker image.

$ docker build -t envoy:v1 .

Then just run it using the docker run command. The relevant ports should be exposed outside.

$ docker run -d --name envoy -p 9901:9901 -p 10000:10000 envoy:v1

The first pretty helpful feature is the local HTTP administration server. It can be configured in the JSON file inside the admin property. For this example I selected port 9901, and as you probably noticed, I also exposed that port outside the Envoy Docker container. Now the admin console is available at http://192.168.99.100:9901/. If you invoke that address, it prints all available commands. For me, the most helpful were stats, which prints all important statistics related to the proxy, and logging, where I could dynamically change the logging level for some of the defined categories. So, if you have any problems with Envoy, try changing the logging level by calling /logging?name=level and watch the logs of the container after running the docker logs envoy command.

"admin": {
  "access_log_path": "/tmp/admin_access.log",
  "address": "tcp://0.0.0.0:9901"
}
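For example, assuming the container is reachable at 192.168.99.100 as above, the admin server can be queried like this (illustrative commands; the exact set of admin endpoints depends on the Envoy version):

```
$ curl http://192.168.99.100:9901/stats
$ curl "http://192.168.99.100:9901/logging?level=debug"
$ docker logs envoy
```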

The next required configuration property is listeners. There we define the routing settings and the address on which Envoy will listen for incoming TCP connections. The notation tcp://0.0.0.0:10000 is the wildcard match for any IPv4 address on port 10000. This port is also exposed outside the Envoy Docker container, so in this case our API gateway will be available at the http://192.168.99.100:10000/ address. We will come back to the proxy configuration details at a later stage; now let's take a closer look at the architecture of the presented example.

"listeners": [{
  "address": "tcp://0.0.0.0:10000",
  ...
}]

Architecture: Envoy proxy, Zipkin and Spring Boot

The architecture of the described solution is shown in the figure below. We have Envoy Proxy as the API gateway, which is the entry point to our system. Envoy integrates with Zipkin and sends tracing messages with information about incoming HTTP requests and the responses sent back. Two sample microservices, person-service and product-service, register themselves in the service discovery on startup and deregister on shutdown. They are hidden from external clients behind the API gateway. Envoy fetches the current configuration with the addresses of the registered services and routes incoming HTTP requests accordingly. If multiple instances of a service are available, it performs load balancing.

[Figure: Envoy, Zipkin and Spring Boot architecture]

As it turns out, Envoy does not support well-known discovery servers like Consul or ZooKeeper, but defines its own generic REST-based API, which needs to be implemented to enable fetching cluster members. The main method of this API is GET /v1/registration/:service, used for fetching the list of currently registered instances of a service. Lyft provides a default implementation in Python, but for the purpose of this example we develop our own solution using Java and Spring Boot. The sample application source code is available on GitHub. In addition to the service discovery implementation, you will also find two sample microservices there.

Service Discovery

Our custom discovery implementation does nothing more than expose a REST-based API with methods for registration, unregistration and fetching a service's instances. The GET method needs to return a JSON structure that matches the following schema.

{
  "hosts": [{
    "ip_address": "...",
    "port": "...",
    ...
  }]
}

Here’s a REST controller class with discovery API implementation.

@RestController
public class EnvoyDiscoveryController {

   private static final Logger LOGGER = LoggerFactory.getLogger(EnvoyDiscoveryController.class);

   // ConcurrentHashMap, since the controller handles concurrent requests
   private Map<String, List<DiscoveryHost>> hosts = new ConcurrentHashMap<>();

   @GetMapping(value = "/v1/registration/{serviceName}")
   public DiscoveryHosts getHostsByServiceName(@PathVariable("serviceName") String serviceName) {
      LOGGER.info("getHostsByServiceName: service={}", serviceName);
      DiscoveryHosts hostsList = new DiscoveryHosts();
      hostsList.setHosts(hosts.get(serviceName));
      LOGGER.info("getHostsByServiceName: hosts={}", hostsList);
      return hostsList;
   }

   @PostMapping("/v1/registration/{serviceName}")
   public void addHost(@PathVariable("serviceName") String serviceName, @RequestBody DiscoveryHost host) {
      LOGGER.info("addHost: service={}, body={}", serviceName, host);
      List<DiscoveryHost> tmp = hosts.get(serviceName);
      if (tmp == null)
         tmp = new ArrayList<>();
      tmp.add(host);
      hosts.put(serviceName, tmp);
   }

   @DeleteMapping("/v1/registration/{serviceName}/{ipAddress}")
   public void deleteHost(@PathVariable("serviceName") String serviceName, @PathVariable("ipAddress") String ipAddress) {
      LOGGER.info("deleteHost: service={}, ip={}", serviceName, ipAddress);
      List<DiscoveryHost> tmp = hosts.get(serviceName);
      if (tmp != null) {
         Optional<DiscoveryHost> optHost = tmp.stream().filter(it -> it.getIpAddress().equals(ipAddress)).findFirst();
         if (optHost.isPresent())
            tmp.remove(optHost.get());
         hosts.put(serviceName, tmp);
      }
   }
}
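With the discovery application running, registration and lookup can be exercised with curl. Here's a sketch, assuming it listens on port 9200; the field names follow the schema above, so the DiscoveryHost class is assumed to map them accordingly, and the response shown is illustrative:

```
$ curl -X POST http://localhost:9200/v1/registration/person-service \
    -H "Content-Type: application/json" \
    -d '{"ip_address":"192.168.99.100","port":9300}'
$ curl http://localhost:9200/v1/registration/person-service
{"hosts":[{"ip_address":"192.168.99.100","port":9300}]}
```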

Let's get back to the Envoy configuration settings. Assuming we have built an image from the Dockerfile visible below and then run the container on the default port, we can invoke it at the address http://192.168.99.100:9200. That address should be placed in the envoy.json configuration file. The service discovery connection settings should be provided inside the cluster_manager section.

FROM openjdk:alpine
MAINTAINER Piotr Minkowski <piotr.minkowski@gmail.com>
ADD target/envoy-discovery.jar envoy-discovery.jar
ENTRYPOINT ["java", "-jar", "/envoy-discovery.jar"]
EXPOSE 9200

Here's a fragment of the envoy.json file. The cluster for service discovery should be defined as a global SDS configuration, which must be specified inside the sds property (1). The most important thing is to provide the correct URL (2); on the basis of that, Envoy automatically tries to call the endpoint GET /v1/registration/{service_name}. Another interesting configuration field in that section is refresh_delay_ms (3), which sets the delay between fetches of the list of services registered in the discovery server. That's not all. We also have to define the cluster members. They are identified by name (4). Their type is sds (5), which means that the cluster uses the service discovery server to locate the network addresses of the microservice with the name defined in the service_name property (6).

"cluster_manager": {
  "clusters": [{
    "name": "service1", // (4)
    "type": "sds", // (5)
    "connect_timeout_ms": 5000,
    "lb_type": "round_robin",
    "service_name": "person-service" // (6)
  }, {
    "name": "service2",
    "type": "sds",
    "connect_timeout_ms": 5000,
    "lb_type": "round_robin",
    "service_name": "product-service"
  }],
  "sds": { // (1)
    "cluster": {
      "name": "service_discovery",
      "type": "strict_dns",
      "connect_timeout_ms": 5000,
      "lb_type": "round_robin",
      "hosts": [{
        "url": "tcp://192.168.99.100:9200" // (2)
      }]
    },
    "refresh_delay_ms": 3000 // (3)
  }
}

The routing configuration is defined for every single listener inside the route_config property (1). The first route is configured for person-service, which is processed by the service1 cluster (2), and the second for product-service, processed by the service2 cluster (3). So, our services are available at the http://192.168.99.100:10000/person and http://192.168.99.100:10000/product addresses.

{
  "name": "http_connection_manager",
  "config": {
    "codec_type": "auto",
    "stat_prefix": "ingress_http",
    "route_config": { // (1)
      "virtual_hosts": [{
        "name": "service",
        "domains": ["*"],
        "routes": [{
          "prefix": "/person", // (2)
          "cluster": "service1"
        }, {
          "prefix": "/product", // (3)
          "cluster": "service2"
        }]
      }]
    },
    "filters": [{
      "name": "router",
      "config": {}
    }]
  }
}

Building Microservices

The routing on the Envoy proxy has already been configured. We still don't have running microservices. Their implementation is based on the Spring Boot framework and does nothing more than expose a REST API providing simple operations on a list of objects, and register/unregister the service with the discovery server. Here's the @Service bean responsible for that registration. The onApplicationEvent method is fired after application startup, and the destroy method just before graceful shutdown.

@Service
public class PersonRegister implements ApplicationListener<ApplicationReadyEvent> {

   private static final Logger LOGGER = LoggerFactory.getLogger(PersonRegister.class);

   private String ip;
   @Value("${server.port}")
   private int port;
   @Value("${spring.application.name}")
   private String appName;
   @Value("${envoy.discovery.url}")
   private String discoveryUrl;
   
   @Autowired
   RestTemplate template;

   @Override
   public void onApplicationEvent(ApplicationReadyEvent event) {
      LOGGER.info("PersonRegistration.register");
      try {
         ip = InetAddress.getLocalHost().getHostAddress();
         DiscoveryHost host = new DiscoveryHost();
         host.setPort(port);
         host.setIpAddress(ip);
         template.postForObject(discoveryUrl + "/v1/registration/{service}", host, DiscoveryHosts.class, appName);
      } catch (Exception e) {
         LOGGER.error("Error during registration", e);
      }
   }

   @PreDestroy
   public void destroy() {
      try {
         template.delete(discoveryUrl + "/v1/registration/{service}/{ip}/", appName, ip);
         LOGGER.info("PersonRegister.unregistered: service={}, ip={}", appName, ip);
      } catch (Exception e) {
         LOGGER.error("Error during unregistration", e);
      }
   }

}

The best way to shut down a Spring Boot application gracefully is through its Actuator endpoint. To enable such endpoints for the service, include spring-boot-starter-actuator in your project dependencies. Shutdown is disabled by default, so we should add the following properties to application.yml to enable it and additionally disable the default security (endpoints.shutdown.sensitive=false). Now, just by calling POST /shutdown, we can stop our Spring Boot application and test the unregister method.

endpoints:
  shutdown:
    enabled: true
    sensitive: false
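Triggering the shutdown is then a single request. Here's a sketch, assuming the service listens on port 9300; the response message comes from Spring Boot's shutdown endpoint and may differ between versions:

```
$ curl -X POST http://localhost:9300/shutdown
{"message":"Shutting down, bye..."}
```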

As before, we also build Docker images for the microservices. Here's the person-service Dockerfile, which allows you to override the default discovery service URL.

FROM openjdk:alpine
MAINTAINER Piotr Minkowski <piotr.minkowski@gmail.com>
ADD target/person-service.jar person-service.jar
ENV DISCOVERY_URL http://192.168.99.100:9200
ENTRYPOINT ["java", "-jar", "/person-service.jar"]
EXPOSE 9300

To build an image and run a container of the service with a custom listen port, you need to execute the following Docker commands.

$ docker build -t piomin/person-service .
$ docker run -d --name person-service -p 9301:9300 piomin/person-service

Distributed Tracing

It is time for the last piece of the puzzle: Zipkin tracing. Statistics related to all incoming requests should be sent there. The first part of the configuration in Envoy Proxy is inside the tracing property, which specifies global settings for the HTTP tracer.

"tracing": {
  "http": {
    "driver": {
      "type": "zipkin",
      "config": {
        "collector_cluster": "zipkin",
        "collector_endpoint": "/api/v1/spans"
      }
    }
  }
}

Network location and settings for Zipkin connection should be defined as a cluster member.

"clusters": [{
  "name": "zipkin",
  "connect_timeout_ms": 5000,
  "type": "strict_dns",
  "lb_type": "round_robin",
  "hosts": [
    {
      "url": "tcp://192.168.99.100:9411"
    }
  ]
}]

We should also add a new tracing section to the HTTP connection manager configuration (1). The field operation_name (2) is required and sets the span name; only the values 'ingress' and 'egress' are supported.


"listeners": [{
  "filters": [{
    "name": "http_connection_manager",
    "config": {
      "tracing": { // (1)
        "operation_name": "ingress" // (2)
      }
      // ...
    }
  }]
}]

Zipkin server can be started using its Docker image.

$ docker run -d --name zipkin -p 9411:9411 openzipkin/zipkin

Summary

Here's the list of running Docker containers for the tests. As you probably remember, we have Zipkin, Envoy, the custom discovery, two instances of person-service and one instance of product-service. You can add some person objects by calling POST /person and then display the list of all persons by calling GET /person. The requests should be load balanced between the two instances based on the entries in the service discovery.
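For example, through the gateway (the Person fields shown here are hypothetical, since the model class is not listed in this article):

```
$ curl -X POST http://192.168.99.100:10000/person \
    -H "Content-Type: application/json" \
    -d '{"name":"John Smith","age":30}'
$ curl http://192.168.99.100:10000/person
```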

[Figure: running Docker containers]

Information about every request is sent to Zipkin, with the service name taken from the --service-cluster parameter the Envoy proxy is running with.

[Figure: request traces in Zipkin]
