ELK Archives - Piotr's TechBlog
https://piotrminkowski.com/tag/elk/
Java, Spring, Kotlin, microservices, Kubernetes, containers

Spring Boot Best Practices for Microservices
https://piotrminkowski.com/2019/12/06/spring-boot-best-practices-for-microservices/
Fri, 06 Dec 2019
In this article I’m going to propose my list of “golden rules” for building Spring Boot applications that are part of a microservices-based system. I’m basing it on my experience in migrating monolithic SOAP applications running on JEE servers into small, REST-based applications built on top of Spring Boot. This list of Spring Boot best practices assumes you are running many microservices in production under heavy incoming traffic. Let’s begin.

1. Collect metrics

It is just amazing how metrics visualization can change the approach to systems monitoring in an organization. After setting up monitoring in Grafana, we are able to recognize more than 90% of the bigger problems in our systems before they are reported by customers to our support team. Thanks to those two monitors with plenty of diagrams and alerts, we may react much faster than before. In a microservices-based architecture, metrics become even more important than in a monolith.
The good news for us is that Spring Boot comes with a built-in mechanism for collecting the most important metrics. In fact, we just need to set some configuration properties to expose a predefined set of metrics provided by the Spring Boot Actuator. To use it we need to include the Actuator starter as a dependency:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

To enable the metrics endpoint we have to set the property management.endpoint.metrics.enabled to true. Now you may check out the full list of generated metrics by calling the GET /actuator/metrics endpoint. One of the most important metrics for us is http.server.requests, which provides statistics on the number of incoming requests and response times. It is automatically tagged with the method type (POST, GET, etc.), HTTP status, and URI.
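To see how a tagged metric like http.server.requests aggregates, here's a minimal plain-Java sketch of a tag-keyed counter. This is only an illustration of the concept, not Micrometer's API: each unique combination of tags gets its own counter, which is what lets you later chart requests per URI, status, or method in Grafana.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Minimal illustration of tag-based metric aggregation: each unique
// combination of tags (method, status, uri) gets its own counter,
// similar to how http.server.requests is tagged by Actuator.
public class TaggedCounter {

    private final Map<String, LongAdder> counters = new ConcurrentHashMap<>();

    public void record(String method, int status, String uri) {
        String key = method + " " + status + " " + uri;
        counters.computeIfAbsent(key, k -> new LongAdder()).increment();
    }

    public long count(String method, int status, String uri) {
        LongAdder adder = counters.get(method + " " + status + " " + uri);
        return adder == null ? 0 : adder.sum();
    }

    public static void main(String[] args) {
        TaggedCounter requests = new TaggedCounter();
        requests.record("GET", 200, "/persons/{id}");
        requests.record("GET", 200, "/persons/{id}");
        requests.record("POST", 201, "/persons");
        // two GET 200s aggregated under one tag combination
        System.out.println(requests.count("GET", 200, "/persons/{id}"));
    }
}
```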
Metrics have to be stored somewhere. The most popular tools for that are InfluxDB and Prometheus. They represent two different models of collecting data: Prometheus periodically pulls data from an endpoint exposed by the application, while InfluxDB provides a REST API to which the application pushes its metrics. The integration with those two tools, and several others, is realized with the Micrometer library. To enable support for InfluxDB we have to include the following dependency.

<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-registry-influx</artifactId>
</dependency>

We also have to provide at least the URL and the Influx database name inside the application.yml file.

management:
  metrics:
    export:
      influx:
        db: springboot
        uri: http://192.168.99.100:8086

To enable the Prometheus HTTP endpoint we first need to include the appropriate Micrometer module and also set the property management.endpoint.prometheus.enabled to true.

<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-registry-prometheus</artifactId>
</dependency>

By default, Prometheus tries to collect data from the defined target endpoint once a minute. The rest of the configuration has to be provided on the Prometheus side. The scrape_configs section is responsible for specifying the set of targets and the parameters describing how to connect to them.

scrape_configs:
  - job_name: 'springboot'
    metrics_path: '/actuator/prometheus'
    static_configs:
    - targets: ['person-service:2222']

Sometimes it is useful to add extra tags to metrics, especially if we have many instances of a single microservice writing to a single Influx database. Here’s a sample of tagging for applications running on Kubernetes.

@Configuration
class ConfigurationMetrics {

    @Value("\${spring.application.name}")
    lateinit var appName: String
    @Value("\${NAMESPACE:default}")
    lateinit var namespace: String
    @Value("\${HOSTNAME:default}")
    lateinit var hostname: String

    @Bean
    fun tags(): MeterRegistryCustomizer<InfluxMeterRegistry> {
        return MeterRegistryCustomizer { registry ->
            registry.config().commonTags("appName", appName).commonTags("namespace", namespace).commonTags("pod", hostname)
        }
    }

}

Here’s a diagram from Grafana created for the http.server.requests metric of a single application.

spring-boot-best-practices-metrics

2. Don’t forget about logging

Logging may not seem very important during development, but it is the key point during maintenance. It is worth remembering that within the organization your application will be judged by the quality of its logs. Usually, an application is maintained by the support team, so your logs should be meaningful. Don’t try to put everything there; only the most important events should be logged.
It is also important to use the same logging standard for all the microservices. For example, if you are logging information in JSON format, do the same for every single application. If you use the tag appName to indicate the application name, or instanceId to distinguish different instances of the same application, do it everywhere. Why? You usually want to store the logs collected from all microservices in a single, central place. The most popular tool for that (or rather collection of tools) is the Elastic Stack (ELK). To take advantage of storing logs in a central place, you should ensure that the query criteria and response structure are the same for all the applications, especially since you will correlate the logs between different microservices. How? By using an external library. I can recommend my library for Spring Boot logging. To use it you should include it in your dependencies.

<dependency>
  <groupId>com.github.piomin</groupId>
  <artifactId>logstash-logging-spring-boot-starter</artifactId>
  <version>1.2.2.RELEASE</version>
</dependency>

This library will force you to use some good logging practices and automatically integrate with Logstash (one of three ELK tools responsible for collecting logs). Its main features are:

  • logging all incoming HTTP requests and outgoing HTTP responses with the full body, and sending those logs to Logstash with the proper tags indicating the calling method name or the response HTTP status
  • calculating and storing the execution time for each request
  • generating and propagating a correlationId for downstream services called with Spring RestTemplate
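The correlationId propagation listed above can be illustrated with a simple plain-Java sketch. This is a conceptual model, not the library's actual implementation: an id is generated once when a request enters the system (or reused from an incoming header) and attached to every log line and downstream call.

```java
import java.util.UUID;

// Conceptual sketch of correlationId propagation: the id is generated once
// at the system edge and reused for every downstream call, so all log
// entries for one business transaction can be correlated in Kibana.
public class CorrelationContext {

    private static final ThreadLocal<String> CORRELATION_ID = new ThreadLocal<>();

    // Reuse an incoming id (e.g. from an X-Correlation-ID header) or create one
    public static String getOrCreate(String incomingId) {
        String id = incomingId != null ? incomingId : UUID.randomUUID().toString();
        CORRELATION_ID.set(id);
        return id;
    }

    public static String current() {
        return CORRELATION_ID.get();
    }

    public static void main(String[] args) {
        String first = getOrCreate(null);             // edge service: new id
        String propagated = getOrCreate(first);       // downstream: reuses the header value
        System.out.println(first.equals(propagated)); // same id across both calls
    }
}
```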

To enable sending logs to Logstash we should at least provide its address and set the property logging.logstash.enabled to true.

logging.logstash:
  enabled: true
  url: 192.168.99.100:5000

After including the logstash-logging-spring-boot-starter library you may take advantage of log tagging in Logstash. Here’s a screen from Kibana for a single response log entry.

logstash-2

We may also include the Spring Cloud Sleuth library in our dependencies.

<dependency> 
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>

Spring Cloud Sleuth propagates headers compatible with Zipkin – a popular tool for distributed tracing. Its main features are:

  • adding trace (for correlating requests) and span IDs to the Slf4J MDC
  • recording timing information to aid in latency analysis
  • modifying the log entry pattern to add some information, like additional MDC fields
  • providing integration with other Spring components like OpenFeign, RestTemplate or Spring Cloud Netflix Zuul
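The trace/span model used by Sleuth and Zipkin can be sketched in a few lines of plain Java (the class here is illustrative, not one of Sleuth's): every span in one request flow shares the same trace id, while each unit of work (an incoming request, an outgoing RestTemplate call) gets its own span id.

```java
import java.util.UUID;

// Illustrative trace/span model: the traceId correlates all work done for
// one request across services; each spanId identifies a single operation.
public class Span {

    final String traceId;
    final String spanId;

    private Span(String traceId) {
        this.traceId = traceId;
        this.spanId = UUID.randomUUID().toString();
    }

    // A root span starts a new trace
    static Span newTrace() {
        return new Span(UUID.randomUUID().toString());
    }

    // A child span (e.g. an outgoing HTTP call) inherits the trace id
    Span childSpan() {
        return new Span(this.traceId);
    }

    public static void main(String[] args) {
        Span root = Span.newTrace();
        Span child = root.childSpan();
        System.out.println(root.traceId.equals(child.traceId)); // same trace
        System.out.println(root.spanId.equals(child.spanId));   // different spans
    }
}
```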

3. Make your API usable

In most cases, your application will be called by other applications through a REST-based API. Therefore, it is worth taking care of proper and clear documentation, and the documentation should be generated along with the code. Of course, there are some tools for that. One of the most popular is Swagger. You can easily integrate Swagger 2 with your Spring Boot application using the SpringFox project. In order to expose a Swagger HTML site with the API documentation we need to include the following dependencies. The first library is responsible for generating a Swagger descriptor from the Spring MVC controllers’ code, while the second embeds the Swagger UI that displays a representation of the Swagger YAML descriptor in your web browser.

<dependency>
   <groupId>io.springfox</groupId>
   <artifactId>springfox-swagger2</artifactId>
   <version>2.9.2</version>
</dependency>
<dependency>
   <groupId>io.springfox</groupId>
   <artifactId>springfox-swagger-ui</artifactId>
   <version>2.9.2</version>
</dependency>

That’s not all. We also have to provide some beans to customize the default Swagger generation behaviour. It should document only the methods implemented inside our controllers, and not, for example, the endpoints provided by Spring Boot automatically, like the /actuator/* endpoints. We may also customize the UI appearance by defining a UiConfiguration bean.

@Configuration
@EnableSwagger2
public class ConfigurationSwagger {

    @Autowired
    Optional<BuildProperties> build;

    @Bean
    public Docket api() {
        String version = "1.0.0";
        if (build.isPresent())
            version = build.get().getVersion();
        return new Docket(DocumentationType.SWAGGER_2)
                .apiInfo(apiInfo(version))
                .select()
                .apis(RequestHandlerSelectors.any())
                .paths(PathSelectors.regex("(/components.*)"))
                .build()
                .useDefaultResponseMessages(false)
                .forCodeGeneration(true);
    }

    @Bean
    public UiConfiguration uiConfig() {
        return UiConfigurationBuilder.builder().docExpansion(DocExpansion.LIST).build();
    }

    private ApiInfo apiInfo(String version) {
        return new ApiInfoBuilder()
                .title("API - Components Service")
                .description("Managing Components.")
                .version(version)
                .build();
    }
}

Here’s an example of Swagger 2 UI for a single microservice.

spring-boot-best-practices-swagger

The next step is to define the same REST API guideline for all microservices. If you build the API of your microservices consistently, it is much simpler for both external and internal clients to integrate with it. The guideline should contain instructions on how to build your API, which headers need to be set on the request and response, how to generate error codes, etc. Such a guideline should be shared with all developers and vendors in your organization. For a more detailed explanation of generating Swagger documentation for Spring Boot microservices, including exposing it for all the applications on an API gateway, you may refer to my article Microservices API Documentation with Swagger2.

4. Don’t be afraid of using a circuit breaker

If you are using Spring Cloud for communication between microservices, you may leverage Spring Cloud Netflix Hystrix or Spring Cloud Circuit Breaker to implement circuit breaking. However, the first solution has already been moved to maintenance mode by the Pivotal team, since Netflix no longer develops Hystrix. The recommended solution is the new Spring Cloud Circuit Breaker, built on top of the Resilience4j project.

<dependency>
   <groupId>org.springframework.cloud</groupId>
   <artifactId>spring-cloud-starter-circuitbreaker-resilience4j</artifactId>
</dependency>

Then we need to configure the required settings for the circuit breaker by defining a Customizer bean that is passed a Resilience4JCircuitBreakerFactory. We are using the default values, as shown below.

@Bean
public Customizer<Resilience4JCircuitBreakerFactory> defaultCustomizer() {
    return factory -> factory.configureDefault(id -> new Resilience4JConfigBuilder(id)
            .timeLimiterConfig(TimeLimiterConfig.custom().timeoutDuration(Duration.ofSeconds(5)).build())
            .circuitBreakerConfig(CircuitBreakerConfig.ofDefaults())
            .build());
}
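The Customizer above only configures Resilience4j. To make clear what a circuit breaker actually does at runtime, here's a minimal plain-Java sketch of the underlying state machine. It is a deliberate simplification of Resilience4j's behavior, not its API: after a number of consecutive failures the circuit opens and subsequent calls fail fast with a fallback, without reaching the remote service at all.

```java
import java.util.function.Supplier;

// Simplified circuit breaker: CLOSED lets calls through and counts failures;
// after `threshold` consecutive failures it OPENs and rejects calls immediately.
// (Real implementations also have a HALF_OPEN state with a retry timeout.)
public class SimpleCircuitBreaker {

    enum State { CLOSED, OPEN }

    private final int threshold;
    private int consecutiveFailures = 0;
    private State state = State.CLOSED;

    public SimpleCircuitBreaker(int threshold) {
        this.threshold = threshold;
    }

    public <T> T call(Supplier<T> remoteCall, Supplier<T> fallback) {
        if (state == State.OPEN) {
            return fallback.get(); // fail fast, don't hit the remote service
        }
        try {
            T result = remoteCall.get();
            consecutiveFailures = 0; // a success resets the counter
            return result;
        } catch (RuntimeException e) {
            if (++consecutiveFailures >= threshold) {
                state = State.OPEN;
            }
            return fallback.get();
        }
    }

    public State state() {
        return state;
    }
}
```

With a threshold of 2, two failing calls open the circuit, and from then on the fallback is served immediately.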

For more details about integrating Hystrix circuit breaker with Spring Boot application you may refer to my article Part 3: Creating Microservices: Circuit Breaker, Fallback and Load Balancing with Spring Cloud.

5. Make your application transparent

Another important rule among Spring Boot best practices is transparency. We should not forget that one of the most important reasons for migrating to a microservices architecture is the requirement of continuous delivery. Today, the ability to deliver changes fast gives you an advantage on the market. You should be able to deliver changes even several times a day. Therefore, it is important to know what the current version is, where it has been released, and what changes it includes.
When working with Spring Boot and Maven, we may easily publish such information as the date of the last change, the Git commit id, or the version number of the application. To achieve that we just need to include the following Maven plugins in our pom.xml.

<plugins>
   <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
      <executions>
         <execution>
            <goals>
               <goal>build-info</goal>
            </goals>
         </execution>
      </executions>
   </plugin>
   <plugin>
      <groupId>pl.project13.maven</groupId>
      <artifactId>git-commit-id-plugin</artifactId>
      <configuration>
         <failOnNoGitDirectory>false</failOnNoGitDirectory>
      </configuration>
   </plugin>
</plugins>

Assuming you have already included the Spring Boot Actuator (see Section 1), you just have to enable the /info endpoint to be able to display all this interesting data.


management.endpoint.info.enabled: true
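With both plugins in place, a call to GET /actuator/info returns the build and Git metadata in a structure similar to the following (the field values shown here are illustrative):

```json
{
  "build": {
    "version": "1.2.0",
    "artifact": "person-service",
    "group": "pl.piomin.services",
    "time": "2019-12-06T10:30:00Z"
  },
  "git": {
    "branch": "master",
    "commit": {
      "id": "d2e1f3a",
      "time": "2019-12-05T18:12:41Z"
    }
  }
}
```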

Of course, our system consists of many microservices, and there are several running instances of every single microservice. It is desirable to monitor our instances in a single, central place, the same as with collecting metrics and logs. Fortunately, there is a tool dedicated to Spring Boot applications that is able to collect data from all the Actuator endpoints and display it in a UI: Spring Boot Admin, developed by Codecentric. The most comfortable way to run it is by creating a dedicated Spring Boot application that includes the Spring Boot Admin dependencies and integrates with a discovery server, for example Spring Cloud Netflix Eureka.

<dependency>
    <groupId>de.codecentric</groupId>
    <artifactId>spring-boot-admin-starter-server</artifactId>
    <version>2.1.6</version>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
</dependency>

Then we should enable it in the Spring Boot application by annotating the main class with @EnableAdminServer.

@SpringBootApplication
@EnableDiscoveryClient
@EnableAdminServer
@EnableAutoConfiguration
public class Application {
 
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
 
}

With Spring Boot Admin we may easily browse a list of applications registered in the discovery server and check out the version or commit info for each of them.

boot-admin-2

We can expand details to see all elements retrieved from /info endpoint and much more data collected from other Actuator endpoints.

boot-admin-3-details

6. Write contract tests

Consumer Driven Contract (CDC) testing is one of the methods that allows you to verify the integration between applications within your system. The number of such interactions may be really large, especially if you maintain a microservices-based architecture. It is relatively easy to start with contract testing in Spring Boot thanks to the Spring Cloud Contract project. There are other frameworks designed especially for CDC, like Pact, but Spring Cloud Contract would probably be the first choice, since we are using Spring Boot.
To use it on the producer side we need to include Spring Cloud Contract Verifier.

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-contract-verifier</artifactId>
    <scope>test</scope>
</dependency>

On the consumer side we should include Spring Cloud Contract Stub Runner.


<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-contract-stub-runner</artifactId>
    <scope>test</scope>
</dependency>

The first step is to define a contract. One of the options is to write it using the Groovy language. The contract should be verified on both the producer and the consumer side. Here’s a sample contract:


import org.springframework.cloud.contract.spec.Contract
Contract.make {
    request {
        method 'GET'
        urlPath('/persons/1')
    }
    response {
        status OK()
        body([
            id: 1,
            firstName: 'John',
            lastName: 'Smith',
            address: ([
                city: $(regex(alphaNumeric())),
                country: $(regex(alphaNumeric())),
                postalCode: $(regex('[0-9]{2}-[0-9]{3}')),
                houseNo: $(regex(positiveInt())),
                street: $(regex(nonEmpty()))
            ])
        ])
        headers {
            contentType(applicationJson())
        }
    }
}

The contract is packaged inside a JAR together with the stubs. It may be published to a repository manager like Artifactory or Nexus, and then consumers may download it from there during the JUnit test. The generated JAR file is suffixed with stubs.

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = WebEnvironment.NONE)
@AutoConfigureStubRunner(ids = {"pl.piomin.services:person-service:+:stubs:8090"}, consumerName = "letter-consumer",  stubsPerConsumer = true, stubsMode = StubsMode.REMOTE, repositoryRoot = "http://192.168.99.100:8081/artifactory/libs-snapshot-local")
@DirtiesContext
public class PersonConsumerContractTest {
 
    @Autowired
    private PersonClient personClient;
     
    @Test
    public void verifyPerson() {
        Person p = personClient.findPersonById(1);
        Assert.assertNotNull(p);
        Assert.assertEquals(1, p.getId().intValue());
        Assert.assertNotNull(p.getFirstName());
        Assert.assertNotNull(p.getLastName());
        Assert.assertNotNull(p.getAddress());
        Assert.assertNotNull(p.getAddress().getCity());
        Assert.assertNotNull(p.getAddress().getCountry());
        Assert.assertNotNull(p.getAddress().getPostalCode());
        Assert.assertNotNull(p.getAddress().getStreet());
        Assert.assertNotEquals(0, p.getAddress().getHouseNo());
    }
     
}

Contract testing will not verify sophisticated use cases in your microservices-based system. However, it is the first phase of testing the interaction between microservices. Once you ensure the API contracts between applications are valid, you may proceed to more advanced integration or end-to-end tests. For a more detailed explanation of continuous integration with Spring Cloud Contract you may refer to my article Continuous Integration with Jenkins, Artifactory and Spring Cloud Contract.

7. Be up-to-date

Spring Boot and Spring Cloud release new versions of their frameworks relatively often. Assuming that your microservices have a small codebase, it is easy to bump the versions of the libraries you use. Spring Cloud releases new versions of its projects using the release train pattern, to simplify dependency management and avoid problems with conflicts between incompatible versions of libraries.
Moreover, Spring Boot systematically improves the startup time and memory footprint of applications, so it is worth updating just because of that. Here are the current stable releases of Spring Boot and Spring Cloud.

<parent>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-parent</artifactId>
   <version>2.2.1.RELEASE</version>
</parent>
<dependencyManagement>
   <dependencies>
      <dependency>
         <groupId>org.springframework.cloud</groupId>
         <artifactId>spring-cloud-dependencies</artifactId>
         <version>Hoxton.RELEASE</version>
         <type>pom</type>
         <scope>import</scope>
      </dependency>
   </dependencies>
</dependencyManagement>

Conclusion

I have shown you that it is not hard to follow best practices with Spring Boot features and some additional libraries that are part of Spring Cloud. These Spring Boot best practices will make it easier for you to migrate to a microservices-based architecture and also to run your applications in containers.

Reactive Logging With Spring WebFlux and Logstash
https://piotrminkowski.com/2019/10/15/reactive-logging-with-spring-webflux-and-logstash/
Tue, 15 Oct 2019
I have already introduced my Spring Boot library for synchronous HTTP request/response logging in one of my previous articles Logging with Spring Boot and Elastic Stack. This library is dedicated to synchronous REST applications built with Spring MVC and Spring Web. Since version 5.0 Spring Framework also offers support for reactive REST API through the Spring WebFlux project. I decided to extend support for logging in my library to reactive Spring WebFlux.
The repository with the source code is available on GitHub: https://github.com/piomin/spring-boot-logging.git. It consists of two Maven modules: logstash-logging-spring-boot-starter for synchronous logging and reactive-logstash-logging-spring-boot-starter for reactive Spring WebFlux applications. The library is available on Maven Central:

<dependency>
  <groupId>com.github.piomin</groupId>
  <artifactId>reactive-logstash-logging-spring-boot-starter</artifactId>
  <version>1.0.0.RELEASE</version>
</dependency>

Motivations

Although we are working with reactive APIs and streams, the requirement of logging every incoming request and outgoing response still applies. Today, we usually run many small applications communicating with each other, so we focus on storing the logs in a single, central place. This is where Logstash and the Elastic Stack come in. Spring Boot and Spring WebFlux allow you to build reactive microservices fast. My library takes care of gathering HTTP request/response logs and sending them to ELK with the proper tags and correlation. Using it in your application does not require any additional source code; you just need to include the library.
However, some things need to be discussed when talking about reactive logging. Because we are logging full requests with payloads, we need to buffer them. This somewhat goes against reactive programming, where we try to be efficient with the available resources. Also, the integration with Logstash is realized synchronously. It is worth keeping those two things in mind when using reactive-logstash-logging-spring-boot-starter in your application.

Implementation Details

Spring WebFlux Dependencies

Since the library is used for logging in Spring Boot reactive APIs, it needs to have Spring WebFlux in its dependencies. In turn, Spring WebFlux is built on top of Project Reactor, so the reactor-core artifact also has to be on the dependency list. We also need some standard Spring libraries, used for example to provide auto-configuration.

<dependency>
   <groupId>org.springframework</groupId>
   <artifactId>spring-context</artifactId>
   <version>${spring.version}</version>
   <scope>provided</scope>
</dependency>
<dependency>
   <groupId>org.springframework</groupId>
   <artifactId>spring-webflux</artifactId>
   <version>${spring.version}</version>
   <scope>provided</scope>
</dependency>
<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-autoconfigure</artifactId>
   <version>${spring.boot.version}</version>
   <scope>provided</scope>
</dependency>
<dependency>
   <groupId>io.projectreactor</groupId>
   <artifactId>reactor-core</artifactId>
   <version>3.3.0.RELEASE</version>
   <scope>provided</scope>
</dependency>

Spring WebFlux Reactive Interceptor

With Spring WebFlux we don’t have the popular Spring MVC components for caching request/response bodies: ContentCachingRequestWrapper and ContentCachingResponseWrapper. However, the approach will be pretty similar to the one applied when building the library for synchronous logging. We need to access the request and response body by wrapping and buffering it without consuming the stream. To do that we first need to create classes extending ServerHttpRequestDecorator and ServerHttpResponseDecorator. They give us access to the message body while Spring WebFlux is reading from and writing to the stream.
When extending ServerHttpRequestDecorator we need to override getBody. Keep in mind that we cannot block a reactive stream, so one of the doOn* callbacks is suitable for accessing it. The body is published as a Flux containing DataBuffer objects. Inside the asynchronous doOnNext callback we write each buffer to a temporary byte array.

public class RequestLoggingInterceptor extends ServerHttpRequestDecorator {

   private static final Logger LOGGER = LoggerFactory.getLogger(RequestLoggingInterceptor.class);

   private final boolean logHeaders;

   public RequestLoggingInterceptor(ServerHttpRequest delegate, boolean logHeaders) {
      super(delegate);
      // stored to match the two-argument call made from ReactiveSpringLoggingFilter
      this.logHeaders = logHeaders;
   }

   @Override
   public Flux<DataBuffer> getBody() {
      ByteArrayOutputStream baos = new ByteArrayOutputStream();
      return super.getBody().doOnNext(dataBuffer -> {
         try {
            Channels.newChannel(baos).write(dataBuffer.asByteBuffer().asReadOnlyBuffer());
            String body = IOUtils.toString(baos.toByteArray(), "UTF-8");
            LOGGER.info("Request: method={}, uri={}, payload={}, audit={}", getDelegate().getMethod(),
                  getDelegate().getPath(), body, value("audit", true));
         } catch (IOException e) {
            e.printStackTrace();
         } finally {
            try {
               baos.close();
            } catch (IOException e) {
               e.printStackTrace();
            }
         }
      });
   }

}

When extending ServerHttpResponseDecorator we need to override writeWith method responsible for writing body to output reactive stream. We will listen for body writing events in doOnNext. Then we access DataBuffer and buffer it in ByteArrayOutputStream.

public class ResponseLoggingInterceptor extends ServerHttpResponseDecorator {

   private static final Logger LOGGER = LoggerFactory.getLogger(ResponseLoggingInterceptor.class);

   private long startTime;
   private boolean logHeaders;

   public ResponseLoggingInterceptor(ServerHttpResponse delegate, long startTime, boolean logHeaders) {
      super(delegate);
      this.startTime = startTime;
      this.logHeaders = logHeaders;
   }

   @Override
   public Mono<Void> writeWith(Publisher<? extends DataBuffer> body) {
      Flux<DataBuffer> buffer = Flux.from(body);
      return super.writeWith(buffer.doOnNext(dataBuffer -> {
         ByteArrayOutputStream baos = new ByteArrayOutputStream();
         try {
            Channels.newChannel(baos).write(dataBuffer.asByteBuffer().asReadOnlyBuffer());
            String bodyRes = IOUtils.toString(baos.toByteArray(), "UTF-8");
            if (logHeaders)
               LOGGER.info("Response({} ms): status={}, headers={}, payload={}, audit={}", value("X-Response-Time", System.currentTimeMillis() - startTime),
                     value("X-Response-Status", getStatusCode().value()), getDelegate().getHeaders(), bodyRes, value("audit", true));
            else
               LOGGER.info("Response({} ms): status={}, payload={}, audit={}", value("X-Response-Time", System.currentTimeMillis() - startTime),
                     value("X-Response-Status", getStatusCode().value()), bodyRes, value("audit", true));
         } catch (IOException e) {
            e.printStackTrace();
         } finally {
            try {
               baos.close();
            } catch (IOException e) {
               e.printStackTrace();
            }
         }
      }));
   }
}

Spring WebFlux Logging Filter

To be able to decorate requests and responses we first need to declare a filter that intercepts incoming requests. To do that we have to declare a bean that implements WebFilter and its method filter(...). The filter method gives access to the exchange object, which contains the objects representing the request and response. So, if we would like to decorate the request and response objects, we first need to decorate the ServerWebExchange. We may easily do it by defining an instance of the ServerWebExchangeDecorator object with overridden getRequest and getResponse methods. Our decorators are responsible just for listening to the events related to message body processing. The significant point here is that if a message has an empty body, the listening methods won’t be fired. That’s why I decided to add code that analyzes the content length, in order to log request or response messages with an empty body. It is based on the HTTP header Content-Length.
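The empty-body detection boils down to a small check on the Content-Length header. Extracted as a pure function, the logic looks like this (a simplified plain-Java sketch of what the filter does with the header list, with an illustrative class name):

```java
import java.util.List;
import java.util.Map;

// Simplified version of the empty-body check used by the logging filter:
// if Content-Length is missing or "0", the body-decorating callbacks will
// never fire, so the request has to be logged up front instead.
public class EmptyBodyCheck {

    public static boolean hasEmptyBody(Map<String, List<String>> headers) {
        List<String> contentLength = headers.get("Content-Length");
        return contentLength == null || contentLength.isEmpty()
                || "0".equals(contentLength.get(0));
    }

    public static void main(String[] args) {
        System.out.println(hasEmptyBody(Map.of())); // no header: treat as empty
        System.out.println(hasEmptyBody(Map.of("Content-Length", List.of("128"))));
    }
}
```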

public class ReactiveSpringLoggingFilter implements WebFilter {

   private static final Logger LOGGER = LoggerFactory.getLogger(ReactiveSpringLoggingFilter.class);
   private UniqueIDGenerator generator;
   private String ignorePatterns;
   private boolean logHeaders;
   private boolean useContentLength;

   public ReactiveSpringLoggingFilter(UniqueIDGenerator generator, String ignorePatterns, boolean logHeaders, boolean useContentLength) {
      this.generator = generator;
      this.ignorePatterns = ignorePatterns;
      this.logHeaders = logHeaders;
      this.useContentLength = useContentLength;
   }

   @Override
   public Mono<Void> filter(ServerWebExchange exchange, WebFilterChain chain) {
      if (ignorePatterns != null && exchange.getRequest().getURI().getPath().matches(ignorePatterns)) {
         return chain.filter(exchange);
      } else {
         generator.generateAndSetMDC(exchange.getRequest());
         final long startTime = System.currentTimeMillis();
         List<String> header = exchange.getRequest().getHeaders().get("Content-Length");
         if (useContentLength && (header == null || header.get(0).equals("0"))) {
            if (logHeaders)
               LOGGER.info("Request: method={}, uri={}, headers={}, audit={}", exchange.getRequest().getMethod(),
                     exchange.getRequest().getURI().getPath(), exchange.getRequest().getHeaders(), value("audit", true));
            else
               LOGGER.info("Request: method={}, uri={}, audit={}", exchange.getRequest().getMethod(),
                     exchange.getRequest().getURI().getPath(), value("audit", true));
         }
         ServerWebExchangeDecorator exchangeDecorator = new ServerWebExchangeDecorator(exchange) {
            @Override
            public ServerHttpRequest getRequest() {
               return new RequestLoggingInterceptor(super.getRequest(), logHeaders);
            }

            @Override
            public ServerHttpResponse getResponse() {
               return new ResponseLoggingInterceptor(super.getResponse(), startTime, logHeaders);
            }
         };
         return chain.filter(exchangeDecorator)
            .doOnSuccess(aVoid -> {
               logResponse(startTime, exchangeDecorator.getResponse(), exchangeDecorator.getResponse().getStatusCode().value());
            })
            .doOnError(throwable -> {
               logResponse(startTime, exchangeDecorator.getResponse(), 500);
            });
      }
   }
}

The last step of the implementation is auto-configuration. Here's the class responsible for it.

@Configuration
@ConfigurationProperties(prefix = "logging.logstash")
public class ReactiveSpringLoggingAutoConfiguration {

   private static final String LOGSTASH_APPENDER_NAME = "LOGSTASH";

   private String url = "localhost:8500";
   private String ignorePatterns;
   private boolean logHeaders;
   private boolean useContentLength;
   private String trustStoreLocation;
   private String trustStorePassword;
   @Value("${spring.application.name:-}")
   String name;

   @Bean
   public UniqueIDGenerator generator() {
      return new UniqueIDGenerator();
   }

   @Bean
   public ReactiveSpringLoggingFilter reactiveSpringLoggingFilter() {
      return new ReactiveSpringLoggingFilter(generator(), ignorePatterns, logHeaders, useContentLength);
   }

   @Bean
   @ConditionalOnProperty("logging.logstash.enabled")
   public LogstashTcpSocketAppender logstashAppender() {
      ...
   }
}

Usage of Spring WebFlux Logging

To be able to create reactive APIs with Spring Boot we first need to include the Spring WebFlux starter in our Maven dependencies.

<parent>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-parent</artifactId>
   <version>2.1.9.RELEASE</version>
   <relativePath/>
</parent>
<groupId>pl.piomin.test</groupId>
<artifactId>sample-webflux-app</artifactId>
<version>1.0-SNAPSHOT</version>
<properties>
   <java.version>11</java.version>
</properties>
<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-webflux</artifactId>
</dependency>
<dependency>
   <groupId>com.github.piomin</groupId>
   <artifactId>reactive-logstash-logging-spring-boot-starter</artifactId>
   <version>1.0.0.BUILD-SNAPSHOT</version>
</dependency>

I have already described how to build a microservices architecture with Spring WebFlux and Spring Cloud in one of my previous articles, Reactive Microservices with Spring WebFlux and Spring Cloud, so you can refer to it for more advanced use cases. Here's a typical controller implementation built with Spring WebFlux Mono and Flux objects.

@RestController
public class AccountController {

   private static final Logger LOGGER = LoggerFactory.getLogger(AccountController.class);
   
   @Autowired
   private AccountRepository repository;

   @GetMapping("/customer/{customer}")
   public Flux<Account> findByCustomer(@PathVariable("customer") String customerId) {
      LOGGER.info("findByCustomer: customerId={}", customerId);
      return repository.findByCustomerId(customerId);
   }

   @GetMapping
   public Flux<Account> findAll() {
      LOGGER.info("findAll");
      return repository.findAll();
   }

   @GetMapping("/{id}")
   public Mono<Account> findById(@PathVariable("id") String id) {
      LOGGER.info("findById: id={}", id);
      return repository.findById(id);
   }

   @PostMapping
   public Mono<Account> create(@RequestBody Account account) {
      LOGGER.info("create: {}", account);
      return repository.save(account);
   }
   
}

Here are the log entries for GET (empty body) and POST requests.

reactive-logging

We can customize the library's behaviour by overriding the default values of the logging.logstash.* configuration properties. Here's a typical configuration that enables the embedded Logstash appender, changes the default Logstash URL, includes the HTTP headers in the log entries, and skips logging of /favicon.ico requests.


logging.logstash:
  enabled: true
  url: 192.168.99.100:8500
  ignorePatterns: .*(favicon).*
  logHeaders: true

With the settings above, the logs are sent to the Logstash instance available at 192.168.99.100:8500.

logstash-1

Summary

The Spring Boot Logging library now supports logging for both the synchronous HTTP API built with Spring MVC and the reactive HTTP API built with Spring WebFlux. A detailed description of the library's configuration features may be found in my article Using logstash-logging-spring-boot-starter for logging with Spring Boot and Logstash. You can report bugs or propose new enhancements here: https://github.com/piomin/spring-boot-logging/issues. Any feedback is very welcome.

The post Reactive Logging With Spring WebFlux and Logstash appeared first on Piotr's TechBlog.

Using logstash-logging-spring-boot-starter for logging with Spring Boot and Logstash https://piotrminkowski.com/2019/10/02/using-logstash-logging-spring-boot-starter-for-logging-with-spring-boot-and-logstash/ https://piotrminkowski.com/2019/10/02/using-logstash-logging-spring-boot-starter-for-logging-with-spring-boot-and-logstash/#comments Wed, 02 Oct 2019 08:39:50 +0000 https://piotrminkowski.wordpress.com/?p=7270 I have already described some implementation details related to my library Spring Boot Logging Logstash starter for HTTP request/response logging in one of the previous articles Logging with Spring Boot and Elastic Stack. The article has been published some weeks ago, and since that time some important features have been added to this library. Today […]

I have already described some implementation details of my library Spring Boot Logging Logstash starter for HTTP request/response logging in one of my previous articles, Logging with Spring Boot and Elastic Stack. That article was published a few weeks ago, and since then some important features have been added to the library. Today I'm going to summarise those changes and describe all the features provided by the library.

First, let's begin with a brief review of the main features provided by the Spring Boot Logging Logstash starter:

  • it logs all incoming HTTP requests and outgoing HTTP responses with the full body, and sends those logs to Logstash with the proper tags
  • it calculates and stores the execution time of each request
  • it generates and propagates a correlationId for downstream service calls made with Spring RestTemplate or OpenFeign
  • it is an auto-configurable Spring Boot library – you don't have to do anything more than include it as a dependency in your application to make it work

1. Dependency

The first major change is that the library has been published on Maven Central, so you may now use it in your application by including the following dependency in your pom.xml. The current version of the library is 1.2.2.RELEASE.

<dependency>
  <groupId>com.github.piomin</groupId>
  <artifactId>logstash-logging-spring-boot-starter</artifactId>
  <version>1.2.2.RELEASE</version>
</dependency>

2. Enable Spring Boot logging

By default the library is enabled, but it tries to locate a Logback configuration inside your application for the Logstash appender settings. If no such appender is found, the library uses the Spring Boot default logging configuration, which does not include a Logstash appender. To force it to use the auto-configured appender definition inside the library, we have to set the property logging.logstash.enabled to true. After setting that property, the logs are sent to the auto-configured default address of Logstash, which is localhost:5000. To override it, we have to use the property logging.logstash.url. Generally, all the property names have been changed from spring.logging.* to logging.logstash.*, to meet the requirements for third-party libraries defined by the Spring Boot team.

logging.logstash:
  enabled: true
  url: 192.168.99.100:5000

3. Secure connection with Logstash

We can enable a secure connection on the client side by providing a trust store location. To do that we should define the property logging.logstash.trustStoreLocation and, if required, the trust store password in the property logging.logstash.trustStorePassword.

logging.logstash:
  trustStoreLocation: /tmp/location/sample.jks
  trustStorePassword: 123456

4. Filtering the logs

By default all incoming HTTP requests are intercepted and logged by the library. We can customize this behavior and filter out some unnecessary calls by declaring a matching pattern. A sample configuration is visible below. The following pattern filters out all requests with paths starting with actuator (to exclude healthcheck calls), swagger (only documentation – not required in the logs) or webjars.

logging:
  logstash:
    ignorePatterns: /(actuator|swagger|webjars).*
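As in the reactive variant shown earlier, the pattern is matched against the request path with Java's String.matches, so it has to cover the whole path. A quick standalone check of the pattern above:

```java
public class IgnorePatternDemo {
    public static void main(String[] args) {
        String ignorePatterns = "/(actuator|swagger|webjars).*";
        // matched paths are skipped by the logging filter
        System.out.println("/actuator/health -> " + "/actuator/health".matches(ignorePatterns)); // true
        System.out.println("/swagger-ui.html -> " + "/swagger-ui.html".matches(ignorePatterns)); // true
        // a business endpoint does not match and is still logged
        System.out.println("/persons/1 -> " + "/persons/1".matches(ignorePatterns)); // false
    }
}
```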

5. Logging HTTP headers

By default HTTP request and response headers are not logged. To enable it we have to set the property logging.logstash.logHeaders to true. After that change each log message will include a 'headers' section containing a key/value map with all HTTP headers.
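Expressed in the same YAML convention as the previous sections, this corresponds to:

```yaml
logging.logstash:
  logHeaders: true
```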

6. Fields exported to Logstash

If you set the property logging.logstash.enabled to true, the library sends all the MDC fields to Logstash. If you provide your own Logback configuration file, you should configure the Logstash appender to send all fields via the mdc tag, or define the list of fields to send. Here's the list of supported fields logged by the library:

  • X-Correlation-ID: correlates all the requests sent to downstream services within a single call of an endpoint
  • X-Request-ID: id generated for a single pair request/response
  • X-Response-Time: the time of request processing in milliseconds
  • X-Response-Status: the HTTP status of response
  • X-Operation-Name: the name of operation including controller name and method name
  • audit: used for marking logs generated by the library
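If you maintain your own Logback file, a minimal Logstash appender fragment that forwards these MDC fields might look like the sketch below — the destination address is just an example, and the provider names come from the logstash-logback-encoder library:

```xml
<appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
   <destination>localhost:5000</destination>
   <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
      <providers>
         <!-- forwards all MDC fields, e.g. X-Correlation-ID or X-Request-ID -->
         <mdc/>
         <logLevel/>
         <message/>
         <!-- forwards marker-based fields such as audit -->
         <logstashMarkers/>
      </providers>
   </encoder>
</appender>
```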

You can easily filter all the logs generated by the library using the audit field, as shown below.
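For example, in the Kibana search bar a simple Lucene query on that field is enough (assuming the field is indexed under the name audit):

```
audit:true
```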

spring-boot-logging-logstash-1

Here’s the screen showing all the fields sent by the library for a single record.

logstash-2

Summary

The source code repository with logstash-logging-spring-boot-starter is available on GitHub. You can check it out here: https://github.com/piomin/spring-boot-logging.git. You can report the bugs or propose new enhancements here: https://github.com/piomin/spring-boot-logging/issues. Any feedback would be very welcome. Enjoy 🙂

Logging with Spring Boot and Elastic Stack https://piotrminkowski.com/2019/05/07/logging-with-spring-boot-and-elastic-stack/ https://piotrminkowski.com/2019/05/07/logging-with-spring-boot-and-elastic-stack/#comments Tue, 07 May 2019 10:00:19 +0000 https://piotrminkowski.wordpress.com/?p=7128 In this article I’ll introduce my library for Spring Boot request and response logging designed especially for Spring Boot RESTful web application. The main assumptions regarding this library are: Logging all incoming HTTP requests and outgoing HTTP responses with full body Integration with Elastic Stack through Logstash using logstash-logback-encoder library Possibility for enabling logging on […]

In this article I'll introduce my library for Spring Boot request and response logging, designed especially for Spring Boot RESTful web applications. The main assumptions regarding this library are:

  • Logging all incoming HTTP requests and outgoing HTTP responses with full body
  • Integration with Elastic Stack through Logstash using logstash-logback-encoder library
  • Possibility for enabling logging on a client-side for most commonly used components in Spring Boot application: RestTemplate and OpenFeign
  • Generating and propagating correlationId across all communication within a single API endpoint call
  • Calculating and storing execution time for each request
  • A library should be auto-configurable – you don’t have to do anything more than including it as a dependency to your application to make it work

Motivation for request and response logging

I guess that after reading the preface to this article you may ask why I decided to build such a library, since Spring Boot already has such features. But does it really? It may be quite surprising, but the answer is no. While you may easily log HTTP requests using some built-in Spring components like CommonsRequestLoggingFilter, you don't have any out-of-the-box mechanism for logging the response body. Of course you may implement your own custom solution based on a Spring HTTP interceptor (HandlerInterceptorAdapter) or filter (OncePerRequestFilter), but that is not as simple as you might think. The second option is to use Zalando Logbook, an extensible Java library that enables complete request and response logging for different client-side and server-side technologies. It is a very interesting library dedicated especially to logging HTTP requests and responses, which provides many customization options and supports different clients. So, for more advanced use cases you may always use this library.
My goal is to create a simple library that not only logs requests and responses, but also provides auto-configuration for sending these logs to Logstash and correlating them. It will also automatically generate some valuable statistics, like the request processing time. All such values should be sent to Logstash. Let's proceed to the implementation.

Implementation of Spring Boot logging library

Let's start with the dependencies. We need some basic Spring libraries: spring-web, and spring-context, which provides some additional annotations. For the integration with Logstash we use the logstash-logback-encoder library. Slf4j contains the abstraction for logging, while javax.servlet-api is needed for HTTP communication. Commons IO is not required, but it offers some useful methods for manipulating input and output streams.

<properties>
   <java.version>11</java.version>
   <commons-io.version>2.6</commons-io.version>
   <javax-servlet.version>4.0.1</javax-servlet.version>
   <logstash-logback.version>5.3</logstash-logback.version>
   <spring.version>5.1.6.RELEASE</spring.version>
   <slf4j.version>1.7.26</slf4j.version>
</properties>
<dependencies>
   <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-context</artifactId>
      <version>${spring.version}</version>
   </dependency>
   <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-web</artifactId>
      <version>${spring.version}</version>
   </dependency>
   <dependency>
      <groupId>net.logstash.logback</groupId>
      <artifactId>logstash-logback-encoder</artifactId>
      <version>${logstash-logback.version}</version>
   </dependency>
   <dependency>
      <groupId>javax.servlet</groupId>
      <artifactId>javax.servlet-api</artifactId>
      <version>${javax-servlet.version}</version>
      <scope>provided</scope>
   </dependency>
   <dependency>
      <groupId>commons-io</groupId>
      <artifactId>commons-io</artifactId>
      <version>${commons-io.version}</version>
   </dependency>
   <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-api</artifactId>
      <version>${slf4j.version}</version>
   </dependency>
</dependencies>

The first step is to implement HTTP request and response wrappers. We have to do it because it is not possible to read the HTTP stream twice. If you would like to log the request or response body, you first have to read the input stream before processing it, or the output stream before returning it to the client. Spring provides implementations of HTTP request and response wrappers, but for unknown reasons they support only some specific use cases, like the content type application/x-www-form-urlencoded. Because we usually use the application/json content type in communication between RESTful applications, Spring's ContentCachingRequestWrapper and ContentCachingResponseWrapper won't be useful here.
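A quick stdlib-only illustration of the problem (a hypothetical JSON payload, no Spring involved): once a stream has been drained by a first consumer, a second read returns nothing, which is exactly why a wrapper has to cache the body.

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;

public class StreamReadTwiceDemo {
    public static void main(String[] args) throws Exception {
        InputStream body = new ByteArrayInputStream("{\"name\":\"test\"}".getBytes());
        // first consumer (e.g. a logging filter) drains the stream
        byte[] first = body.readAllBytes();
        // second consumer (e.g. the controller) gets nothing back
        byte[] second = body.readAllBytes();
        System.out.println("first=" + first.length + " second=" + second.length); // first=15 second=0
    }
}
```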
Here’s my implementation of the HTTP request wrapper. This can be done in various ways. This is just one of them:

public class SpringRequestWrapper extends HttpServletRequestWrapper {

    private byte[] body;

    public SpringRequestWrapper(HttpServletRequest request) {
        super(request);
        try {
            body = IOUtils.toByteArray(request.getInputStream());
        } catch (IOException ex) {
            body = new byte[0];
        }
    }

    @Override
    public ServletInputStream getInputStream() throws IOException {
        return new ServletInputStream() {
            public boolean isFinished() {
                return false;
            }

            public boolean isReady() {
                return true;
            }

            public void setReadListener(ReadListener readListener) {

            }

            ByteArrayInputStream byteArray = new ByteArrayInputStream(body);

            @Override
            public int read() throws IOException {
                return byteArray.read();
            }
        };
    }
}

The same thing has to be done for the output stream. This implementation is a little bit more complicated:

public class SpringResponseWrapper extends HttpServletResponseWrapper {

   private ServletOutputStream outputStream;
   private PrintWriter writer;
   private ServletOutputStreamWrapper copier;

   public SpringResponseWrapper(HttpServletResponse response) throws IOException {
      super(response);
   }

   @Override
   public ServletOutputStream getOutputStream() throws IOException {
      if (writer != null) {
         throw new IllegalStateException("getWriter() has already been called on this response.");
      }

      if (outputStream == null) {
         outputStream = getResponse().getOutputStream();
         copier = new ServletOutputStreamWrapper(outputStream);
      }

      return copier;
   }

   @Override
   public PrintWriter getWriter() throws IOException {
      if (outputStream != null) {
         throw new IllegalStateException("getOutputStream() has already been called on this response.");
      }

      if (writer == null) {
         copier = new ServletOutputStreamWrapper(getResponse().getOutputStream());
         writer = new PrintWriter(new OutputStreamWriter(copier, getResponse().getCharacterEncoding()), true);
      }

      return writer;
   }

   @Override
   public void flushBuffer() throws IOException {
      if (writer != null) {
         writer.flush();
      }
      else if (outputStream != null) {
         copier.flush();
      }
   }

   public byte[] getContentAsByteArray() {
      if (copier != null) {
         return copier.getCopy();
      }
      else {
         return new byte[0];
      }
   }

}

I moved the ServletOutputStream wrapper implementation out into a separate class:

public class ServletOutputStreamWrapper extends ServletOutputStream {

   private OutputStream outputStream;
   private ByteArrayOutputStream copy;

   public ServletOutputStreamWrapper(OutputStream outputStream) {
      this.outputStream = outputStream;
      this.copy = new ByteArrayOutputStream();
   }

   @Override
   public void write(int b) throws IOException {
      outputStream.write(b);
      copy.write(b);
   }

   public byte[] getCopy() {
      return copy.toByteArray();
   }

   @Override
   public boolean isReady() {
      return true;
   }

   @Override
   public void setWriteListener(WriteListener writeListener) {

   }
}

Because we need to wrap both the HTTP request stream and the response stream before processing, we should use an HTTP filter for that. Spring provides its own base implementation of an HTTP filter. Our filter extends it and uses the custom request and response wrappers to log payloads. Additionally, it generates and sets the X-Request-ID and X-Correlation-ID headers, and measures the request processing time.

public class SpringLoggingFilter extends OncePerRequestFilter {

    private static final Logger LOGGER = LoggerFactory.getLogger(SpringLoggingFilter.class);
    private UniqueIDGenerator generator;

    public SpringLoggingFilter(UniqueIDGenerator generator) {
        this.generator = generator;
    }

    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response, FilterChain chain) throws ServletException, IOException {
        generator.generateAndSetMDC(request);
        final long startTime = System.currentTimeMillis();
        final SpringRequestWrapper wrappedRequest = new SpringRequestWrapper(request);
        LOGGER.info("Request: method={}, uri={}, payload={}", wrappedRequest.getMethod(),
                wrappedRequest.getRequestURI(), IOUtils.toString(wrappedRequest.getInputStream(),
                wrappedRequest.getCharacterEncoding()));
        final SpringResponseWrapper wrappedResponse = new SpringResponseWrapper(response);
        wrappedResponse.setHeader("X-Request-ID", MDC.get("X-Request-ID"));
        wrappedResponse.setHeader("X-Correlation-ID", MDC.get("X-Correlation-ID"));
        chain.doFilter(wrappedRequest, wrappedResponse);
        final long duration = System.currentTimeMillis() - startTime;
        LOGGER.info("Response({} ms): status={}, payload={}", value("X-Response-Time", duration),
                value("X-Response-Status", wrappedResponse.getStatus()),
                IOUtils.toString(wrappedResponse.getContentAsByteArray(), wrappedResponse.getCharacterEncoding()));
    }
}
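The UniqueIDGenerator used by the filter is not listed in this article. Purely as an illustration of the idea — the real class writes to SLF4J's MDC, for which a plain map stands in here, and the header-reuse logic is my assumption — it could be sketched like this:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Hypothetical sketch: the real generator stores these values in SLF4J's MDC
// so the Logstash encoder can attach them to every log line of the request.
public class UniqueIdGeneratorSketch {

    private final Map<String, String> mdc = new HashMap<>();

    // incomingCorrelationId stands for the X-Correlation-ID request header, if present
    public void generateAndSetMDC(String incomingCorrelationId) {
        // a fresh id for every request/response pair
        mdc.put("X-Request-ID", UUID.randomUUID().toString());
        // reuse the caller's correlation id so all downstream calls share it
        mdc.put("X-Correlation-ID",
                incomingCorrelationId != null ? incomingCorrelationId : UUID.randomUUID().toString());
    }

    public String get(String key) {
        return mdc.get(key);
    }

    public static void main(String[] args) {
        UniqueIdGeneratorSketch gen = new UniqueIdGeneratorSketch();
        gen.generateAndSetMDC("abc-123");
        System.out.println("correlation=" + gen.get("X-Correlation-ID")); // correlation=abc-123
        gen.generateAndSetMDC(null);
        System.out.println("generated=" + (gen.get("X-Correlation-ID") != null)); // generated=true
    }
}
```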

Auto-configuration

Once we have finished the implementation of the wrappers and the HTTP filter, we may prepare the auto-configuration for our library. The first step is to create a @Configuration class that contains all the required beans. We have to register our custom HTTP filter SpringLoggingFilter, the logger appender for the integration with Logstash, and a RestTemplate with an HTTP client interceptor:

@Configuration
public class SpringLoggingAutoConfiguration {

   private static final String LOGSTASH_APPENDER_NAME = "LOGSTASH";

   @Value("${spring.logstash.url:localhost:8500}")
   String url;
   @Value("${spring.application.name:-}")
   String name;

   @Bean
   public UniqueIDGenerator generator() {
      return new UniqueIDGenerator();
   }

   @Bean
   public SpringLoggingFilter loggingFilter() {
      return new SpringLoggingFilter(generator());
   }

   @Bean
   public RestTemplate restTemplate() {
      RestTemplate restTemplate = new RestTemplate();
      List<ClientHttpRequestInterceptor> interceptorList = new ArrayList<ClientHttpRequestInterceptor>();
      restTemplate.setInterceptors(interceptorList);
      return restTemplate;
   }

   @Bean
   public LogstashTcpSocketAppender logstashAppender() {
      LoggerContext loggerContext = (LoggerContext) LoggerFactory.getILoggerFactory();
      LogstashTcpSocketAppender logstashTcpSocketAppender = new LogstashTcpSocketAppender();
      logstashTcpSocketAppender.setName(LOGSTASH_APPENDER_NAME);
      logstashTcpSocketAppender.setContext(loggerContext);
      logstashTcpSocketAppender.addDestination(url);
      LogstashEncoder encoder = new LogstashEncoder();
      encoder.setContext(loggerContext);
      encoder.setIncludeContext(true);
      encoder.setCustomFields("{\"appname\":\"" + name + "\"}");
      encoder.start();
      logstashTcpSocketAppender.setEncoder(encoder);
      logstashTcpSocketAppender.start();
      loggerContext.getLogger(Logger.ROOT_LOGGER_NAME).addAppender(logstashTcpSocketAppender);
      return logstashTcpSocketAppender;
   }

}

The configuration set inside the library has to be loaded by Spring Boot. Spring Boot checks for the presence of a META-INF/spring.factories file within your published jar. The file should list your configuration classes under the EnableAutoConfiguration key:


org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
pl.piomin.logging.config.SpringLoggingAutoConfiguration

Integration with Logstash

Integration with Logstash is realized through the auto-configured logging appender. We can override the Logstash destination URL by setting the property spring.logstash.url in the application.yml file:

spring:
  application:
    name: sample-app
  logstash:
    url: 192.168.99.100:5000

To enable all the features described in this article in your application, you just need to include my library in the dependencies:

<dependency>
   <groupId>pl.piomin</groupId>
   <artifactId>spring-boot-logging</artifactId>
   <version>1.0-SNAPSHOT</version>
</dependency>

Before running your application you should start Elastic Stack tools on your machine. The best way to do that is through Docker containers. But first let’s create a Docker network to enable communication between containers via container name.


$ docker network create es

Now, let's start a single-node instance of Elasticsearch exposed on port 9200. I use version 6.7.2 of the Elastic Stack tools:


$ docker run -d --name elasticsearch --net es -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" elasticsearch:6.7.2

When running Logstash we need to provide an additional configuration that contains the input and output definitions. We start a TCP input with the JSON codec, which is not enabled by default. The Elasticsearch URL is set as the output. It will also create an index containing the name of the application.

input {
  tcp {
    port => 5000
    codec => json
  }
}
output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "micro-%{appname}"
  }
}

Now we can start Docker container with Logstash. It is exposed on port 5000 and reads configuration from logstash.conf file:


$ docker run -d --name logstash --net es -p 5000:5000 -v ~/logstash.conf:/usr/share/logstash/pipeline/logstash.conf docker.elastic.co/logstash/logstash:6.7.2

Finally, we can run Kibana, which is used just for displaying the logs:

$ docker run -d --name kibana --net es -e "ELASTICSEARCH_URL=http://elasticsearch:9200" -p 5601:5601 docker.elastic.co/kibana/kibana:6.7.2

After starting my sample application that uses spring-boot-logging library the logs from POST requests are displayed in Kibana as shown below:

logging-1

Each entry with response log contains X-Correlation-ID, X-Request-ID, X-Response-Time and X-Response-Status headers.

spring-boot-logging-example-2

Summary

My Spring logging library is available on GitHub in the repository https://github.com/piomin/spring-boot-logging.git. I'm still working on it, so any feedback or suggestions are very welcome. This library is dedicated for use in a microservices-based architecture, where your applications may be launched in many instances inside containers. In this model, storing logs in files does not make any sense. That's why integration with the Elastic Stack is very important.
But the most important feature of this library is logging the HTTP request/response with the full body, together with some additional information like the correlation id or the request processing time. The library is really simple and small, and everything works out-of-the-box after including it in your application.

Part 2: Creating microservices – monitoring with Spring Cloud Sleuth, ELK and Zipkin https://piotrminkowski.com/2017/04/05/part-2-creating-microservices-monitoring-with-spring-cloud-sleuth-elk-and-zipkin/ https://piotrminkowski.com/2017/04/05/part-2-creating-microservices-monitoring-with-spring-cloud-sleuth-elk-and-zipkin/#respond Wed, 05 Apr 2017 08:36:44 +0000 https://piotrminkowski.wordpress.com/?p=2145 One of the most frequently mentioned challenges related to the creation of microservices-based architecture is monitoring. Each microservice should be run on an environment isolated from the other microservices, so it does not share resources such as databases or log files with them. However, the essential requirement for microservices architecture is relatively easy to access […]

One of the most frequently mentioned challenges related to the creation of a microservices-based architecture is monitoring. Each microservice should be run in an environment isolated from the other microservices, so it does not share resources such as databases or log files with them. However, an essential requirement for a microservices architecture is relatively easy access to the call history, including the ability to look through the request propagation between multiple microservices. Grepping the logs is not the right solution for that problem. There are some helpful tools that can be used when creating microservices with the Spring Boot and Spring Cloud frameworks.

Spring Cloud Sleuth – a library available as part of the Spring Cloud project. It lets you track the progress of subsequent microservices by adding the appropriate headers to the HTTP requests. The library is based on the MDC (Mapped Diagnostic Context) concept, where you can easily extract values put into the context and display them in the logs.

Zipkin – a distributed tracing system that helps to gather timing data for every request propagated between independent services. It has a simple management console where we can find a visualization of the time statistics generated by subsequent services.

ELK – Elasticsearch, Logstash, Kibana: three different tools usually used together for searching, analyzing, and visualizing log data in real time.

Probably many of you, even if you have not had any contact with Java or microservices before, have heard about Logstash and Kibana. For example, if you look at hub.docker.com, among the most popular images you will find the ones for the above tools. In our example, we will use exactly those images. Let's begin by running the container with Elasticsearch.

docker run -d -it --name es -p 9200:9200 -p 9300:9300 elasticsearch

Then we can run the Kibana container and link it to the Elasticsearch.

docker run -d -it --name kibana --link es:elasticsearch -p 5601:5601 kibana

In the end, we will start Logstash with the input and output declared. As the input, we declare TCP, which is compatible with the LogstashTcpSocketAppender used as a logging appender in our sample application. As the output, Elasticsearch has been declared. Each microservice will be indexed under its name with the micro prefix.

docker run -d -it --name logstash -p 5000:5000 logstash -e 'input { tcp { port => 5000 codec => "json" } } output { elasticsearch { hosts => ["192.168.99.100"] index => "micro-%{serviceName}"} }'

Now we can take a look at the sample microservices. This post is a continuation of my previous article Part 1: Creating microservice using Spring Cloud, Eureka and Zuul. The architecture and exposed services are the same as in the previous sample. The source code is available on GitHub (branch logstash). As mentioned before, we will use the Logback library for sending log data to Logstash. In addition to the three Logback dependencies, we also add libraries for Zipkin integration and the Spring Cloud Sleuth starter. Here's a fragment of pom.xml for a microservice.

<dependency>
   <groupId>org.springframework.cloud</groupId>
   <artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>
<dependency>
   <groupId>org.springframework.cloud</groupId>
   <artifactId>spring-cloud-sleuth-zipkin</artifactId>
</dependency>
<dependency>
   <groupId>net.logstash.logback</groupId>
   <artifactId>logstash-logback-encoder</artifactId>
   <version>4.9</version>
</dependency>
<dependency>
   <groupId>ch.qos.logback</groupId>
   <artifactId>logback-classic</artifactId>
   <version>1.2.3</version>
</dependency>
<dependency>
   <groupId>ch.qos.logback</groupId>
   <artifactId>logback-core</artifactId>
   <version>1.2.3</version>
</dependency>
 

There is also a Logback configuration file in the src/main/resources directory. Here's a logback.xml fragment. We can configure which logging fields are sent to Logstash by declaring tags such as mdc, logLevel, and message. We also append a service name field, which is used for Elasticsearch index creation.

<appender name="STASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
   <destination>192.168.99.100:5000</destination>

   <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
      <providers>
         <mdc />
         <context />
         <logLevel />
         <loggerName />

         <patterns>
            <pattern>
               {
                  "serviceName": "account-service"
               }
            </pattern>
         </patterns>

         <threadName />
         <message />
         <logstashMarkers />
         <stackTrace />
      </providers>
   </encoder>
</appender>
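Declaring the appender alone is not enough – it also has to be referenced from a logger in the same logback.xml, typically the root logger, or nothing will be shipped. A minimal registration (the INFO level here is just an example) looks like this:

```xml
<root level="INFO">
   <appender-ref ref="STASH" />
</root>
```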
 

The configuration of Spring Cloud Sleuth is very simple. We only have to add the spring-cloud-starter-sleuth dependency to pom.xml and declare a sampler @Bean. In the sample I declared AlwaysSampler, which exports every span, but there is also another option – PercentageBasedSampler, which samples a fixed fraction of spans.

@Bean
public AlwaysSampler defaultSampler() {
   return new AlwaysSampler();
}
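To illustrate the difference between the two samplers, here is a minimal, self-contained sketch of the counter-based idea behind fraction sampling. This is only an illustration of the concept – it is not Sleuth's actual PercentageBasedSampler implementation, and the class name is made up for this example:

```java
// Illustration only: keeps a fixed fraction of spans using a rolling
// per-hundred counter, similar in spirit to a percentage-based sampler.
public class FractionSampler {

    private final int keepPerHundred;
    private int counter = 0;

    public FractionSampler(double fraction) {
        // e.g. fraction = 0.1 keeps 10 out of every 100 spans
        this.keepPerHundred = (int) Math.round(fraction * 100);
    }

    public synchronized boolean isSampled() {
        boolean sampled = counter < keepPerHundred;
        counter = (counter + 1) % 100;
        return sampled;
    }

    public static void main(String[] args) {
        FractionSampler sampler = new FractionSampler(0.1);
        int kept = 0;
        for (int i = 0; i < 100; i++) {
            if (sampler.isSampled()) {
                kept++;
            }
        }
        // keeps exactly 10 of the first 100 spans
        System.out.println("Kept " + kept + " of 100 spans");
    }
}
```

An AlwaysSampler corresponds to a fraction of 1.0 – every span is exported, which is fine for a demo but usually too much data under production traffic.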
 

After starting the ELK Docker containers, we need to run our microservices. There are five Spring Boot applications that need to be run: discovery-service, account-service, customer-service, gateway-service, and zipkin-service. After launching all of them, we can try calling some services, for example http://localhost:8765/api/customer/customers/{id}, which causes calls to both the customer and account services. All logs will be stored in Elasticsearch under the micro-%{serviceName} index. They can be searched in Kibana using the micro-* index pattern. Index patterns are created in Kibana under Management -> Index Patterns. Kibana is available at http://192.168.99.100:5601. On first run we will be prompted for an index pattern, so let's type micro-*. Under the Discover section we can take a look at all logs matching the typed pattern, with a timeline visualization.

kibana2

Kibana is an intuitive and user-friendly tool. I will not describe in detail how to use Kibana, because you can easily find out by yourself by reading the documentation or just clicking through the UI. The most important thing is being able to search for logs by filtering criteria. The picture below shows an example of searching logs by the X-B3-TraceId field, which is added to the request header by Spring Cloud Sleuth. Sleuth also adds X-B3-SpanId for marking the request for a single microservice. We can select which fields are displayed in the result list – in this sample I selected message and serviceName, as you can see in the left pane of the picture.

kibana1

Here's a picture with the details of a single request, visible after expanding a log row.

kibana3

Spring Cloud Sleuth also sends statistics to Zipkin. That is a different kind of data than the logs stored in Elasticsearch: timing statistics for each request. The Zipkin UI is really simple. You can filter the requests by criteria such as time, service name, or endpoint name. Here's a picture with the same requests that were visualized with Kibana: http://localhost:8765/api/customer/customers/{id}.

zipkin-1

We can always see the details of each request by clicking on it. Then you see a picture similar to the one below. In the beginning, the request was processed on the API gateway. Then the gateway discovered the customer service on the Eureka server and called it. The customer service in turn had to discover the account service and then call it. In this view, you can easily find out which operation is the most time-consuming.

zipkin-3

The post Part 2: Creating microservices – monitoring with Spring Cloud Sleuth, ELK and Zipkin appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2017/04/05/part-2-creating-microservices-monitoring-with-spring-cloud-sleuth-elk-and-zipkin/feed/ 0 2145
How to ship logs with Logstash, Elasticsearch and RabbitMQ https://piotrminkowski.com/2017/02/03/how-to-ship-logs-with-logstash-elasticsearch-and-rabbitmq/ https://piotrminkowski.com/2017/02/03/how-to-ship-logs-with-logstash-elasticsearch-and-rabbitmq/#respond Fri, 03 Feb 2017 09:01:18 +0000 https://piotrminkowski.wordpress.com/?p=18 The architecture of our solution for shipping application logs to Logstash is visible in the picture below. We’ll start from sample Spring Boot application shipping logs to RabbitMQ exchange. Then using Docker, we’ll configure the environment containing RabbitMQ, Logstash, Elasticsearch, and Kibana – each running on separated Docker container. My sample Java application is available […]

The post How to ship logs with Logstash, Elasticsearch and RabbitMQ appeared first on Piotr's TechBlog.

]]>
The architecture of our solution for shipping application logs to Logstash is visible in the picture below. We'll start with a sample Spring Boot application shipping logs to a RabbitMQ exchange. Then, using Docker, we'll configure the environment containing RabbitMQ, Logstash, Elasticsearch, and Kibana – each running in a separate Docker container.

(architecture diagram: Spring Boot application → RabbitMQ → Logstash → Elasticsearch → Kibana)

My sample Java application is available at https://github.com/piomin/sample-amqp-logging.git.
There are only two Spring Boot dependencies needed inside pom.xml: the first for the REST controller and the second for AMQP support.

<dependencies>
   <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-data-rest</artifactId>
   </dependency>
   <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-amqp</artifactId>
   </dependency>
</dependencies>

Here’s a simple controller with one logging message.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class Controller {

   protected Logger logger = LoggerFactory.getLogger(Controller.class);

   @RequestMapping("/hello/{param}")
   public String hello(@PathVariable("param") String param) {
      logger.info("Controller.hello({})", param);
      return "Hello";
   }

}

I use Logback as the logger implementation and the Spring AMQP appender for sending logs to RabbitMQ over the AMQP protocol.

<appender name="AMQP" class="org.springframework.amqp.rabbit.logback.AmqpAppender">
   <layout>
      <pattern>
         {
            "time": "%date{ISO8601}",
            "thread": "%thread",
            "level": "%level",
            "class": "%logger{36}",
            "message": "%message"
         }
      </pattern>
   </layout>

   <!-- RabbitMQ connection -->
   <host>192.168.99.100</host>
   <port>30000</port>
   <username>guest</username>
   <password>guest</password>

   <applicationId>api-service-4</applicationId>
   <routingKeyPattern>api-service-4</routingKeyPattern>
   <declareExchange>true</declareExchange>
   <exchangeType>direct</exchangeType>
   <exchangeName>ex_logstash</exchangeName>

   <generateId>true</generateId>
   <charset>UTF-8</charset>
   <durable>true</durable>
   <deliveryMode>PERSISTENT</deliveryMode>
</appender>
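With the layout above, each log event published to the exchange is a plain JSON string. For the controller call shown earlier, a message might look roughly like this (the values are illustrative, not captured output):

```
{
   "time": "2017-02-03 09:01:18,123",
   "thread": "http-nio-8080-exec-1",
   "level": "INFO",
   "class": "Controller",
   "message": "Controller.hello(test)"
}
```

This is the format Logstash will later pick up from the q_logstash queue and forward to Elasticsearch.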

I run the RabbitMQ server using the Docker image https://hub.docker.com/_/rabbitmq/. Here's the docker command for it. I chose the rabbitmq:management image to expose the RabbitMQ UI management console on port 30001. After running this command we can go to the management console, available at 192.168.99.100:30001. There we have to create a queue named q_logstash and a direct exchange named ex_logstash with a binding routed to the q_logstash queue.

$ docker run -d -it --name rabbit --hostname rabbit -p 30000:5672 -p 30001:15672 rabbitmq:management
RabbitMQ management console with exchange and queue binding

Then we run the Elasticsearch and Kibana Docker images. The Kibana container needs to be linked to Elasticsearch.

$ docker run -d -it --name es -p 9200:9200 -p 9300:9300 elasticsearch
$ docker run -d -it --name kibana --link es:elasticsearch -p 5601:5601 kibana

Finally, we can run the Logstash Docker image, which takes the RabbitMQ queue as input and the Elasticsearch API as output. We have to set the host to the Docker machine's default address and the port configured when running the RabbitMQ container. Since our queue is durable, we also have to set durable => true, because the default value is false, as described in this reference: https://www.elastic.co/guide/en/logstash/current/plugins-inputs-rabbitmq.html

$ docker run -d -it --name logstash logstash -e 'input { rabbitmq {
host => "192.168.99.100" port => 30000 durable => true } }
output { elasticsearch { hosts => ["192.168.99.100"] } }'

After running all the Docker containers for RabbitMQ, Logstash, Elasticsearch, and Kibana, we can run our sample Spring Boot application and see the logs in Kibana, available at http://192.168.99.100:5601.

The post How to ship logs with Logstash, Elasticsearch and RabbitMQ appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2017/02/03/how-to-ship-logs-with-logstash-elasticsearch-and-rabbitmq/feed/ 0 18