Spring MVC Archives - Piotr's TechBlog
https://piotrminkowski.com/tag/spring-mvc/
Java, Spring, Kotlin, microservices, Kubernetes, containers

Spring Boot Tips, Tricks and Techniques
https://piotrminkowski.com/2021/01/13/spring-boot-tips-tricks-and-techniques/
Wed, 13 Jan 2021

In this article, I will show you some tips and tricks that will help you build Spring Boot applications efficiently. I hope you will find tips and techniques here that boost your productivity in Spring Boot development. Of course, this is my private list of favorite features. You may find others by yourself, for example on the Spring “How-to” Guides site.

I have already published all these Spring Boot tips on Twitter in the graphical form visible below. You may find them using the #SpringBootTip hashtag. I’m a huge fan of Spring Boot, so if you have suggestions or your own favorite features, just ping me on Twitter (@piotr_minkowski). I will definitely retweet your tweet 🙂

spring-boot-tips

Source Code

If you would like to try it by yourself, you can always take a look at my source code. In order to do that, you need to clone my GitHub repository. Then execute the command mvn clean package spring-boot:run to build and run the sample application. The application uses an embedded H2 database and exposes a REST API. Of course, it demonstrates all the features described in this article. If you have any suggestions, don’t be afraid to create a pull request!

Tip 1. Use a random HTTP port in tests

Let’s begin with some Spring Boot testing tips. You should not use a static port in your Spring Boot tests. In order to set this option for a particular test, use the webEnvironment field in @SpringBootTest. So, instead of DEFINED_PORT, provide the RANDOM_PORT value. Then, you can inject the port number into the test with the @LocalServerPort annotation.

@SpringBootTest(webEnvironment = 
   SpringBootTest.WebEnvironment.RANDOM_PORT)
public class AppTest {

   @LocalServerPort
   private int port;

   @Test
   void test() {
      Assertions.assertTrue(port > 0);
   }
}

Tip 2. Use @DataJpaTest to test the JPA layer

Typically, for integration testing, you probably use @SpringBootTest to annotate the test class. The problem with it is that it starts the whole application context, which in turn increases the total time required to run your tests. Instead, you may use @DataJpaTest, which starts only the JPA components and @Repository beans. By default, it logs SQL queries, so a good idea is to disable that with the showSql field. Moreover, if you want to include beans annotated with @Service or @Component in the test, you may use the @Import annotation.

@DataJpaTest(showSql = false)
@Import(TipService.class)
public class TipsControllerTest {

    @Autowired
    private TipService tipService;

    @Test
    void testFindAll() {
        List<Tip> tips = tipService.findAll();
        Assertions.assertEquals(3, tips.size());
    }
}

Be careful with changing test annotations if you have multiple integration tests in your application. Since such a change modifies the global state of your application context, it may prevent that context from being reused between your tests. You can read more about it in the following article by Philip Riecks.

Tip 3. Rollback transaction after each test

Let’s begin with an embedded, in-memory database. In general, you should roll back all changes performed during each test. The changes made during one test should not influence the result of another test. However, don’t try to roll back such changes manually! For example, you should not remove a new entity added during the test, as shown below.

 public void testAdd() {
     Tip tip = tipRepository.save(new Tip(null, "Tip1", "Desc1"));
     Assertions.assertNotNull(tip);
     tipRepository.deleteById(tip.getId());
 }

Spring Boot comes with a very handy solution for that case. You just need to annotate the test class with @Transactional. Rollback is the default behavior in test mode, so nothing else is required here. But remember – it works only for transactions started on the client’s side, by the test itself. If your application performs a transaction on the server’s side, it will not be rolled back.

@SpringBootTest
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
@Transactional
public class TipsRepositoryTest {

    @Autowired
    private TipRepository tipRepository;

    @Test
    @Order(1)
    public void testAdd() {
        Tip tip = tipRepository.save(new Tip(null, "Tip1", "Desc1"));
        Assertions.assertNotNull(tip);
    }

    @Test
    @Order(2)
    public void testFindAll() {
        Iterable<Tip> tips = tipRepository.findAll();
        Assertions.assertEquals(0, ((List<Tip>) tips).size());
    }
}

In some cases, you won’t use an in-memory, embedded database in your tests. For example, if you have a complex data structure, you may want to inspect the committed data instead of debugging when your tests fail. In that case, you need to use an external database and commit the data after each test, and you should then start each test run with a cleanup.

Tip 4. Multiple Spring Conditions with logical “OR”

What if you would like to define multiple conditions with @Conditional on a Spring bean? By default, Spring Boot combines all defined conditions with a logical “AND”. In the example code visible below, the target bean is available only if MyBean1 and MyBean2 exist and the property multipleBeans.enabled is defined.

@Bean
@ConditionalOnProperty("multipleBeans.enabled")
@ConditionalOnBean({MyBean1.class, MyBean2.class})
public MyBean myBean() {
   return new MyBean();
}

In order to define multiple “OR” conditions, you need to create a class that extends AnyNestedCondition and put all your conditions there. Then you should use that class with the @Conditional annotation, as shown below.

public class MyBeansOrPropertyCondition extends AnyNestedCondition {

    public MyBeansOrPropertyCondition() {
        super(ConfigurationPhase.REGISTER_BEAN);
    }

    @ConditionalOnBean(MyBean1.class)
    static class MyBean1ExistsCondition {}

    @ConditionalOnBean(MyBean2.class)
    static class MyBean2ExistsCondition {}

    @ConditionalOnProperty("multipleBeans.enabled")
    static class MultipleBeansPropertyExists {}
}

@Bean
@Conditional(MyBeansOrPropertyCondition.class)
public MyBean myBean() {
   return new MyBean();
}

Tip 5. Inject Maven data into an application

You may choose between two options for injecting Maven build data into an application. Firstly, you can use a special placeholder with the project prefix and the @ delimiter in the application.properties file.

maven.app=@project.artifactId@:@project.version@

Then, you just need to inject the property into the application using the @Value annotation.

@SpringBootApplication
public class TipsApp {
   @Value("${maven.app}")
   private String name;
}
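Note that the @...@ placeholder is resolved by Maven resource filtering, which spring-boot-starter-parent enables by default for application.properties. If you build on a custom parent, you may need to enable the filtering yourself – a sketch using standard Maven configuration (paths are assumptions based on a typical layout):

```xml
<build>
   <resources>
      <resource>
         <directory>src/main/resources</directory>
         <!-- replace placeholders in resources with Maven project values -->
         <filtering>true</filtering>
      </resource>
   </resources>
</build>
```

With a custom parent you would also have to configure the maven-resources-plugin to use @ as the placeholder delimiter, which spring-boot-starter-parent does for you out of the box.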

On the other hand, you may use the BuildProperties bean, as shown below. It exposes the data available in the build-info.properties file.

@SpringBootApplication
public class TipsApp {

   @Autowired
   private BuildProperties buildProperties;

   @PostConstruct
   void init() {
      log.info("Maven properties: {}, {}", 
	     buildProperties.getArtifact(), 
	     buildProperties.getVersion());
   }
}

In order to generate build-info.properties, you need to execute the build-info goal provided by the Spring Boot Maven Plugin.

$ mvn package spring-boot:build-info
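The goal writes the file to target/classes/META-INF/build-info.properties. Its content looks roughly like this (the values below are illustrative, not taken from the sample project):

```properties
build.artifact=spring-boot-tips
build.group=pl.piomin.services
build.name=spring-boot-tips
build.time=2021-01-13T10\:15\:30.000Z
build.version=1.0-SNAPSHOT
```

These are exactly the entries that the BuildProperties bean exposes through getters such as getArtifact() and getVersion().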

Tip 6. Inject Git data into an application

Sometimes, you may want to access Git data inside your Spring Boot application. In order to do that, you first need to add the git-commit-id-plugin to your Maven plugins. During the build, it generates a git.properties file.

<plugin>
   <groupId>pl.project13.maven</groupId>
   <artifactId>git-commit-id-plugin</artifactId>
   <configuration>
      <failOnNoGitDirectory>false</failOnNoGitDirectory>
   </configuration>
</plugin>
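The generated git.properties file typically contains entries such as the following (the values here are made up for illustration):

```properties
git.branch=master
git.commit.id=d1a2b3c4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b0
git.commit.id.abbrev=d1a2b3c
git.commit.time=2021-01-13T10\:15\:30+0100
git.commit.user.name=piomin
```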

Finally, you may inject the content of the git.properties file into the application using the GitProperties bean.

@SpringBootApplication
public class TipsApp {

   @Autowired
   private GitProperties gitProperties;

   @PostConstruct
   void init() {
      log.info("Git properties: {}, {}", 
	     gitProperties.getCommitId(), 
	     gitProperties.getCommitTime());
   }
}

Tip 7. Insert initial non-production data

Sometimes, you need to insert some data on application startup for demo purposes. You can also use such an initial data set to test your application manually during development. To achieve this, you just need to put a data.sql file on the classpath. Typically, you will place it somewhere inside the src/main/resources directory. You can then easily filter such a file out during a non-dev build.

insert into tip(title, description) values ('Test1', 'Desc1');
insert into tip(title, description) values ('Test2', 'Desc2');
insert into tip(title, description) values ('Test3', 'Desc3');
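One way to filter the file out of a non-dev build is a Maven profile that excludes it from the packaged resources – a sketch, assuming a profile named prod (the profile name is mine, not from the sample project):

```xml
<profiles>
   <profile>
      <id>prod</id>
      <build>
         <resources>
            <resource>
               <directory>src/main/resources</directory>
               <excludes>
                  <!-- keep the demo data out of the production artifact -->
                  <exclude>data.sql</exclude>
               </excludes>
            </resource>
         </resources>
      </build>
   </profile>
</profiles>
```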

However, if you need to generate a large data set, or you are just not convinced by the data.sql solution, you can insert the data programmatically. In that case, it is important to activate the feature only for a specific profile.

@Profile("demo")
@Component
public class ApplicationStartupListener implements 
      ApplicationListener<ApplicationReadyEvent> {

   @Autowired
   private TipRepository repository;

   @Override
   public void onApplicationEvent(final ApplicationReadyEvent event) {
      repository.save(new Tip("Test1", "Desc1"));
      repository.save(new Tip("Test2", "Desc2"));
      repository.save(new Tip("Test3", "Desc3"));
   }
}

Tip 8. Configuration properties instead of @Value

You should not use @Value for injection if you have multiple properties with the same prefix (e.g. app). Instead, use @ConfigurationProperties with constructor injection. You can mix it with the Lombok @AllArgsConstructor and @Getter annotations.

@ConfigurationProperties("app")
@AllArgsConstructor
@Getter
@ToString
public class TipsAppProperties {
    private final String name;
    private final String version;
}
@SpringBootApplication
public class TipsApp {

    @Autowired
    private TipsAppProperties properties;
	
}
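For the class above, the matching entries in application.properties would look like this (the values are placeholders of my own choosing):

```properties
app.name=spring-boot-tips
app.version=1.0
```

Depending on your Spring Boot version, you may also need to register and bind the class explicitly, e.g. with @ConfigurationPropertiesScan or @EnableConfigurationProperties(TipsAppProperties.class), and with @ConstructorBinding for constructor-based binding on Spring Boot 2.x.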

Tip 9. Error handling with Spring MVC

Spring MVC exception handling is very important to make sure you are not leaking server exceptions to the client. Currently, there are two recommended approaches to handling exceptions. In the first of them, you use a global error handler with the @ControllerAdvice and @ExceptionHandler annotations. Obviously, a good practice is to handle all the business exceptions thrown by your application and assign HTTP codes to them. By default, Spring MVC returns the HTTP 500 code for an unhandled exception.

@ControllerAdvice
public class TipNotFoundHandler {

    @ResponseStatus(HttpStatus.NO_CONTENT)
    @ExceptionHandler(NoSuchElementException.class)
    public void handleNotFound() {

    }
}

You can also handle an exception locally inside the controller method. In that case, you just need to throw a ResponseStatusException with a particular HTTP code.

@GetMapping("/{id}")
public Tip findById(@PathVariable("id") Long id) {
   try {
      return repository.findById(id).orElseThrow();
   } catch (NoSuchElementException e) {
      log.error("Not found", e);
      throw new ResponseStatusException(HttpStatus.NO_CONTENT);
   }
}

Tip 10. Ignore not existing config file

In general, the application should not fail to start if a configuration file does not exist, especially since you can set default values for the properties. Since the default behavior of a Spring application is to fail on startup when a configuration file is missing, you need to change it: set the spring.config.on-not-found property to ignore.

$ java -jar target/spring-boot-tips.jar \
   --spring.config.additional-location=classpath:/add.properties \
   --spring.config.on-not-found=ignore

There is another handy solution to avoid startup failure. You can use the optional: prefix in the config file location, as shown below.

$ java -jar target/spring-boot-tips.jar \
   --spring.config.additional-location=optional:classpath:/add.properties

Tip 11. Different levels of configuration

You can change the default location of the Spring configuration files with the spring.config.location property. The priority of the property sources is determined by the order of the files in the list – the last one is the most significant. This feature allows you to define different levels of configuration, starting from general settings down to the most application-specific ones. So, let’s assume we have a global configuration file with the content visible below.

property1=Global property1
property2=Global property2

We also have an application-specific configuration file, shown below. It contains a property with the same name as a property in the global configuration file.

property1=App specific property1

And here’s a JUnit test that verifies that feature.

@SpringBootTest(properties = {
    "spring.config.location=classpath:/global.properties,classpath:/app.properties"
})
public class TipsAppTest {

    @Value("${property1}")
    private String property1;
    @Value("${property2}")
    private String property2;
    
    @Test
    void testProperties() {
        Assertions.assertEquals("App specific property1", property1);
        Assertions.assertEquals("Global property2", property2);
    }
}

Tip 12. Deploy Spring Boot on Kubernetes

With the Dekorate project, you don’t have to create any Kubernetes YAML manifests manually. Firstly, you need to include the io.dekorate:kubernetes-spring-starter dependency. Then you can use annotations like @KubernetesApplication to add new parameters to the generated YAML manifest or to override the defaults.

@SpringBootApplication
@KubernetesApplication(replicas = 2,
    envVars = { 
       @Env(name = "propertyEnv", 
            value = "Hello from env!"
       ),
       @Env(name = "propertyFromMap", 
            value = "property1", 
            configmap = "sample-configmap"
       ) 
    },
    expose = true,
    ports = @Port(name = "http", containerPort = 8080),
    labels = @Label(key = "version", value = "v1")
)
@JvmOptions(server = true, xmx = 256, gc = GarbageCollector.SerialGC)
public class TipsApp {

    public static void main(String[] args) {
        SpringApplication.run(TipsApp.class, args);
    }

}

After that, you need to set the dekorate.build and dekorate.deploy parameters to true in your Maven build command. This automatically generates the manifests and deploys the Spring Boot application on Kubernetes. If you use Skaffold for deploying applications on Kubernetes, you can easily integrate it with Dekorate. For more details, please refer to the following article.

$ mvn clean install -Ddekorate.build=true -Ddekorate.deploy=true

Tip 13. Generate a random HTTP port

Finally, we may proceed to the last of the Spring Boot tips described in this article. You probably know this feature already, but I must mention it here. Spring Boot assigns a random, free port to the web application if you set the server.port property to 0.

server.port=0

You can also set a random port within a custom predefined range, e.g. 8000-8100. However, in this case there is no guarantee that the generated port will be unassigned.

server.port=${random.int(8000,8100)}
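Since the randomly picked value may collide with a port that is already taken, you can also probe for a guaranteed-free port in a range yourself before starting the application. A minimal sketch in plain Java (this is not a Spring API; the class name is my own):

```java
import java.io.IOException;
import java.net.ServerSocket;

public class FreePortFinder {

    // Returns the first port in [from, to] that can currently be bound,
    // or -1 if every port in the range is already in use.
    static int findFreePort(int from, int to) {
        for (int port = from; port <= to; port++) {
            try (ServerSocket socket = new ServerSocket(port)) {
                // binding succeeded, so the port is free right now
                return socket.getLocalPort();
            } catch (IOException e) {
                // port already in use, try the next one
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        int port = FreePortFinder.findFreePort(8000, 8100);
        System.out.println("Free port: " + port);
    }
}
```

Note that this is still only best effort: another process may grab the port between the check and the moment your application binds it.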

Performance Comparison Between Spring MVC vs Spring WebFlux with Elasticsearch
https://piotrminkowski.com/2019/10/30/performance-comparison-between-spring-mvc-and-spring-webflux-with-elasticsearch/
Wed, 30 Oct 2019

Since Spring 5 and Spring Boot 2, there is full support for reactive REST APIs with the Spring WebFlux project. The Spring Data project has also been systematically adding support for reactive NoSQL databases, and recently for SQL databases too. Since Spring Data Moore, we can take advantage of a reactive template and repository for Elasticsearch, which I have already described in one of my previous articles, Reactive Elasticsearch With Spring Boot.
Recently, we can observe the rising popularity of reactive programming and reactive APIs. This fact has led me to perform a comparison between a synchronous API built on top of Spring MVC and a reactive Spring WebFlux API. The comparison covers server-side memory usage and the average response time on the client side. We will also use Spring Data Elasticsearch repositories, accessed by the controller, for integration with an instance of Elasticsearch running in a Docker container. To make the test objective, we will of course use the same versions of the Spring Boot and Spring Data projects. First, let’s consider some prerequisites.

1. Dependencies

We are using Spring Boot in version 2.2.0.RELEASE with JDK 11.

<parent>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-parent</artifactId>
   <version>2.2.0.RELEASE</version>
   <relativePath/>
</parent>
<properties>
   <java.version>11</java.version>
</properties>

Here’s the list of dependencies for the application with synchronous REST API:

<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-data-elasticsearch</artifactId>
</dependency>

And here’s the list for the application with the reactive API:

<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-webflux</artifactId>
</dependency>
<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-data-elasticsearch</artifactId>
</dependency>

2. Running Elasticsearch

We will run the same Docker container for both tests. The container is started in development mode as a single node.

$ docker run -d --name elasticsearch -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" elasticsearch:6.6.2

We will insert the initial set of data into Elasticsearch.

public class SampleDataSet {

    private static final Logger LOGGER = LoggerFactory.getLogger(SampleDataSet.class);
    private static final String INDEX_NAME = "sample";
    private static final String INDEX_TYPE = "employee";
    private static int COUNTER = 0;

    @Autowired
    ElasticsearchTemplate template;
    @Autowired
    TaskExecutor taskExecutor;

    @PostConstruct
    public void init() {
        if (!template.indexExists(INDEX_NAME)) {
            template.createIndex(INDEX_NAME);
            LOGGER.info("New index created: {}", INDEX_NAME);
        }
        for (int i = 0; i < 10000; i++) {
            taskExecutor.execute(() -> bulk());
        }
    }

    public void bulk() {
        try {
            ObjectMapper mapper = new ObjectMapper();
            List<IndexQuery> queries = new ArrayList<>();
            List<Employee> employees = employees();
            for (Employee employee : employees) {
                IndexQuery indexQuery = new IndexQuery();
                indexQuery.setSource(mapper.writeValueAsString(employee));
                indexQuery.setIndexName(INDEX_NAME);
                indexQuery.setType(INDEX_TYPE);
                queries.add(indexQuery);
            }
            if (queries.size() > 0) {
                template.bulkIndex(queries);
            }
            template.refresh(INDEX_NAME);
            LOGGER.info("BulkIndex completed: {}", ++COUNTER);
        } catch (Exception e) {
            LOGGER.error("Error bulk index", e);
        }
    }

    private List<Employee> employees() {
        List<Employee> employees = new ArrayList<>();
        for (int i = 0; i < 10000; i++) {
            Random r = new Random();
            Employee employee = new Employee();
            employee.setName("JohnSmith" + r.nextInt(1000000));
            employee.setAge(r.nextInt(100));
            employee.setPosition("Developer");
            int departmentId = r.nextInt(500000);
            employee.setDepartment(new Department((long) departmentId, "TestD" + departmentId));
            int organizationId = departmentId / 100;
            employee.setOrganization(new Organization((long) organizationId, "TestO" + organizationId, "Test Street No. " + organizationId));
            employees.add(employee);
        }
        return employees;
    }

}

We are testing a single document type, Employee:

@Document(indexName = "sample", type = "employee")
public class Employee {

    @Id
    private String id;
    @Field(type = FieldType.Object)
    private Organization organization;
    @Field(type = FieldType.Object)
    private Department department;
    private String name;
    private int age;
    private String position;
   
}

I think that the data set shouldn’t be too large, but also not too small. Let’s test a node with around 18M documents divided into 5 shards.

elastic-perf-1

3. Synchronous API Tests

The library used for the performance tests is junit-benchmarks. It allows you to define the number of concurrent threads and the number of repetitions for a JUnit test method.

<dependency>
   <groupId>com.carrotsearch</groupId>
   <artifactId>junit-benchmarks</artifactId>
   <version>0.7.2</version>
   <scope>test</scope>
</dependency>

The implementation of the JUnit test class is visible below. It should extend the AbstractBenchmark class and define the BenchmarkRule test rule. The tests are performed against a running external application available under localhost:8080, using TestRestTemplate. We have three test scenarios. In the first, implemented in addTest, we verify the time required to add a new document to Elasticsearch through the POST method. The other two scenarios, defined in the findByNameTest and findByOrganizationNameTest methods, test the search methods. Each test runs in 30 concurrent threads and is repeated 500 times.

public class EmployeeRepositoryPerformanceTest extends AbstractBenchmark {

    private static final Logger LOGGER = LoggerFactory.getLogger(EmployeeRepositoryPerformanceTest.class);

    @Rule
    public TestRule benchmarkRun = new BenchmarkRule();

    private TestRestTemplate template = new TestRestTemplate();
    private Random r = new Random();

    @Test
    @BenchmarkOptions(concurrency = 30, benchmarkRounds = 500, warmupRounds = 2)
    public void addTest() {
        Employee employee = new Employee();
        employee.setName("John Smith");
        employee.setAge(r.nextInt(100));
        employee.setPosition("Developer");
        employee.setDepartment(new Department((long) r.nextInt(1000), "TestD"));
        employee.setOrganization(new Organization((long) r.nextInt(100), "TestO", "Test Street No. 1"));
        employee = template.postForObject("http://localhost:8080/employees", employee, Employee.class);
        Assert.assertNotNull(employee);
        Assert.assertNotNull(employee.getId());
    }

    @Test
    @BenchmarkOptions(concurrency = 30, benchmarkRounds = 500, warmupRounds = 2)
    public void findByNameTest() {
        String name = "JohnSmith" + r.nextInt(1000000);
        Employee[] employees = template.getForObject("http://localhost:8080/employees/{name}", Employee[].class, name);
        LOGGER.info("Found: {}", employees.length);
        Assert.assertNotNull(employees);
    }

    @Test
    @BenchmarkOptions(concurrency = 30, benchmarkRounds = 500, warmupRounds = 2)
    public void findByOrganizationNameTest() {
        String organizationName = "TestO" + r.nextInt(5000);
        Employee[] employees = template.getForObject("http://localhost:8080/employees/organization/{organizationName}", Employee[].class, organizationName);
        LOGGER.info("Found: {}", employees.length);
        Assert.assertNotNull(employees);
    }

}

4. Reactive API Tests

For the reactive API we have the same scenarios, but they have to be implemented a little differently, since we now have an asynchronous, non-blocking API. First, we will use a neat library called concurrentunit for testing multi-threaded or asynchronous code.

<dependency>
   <groupId>net.jodah</groupId>
   <artifactId>concurrentunit</artifactId>
   <version>0.4.6</version>
   <scope>test</scope>
</dependency>

The ConcurrentUnit library lets us define a Waiter object, which is responsible for performing assertions and waiting for operations in any thread, and then notifying the main test thread. We also use WebClient, which is able to retrieve reactive streams defined as Flux and Mono.

public class EmployeeRepositoryPerformanceTest {

    private static final Logger LOGGER = LoggerFactory.getLogger(EmployeeRepositoryPerformanceTest.class);

    @Rule
    public TestRule benchmarkRun = new BenchmarkRule();

    private final Random r = new Random();
    private final WebClient client = WebClient.builder()
            .baseUrl("http://localhost:8080")
            .defaultHeader(HttpHeaders.CONTENT_TYPE, "application/json")
            .build();

    @Test
    @BenchmarkOptions(concurrency = 30, benchmarkRounds = 500, warmupRounds = 2)
    public void addTest() throws TimeoutException, InterruptedException {
        final Waiter waiter = new Waiter();
        Employee employee = new Employee();
        employee.setName("John Smith");
        employee.setAge(r.nextInt(100));
        employee.setPosition("Developer");
        employee.setDepartment(new Department((long) r.nextInt(10), "TestD"));
        employee.setOrganization(new Organization((long) r.nextInt(10), "TestO", "Test Street No. 1"));
        Mono<Employee> empMono = client.post().uri("/employees").body(Mono.just(employee), Employee.class).retrieve().bodyToMono(Employee.class);
        empMono.subscribe(employeeLocal -> {
            waiter.assertNotNull(employeeLocal);
            waiter.assertNotNull(employeeLocal.getId());
            waiter.resume();
        });
        waiter.await(5000);
    }

    @Test
    @BenchmarkOptions(concurrency = 30, benchmarkRounds = 500, warmupRounds = 2)
    public void findByNameTest() throws TimeoutException, InterruptedException {
        final Waiter waiter = new Waiter();
        String name = "JohnSmith" + r.nextInt(1000000);
        Flux<Employee> employees = client.get().uri("/employees/{name}", name).retrieve().bodyToFlux(Employee.class);
        employees.count().subscribe(count -> {
            waiter.assertTrue(count > 0);
            waiter.resume();
            LOGGER.info("Found({}): {}", name, count);
        });
        waiter.await(5000);
    }

    @Test
    @BenchmarkOptions(concurrency = 30, benchmarkRounds = 500, warmupRounds = 2)
    public void findByOrganizationNameTest() throws TimeoutException, InterruptedException {
        final Waiter waiter = new Waiter();
        String organizationName = "TestO" + r.nextInt(5000);
        Flux<Employee> employees = client.get().uri("/employees/organization/{organizationName}", organizationName).retrieve().bodyToFlux(Employee.class);
        employees.count().subscribe(count -> {
            waiter.assertTrue(count > 0);
            waiter.resume();
            LOGGER.info("Found: {}", count);
        });
        waiter.await(5000);
    }

}

5. Spring MVC vs Spring WebFlux – Test Results

After discussing the prerequisites and implementation details, we may finally proceed to the tests. I think that the results are pretty interesting. Let’s begin with the Spring MVC tests. Here are graphs that illustrate memory usage during the tests. The first of them shows heap memory usage.

spring-mvc-vs-webflux-elastic-perf-2

The second shows metaspace.

spring-mvc-vs-webflux-elastic-perf-3

Here are the equivalent graphs for the reactive API tests. The heap memory usage is a little higher than in the previous tests, although generally Netty requires less memory than Tomcat (50MB instead of 100MB before running the test).

elastic-perf-6

The metaspace usage is a little lower than for synchronous API tests (60MB vs 75MB).

spring-mvc-vs-webflux-elastic-perf-7

And now the processing time test results. They may be a little unexpected: in fact, there is no big difference between the synchronous and reactive tests. One thing should be explained here. The findByName method returns a smaller set of employees than findByOrganizationName. That’s why it is much faster than the method that searches by organization name.

spring-mvc-vs-webflux-elastic-perf-4

As I mentioned before, the results are pretty much the same, especially for the POST method. The result for findByName is 6.2s, versus 7.1s for the synchronous calls, which gives a difference of around 15%. The test for findByOrganizationName failed due to exceeding the 5s timeout defined for every single run of the test method. It seems that processing around 3-4k objects in a single response significantly slowed down the sample application based on Spring WebFlux and reactive Elasticsearch repositories.

elastic-perf-5

Summary

I won’t discuss the results of these tests – the conclusions are on your side. The source code repository is available on GitHub: https://github.com/piomin/sample-spring-elasticsearch. The master branch contains the version for the Spring MVC tests, while the reactive branch contains the Spring WebFlux tests.

Spring REST Docs versus SpringFox Swagger for API documentation
https://piotrminkowski.com/2018/07/19/spring-rest-docs-versus-springfox-swagger-for-api-documentation/
Thu, 19 Jul 2018

Recently, I have come across some articles and mentions of Spring REST Docs, where it has been presented as a better alternative to traditional Swagger docs. Until now, I had always been using Swagger for building API documentation, so I decided to try Spring REST Docs. You can even find some references to Swagger on the main page of that Spring project (https://spring.io/projects/spring-restdocs), for example: “This approach frees you from the limitations of the documentation produced by tools like Swagger”. Are you interested in building API documentation using Spring REST Docs? Let’s take a closer look at that project!

The first difference in comparison to Swagger is the test-driven approach to generating API documentation. Thanks to that, Spring REST Docs ensures that the generated documentation always accurately matches the actual behavior of the API. When using the SpringFox Swagger library, you just need to enable it for the project and provide some configuration to make it work according to your expectations. I have already described the usage of Swagger 2 for automatically building API documentation for Spring Boot applications in my two previous articles:

The articles mentioned above describe in detail how to use SpringFox Swagger in your Spring Boot application to automatically generate API documentation based on the source code. Here I'll give you only a short introduction to that technology, so that you can easily see the differences between using Swagger 2 and Spring REST Docs.

1. Using Swagger2 with Spring Boot

To enable the SpringFox library for your application you need to include the following dependencies in pom.xml.

<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger2</artifactId>
    <version>2.9.2</version>
</dependency>
<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger-ui</artifactId>
    <version>2.9.2</version>
</dependency>

Then you should annotate the main or configuration class with @EnableSwagger2. You can also customize the behaviour of the SpringFox library by declaring a Docket bean.

@Bean
public Docket swaggerEmployeeApi() {
   return new Docket(DocumentationType.SWAGGER_2)
      .select()
         .apis(RequestHandlerSelectors.basePackage("pl.piomin.services.employee.controller"))
         .paths(PathSelectors.any())
      .build()
      .apiInfo(new ApiInfoBuilder().version("1.0").title("Employee API").description("Documentation Employee API v1.0").build());
}

Now, after running the application, the documentation is available under the context path /v2/api-docs. You can also display it in your web browser using the Swagger UI available at /swagger-ui.html.

spring-cloud-3
It looks easy, doesn't it? Let's see how to do the same with Spring REST Docs.

2. Using Asciidoctor with Spring Boot

There are some other differences between Spring REST Docs and SpringFox Swagger. By default, Spring REST Docs uses Asciidoctor. Asciidoctor processes plain text and produces HTML, styled and laid out to suit your needs. If you prefer, Spring REST Docs can also be configured to use Markdown. This really distinguishes it from Swagger, which uses its own notation called the OpenAPI Specification.
Spring REST Docs makes use of snippets produced by tests written with Spring MVC’s test framework, Spring WebFlux’s WebTestClient or REST Assured 3. I’ll show you an example based on Spring MVC.
I suggest you begin by creating a base Asciidoc file. It should be placed in the src/main/asciidoc directory of your application's source code. I don't know if you are familiar with Asciidoctor notation, but it is really intuitive. The sample visible below shows two important things. First, we display the version of the project taken from pom.xml. Then we include the snippets generated during JUnit tests by declaring a macro called operation, which contains the document name and a list of snippets. We can choose among such snippets as curl-request, http-request, http-response, httpie-request, links, request-body, request-fields, response-body, response-fields or path-parameters. The document name is determined by the name of the test method in our JUnit test class.

= RESTful Employee API Specification
{project-version}
:doctype: book

== Add a new person

A `POST` request is used to add a new person

operation::add-person[snippets='http-request,request-fields,http-response']

== Find a person by id

A `GET` request is used to find a person by id

operation::find-person-by-id[snippets='http-request,path-parameters,http-response,response-fields']

The source code fragment with Asciidoc notation above is just a template. We would like to generate an HTML file that prettily displays all our automatically generated content. To achieve that, we should enable the asciidoctor-maven-plugin in the project's pom.xml. In order to display the Maven project version, we need to pass it to the Asciidoc plugin's configuration attributes. We also need to add the spring-restdocs-asciidoctor dependency to that plugin.

<plugin>
   <groupId>org.asciidoctor</groupId>
   <artifactId>asciidoctor-maven-plugin</artifactId>
   <version>1.5.6</version>
   <executions>
      <execution>
         <id>generate-docs</id>
         <phase>prepare-package</phase>
         <goals>
            <goal>process-asciidoc</goal>
         </goals>
         <configuration>
            <backend>html</backend>
            <doctype>book</doctype>
            <attributes>
               <project-version>${project.version}</project-version>
            </attributes>
         </configuration>
      </execution>
   </executions>
   <dependencies>
      <dependency>
         <groupId>org.springframework.restdocs</groupId>
         <artifactId>spring-restdocs-asciidoctor</artifactId>
         <version>2.0.0.RELEASE</version>
      </dependency>
   </dependencies>
</plugin>

OK, the documentation is automatically generated during the Maven build from our api.adoc file located inside the src/main/asciidoc directory. But we still need to develop JUnit API tests that automatically generate the required snippets. Let's do that in the next step.

3. Generating snippets for Spring MVC

First, we should enable Spring REST Docs for our project. To achieve it we have to include the following dependency.

<dependency>
   <groupId>org.springframework.restdocs</groupId>
   <artifactId>spring-restdocs-mockmvc</artifactId>
   <scope>test</scope>
</dependency>

Now, all we need to do is implement the JUnit tests. Spring Boot provides the @AutoConfigureRestDocs annotation that allows you to leverage Spring REST Docs in your tests.
In fact, we just need to prepare a standard Spring MVC test using the MockMvc bean. I also mocked some methods implemented by EmployeeRepository. Then I used some static methods provided by Spring REST Docs that support generating documentation of request and response payloads. The first of those methods is document("{method-name}/",...), which is responsible for generating snippets under the directory target/generated-snippets/{method-name}, where the method name is the name of the test method formatted in kebab-case. I have described all the JSON fields in the requests and responses using the requestFields(...) and responseFields(...) methods.

@RunWith(SpringRunner.class)
@WebMvcTest(EmployeeController.class)
@AutoConfigureRestDocs
public class EmployeeControllerTest {

   @MockBean
   EmployeeRepository repository;
   @Autowired
   MockMvc mockMvc;
   
   private ObjectMapper mapper = new ObjectMapper();

   @Before
   public void setUp() {
      Employee e = new Employee(1L, 1L, "John Smith", 33, "Developer");
      e.setId(1L);
      when(repository.add(Mockito.any(Employee.class))).thenReturn(e);
      when(repository.findById(1L)).thenReturn(e);
   }

   @Test
   public void addPerson() throws JsonProcessingException, Exception {
      Employee employee = new Employee(1L, 1L, "John Smith", 33, "Developer");
      mockMvc.perform(post("/").contentType(MediaType.APPLICATION_JSON).content(mapper.writeValueAsString(employee)))
         .andExpect(status().isOk())
         .andDo(document("{method-name}/", requestFields(
            fieldWithPath("id").description("Employee id").ignored(),
            fieldWithPath("organizationId").description("Employee's organization id"),
            fieldWithPath("departmentId").description("Employee's department id"),
            fieldWithPath("name").description("Employee's name"),
            fieldWithPath("age").description("Employee's age"),
            fieldWithPath("position").description("Employee's position inside organization")
         )));
   }
   
   @Test
   public void findPersonById() throws JsonProcessingException, Exception {
      this.mockMvc.perform(get("/{id}", 1).accept(MediaType.APPLICATION_JSON))
         .andExpect(status().isOk())
         .andDo(document("{method-name}/", responseFields(
            fieldWithPath("id").description("Employee id"),
            fieldWithPath("organizationId").description("Employee's organization id"),
            fieldWithPath("departmentId").description("Employee's department id"),
            fieldWithPath("name").description("Employee's name"),
            fieldWithPath("age").description("Employee's age"),
            fieldWithPath("position").description("Employee's position inside organization")
         ), pathParameters(parameterWithName("id").description("Employee id"))));
   }

}
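As a side note, the {method-name} placeholder resolves to the JUnit test method name formatted in kebab-case. The snippet below is not Spring REST Docs' internal code, just a self-contained sketch of that naming convention:

```java
// Sketch of the {method-name} naming convention used by Spring REST Docs:
// a camelCase JUnit test method name becomes a kebab-case snippet directory.
// This is an illustration only, not the library's actual implementation.
public class SnippetNameExample {

    static String toKebabCase(String methodName) {
        // insert a dash before every upper-case letter, then lower-case everything
        return methodName.replaceAll("([a-z0-9])([A-Z])", "$1-$2").toLowerCase();
    }

    public static void main(String[] args) {
        System.out.println(toKebabCase("addPerson"));       // add-person
        System.out.println(toKebabCase("findPersonById")); // find-person-by-id
    }
}
```

So the addPerson test above produces its snippets under target/generated-snippets/add-person, and findPersonById under target/generated-snippets/find-person-by-id.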

If you would like to customize some settings of Spring REST Docs, you should provide a @TestConfiguration class inside the JUnit test class. In the following code fragment you can see an example of such customization. I overrode the default snippet output directory from index to a test-method-specific name, and forced pretty-printing of the sample requests and responses using the prettyPrint option (a single parameter per line).

@TestConfiguration
static class CustomizationConfiguration implements RestDocsMockMvcConfigurationCustomizer {

   @Override
   public void customize(MockMvcRestDocumentationConfigurer configurer) {
      configurer.operationPreprocessors()
         .withRequestDefaults(prettyPrint())
         .withResponseDefaults(prettyPrint());
   }
   
   @Bean
   public RestDocumentationResultHandler restDocumentation() {
      return MockMvcRestDocumentation.document("{method-name}");
   }
}

Now, if you execute mvn clean install on your project, you should see the following structure inside your output directory.
rest-api-docs-3

4. Viewing and publishing API docs

Once we have successfully built our project, the documentation has been generated. We can display the HTML file available at target/generated-docs/api.html. It provides the full documentation of our API.

rest-api-docs-1
And the next part…

rest-api-docs-2
You may also want to publish it inside your application's fat JAR file. If you configure the maven-resources-plugin as in the example visible below, it will be available under the /static/docs directory inside the JAR.

<plugin>
   <artifactId>maven-resources-plugin</artifactId>
   <executions>
      <execution>
         <id>copy-resources</id>
         <phase>prepare-package</phase>
         <goals>
            <goal>copy-resources</goal>
         </goals>
         <configuration>
            <outputDirectory>
               ${project.build.outputDirectory}/static/docs
            </outputDirectory>
            <resources>
               <resource>
                  <directory>
                     ${project.build.directory}/generated-docs
                  </directory>
               </resource>
            </resources>
         </configuration>
      </execution>
   </executions>
</plugin>

Conclusion

That's all I wanted to show in this article. The sample service generating documentation using Spring REST Docs is available on GitHub in the repository https://github.com/piomin/sample-spring-microservices-new/tree/rest-api-docs/employee-service. I'm not sure that Swagger and Spring REST Docs should be treated as competing solutions. I use Swagger for quickly testing an API on a running application, or for exposing a specification that can be used to automatically generate client code. Spring REST Docs is rather used for generating documentation that can be published somewhere, and "is accurate, concise, and well-structured. This documentation then allows your users to get the information they need with a minimum of fuss". I think there is no obstacle to using Spring REST Docs and SpringFox Swagger together in your project in order to provide the most valuable documentation of the API exposed by the application.

Exporting metrics to InfluxDB and Prometheus using Spring Boot Actuator https://piotrminkowski.com/2018/05/11/exporting-metrics-to-influxdb-and-prometheus-using-spring-boot-actuator/ https://piotrminkowski.com/2018/05/11/exporting-metrics-to-influxdb-and-prometheus-using-spring-boot-actuator/#respond Fri, 11 May 2018 09:44:10 +0000 https://piotrminkowski.wordpress.com/?p=6551 Spring Boot Actuator is one of the most modified projects after the release of Spring Boot 2. It has been through major improvements, which aimed to simplify customization and include some new features like support for other web technologies, for example, the new reactive module – Spring WebFlux. Spring Boot Actuator also adds out-of-the-box support […]

The post Exporting metrics to InfluxDB and Prometheus using Spring Boot Actuator appeared first on Piotr's TechBlog.

Spring Boot Actuator is one of the most modified projects after the release of Spring Boot 2. It has been through major improvements, which aimed to simplify customization and include some new features like support for other web technologies, for example the new reactive module, Spring WebFlux. Spring Boot Actuator also adds out-of-the-box support for exporting metrics to InfluxDB, an open-source time-series database designed to handle high volumes of timestamped data. It is a great simplification in comparison to the version used with Spring Boot 1.5. You can see for yourself how much by reading one of my previous articles, Custom metrics visualization with Grafana and InfluxDB. I described there how to export metrics generated by Spring Boot Actuator to InfluxDB using the @ExportMetricsWriter bean. The sample Spring Boot application created for that article is available in the GitHub repository sample-spring-graphite (https://github.com/piomin/sample-spring-graphite.git) in the master branch. For the current article, I have created the spring2 branch (https://github.com/piomin/sample-spring-graphite/tree/spring2), which shows how to implement the same feature as before using version 2.0 of Spring Boot and Spring Boot Actuator.

Additionally, I'm going to show you how to use Spring Boot Actuator to export the same metrics to another popular monitoring system, Prometheus. There is one major difference between the metric export models of InfluxDB and Prometheus. The first is a push-based system, while the second is pull-based. So, our sample application needs to actively send data to InfluxDB, while with Prometheus it only has to expose an endpoint that is periodically polled for data. Let's begin with InfluxDB.
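To give a feeling of what the push model means in practice: under the hood, the application periodically writes its metrics to InfluxDB over the HTTP API using the Influx line protocol (measurement, comma-separated tags, fields, and a timestamp). A single write might look more or less like the line below; the tag and field names are illustrative only, as the actual schema is decided by the Micrometer Influx registry:

```
http_server_requests,method=GET,status=200,uri=/persons count=42,sum=1.25,mean=0.03 1525996800000000000
```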

1. Running InfluxDB

In the previous article I didn't write much about this database and its configuration. The first step is typical for my examples: we will run a Docker container with InfluxDB. Here's the simplest command that runs InfluxDB on your local machine and exposes its HTTP API on port 8086.

$ docker run -d --name influx -p 8086:8086 influxdb

Once we have started that container, you will probably want to log in and execute some commands. Nothing could be simpler: just run the following command. After login, you should see the version of InfluxDB running in the target Docker container.

$ docker exec -it influx influx
Connected to http://localhost:8086 version 1.5.2
InfluxDB shell version: 1.5.2

The first step is to create a database. As you can probably guess, it can be achieved using the create database command. Then switch to the newly created database.

> create database springboot
> use springboot

Does that syntax look familiar to you? InfluxDB provides a query language very similar to SQL. It is called InfluxQL and allows you to define SELECT statements, GROUP BY or INTO clauses, and much more. However, before executing such queries we should have some data stored inside the database, right? Now, let's proceed to the next steps in order to generate some test metrics.
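For example, once some data is stored, a typical InfluxQL statement could look more or less like this (the measurement and field names here are purely illustrative):

```sql
SELECT MEAN("value") FROM "jvm_memory_used" WHERE time > now() - 1h GROUP BY time(10m)
```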

2. Integrating Spring Boot Actuator with InfluxDB

If you include the micrometer-registry-influx artifact in the project's dependencies, export to InfluxDB will be enabled automatically. Of course, we also need to include the spring-boot-starter-actuator starter.

<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
   <groupId>io.micrometer</groupId>
   <artifactId>micrometer-registry-influx</artifactId>
</dependency>

The only thing you have to do is override the default address of InfluxDB, because we are running InfluxDB in a Docker container on a VM. By default, Spring Boot tries to connect to a database named mydb. However, I have already created the database springboot, so I should override this default value as well. In version 2 of Spring Boot, all the configuration properties related to Spring Boot Actuator endpoints have been moved to the management.* section.


management:
  metrics:
    export:
      influx:
        db: springboot
        uri: http://192.168.99.100:8086

You may be a little surprised, after starting a Spring Boot application with the actuator included on the classpath, that it exposes only two HTTP endpoints by default: /actuator/info and /actuator/health. This is because in the newest version of Spring Boot all actuator endpoints other than /health and /info are disabled by default for security reasons. To enable all the actuator endpoints, you have to set the property management.endpoints.web.exposure.include to '*'.
In the newest version of Spring Boot, monitoring of HTTP metrics has been improved significantly. We can enable collecting metrics for all Spring MVC requests by setting the property management.metrics.web.server.auto-time-requests to true. Alternatively, when it is set to false, you can enable metrics for a specific REST controller by annotating it with @Timed. You can also annotate a single method inside the controller, to generate metrics only for a specific endpoint.
After the application boots, you can check out the full list of generated metrics by calling the endpoint GET /actuator/metrics. By default, metrics for Spring MVC controllers are generated under the name http.server.requests. This name can be customized by setting the management.metrics.web.server.requests-metric-name property. If you run the sample application available in my GitHub repository, it is by default available under port 2222. Now, you can check out the list of statistics generated for a single metric by calling the endpoint GET /actuator/metrics/{requiredMetricName}, as shown in the following picture.

actuator-6

3. Building Spring Boot application

The sample Spring Boot application used for generating metrics consists of a single controller that implements basic CRUD operations for manipulating the Person entity, a repository bean and an entity class. The application connects to a MySQL database using a Spring Data JPA repository that provides the CRUD implementation. Here's the controller class.

@RestController
@Timed
public class PersonController {

   protected Logger logger = Logger.getLogger(PersonController.class.getName());

   @Autowired
   PersonRepository repository;

   @GetMapping("/persons/pesel/{pesel}")
   public List findByPesel(@PathVariable("pesel") String pesel) {
      logger.info(String.format("Person.findByPesel(%s)", pesel));
      return repository.findByPesel(pesel);
   }

   @GetMapping("/persons/{id}")
   public Person findById(@PathVariable("id") Integer id) {
      logger.info(String.format("Person.findById(%d)", id));
      return repository.findById(id).get();
   }

   @GetMapping("/persons")
   public List findAll() {
      logger.info(String.format("Person.findAll()"));
      return (List) repository.findAll();
   }

   @PostMapping("/persons")
   public Person add(@RequestBody Person person) {
      logger.info(String.format("Person.add(%s)", person));
      return repository.save(person);
   }

   @PutMapping("/persons")
   public Person update(@RequestBody Person person) {
      logger.info(String.format("Person.update(%s)", person));
      return repository.save(person);
   }

   @DeleteMapping("/persons/{id}")
   public void remove(@PathVariable("id") Integer id) {
      logger.info(String.format("Person.remove(%d)", id));
      repository.deleteById(id);
   }

}

Before running the application we have to set up a MySQL database. The most convenient way to achieve it is through the MySQL Docker image. Here's the command that runs a container with the grafana database, defines a user and password, and exposes MySQL 5 on port 33306.

$ docker run -d --name mysql -e MYSQL_DATABASE=grafana -e MYSQL_USER=grafana -e MYSQL_PASSWORD=grafana -e MYSQL_ALLOW_EMPTY_PASSWORD=yes -p 33306:3306 mysql:5
 

Then we need to set some database configuration properties on the application side. All the required tables will be created at application boot thanks to setting the property spring.jpa.properties.hibernate.hbm2ddl.auto to update.


spring:
  datasource:
    url: jdbc:mysql://192.168.99.100:33306/grafana?useSSL=false
    username: grafana
    password: grafana
    driverClassName: com.mysql.jdbc.Driver
  jpa:
    properties:
      hibernate:
        dialect: org.hibernate.dialect.MySQL5Dialect
        hbm2ddl.auto: update
 

4. Generating metrics with Spring Boot Actuator

After starting the application and the required Docker containers, the only thing that needs to be done is to generate some test statistics. I created a JUnit test class that generates some test data and calls the endpoints exposed by the application in a loop. Here's a fragment of that test method.

int ix = new Random().nextInt(100000);
Person p = new Person();
p.setFirstName("Jan" + ix);
p.setLastName("Testowy" + ix);
p.setPesel(new DecimalFormat("0000000").format(ix) + new DecimalFormat("000").format(ix%100));
p.setAge(ix%100);
p = template.postForObject("http://localhost:2222/persons", p, Person.class);
LOGGER.info("New person: {}", p);

p = template.getForObject("http://localhost:2222/persons/{id}", Person.class, p.getId());
p.setAge(ix%100);
template.put("http://localhost:2222/persons", p);
LOGGER.info("Person updated: {} with age={}", p, ix%100);

template.delete("http://localhost:2222/persons/{id}", p.getId());

Now, let's move back to step 1. As you probably remember, I have shown you how to run the influx client in the InfluxDB Docker container. After a few minutes of running, the test should have called the exposed endpoints many times. We can check out the values of the http_server_requests metric stored in InfluxDB. The following query returns the list of measurements collected during the last 3 minutes.

spring-boot-actuator-prometheus-1
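That query might look more or less like this (a sketch; the exact schema of the measurement depends on what the Micrometer Influx registry writes):

```sql
SELECT * FROM "http_server_requests" WHERE time > now() - 3m
```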

As you can see, all the metrics generated by Spring Boot Actuator are tagged with the following information: method, uri, status and exception. Thanks to those tags we can easily group metrics per endpoint, including failure and success percentages. Let's see how to configure and view it in Grafana.

5. Metrics visualization using Grafana

Once we have successfully exported metrics to InfluxDB, it is time to visualize them using Grafana. First, let's run a Docker container with Grafana.


$ docker run -d --name grafana -p 3000:3000 grafana/grafana
 

Grafana provides a user-friendly interface for creating Influx queries. We define a graph that visualizes the request processing time for each of the called endpoints, as well as the total number of requests received by the application. If we filter the statistics stored in the http_server_requests table by method type and uri, we collect all the metrics generated for a single endpoint.

spring-boot-actuator-prometheus-4

A similar definition should be created for the other endpoints. We will illustrate them all on a single graph.

actuator-5

Here’s the final result.

spring-boot-actuator-prometheus-2

Here’s the graph that visualizes the total number of requests sent to the application.

actuator-3

6. Running Prometheus

The most suitable way to run Prometheus locally is, obviously, through a Docker container. The API is exposed on port 9090. We should also pass the initial configuration file and the name of the Docker network. Why? You will find all the answers in the remaining part of this step's description.


docker run -d --name prometheus -p 9090:9090 -v /tmp/prometheus.yml:/etc/prometheus/prometheus.yml --network springboot prom/prometheus

In contrast to InfluxDB, Prometheus pulls metrics from an application. Therefore, we need to enable the Spring Boot Actuator endpoint that exposes metrics for Prometheus, which is disabled by default. To enable it, set the property management.endpoint.prometheus.enabled to true, as shown in the configuration fragment below.

management:
  endpoint:
    prometheus:
      enabled: true

Then we should set the address of the Spring Boot Actuator endpoint exposed by the application in the Prometheus configuration file. The scrape_configs section is responsible for specifying a set of targets and the parameters describing how to connect to them. By default, Prometheus tries to collect data from a target endpoint once a minute.

scrape_configs:
  - job_name: 'springboot'
    metrics_path: '/actuator/prometheus'
    static_configs:
    - targets: ['person-service:2222']

Similarly to the integration with InfluxDB, we need to include the following artifact in the project's dependencies.

<dependency>
   <groupId>io.micrometer</groupId>
   <artifactId>micrometer-registry-prometheus</artifactId>
</dependency>

In my case, Docker is running on a VM available under the IP 192.168.99.100. If I want Prometheus, which is launched as a Docker container, to be able to connect to my application, I should also launch the application as a Docker container. The most convenient way to link two independent containers is through a Docker network. If both containers are assigned to the same network, they are able to connect to each other using the container name as the target address. A Dockerfile is available in the root directory of the sample application's source code. The second command visible below (docker build) is not required, because the required image piomin/person-service is available in my Docker Hub repository.

$ docker network create springboot
$ docker build -t piomin/person-service .
$ docker run -d --name person-service -p 2222:2222 --network springboot piomin/person-service
 

7. Integrate Prometheus with Grafana

Prometheus exposes a web console under the address 192.168.99.100:9090, where you can specify queries and display graphs with metrics. However, we can integrate it with Grafana to take advantage of the nicer visualizations offered by that tool. First, you should create a Prometheus data source.

actuator-9

Then we should define the queries for collecting metrics from the Prometheus API. Spring Boot Actuator exposes three different metrics related to HTTP traffic: http_server_requests_seconds_count, http_server_requests_seconds_sum and http_server_requests_seconds_max. For example, using the rate() function we can calculate the per-second average rate of increase of the http_server_requests_seconds_sum time series, which holds the total number of seconds spent processing requests. The values can be filtered by method and uri using an expression inside {}. The following picture illustrates the configuration of the rate() function for each endpoint.

actuator-8
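To make this concrete, such a rate() expression filtered per endpoint might look more or less like the first line below (using the labels from the examples earlier in this article). Dividing the sum rate by the count rate additionally yields the average request processing time:

```
rate(http_server_requests_seconds_sum{method="GET", uri="/persons"}[1m])

rate(http_server_requests_seconds_sum{uri="/persons"}[1m])
  / rate(http_server_requests_seconds_count{uri="/persons"}[1m])
```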

Here’s the graph.

actuator-7

Summary

The improvement in metrics generation between versions 1.5 and 2.0 of Spring Boot is significant. Exporting data to popular monitoring systems like InfluxDB or Prometheus is now much easier than before with Spring Boot Actuator, and does not require any additional development. The metrics related to HTTP traffic are more detailed, and they can easily be associated with specific endpoints thanks to tags indicating the uri, type and status of the HTTP request. I think that the modifications in Spring Boot Actuator, in relation to the previous version of Spring Boot, could be one of the main motivations to migrate your applications to the newest version.
