spring data elasticsearch Archives - Piotr's TechBlog
https://piotrminkowski.com/tag/spring-data-elasticsearch/

Performance Comparison Between Spring MVC vs Spring WebFlux with Elasticsearch
https://piotrminkowski.com/2019/10/30/performance-comparison-between-spring-mvc-and-spring-webflux-with-elasticsearch/
Wed, 30 Oct 2019

Since Spring 5 and Spring Boot 2, there is full support for reactive REST APIs with the Spring WebFlux project. The Spring Data project is also systematically adding support for reactive NoSQL databases, and recently for SQL databases too. Since Spring Data Moore we can take advantage of a reactive template and repository for Elasticsearch, which I have already described in one of my previous articles, Reactive Elasticsearch With Spring Boot.
Recently we can observe the rising popularity of reactive programming and reactive APIs. This has led me to compare a synchronous REST API built on top of Spring MVC with a reactive API built with Spring WebFlux. The comparison covers server-side memory usage and average response time on the client side. We will also use Spring Data Elasticsearch repositories, accessed by the controllers, for integration with an Elasticsearch instance running in a Docker container. To make the test objective, we will of course use the same versions of the Spring Boot and Spring Data projects for both applications. First, let's consider some prerequisites.

1. Dependencies

We are using Spring Boot in version 2.2.0.RELEASE with JDK 11.

<parent>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-parent</artifactId>
   <version>2.2.0.RELEASE</version>
   <relativePath/>
</parent>
<properties>
   <java.version>11</java.version>
</properties>

Here’s the list of dependencies for the application with synchronous REST API:

<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-data-elasticsearch</artifactId>
</dependency>

And here's the list for the application with a reactive API:

<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-webflux</artifactId>
</dependency>
<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-data-elasticsearch</artifactId>
</dependency>

2. Running Elasticsearch

We will run the same Docker container for both tests. The container is started in development mode as a single node.

$ docker run -d --name elasticsearch -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" elasticsearch:6.6.2

We will insert the initial set of data into Elasticsearch.

public class SampleDataSet {

    private static final Logger LOGGER = LoggerFactory.getLogger(SampleDataSet.class);
    private static final String INDEX_NAME = "sample";
    private static final String INDEX_TYPE = "employee";
    // Thread-safe counter, since bulk() runs concurrently on the TaskExecutor's threads
    private static final AtomicInteger COUNTER = new AtomicInteger();

    @Autowired
    ElasticsearchTemplate template;
    @Autowired
    TaskExecutor taskExecutor;

    @PostConstruct
    public void init() {
        if (!template.indexExists(INDEX_NAME)) {
            template.createIndex(INDEX_NAME);
            LOGGER.info("New index created: {}", INDEX_NAME);
        }
        for (int i = 0; i < 10000; i++) {
            taskExecutor.execute(() -> bulk());
        }
    }

    public void bulk() {
        try {
            ObjectMapper mapper = new ObjectMapper();
            List<IndexQuery> queries = new ArrayList<>();
            List<Employee> employees = employees();
            for (Employee employee : employees) {
                IndexQuery indexQuery = new IndexQuery();
                indexQuery.setSource(mapper.writeValueAsString(employee));
                indexQuery.setIndexName(INDEX_NAME);
                indexQuery.setType(INDEX_TYPE);
                queries.add(indexQuery);
            }
            if (queries.size() > 0) {
                template.bulkIndex(queries);
            }
            template.refresh(INDEX_NAME);
            LOGGER.info("BulkIndex completed: {}", COUNTER.incrementAndGet());
        } catch (Exception e) {
            LOGGER.error("Error bulk index", e);
        }
    }
        } catch (Exception e) {
            LOGGER.error("Error bulk index", e);
        }
    }

    private List<Employee> employees() {
        List<Employee> employees = new ArrayList<>();
        for (int i = 0; i < 10000; i++) {
            Random r = new Random();
            Employee employee = new Employee();
            employee.setName("JohnSmith" + r.nextInt(1000000));
            employee.setAge(r.nextInt(100));
            employee.setPosition("Developer");
            int departmentId = r.nextInt(500000);
            employee.setDepartment(new Department((long) departmentId, "TestD" + departmentId));
            int organizationId = departmentId / 100;
            employee.setOrganization(new Organization((long) organizationId, "TestO" + organizationId, "Test Street No. " + organizationId));
            employees.add(employee);
        }
        return employees;
    }

}
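It is worth noticing how the generator above distributes the data: departmentId is drawn from [0, 500000) and organizationId = departmentId / 100, so there are only about 5,000 distinct organizations. With roughly 18M documents that gives on the order of 3,600 employees per organization, which matters for the search benchmarks later. A quick sanity check (the total document count is taken from this article; the derived numbers are my own arithmetic, not from the post):

```java
// Back-of-the-envelope check of the generated data distribution; constants are
// taken from the SampleDataSet generator above and the ~18M-document test index.
public class DataDistribution {

    public static void main(String[] args) {
        int departmentIdRange = 500_000;       // r.nextInt(500000) in employees()
        int departmentsPerOrganization = 100;  // organizationId = departmentId / 100
        long totalDocuments = 18_000_000L;     // approximate size of the test index

        int organizations = departmentIdRange / departmentsPerOrganization;
        long employeesPerOrganization = totalDocuments / organizations;

        System.out.println("Distinct organizations: " + organizations);                      // 5000
        System.out.println("Employees per organization (avg): " + employeesPerOrganization); // 3600
    }
}
```

This explains why searching by organization name returns a few thousand documents per query, while searching by an employee name (drawn from a pool of 1,000,000 values) returns far fewer.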

We are testing with a single document type, Employee:

@Document(indexName = "sample", type = "employee")
public class Employee {

    @Id
    private String id;
    @Field(type = FieldType.Object)
    private Organization organization;
    @Field(type = FieldType.Object)
    private Department department;
    private String name;
    private int age;
    private String position;

    // Getters and Setters ...

}
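The benchmarks in the next sections call three REST endpoints that the post does not show. Here's a minimal controller sketch, assuming a Spring Data repository with the derived queries findByName and findByOrganizationName; all names are inferred from the test URLs, not taken from the repository code:

```java
// Hypothetical controller matching the URLs used by the performance tests below
@RestController
@RequestMapping("/employees")
public class EmployeeController {

    @Autowired
    private EmployeeRepository repository; // a Spring Data CrudRepository for Employee

    @PostMapping
    public Employee add(@RequestBody Employee employee) {
        return repository.save(employee);
    }

    @GetMapping("/{name}")
    public List<Employee> findByName(@PathVariable String name) {
        return repository.findByName(name);
    }

    @GetMapping("/organization/{organizationName}")
    public List<Employee> findByOrganization(@PathVariable String organizationName) {
        return repository.findByOrganizationName(organizationName);
    }
}
```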

The data set shouldn't be too large, but also not too small. Let's test a node with around 18M documents divided into 5 shards.

[Figure elastic-perf-1: overview of the test index]

3. Synchronous API Tests

The library used for the performance tests is junit-benchmarks. It allows us to define the number of concurrent threads and the number of repetitions for a JUnit test method.

<dependency>
   <groupId>com.carrotsearch</groupId>
   <artifactId>junit-benchmarks</artifactId>
   <version>0.7.2</version>
   <scope>test</scope>
</dependency>

The implementation of the JUnit test class is shown below. It extends the AbstractBenchmark class and defines the BenchmarkRule test rule. The tests are performed against a running external application, available at localhost:8080, using TestRestTemplate. We have three test scenarios. The first, implemented in addTest, measures the time required to add a new document to Elasticsearch through the POST method. The other two scenarios, defined in the methods findByNameTest and findByOrganizationNameTest, test the search methods. Each test runs in 30 concurrent threads and is repeated 500 times.

public class EmployeeRepositoryPerformanceTest extends AbstractBenchmark {

    private static final Logger LOGGER = LoggerFactory.getLogger(EmployeeRepositoryPerformanceTest.class);

    @Rule
    public TestRule benchmarkRun = new BenchmarkRule();

    private TestRestTemplate template = new TestRestTemplate();
    private Random r = new Random();

    @Test
    @BenchmarkOptions(concurrency = 30, benchmarkRounds = 500, warmupRounds = 2)
    public void addTest() {
        Employee employee = new Employee();
        employee.setName("John Smith");
        employee.setAge(r.nextInt(100));
        employee.setPosition("Developer");
        employee.setDepartment(new Department((long) r.nextInt(1000), "TestD"));
        employee.setOrganization(new Organization((long) r.nextInt(100), "TestO", "Test Street No. 1"));
        employee = template.postForObject("http://localhost:8080/employees", employee, Employee.class);
        Assert.assertNotNull(employee);
        Assert.assertNotNull(employee.getId());
    }

    @Test
    @BenchmarkOptions(concurrency = 30, benchmarkRounds = 500, warmupRounds = 2)
    public void findByNameTest() {
        String name = "JohnSmith" + r.nextInt(1000000);
        Employee[] employees = template.getForObject("http://localhost:8080/employees/{name}", Employee[].class, name);
        Assert.assertNotNull(employees);
        LOGGER.info("Found: {}", employees.length);
    }

    @Test
    @BenchmarkOptions(concurrency = 30, benchmarkRounds = 500, warmupRounds = 2)
    public void findByOrganizationNameTest() {
        String organizationName = "TestO" + r.nextInt(5000);
        Employee[] employees = template.getForObject("http://localhost:8080/employees/organization/{organizationName}", Employee[].class, organizationName);
        Assert.assertNotNull(employees);
        LOGGER.info("Found: {}", employees.length);
    }

}

4. Reactive API Tests

For the reactive API we have the same scenarios, but they have to be implemented a little differently, since we are dealing with an asynchronous, non-blocking API. First, we will use a handy library called ConcurrentUnit for testing multi-threaded and asynchronous code.

<dependency>
   <groupId>net.jodah</groupId>
   <artifactId>concurrentunit</artifactId>
   <version>0.4.6</version>
   <scope>test</scope>
</dependency>

The ConcurrentUnit library allows us to define a Waiter object, which is responsible for performing assertions and waiting for operations in any thread, and for notifying the main test thread back. We also use WebClient, which is able to retrieve reactive streams defined as Flux and Mono.

public class EmployeeRepositoryPerformanceTest {

    private static final Logger LOGGER = LoggerFactory.getLogger(EmployeeRepositoryPerformanceTest.class);

    @Rule
    public TestRule benchmarkRun = new BenchmarkRule();

    private final Random r = new Random();
    private final WebClient client = WebClient.builder()
            .baseUrl("http://localhost:8080")
            .defaultHeader(HttpHeaders.CONTENT_TYPE, "application/json")
            .build();

    @Test
    @BenchmarkOptions(concurrency = 30, benchmarkRounds = 500, warmupRounds = 2)
    public void addTest() throws TimeoutException, InterruptedException {
        final Waiter waiter = new Waiter();
        Employee employee = new Employee();
        employee.setName("John Smith");
        employee.setAge(r.nextInt(100));
        employee.setPosition("Developer");
        employee.setDepartment(new Department((long) r.nextInt(10), "TestD"));
        employee.setOrganization(new Organization((long) r.nextInt(10), "TestO", "Test Street No. 1"));
        Mono<Employee> empMono = client.post().uri("/employees").body(Mono.just(employee), Employee.class).retrieve().bodyToMono(Employee.class);
        empMono.subscribe(employeeLocal -> {
            waiter.assertNotNull(employeeLocal);
            waiter.assertNotNull(employeeLocal.getId());
            waiter.resume();
        });
        waiter.await(5000);
    }

    @Test
    @BenchmarkOptions(concurrency = 30, benchmarkRounds = 500, warmupRounds = 2)
    public void findByNameTest() throws TimeoutException, InterruptedException {
        final Waiter waiter = new Waiter();
        String name = "JohnSmith" + r.nextInt(1000000);
        Flux<Employee> employees = client.get().uri("/employees/{name}", name).retrieve().bodyToFlux(Employee.class);
        employees.count().subscribe(count -> {
            waiter.assertTrue(count > 0);
            waiter.resume();
            LOGGER.info("Found({}): {}", name, count);
        });
        waiter.await(5000);
    }

    @Test
    @BenchmarkOptions(concurrency = 30, benchmarkRounds = 500, warmupRounds = 2)
    public void findByOrganizationNameTest() throws TimeoutException, InterruptedException {
        final Waiter waiter = new Waiter();
        String organizationName = "TestO" + r.nextInt(5000);
        Flux<Employee> employees = client.get().uri("/employees/organization/{organizationName}", organizationName).retrieve().bodyToFlux(Employee.class);
        employees.count().subscribe(count -> {
            waiter.assertTrue(count > 0);
            waiter.resume();
            LOGGER.info("Found: {}", count);
        });
        waiter.await(5000);
    }

}
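The reactive application itself is not shown in the post either. Based on the URLs used in the tests, its repository and controller could look roughly like the sketch below; this assumes the ReactiveCrudRepository support added in Spring Data Moore, and all names are inferred rather than taken from the actual source:

```java
// Hypothetical reactive counterparts of the synchronous application
// (each type would live in its own file).
public interface EmployeeRepository extends ReactiveCrudRepository<Employee, String> {
    Flux<Employee> findByName(String name);
    Flux<Employee> findByOrganizationName(String organizationName);
}

@RestController
@RequestMapping("/employees")
class EmployeeController {

    private final EmployeeRepository repository;

    EmployeeController(EmployeeRepository repository) {
        this.repository = repository;
    }

    @PostMapping
    Mono<Employee> add(@RequestBody Employee employee) {
        return repository.save(employee);
    }

    @GetMapping("/{name}")
    Flux<Employee> findByName(@PathVariable String name) {
        return repository.findByName(name);
    }

    @GetMapping("/organization/{organizationName}")
    Flux<Employee> findByOrganization(@PathVariable String organizationName) {
        return repository.findByOrganizationName(organizationName);
    }
}
```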

5. Spring MVC vs Spring WebFlux – Test Results

After discussing the prerequisites and implementation details, we may finally proceed to the tests. I think the results are pretty interesting. Let's begin with the Spring MVC tests. Here are graphs that illustrate memory usage during the tests. The first of them shows heap memory usage.

[Figure spring-mvc-vs-webflux-elastic-perf-2: heap memory usage during the Spring MVC tests]

The second shows metaspace.

[Figure spring-mvc-vs-webflux-elastic-perf-3: metaspace usage during the Spring MVC tests]

Here are the equivalent graphs for the reactive API tests. The heap memory usage is a little higher than in the previous tests, although in general Netty requires less memory than Tomcat (50 MB instead of 100 MB before running the test).

[Figure elastic-perf-6: heap memory usage during the Spring WebFlux tests]

The metaspace usage is a little lower than for the synchronous API tests (60 MB vs 75 MB).

[Figure spring-mvc-vs-webflux-elastic-perf-7: metaspace usage during the Spring WebFlux tests]

And now the processing time test results. They may be a little unexpected: in fact, there is no big difference between the synchronous and reactive tests. One thing should be explained here: the method findByName returns a smaller set of employees than findByOrganizationName, which is why it is much faster than the search by organization name.

[Figure spring-mvc-vs-webflux-elastic-perf-4: response times for the Spring MVC tests]

As I mentioned before, the results are pretty much the same, especially for the POST method. The result for findByName is 6.2s, instead of 7.1s for synchronous calls, which gives a difference of around 15%. The test for findByOrganizationName failed due to exceeding the 5s timeout defined for every single run of the test method. It seems that processing around 3-4k objects in a single response significantly slowed down the sample application based on Spring WebFlux and reactive Elasticsearch repositories.

[Figure elastic-perf-5: response times for the Spring WebFlux tests]

Summary

I won't discuss the results of these tests; I'll leave the conclusions to you. The source code repository is available on GitHub: https://github.com/piomin/sample-spring-elasticsearch. The master branch contains the version used for the Spring MVC tests, while the reactive branch contains the version for the Spring WebFlux tests.

Elasticsearch with Spring Boot
https://piotrminkowski.com/2019/03/29/elasticsearch-with-spring-boot/
Fri, 29 Mar 2019

Elasticsearch is a full-text search engine designed especially for working with large data sets. Following this description, it is a natural choice for storing and searching application logs. Together with Logstash and Kibana, it is part of a powerful solution called the Elastic Stack, which has already been described in some of my previous articles.
Keeping application logs is not the only use case for Elasticsearch. It is often used as a secondary database for an application that has a primary relational database. Such an approach can be especially useful if you have to perform a full-text search over a large data set, or just store many historical records that are no longer modified by the application. Of course, there are always questions about the advantages and disadvantages of that approach.
When you are working with two different data sources that contain the same data, you have to first think about synchronization. You have several options. Depending on the relational database vendor, you can leverage binary or transaction logs, which contain the history of SQL updates. This approach requires some middleware that reads logs and then puts data to Elasticsearch. You can always move the whole responsibility to the database side (trigger) or into the Elasticsearch side (JDBC plugins).
No matter how you import your data into Elasticsearch, you have to consider another problem: the data structure. You probably have data distributed between a few tables in your relational database. If you would like to take advantage of Elasticsearch, you should store it as a single type. This forces you to keep redundant data, which results in larger disk space usage. Of course, that effect is acceptable if the queries work faster than equivalent queries against a relational database.
Ok, let’s proceed to the example after that long introduction. Spring Boot provides an easy way to interact with Elasticsearch through Spring Data repositories.

1. Enabling Elasticsearch support in Spring Boot

As is customary with Spring Boot, we don't have to provide any additional beans in the context to enable support for Elasticsearch. We just need to include the following dependency in our pom.xml:


<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-data-elasticsearch</artifactId>
</dependency>

By default, the application tries to connect to Elasticsearch on localhost. If we use another target URL, we need to override it in the configuration settings. Here's the fragment of our application.yml file that overrides the default cluster name and address with the address of Elasticsearch started in a Docker container:

spring:
  data:
    elasticsearch:
      cluster-name: docker-cluster
      cluster-nodes: 192.168.99.100:9300

The health status of the Elasticsearch connection may be exposed by the application through the Spring Boot Actuator health endpoint. First, you need to include the following Maven dependency:

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

The health check is enabled by default, and the Elasticsearch check is auto-configured. However, this verification is performed via the Elasticsearch REST API client. In that case, we need to override the spring.elasticsearch.rest.uris property, responsible for setting the address used by the REST client:

spring:
  elasticsearch:
    rest:
      uris: http://192.168.99.100:9200

2. Running Elasticsearch on Docker

For our tests we need a single-node Elasticsearch instance running in development mode. As usual, we will use a Docker container. Here's the command that starts the container and exposes it on ports 9200 and 9300:

$ docker run -d --name elasticsearch -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" elasticsearch:6.6.2

3. Building Spring Data Repositories

To enable Elasticsearch repositories we just need to annotate the main or configuration class with @EnableElasticsearchRepositories:

@SpringBootApplication
@EnableElasticsearchRepositories
public class SampleApplication { ... }

The next step is to create a repository interface that extends CrudRepository. It provides some basic operations like save or findById. If you would like to have some additional find methods, you should define new methods inside the interface following the Spring Data naming convention.

public interface EmployeeRepository extends CrudRepository<Employee, Long> {

    List<Employee> findByOrganizationName(String name);
    List<Employee> findByName(String name);

}

4. Building Document with Spring Data Elasticsearch

Our relational structure of entities is flattened into a single Employee object that contains the related objects (Organization, Department). You can compare this approach to creating a view over a group of related tables in an RDBMS. In Spring Data Elasticsearch nomenclature, a single object is stored as a document, so you need to annotate your object with @Document. You should also set the name of the target Elasticsearch index, the type, and the id. Additional mappings can be configured with the @Field annotation.

@Document(indexName = "sample", type = "employee")
public class Employee {

    @Id
    private Long id;
    @Field(type = FieldType.Object)
    private Organization organization;
    @Field(type = FieldType.Object)
    private Department department;
    private String name;
    private int age;
    private String position;
	
    // Getters and Setters ...

}

5. Initial import to Elasticsearch

As I mentioned in the preface, the main reason you may decide to use Elasticsearch is the need to work with large data sets. Therefore it is desirable to fill our test Elasticsearch node with many documents. If you would like to insert many documents in one step, you should definitely use the Bulk API. The Bulk API makes it possible to perform many index/delete operations in a single API call, which can greatly increase the indexing speed.
The bulk operations may be performed with the Spring Data ElasticsearchTemplate bean, which is also auto-configured by Spring Boot. The template provides a bulkIndex method that takes a list of index queries as an input parameter. Here's the implementation of a bean that inserts sample test data on application startup:

public class SampleDataSet {

    private static final Logger LOGGER = LoggerFactory.getLogger(SampleDataSet.class);
    private static final String INDEX_NAME = "sample";
    private static final String INDEX_TYPE = "employee";

    @Autowired
    EmployeeRepository repository;
    @Autowired
    ElasticsearchTemplate template;

    @PostConstruct
    public void init() {
        for (int i = 0; i < 10000; i++) {
            bulk(i);
        }
    }

    public void bulk(int ii) {
        try {
            if (!template.indexExists(INDEX_NAME)) {
                template.createIndex(INDEX_NAME);
            }
            ObjectMapper mapper = new ObjectMapper();
            List<IndexQuery> queries = new ArrayList<>();
            List<Employee> employees = employees();
            for (Employee employee : employees) {
                IndexQuery indexQuery = new IndexQuery();
                indexQuery.setId(employee.getId().toString());
                indexQuery.setSource(mapper.writeValueAsString(employee));
                indexQuery.setIndexName(INDEX_NAME);
                indexQuery.setType(INDEX_TYPE);
                queries.add(indexQuery);
            }
            if (queries.size() > 0) {
                template.bulkIndex(queries);
            }
            template.refresh(INDEX_NAME);
            LOGGER.info("BulkIndex completed: {}", ii);
        } catch (Exception e) {
            LOGGER.error("Error bulk index", e);
        }
    }
	
	// sample data set implementation ...
	
}

If you don't need to insert data on startup, you can disable that process by setting the property initial-import.enabled to false. Here's the declaration of the SampleDataSet bean:

@Bean
@ConditionalOnProperty("initial-import.enabled")
public SampleDataSet dataSet() {
	return new SampleDataSet();
}
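The matching property can then be set in application.yml; here's a minimal fragment (the property name comes from the @ConditionalOnProperty annotation above, the rest is a sketch):

```yaml
# Controls the SampleDataSet bean; set to false to skip the bulk import on startup
initial-import:
  enabled: true
```

The same flag can also be passed on the command line following the usual Spring Boot convention, e.g. `--initial-import.enabled=false`.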

6. Viewing data and running queries

Assuming that you have already started the sample application, the bean responsible for bulk indexing was not disabled, and you had enough patience to wait some hours until all the data was inserted into your Elasticsearch node, it now contains 100M documents of the employee type. It is worth displaying some information about your cluster. You can do it using Elasticsearch queries, or you can download one of the available GUI tools, for example ElasticHQ. Fortunately, ElasticHQ is also available as a Docker container. Execute the following command to start a container with ElasticHQ:

$ docker run -d --name elastichq -p 5000:5000 elastichq/elasticsearch-hq

After startup, the ElasticHQ GUI can be accessed in a web browser on port 5000. Its web console provides basic information about the cluster and indices, and allows you to perform queries. You only need to enter the Elasticsearch node address, and you will be redirected to the main dashboard with statistics. Here's the main dashboard of ElasticHQ.

[Figure spring-boot-elasticsearch-3: the ElasticHQ main dashboard]

As you can see, we have a single index called sample divided into 5 shards. That is the default value provided by Spring Data's @Document annotation, which can be overridden with the shards attribute. We can navigate to the index management panel by clicking on the index. There we can perform some operations on the index, like clearing the cache or refreshing the index, and take a look at the statistics for all shards.

[Figure spring-boot-elasticsearch-4: the ElasticHQ index management panel]
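The shards override mentioned above is set directly on the document class. A sketch, with an illustrative value; note that the setting only takes effect when Spring Data creates the index:

```java
// Illustrative shard count; applied when the "sample" index is (re)created
@Document(indexName = "sample", type = "employee", shards = 10)
public class Employee {
    // fields as before ...
}
```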

For the current test purposes, I have around 25M documents of the Employee type (around 3 GB of space). We can execute some test queries. I have exposed two endpoints for searching: by employee name, GET /employees/{name}, and by organization name, GET /employees/organization/{organizationName}. The results are not overwhelming; I think we could achieve the same results with a relational database holding the same amount of data.

[Figure elastic-2: results of the test queries]

7. Testing the Spring Boot Elasticsearch application

Ok, we have already finished development and performed some manual tests on the large data set. Now it's time to create some integration tests that run at build time. We can use Testcontainers, a library that allows us to automatically start Docker containers with databases during JUnit tests. For more about this library, you may refer to its site, https://www.testcontainers.org, or to one of my previous articles: Testing Spring Boot Integration with Vault and Postgres using Testcontainers Framework. Fortunately, Testcontainers supports Elasticsearch. To enable it in the test scope, you first need to include the following dependency in your pom.xml:

<dependency>
	<groupId>org.testcontainers</groupId>
	<artifactId>elasticsearch</artifactId>
	<version>1.11.1</version>
	<scope>test</scope>
</dependency>

The next step is to define a @ClassRule or @Rule bean that points to the Elasticsearch container. It is started automatically before the test class or before each test, depending on the annotation you use. The exposed port number is generated automatically, so you need to retrieve it and set it as the value of the spring.data.elasticsearch.cluster-nodes property. Here's the full implementation of our JUnit integration test:

@RunWith(SpringRunner.class)
@SpringBootTest
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class EmployeeRepositoryTest {

    @ClassRule
    public static ElasticsearchContainer container = new ElasticsearchContainer();
    @Autowired
    EmployeeRepository repository;

    @BeforeClass
    public static void before() {
        System.setProperty("spring.data.elasticsearch.cluster-nodes", container.getContainerIpAddress() + ":" + container.getMappedPort(9300));
    }

    @Test
    public void testAdd() {
        Employee employee = new Employee();
        employee.setId(1L);
        employee.setName("John Smith");
        employee.setAge(33);
        employee.setPosition("Developer");
        employee.setDepartment(new Department(1L, "TestD"));
        employee.setOrganization(new Organization(1L, "TestO", "Test Street No. 1"));
        employee = repository.save(employee);
        Assert.assertNotNull(employee);
    }

    @Test
    public void testFindAll() {
        Iterable<Employee> employees = repository.findAll();
        Assert.assertTrue(employees.iterator().hasNext());
    }

    @Test
    public void testFindByOrganization() {
        List<Employee> employees = repository.findByOrganizationName("TestO");
        Assert.assertTrue(employees.size() > 0);
    }

    @Test
    public void testFindByName() {
        List<Employee> employees = repository.findByName("John Smith");
        Assert.assertTrue(employees.size() > 0);
    }

}

Summary

In this article you have learned how to:

  • Run your local instance of Elasticsearch with Docker
  • Integrate Spring Boot application with Elasticsearch
  • Use Spring Data Repositories for saving data and performing simple queries
  • Use Spring Data ElasticsearchTemplate to perform bulk operations on an index
  • Use ElasticHQ for monitoring your cluster
  • Build automatic integration tests for Elasticsearch with Testcontainers

The sample application source code is as usual available on GitHub in repository sample-spring-elasticsearch.
