Spring Boot Development Mode with Testcontainers and Docker
Piotr's TechBlog, Fri, 26 May 2023
https://piotrminkowski.com/2023/05/26/spring-boot-development-mode-with-testcontainers-and-docker/

In this article, you will learn how to use Spring Boot's built-in support for Testcontainers and Docker Compose to run external services in development mode. Spring Boot introduces these features in its latest version, 3.1. Of course, you could already take advantage of Testcontainers in your Spring Boot app tests. However, the ability to run external databases, message brokers, or other external services on app startup was something I was waiting for. Especially since a competing framework, Quarkus, already provides a similar feature called Dev Services, which I find very useful during development. Also, we should not forget about another exciting feature: integration with Docker Compose. Let's begin.

If you are looking for more articles related to Spring Boot 3, you can refer to the following one about microservices with Spring Cloud.

Source Code

If you would like to try it by yourself, you may always take a look at my source code. Since I often use Testcontainers, you can find examples in several of my repositories. Here's a list of the repositories we will use today:

You can clone them and then follow my instructions to see how to leverage the Spring Boot built-in support for Testcontainers and Docker Compose in development mode.

Use Testcontainers in Tests

Let’s start with the standard usage example. The first repository has a single Spring Boot app that connects to the Mongo database. In order to build automated tests we have to include the following Maven dependencies:

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-test</artifactId>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>org.testcontainers</groupId>
  <artifactId>mongodb</artifactId>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>org.testcontainers</groupId>
  <artifactId>junit-jupiter</artifactId>
  <scope>test</scope>
</dependency>

Now, we can create the tests. We need to annotate our test class with @Testcontainers. Then, we have to declare a MongoDBContainer field annotated with @Container. Before Spring Boot 3.1, we would have to use DynamicPropertyRegistry to set the Mongo address automatically generated by Testcontainers.

@SpringBootTest(webEnvironment = 
   SpringBootTest.WebEnvironment.RANDOM_PORT)
@Testcontainers
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
public class PersonControllerTest {

   @Container
   static MongoDBContainer mongodb = 
      new MongoDBContainer("mongo:5.0");

   @DynamicPropertySource
   static void registerMongoProperties(DynamicPropertyRegistry registry) {
      registry.add("spring.data.mongodb.uri", mongodb::getReplicaSetUrl);
   }

   // ... test methods

}

Fortunately, beginning with Spring Boot 3.1, we can simplify that notation with the @ServiceConnection annotation. Here's the full test implementation with the latest approach. It verifies some REST endpoints exposed by the app.

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@Testcontainers
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
public class PersonControllerTest {

    private static String id;

    @Container
    @ServiceConnection
    static MongoDBContainer mongodb = new MongoDBContainer("mongo:5.0");

    @Autowired
    TestRestTemplate restTemplate;

    @Test
    @Order(1)
    void add() {
        Person p = new Person(null, "Test", "Test", 100, Gender.FEMALE);
        Person personAdded = restTemplate
            .postForObject("/persons", p, Person.class);
        assertNotNull(personAdded);
        assertNotNull(personAdded.getId());
        assertEquals(p.getLastName(), personAdded.getLastName());
        id = personAdded.getId();
    }

    @Test
    @Order(2)
    void findById() {
        Person person = restTemplate
            .getForObject("/persons/{id}", Person.class, id);
        assertNotNull(person);
        assertNotNull(person.getId());
        assertEquals(id, person.getId());
    }

    @Test
    @Order(2)
    void findAll() {
        Person[] persons = restTemplate
            .getForObject("/persons", Person[].class);
        assertEquals(6, persons.length);
    }

}

Now, we can build the project with the standard Maven command. Then Testcontainers will automatically start the Mongo database before the test. Of course, we need to have Docker running on our machine.

$ mvn clean package

Tests run fine. But what happens if we would like to run our app locally for development? We can do it by running the app's main class directly from the IDE or with the mvn spring-boot:run Maven command. Here's our main class:

@SpringBootApplication
@EnableMongoRepositories
public class SpringBootOnKubernetesApp implements ApplicationListener<ApplicationReadyEvent> {

    public static void main(String[] args) {
        SpringApplication.run(SpringBootOnKubernetesApp.class, args);
    }

    @Autowired
    PersonRepository repository;

    @Override
    public void onApplicationEvent(ApplicationReadyEvent applicationReadyEvent) {
        if (repository.count() == 0) {
            repository.save(new Person(null, "XXX", "FFF", 20, Gender.MALE));
            repository.save(new Person(null, "AAA", "EEE", 30, Gender.MALE));
            repository.save(new Person(null, "ZZZ", "DDD", 40, Gender.FEMALE));
            repository.save(new Person(null, "BBB", "CCC", 50, Gender.MALE));
            repository.save(new Person(null, "YYY", "JJJ", 60, Gender.FEMALE));
        }
    }
}

Of course, unless we start the Mongo database, our app won't be able to connect to it. If we use Docker, we first need to execute a docker run command that runs MongoDB and exposes it on a local port.
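
For reference, a typical invocation could look like this (the container name and port mapping are just illustrative):

$ docker run -d --name mongo -p 27017:27017 mongo:5.0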

spring-boot-testcontainers-logs

Use Testcontainers in Development Mode with Spring Boot

Fortunately, with Spring Boot 3.1 we can simplify that process. We don't have to start Mongo before running the app. All we need to do is enable development mode with Testcontainers. Firstly, we should include the following Maven dependency in the test scope:

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-testcontainers</artifactId>
  <scope>test</scope>
</dependency>

Then we need to prepare the @TestConfiguration class with the definition of containers we want to start together with the app. For me, it is just a single MongoDB container as shown below:

@TestConfiguration
public class MongoDBContainerDevMode {

    @Bean
    @ServiceConnection
    MongoDBContainer mongoDBContainer() {
        return new MongoDBContainer("mongo:5.0");
    }

}

After that, we have to "override" the Spring Boot main class. It should have the same name as the main class with the Test suffix. Then we pass the current main method to the SpringApplication.from(...) method. We also need to register the @TestConfiguration class using the with(...) method.

public class SpringBootOnKubernetesAppTest {

    public static void main(String[] args) {
        SpringApplication.from(SpringBootOnKubernetesApp::main)
                .with(MongoDBContainerDevMode.class)
                .run(args);
    }

}

Finally, we can start our “test” main class directly from the IDE or we can just execute the following Maven command:

$ mvn spring-boot:test-run

Once the app starts, you will see that the Mongo container is up and running and the connection to it is established.

Since we are in dev mode, we will also include the Spring Devtools module to automatically restart the app after a source code change.

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-devtools</artifactId>
  <optional>true</optional>
</dependency>

Let's see what happens. Once we make a change in the source code, Spring Devtools restarts the app and the Mongo container. You can verify it in the app logs and also in the list of running Docker containers. As you see, the Testcontainers ryuk container was started initially, a minute ago, while Mongo was restarted together with the app 9 seconds ago.

In order to prevent restarting the container on app restart with Devtools we need to annotate the MongoDBContainer bean with @RestartScope.

@TestConfiguration
public class MongoDBContainerDevMode {

    @Bean
    @ServiceConnection
    @RestartScope
    MongoDBContainer mongoDBContainer() {
        return new MongoDBContainer("mongo:5.0");
    }

}

Now, Devtools just restarts the app without restarting the container.

spring-boot-testcontainers-containers

Sharing Container across Multiple Apps

In the previous example, we had a single app that connects to the database in a single container. Now, we will switch to a repository with some microservices that communicate with each other via a Kafka broker. Let's say I want to develop and test all three apps simultaneously. Of course, our services need the following Maven dependencies:

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-testcontainers</artifactId>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>org.testcontainers</groupId>
  <artifactId>kafka</artifactId>
  <version>1.18.1</version>
  <scope>test</scope>
</dependency>

Then we need to do a very similar thing as before: declare the @TestConfiguration class with the required containers. However, this time we need to make our Kafka container reusable between several apps. In order to do that, we will invoke withReuse(true) on the KafkaContainer. By the way, it is also possible to use Kafka Raft mode instead of ZooKeeper.

@TestConfiguration
public class KafkaContainerDevMode {

    @Bean
    @ServiceConnection
    public KafkaContainer kafka() {
        return new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:7.4.0"))
                .withKraft()
                .withReuse(true);
    }

}
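
Note that withReuse(true) is not enough on its own: Testcontainers also requires reuse to be enabled on the developer machine. This is done with a single entry in the ~/.testcontainers.properties file in your home directory:

testcontainers.reuse.enable=true

Without this flag, each app would still start its own Kafka container.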

Just as before, we have to create a "test" main class that uses the @TestConfiguration class. We will do the same for the two other apps in the repository: payment-service and stock-service.

public class OrderAppTest {

    public static void main(String[] args) {
        SpringApplication.from(OrderApp::main)
                .with(KafkaContainerDevMode.class)
                .run(args);
    }

}

Let's run our three microservices. Just to remind you, it is possible to run the "test" main class directly from the IDE or with the mvn spring-boot:test-run command. As you can see, I ran all three apps.

spring-boot-testcontainers-microservices

Now, if we display a list of running containers, there is only one Kafka broker shared between all the apps.

Use Spring Boot support for Docker Compose

Beginning with version 3.1, Spring Boot provides built-in support for Docker Compose. Let's switch to our last sample repository. It consists of several microservices that connect to the Mongo database and the Netflix Eureka discovery server. We can go to the directory with one of the microservices, e.g. customer-service. Assuming we include the following Maven dependency, Spring Boot looks for a Docker Compose configuration file in the current working directory. Let's activate that mechanism only for a specific Maven profile:

<profiles>
  <profile>
    <id>compose</id>
    <dependencies>
      <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-docker-compose</artifactId>
        <optional>true</optional>
      </dependency>
    </dependencies>
  </profile>
</profiles>

Our goal is to run all the required external services before running the customer-service app. The customer-service app connects to Mongo and Eureka, and calls an endpoint exposed by account-service. Here's the implementation of the REST client that communicates with account-service.

@FeignClient("account-service")
public interface AccountClient {

    @RequestMapping(method = RequestMethod.GET, value = "/accounts/customer/{customerId}")
    List<Account> getAccounts(@PathVariable("customerId") String customerId);

}

We need to prepare the docker-compose.yml file with definitions of all the required containers. As you see, there is the mongo service and two applications, discovery-service and account-service, which use local Docker images.

version: "3.8"
services:
  mongo:
    image: mongo:5.0
    ports:
      - "27017:27017"
  discovery-service:
    image: sample-spring-microservices-advanced/discovery-service:1.0-SNAPSHOT
    ports:
      - "8761:8761"
    healthcheck:
      test: curl --fail http://localhost:8761/eureka/v2/apps || exit 1
      interval: 4s
      timeout: 2s
      retries: 3
    environment:
      SPRING_PROFILES_ACTIVE: docker
  account-service:
    image: sample-spring-microservices-advanced/account-service:1.0-SNAPSHOT
    ports:
      - "8080"
    depends_on:
      discovery-service:
        condition: service_healthy
    links:
      - mongo
      - discovery-service
    environment:
      SPRING_PROFILES_ACTIVE: docker

Before we run the service, let's build the images with our apps. We could also use the built-in Spring Boot mechanism based on Buildpacks, but I ran into some problems with it. Jib works fine in my case.

<profile>
  <id>build-image</id>
  <build>
    <plugins>
      <plugin>
        <groupId>com.google.cloud.tools</groupId>
        <artifactId>jib-maven-plugin</artifactId>
        <version>3.3.2</version>
        <configuration>
          <to>
            <image>sample-spring-microservices-advanced/${project.artifactId}:${project.version}</image>
          </to>
        </configuration>
        <executions>
          <execution>
            <goals>
              <goal>dockerBuild</goal>
            </goals>
            <phase>package</phase>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</profile>

Let's execute the following command in the repository root directory:

$ mvn clean package -Pbuild-image -DskipTests

After a successful build, we can verify the list of available images with the docker images command. As you see, there are two images used in our docker-compose.yml file.

Finally, the only thing you need to do is run the customer-service app. Let's switch to the customer-service directory once again and execute mvn spring-boot:run with the profile that includes the spring-boot-docker-compose dependency:

$ mvn spring-boot:run -Pcompose

As you see, our app locates the docker-compose.yml file.

spring-boot-testcontainers-docker-compose

Once we start our app, it also starts all required containers.
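
The integration can also be tuned with the spring.docker.compose.* properties, e.g. in application.properties (the values below are only illustrative):

# use a compose file from a non-default location
spring.docker.compose.file=./docker/docker-compose.yml
# keep the containers running after the app stops (start-and-stop is the default)
spring.docker.compose.lifecycle-management=start-only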

For example, we can take a look at the Eureka dashboard available at http://localhost:8761. There are two apps registered there. The account-service is running on Docker, while the customer-service has been started locally.

Final Thoughts

Spring Boot 3.1 comes with several improvements in the area of containerization. Especially the ability to run Testcontainers in development mode together with the app was something I was waiting for. I hope this article clarifies how you can take advantage of the latest Spring Boot features for better integration with Testcontainers and Docker Compose.

Java Development on OpenShift with odo
Piotr's TechBlog, Fri, 05 Feb 2021
https://piotrminkowski.com/2021/02/05/java-development-on-openshift-with-odo/

OpenShift Do (odo) is a CLI tool for running applications on OpenShift. In contrast to the oc client, it is a tool aimed at developers. It automates all the things required to deploy your application on OpenShift. Thanks to odo, you can focus on the most important aspect: code. In order to start, you just need to download the latest release from GitHub and add it to your path.

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that, you need to clone my GitHub repository. It contains a simple Spring Boot application that we are going to deploy on OpenShift. Then you should just follow my instructions.

Before we begin with OpenShift odo

It is important to understand some key concepts related to odo before we begin. If you want to deploy a new application, you should create a component. Each component can be run and deployed separately. There are two types of components: Devfile and S2I. The S2I component is based on the Source-To-Image process. Therefore, your application is built on the server side with the S2I builder. With the Devfile component, you run the application with the mvn spring-boot:run command. Here's the list of available components. We will use the java component.

openshift-odo-component-list
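
The list above can be printed with the catalog command (assuming the odo 2.x CLI):

$ odo catalog list components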

You can also check out the list of services by executing the odo catalog list services command. A service is software that your component links to or depends on. However, I won't focus on that feature. Currently, the Service Catalog is deprecated in OpenShift. Instead, you can use operators with odo the same way you would use templates.

The Spring Boot application

Let's take a moment to discuss our sample Spring Boot application. It is a REST-based application which connects to MongoDB. We will use JDK 11 for compilation.

<groupId>pl.piomin.samples</groupId>
<artifactId>sample-spring-boot-on-kubernetes</artifactId>
<version>1.0-SNAPSHOT</version>

<properties>
    <java.version>11</java.version>
</properties>

<dependencies>
   <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-web</artifactId>
   </dependency>
   <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-data-mongodb</artifactId>
   </dependency>
   ...
</dependencies>

The MongoDB connection settings are set inside application.yml. We will use environment variables to provide the credentials and database name. We will set these values in the local configuration using odo.

spring:
  application:
    name: sample-spring-boot-on-kubernetes
  data:
    mongodb:
      uri: mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@mongodb/${MONGO_DATABASE}
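
If you also want the app to start without these variables set (e.g. for a quick local run), Spring's placeholder syntax supports default values; the values after the colon below are just examples:

spring:
  data:
    mongodb:
      uri: mongodb://${MONGO_USERNAME:user}:${MONGO_PASSWORD:pass}@mongodb/${MONGO_DATABASE:test}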

Creating configuration with odo

OK, let's begin by creating a new component. We need to choose the java S2I type and then set the name of our component.

$ odo create java sample-spring-boot

After that, odo creates a config.yaml file inside the .odo directory in the project root. It's pretty simple.

kind: LocalConfig
apiversion: odo.dev/v1alpha1
ComponentSettings:
  Type: java
  SourceLocation: ./
  SourceType: local
  Ports:
  - 8443/TCP
  - 8778/TCP
  - 8080/TCP
  Application: app
  Project: pminkows-workshop
  Name: sample-spring-boot

Then, we will add some additional configuration properties. Firstly, we need to expose our application outside the cluster. The command visible below creates a Route to the Service on OpenShift on port 8080.

$ odo url create --port 8080

In the next step, we add the environment variables responsible for establishing the MongoDB connection to the odo configuration: MONGO_USERNAME, MONGO_PASSWORD, and MONGO_DATABASE. Just to simplify, I created a standalone instance of MongoDB on OpenShift using a template. Here are the values in the mongodb secret.

Let's set the environment variables using the odo config command. Of course, all these settings are still available only in the local configuration.

$ odo config set --env MONGO_USERNAME=userJ5Q
$ odo config set --env MONGO_PASSWORD=UrfgtUKohNOFVqbQ
$ odo config set --env MONGO_DATABASE=sampledb 

Finally, we are ready to deploy our application on OpenShift. To do that, we need to execute the odo push command. If you would like to see the logs from the build, add the --show-log option.

openshift-odo-push

Instead of using environment variables to provide MongoDB connection settings, you may take advantage of the odo link command. This feature helps to connect an odo component to a service or another component. However, to use it, you first need to install the Service Binding Operator on OpenShift. Then you may install a database like MongoDB with an operator and link it to your application component.

Verify S2I build on OpenShift

After running odo push, let's switch to the oc client. Our application is running on OpenShift, as shown below. The same goes for MongoDB.

We can also navigate to the OpenShift Management Console. The environment variables set in the odo configuration have been applied to the DeploymentConfig.

Now, let's say we need to make some changes in our code. Fortunately, we can execute a command that watches for changes in the current component's directory. After detecting a change, the new version of the application is immediately deployed on OpenShift.

$ odo watch
Waiting for something to change in /Users/pminkows/IdeaProjects/sample-spring-boot-on-kubernetes

Using OpenShift odo in Devfile mode

As an alternative to the S2I approach, we may use Devfile mode. In order to do that, we will choose the java-springboot component.

As a result, OpenShift odo creates devfile.yaml in the project root directory.

schemaVersion: 2.0.0
metadata:
  name: java-springboot
  version: 1.1.0
starterProjects:
  - name: springbootproject
    git:
      remotes:
        origin: "https://github.com/odo-devfiles/springboot-ex.git"
components:
  - name: tools
    container:
      image: quay.io/eclipse/che-java11-maven:nightly
      memoryLimit: 768Mi
      mountSources: true
      endpoints:
      - name: '8080-tcp'
        targetPort: 8080
      volumeMounts:
        - name: m2
          path: /home/user/.m2
  - name: m2
    volume:
      size: 3Gi
commands:
  - id: build
    exec:
      component: tools
      commandLine: "mvn clean -Dmaven.repo.local=/home/user/.m2/repository package -Dmaven.test.skip=true"
      group:
        kind: build
        isDefault: true
  - id: run
    exec:
      component: tools
      commandLine: "mvn -Dmaven.repo.local=/home/user/.m2/repository spring-boot:run"
      group:
        kind: run
        isDefault: true
  - id: debug
    exec:
      component: tools
      commandLine: "java -Xdebug -Xrunjdwp:server=y,transport=dt_socket,address=${DEBUG_PORT},suspend=n -jar target/*.jar"
      group:
        kind: debug
        isDefault: true

Of course, all the next steps are the same as for the S2I component.

Install OpenShift Intellij Plugin

If you do not like command-line tools, you may install the IntelliJ OpenShift plugin as well. It uses odo for build and deployment. The good news is that the latest version of this plugin supports odo version 2.0.3.

openshift-odo-intellij

Thanks to that plugin you can, for example, easily create a component with OpenShift odo.

Conclusion

With odo you can easily deploy your application on OpenShift in a few seconds. You may also continuously watch for changes in the code and immediately deploy a new version of the application. Moreover, you don't need to create any Kubernetes YAML manifests.

OpenShift odo is a tool similar to Skaffold. If you would like to compare these two tools for running a Spring Boot application, you may read the following article about Skaffold.

Integration Testing on Kubernetes with JUnit5
Piotr's TechBlog, Tue, 01 Sep 2020
https://piotrminkowski.com/2020/09/01/integration-testing-on-kubernetes-with-junit5/

With Hoverfly you can easily mock HTTP traffic during automated tests. Kubernetes is also based on a REST API. Today, I'm going to show you how to use both these tools together to improve integration testing on Kubernetes.
In the first step, we will build an application that uses the fabric8 Kubernetes Client. We don't have to use it directly; instead, I'm going to include Spring Cloud Kubernetes, which uses the fabric8 client for integration with the Kubernetes API. Moreover, the fabric8 client provides a mock server for integration tests. In the beginning, we will use it, but then I'm going to replace it with Hoverfly. Let's begin!

Source code

The source code is available on GitHub. If you want to clone the repository or just give me a star go here 🙂

Building applications with Spring Cloud Kubernetes

Spring Cloud Kubernetes provides implementations of well-known Spring Cloud components based on the Kubernetes API. It includes a discovery client, load balancer, and property sources support. We should add the following Maven dependency to enable it in our project.

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-kubernetes-all</artifactId>
</dependency>

Our application connects to the Mongo database, exposes a REST API, and communicates with other applications over HTTP. Therefore, we need to include some additional dependencies.

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-openfeign</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-mongodb</artifactId>
</dependency>

The overview of our system is visible in the picture below. We need to mock communication between applications and with Kubernetes API. We will also run an embedded in-memory Mongo database during tests. For more details about building microservices with Spring Cloud Kubernetes read the following article.

integration-testing-on-kubernetes-architecture

Testing API with Kubernetes MockServer

First, we need to include the Spring Boot Test starter, which contains the basic dependencies used for implementing JUnit tests. Since our application connects to Mongo and the Kubernetes API, we should also mock them during the test. Here's the full list of required dependencies.

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>de.flapdoodle.embed</groupId>
    <artifactId>de.flapdoodle.embed.mongo</artifactId>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>io.fabric8</groupId>
    <artifactId>kubernetes-server-mock</artifactId>
    <version>4.10.3</version>
    <scope>test</scope>
</dependency>

Let's discuss what exactly is happening during our test.
(1) First, we enable the fabric8 Kubernetes Client JUnit5 extension in CRUD mode. It means that we can create Kubernetes objects on the mocked server.
(2) Then the KubernetesClient is injected into the test by the JUnit5 extension.
(3) TestRestTemplate is able to call endpoints exposed by the application started during the test.
(4) We need to set the basic properties for the KubernetesClient, like the default namespace name and master URL.
(5) We create a ConfigMap that contains the application.properties file. A ConfigMap named employee is automatically read by the employee application.
(6) In the test methods we use TestRestTemplate to call REST endpoints. We mock the Kubernetes API and run the Mongo database in embedded mode.

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@EnableKubernetesMockClient(crud = true) // (1)
@TestMethodOrder(MethodOrderer.Alphanumeric.class)
class EmployeeAPITest {

    static KubernetesClient client; // (2)

    @Autowired
    TestRestTemplate restTemplate; // (3)

    @BeforeAll
    static void init() {
        System.setProperty(Config.KUBERNETES_MASTER_SYSTEM_PROPERTY,
            client.getConfiguration().getMasterUrl());
        System.setProperty(Config.KUBERNETES_TRUST_CERT_SYSTEM_PROPERTY,
            "true");
        System.setProperty(
            Config.KUBERNETES_AUTH_TRYKUBECONFIG_SYSTEM_PROPERTY, "false");
        System.setProperty(
            Config.KUBERNETES_AUTH_TRYSERVICEACCOUNT_SYSTEM_PROPERTY, "false");
        System.setProperty(Config.KUBERNETES_HTTP2_DISABLE, "true");
        System.setProperty(Config.KUBERNETES_NAMESPACE_SYSTEM_PROPERTY,
            "default"); // (4)
        client.configMaps().inNamespace("default").createNew()
            .withNewMetadata().withName("employee").endMetadata()
            .addToData("application.properties",
                "spring.data.mongodb.uri=mongodb://localhost:27017/test")
            .done(); // (5)
    }

    @Test // (6)
    void addEmployeeTest() {
        Employee employee = new Employee(1L, 1L, "Test", 30, "test");
        employee = restTemplate.postForObject("/", employee, Employee.class);
        Assertions.assertNotNull(employee);
        Assertions.assertNotNull(employee.getId());
    }

    @Test
    void addAndThenFindEmployeeByIdTest() {
        Employee employee = new Employee(1L, 2L, "Test2", 20, "test2");
        employee = restTemplate.postForObject("/", employee, Employee.class);
        Assertions.assertNotNull(employee);
        Assertions.assertNotNull(employee.getId());
        employee = restTemplate
            .getForObject("/{id}", Employee.class, employee.getId());
        Assertions.assertNotNull(employee);
        Assertions.assertNotNull(employee.getId());
    }

    @Test
    void findAllEmployeesTest() {
        Employee[] employees =
            restTemplate.getForObject("/", Employee[].class);
        Assertions.assertEquals(2, employees.length);
    }

    @Test
    void findEmployeesByDepartmentTest() {
        Employee[] employees =
            restTemplate.getForObject("/department/1", Employee[].class);
        Assertions.assertEquals(1, employees.length);
    }

    @Test
    void findEmployeesByOrganizationTest() {
        Employee[] employees =
            restTemplate.getForObject("/organization/1", Employee[].class);
        Assertions.assertEquals(2, employees.length);
    }

}

Integration Testing on Kubernetes with Hoverfly

To test HTTP communication between applications, we usually need an additional tool for mocking APIs. Hoverfly is an ideal solution for such a use case. It is a lightweight, open-source API simulation tool, and not only for REST-based applications. It allows you to write tests in Java and Python, and it also supports JUnit5. You need to include the following dependencies to enable it in your project.

<dependency>
	<groupId>io.specto</groupId>
	<artifactId>hoverfly-java-junit5</artifactId>
	<version>0.13.0</version>
	<scope>test</scope>
</dependency>
<dependency>
	<groupId>io.specto</groupId>
	<artifactId>hoverfly-java</artifactId>
	<version>0.13.0</version>
	<scope>test</scope>
</dependency>

You can enable Hoverfly in your tests with the @ExtendWith annotation. It automatically starts a Hoverfly proxy during a test. Our main goal is to mock the Kubernetes client. To do that, we still need to set some properties inside the @BeforeAll method. The default URL used by KubernetesClient is kubernetes.default.svc. In the first step, we mock the configmaps endpoint and return a predefined Kubernetes ConfigMap with application.properties. The name of the ConfigMap is the same as the application name. We are testing communication from the department application to the employee application.

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@ExtendWith(HoverflyExtension.class)
public class DepartmentAPIAdvancedTest {

    @Autowired
    KubernetesClient client;

    @BeforeAll
    static void setup(Hoverfly hoverfly) {
        System.setProperty(Config.KUBERNETES_TRUST_CERT_SYSTEM_PROPERTY, "true");
        System.setProperty(Config.KUBERNETES_AUTH_TRYKUBECONFIG_SYSTEM_PROPERTY, "false");
        System.setProperty(Config.KUBERNETES_AUTH_TRYSERVICEACCOUNT_SYSTEM_PROPERTY,
            "false");
        System.setProperty(Config.KUBERNETES_HTTP2_DISABLE, "true");
        System.setProperty(Config.KUBERNETES_NAMESPACE_SYSTEM_PROPERTY, "default");
        hoverfly.simulate(dsl(service("kubernetes.default.svc")
            .get("/api/v1/namespaces/default/configmaps/department")
            .willReturn(success().body(json(buildConfigMap())))));
    }

    private static ConfigMap buildConfigMap() {
        return new ConfigMapBuilder().withNewMetadata()
            .withName("department").withNamespace("default")
            .endMetadata()
            .addToData("application.properties",
                "spring.data.mongodb.uri=mongodb://localhost:27017/test")
            .build();
    }
	
    // TESTS ...
	
}

After application startup, we may use TestRestTemplate to call a test endpoint. The endpoint GET /organization/{organizationId}/with-employees retrieves data from the employee application: it finds the department by organization id and then finds all employees assigned to that department. We need to mock the target endpoint using Hoverfly. But before that, we mock the Kubernetes APIs responsible for getting a service and an endpoint by name. The address and port returned by the mocked endpoints must be the same as the address of the target application endpoint.

@Autowired
TestRestTemplate restTemplate;

private final String EMPLOYEE_URL = "employee.default:8080";

@Test
void findByOrganizationWithEmployees(Hoverfly hoverfly) {
    Department department = new Department(1L, "Test");
    department = restTemplate.postForObject("/", department, Department.class);
    Assertions.assertNotNull(department);
    Assertions.assertNotNull(department.getId());

    hoverfly.simulate(
        dsl(service(prepareUrl())
            .get("/api/v1/namespaces/default/endpoints/employee")
            .willReturn(success().body(json(buildEndpoints())))),
        dsl(service(prepareUrl())
            .get("/api/v1/namespaces/default/services/employee")
            .willReturn(success().body(json(buildService())))),
        dsl(service(EMPLOYEE_URL)
            .get("/department/" + department.getId())
            .willReturn(success().body(json(buildEmployees())))));

    Department[] departments = restTemplate
        .getForObject("/organization/{organizationId}/with-employees", Department[].class, 1L);
    Assertions.assertEquals(1, departments.length);
    Assertions.assertEquals(1, departments[0].getEmployees().size());
}

private Service buildService() {
    return new ServiceBuilder().withNewMetadata().withName("employee")
            .withNamespace("default").withLabels(new HashMap<>())
            .withAnnotations(new HashMap<>()).endMetadata().withNewSpec().addNewPort()
            .withPort(8080).endPort().endSpec().build();
}

private Endpoints buildEndpoints() {
    return new EndpointsBuilder().withNewMetadata()
        .withName("employee").withNamespace("default")
        .endMetadata()
        .addNewSubset().addNewAddress()
        .withIp("employee.default").endAddress().addNewPort().withName("http")
        .withPort(8080).endPort().endSubset()
        .build();
}

private List<Employee> buildEmployees() {
    List<Employee> employees = new ArrayList<>();
    Employee employee = new Employee();
    employee.setId("abc123");
    employee.setAge(30);
    employee.setName("Test");
    employee.setPosition("test");
    employees.add(employee);
    return employees;
}

private String prepareUrl() {
    return client.getConfiguration().getMasterUrl()
        .replace("/", "")
        .replace("https:", "");
}

Conclusion

The approach described in this article allows you to create integration tests without running a Kubernetes instance. On the other hand, you could start a single-node Kubernetes instance like Microk8s and deploy your application there. You could as well use an existing cluster and implement your tests with Arquillian Cube, which is able to communicate directly with the Kubernetes API.
Another key point is testing communication between applications. In my opinion, Hoverfly is the best tool for that. It is able to mock all HTTP traffic in a single test. With Hoverfly, fabric8 and Spring Cloud you can improve your integration testing on Kubernetes.

The post Integration Testing on Kubernetes with JUnit5 appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2020/09/01/integration-testing-on-kubernetes-with-junit5/feed/ 0 8688
Running Java Microservices on OpenShift using Source-2-Image https://piotrminkowski.com/2019/01/08/running-java-microservices-on-openshift-using-source-2-image/ https://piotrminkowski.com/2019/01/08/running-java-microservices-on-openshift-using-source-2-image/#comments Tue, 08 Jan 2019 09:06:31 +0000 https://piotrminkowski.wordpress.com/?p=6944 One of the reasons you would prefer OpenShift instead of Kubernetes is the simplicity of running new applications. When working with plain Kubernetes you need to provide an already built image together with the set of descriptor templates used for deploying it. OpenShift introduces Source-2-Image feature used for building reproducible Docker images from application source […]

The post Running Java Microservices on OpenShift using Source-2-Image appeared first on Piotr's TechBlog.

]]>
One of the reasons you would prefer OpenShift over Kubernetes is the simplicity of running new applications. When working with plain Kubernetes, you need to provide an already built image together with a set of descriptor templates used for deploying it. OpenShift introduces the Source-2-Image feature, used for building reproducible Docker images from application source code. With S2I you don't have to provide any Kubernetes YAML templates or build a Docker image by yourself; OpenShift will do it for you. Let's see how it works. The best way to test it locally is via Minishift. But the first step is to prepare the sample application source code.

1. Prepare application code

I have already described how to run your Java applications on Kubernetes in one of my previous articles, Quick Guide to Microservices with Kubernetes, Spring Boot 2.0 and Docker. We will use the same source code now, so you will be able to compare those two different approaches. Our source code is available on GitHub in the repository sample-spring-microservices-new. We will slightly modify the version used in the Kubernetes article by removing the Spring Cloud Kubernetes library and including some additional resources. The current version is available in the branch openshift.
Our sample system consists of three microservices which communicate with each other and use a Mongo database backend. Here's the diagram that illustrates our architecture.

s2i-1

Every microservice is a Spring Boot application, which uses Maven as a build tool. After including spring-boot-maven-plugin it is able to generate a single fat JAR with all dependencies, which is required by the source-2-image builder.

<build>
   <plugins>
      <plugin>
         <groupId>org.springframework.boot</groupId>
         <artifactId>spring-boot-maven-plugin</artifactId>
      </plugin>
   </plugins>
</build>

Every application includes starters for Spring Web, Spring Actuator and Spring Data MongoDB for integration with the Mongo database. We will also include libraries for generating Swagger API documentation, and Spring Cloud OpenFeign for those applications that call REST endpoints exposed by other microservices.

<dependencies>
   <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-web</artifactId>
   </dependency>
   <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-actuator</artifactId>
   </dependency>
   <dependency>
      <groupId>io.springfox</groupId>
      <artifactId>springfox-swagger2</artifactId>
      <version>2.9.2</version>
   </dependency>
   <dependency>
      <groupId>io.springfox</groupId>
      <artifactId>springfox-swagger-ui</artifactId>
      <version>2.9.2</version>
   </dependency>
   <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-data-mongodb</artifactId>
   </dependency>
</dependencies>

Every Spring Boot application exposes REST API for simple CRUD operations on a given resource. The Spring Data repository bean is injected into the controller.

@RestController
@RequestMapping("/employee")
public class EmployeeController {

   private static final Logger LOGGER = LoggerFactory.getLogger(EmployeeController.class);
   
   @Autowired
   EmployeeRepository repository;
   
   @PostMapping("/")
   public Employee add(@RequestBody Employee employee) {
      LOGGER.info("Employee add: {}", employee);
      return repository.save(employee);
   }
   
   @GetMapping("/{id}")
   public Employee findById(@PathVariable("id") String id) {
      LOGGER.info("Employee find: id={}", id);
      return repository.findById(id).get();
   }
   
   @GetMapping("/")
   public Iterable<Employee> findAll() {
      LOGGER.info("Employee find");
      return repository.findAll();
   }
   
   @GetMapping("/department/{departmentId}")
   public List<Employee> findByDepartment(@PathVariable("departmentId") Long departmentId) {
      LOGGER.info("Employee find: departmentId={}", departmentId);
      return repository.findByDepartmentId(departmentId);
   }
   
   @GetMapping("/organization/{organizationId}")
   public List<Employee> findByOrganization(@PathVariable("organizationId") Long organizationId) {
      LOGGER.info("Employee find: organizationId={}", organizationId);
      return repository.findByOrganizationId(organizationId);
   }
   
}
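The EmployeeRepository injected into the controller is not shown in the article. Spring Data derives queries such as findByDepartmentId directly from the method name, by extracting the property name that follows the findBy prefix. A simplified, dependency-free sketch of that name-to-property step (an illustration only, not Spring's actual query parser):

```java
public class DerivedQueryDemo {

    // Spring Data derives the queried property from the method name:
    // findByDepartmentId -> departmentId (simplified sketch of the real parser)
    static String property(String methodName) {
        String rest = methodName.substring("findBy".length());
        return Character.toLowerCase(rest.charAt(0)) + rest.substring(1);
    }

    public static void main(String[] args) {
        System.out.println(property("findByDepartmentId"));
        System.out.println(property("findByOrganizationId"));
    }
}
```

In the real repository these method names become MongoDB queries against the matching document fields, with no implementation code required.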

The application expects to have environment variables pointing to the database name, user and password.

spring:
  application:
    name: employee
  data:
    mongodb:
      uri: mongodb://${MONGO_DATABASE_USER}:${MONGO_DATABASE_PASSWORD}@mongodb/${MONGO_DATABASE_NAME}
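Spring resolves the ${...} placeholders above from environment variables at startup. The substitution itself can be sketched with the JDK's regex API, using hypothetical credential values (a simplification; Spring's real property resolver does much more):

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PlaceholderDemo {

    static final Pattern VAR = Pattern.compile("\\$\\{([A-Z_]+)}");

    // Replace ${NAME} tokens with values from the given map (a stand-in for System.getenv())
    static String resolve(String template, Map<String, String> env) {
        Matcher m = VAR.matcher(template);
        StringBuilder sb = new StringBuilder();
        while (m.find()) {
            m.appendReplacement(sb,
                Matcher.quoteReplacement(env.getOrDefault(m.group(1), m.group(0))));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        String uri = "mongodb://${MONGO_DATABASE_USER}:${MONGO_DATABASE_PASSWORD}"
            + "@mongodb/${MONGO_DATABASE_NAME}";
        // Hypothetical values; in the article they come from the injected mongodb secret
        System.out.println(resolve(uri, Map.of(
            "MONGO_DATABASE_USER", "mongo",
            "MONGO_DATABASE_PASSWORD", "secret",
            "MONGO_DATABASE_NAME", "employee")));
    }
}
```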

Inter-service communication is realized through OpenFeign, a declarative REST client. It is included in the department and organization microservices.

@FeignClient(name = "employee", url = "${microservices.employee.url}")
public interface EmployeeClient {

   @GetMapping("/employee/organization/{organizationId}")
   List<Employee> findByOrganization(@PathVariable("organizationId") String organizationId);
   
}

The address of the target service accessed by the Feign client is set inside application.yml. The communication is realized via OpenShift/Kubernetes services. The name of each service is also injected through an environment variable.


spring:
  application:
    name: organization
  data:
    mongodb:
      uri: mongodb://${MONGO_DATABASE_USER}:${MONGO_DATABASE_PASSWORD}@mongodb/${MONGO_DATABASE_NAME}
microservices:
  employee:
    url: http://${EMPLOYEE_SERVICE}:8080
  department:
    url: http://${DEPARTMENT_SERVICE}:8080

2. Running Minishift

To run Minishift locally you just have to download it from that site, copy minishift.exe (for Windows) to your PATH directory, and start it with the minishift start command. For more details you may refer to my previous article about OpenShift and Java applications, Quick guide to deploying Java apps on OpenShift. The version of Minishift used while writing this article is 1.29.0.
After starting Minishift we need to run some additional oc commands to enable source-2-image for Java apps. First, we grant some privileges to the admin user so it can access the openshift project. In this project OpenShift stores all the built-in templates and image streams used, for example, as S2I builders. Let's begin by enabling the admin-user addon.

$ minishift addons apply admin-user

Thanks to that addon we are able to log in to Minishift as a cluster admin. Now, we can grant the cluster-admin role to the admin user.

$ oc login -u system:admin
$ oc adm policy add-cluster-role-to-user cluster-admin admin
$ oc login -u admin -p admin

After that, you can log in to the web console using the admin/admin credentials. You should be able to see the openshift project. That is not all: the image used for building runnable Java apps (openjdk18-openshift) is not available by default on Minishift. We can import it manually from the Red Hat registry using the oc import-image command, or just enable and apply the xpaas plugin. I prefer the second option.

$ minishift addons apply xpaas

Now, you can go to the Minishift web console (for me available at https://192.168.99.100:8443), select the openshift project, and then navigate to Builds -> Images. You should see the image stream redhat-openjdk18-openshift on the list.

s2i-2

The newest version of that image is 1.3. Surprisingly, it is not the newest version on OpenShift Container Platform, where you have version 1.5. However, the newest versions of the builder images have been moved to registry.redhat.io, which requires authentication.

3. Deploying Java app using S2I

We are finally able to deploy our app on Minishift with the S2I builder. The application source code is ready, and so is the Minishift instance. The first step is to deploy an instance of MongoDB. It is very easy with OpenShift, because the Mongo template is available in the built-in service catalog. We can provide our own configuration settings or leave the default values. What's important for us, OpenShift generates secrets, by default available under the name mongodb.

s2i-3

The S2I builder image provided by OpenShift may be used through the image stream redhat-openjdk18-openshift. This image is intended for Maven-based Java standalone projects that are run via a main class, for example Spring Boot applications. If you do not provide any builder when creating a new app, the application type is auto-detected by OpenShift, and Java source code will be deployed on a WildFly server. The current version of the Java S2I builder image supports OpenJDK 1.8, Jolokia 1.3.5, and Maven 3.3.9-2.8.
Let's create our first application on OpenShift. We begin with the employee microservice. Under normal circumstances each microservice would be located in a separate Git repository. In our sample all of them are placed in a single repository, so we provide the location of the current app by setting the --context-dir parameter. We also override the default branch to openshift, which has been created for the purposes of this article.

$ oc new-app redhat-openjdk18-openshift:1.3~https://github.com/piomin/sample-spring-microservices-new.git#openshift --name=employee --context-dir=employee-service

All our microservices are connected to the Mongo database, so we also have to inject connection settings and credentials into the application pod. It can be achieved by injecting the mongodb secret into the BuildConfig object.

$ oc set env bc/employee --from="secret/mongodb" --prefix=MONGO_

BuildConfig is one of the OpenShift objects created after running the oc new-app command. The command also creates a DeploymentConfig with the deployment definition, a Service, and an ImageStream with the newest Docker image of the application. After creating the application, a new build is run. First, it downloads the source code from the Git repository, then it builds it using Maven, assembles the build results into a Docker image, and finally saves the image in the registry.
Now, we can create the next application: department. For simplification, all three microservices connect to the same database, which is not recommended under normal circumstances. Here, the only difference between the department and employee apps is the EMPLOYEE_SERVICE environment variable set as a parameter of the oc new-app command.

$ oc new-app redhat-openjdk18-openshift:1.3~https://github.com/piomin/sample-spring-microservices-new.git#openshift --name=department --context-dir=department-service -e EMPLOYEE_SERVICE=employee 

The same as before, we also inject the mongodb secret into the BuildConfig object.

$ oc set env bc/department --from="secret/mongodb" --prefix=MONGO_

A build starts just after creating a new application, but we can also start it manually by executing the following command.

$ oc start-build department

Finally, we deploy the last microservice. Here are the appropriate commands.

$ oc new-app redhat-openjdk18-openshift:1.3~https://github.com/piomin/sample-spring-microservices-new.git#openshift --name=organization --context-dir=organization-service -e EMPLOYEE_SERVICE=employee -e DEPARTMENT_SERVICE=department
$ oc set env bc/organization --from="secret/mongodb" --prefix=MONGO_

4. A deep look into the created OpenShift objects

The list of builds may be displayed in the web console under the Builds -> Builds section. As you can see in the picture below, there are three BuildConfig objects available, one for each application. The same list can be displayed with the oc get bc command.

s2i-4

You can take a look at the build history by selecting one of the elements from the list. You can also start a new build by clicking the Start Build button as shown below.

s2i-5

We can always display the YAML configuration file with the BuildConfig definition. But it is also possible to perform a similar action using the web console. The following picture shows the list of environment variables injected from the mongodb secret into the BuildConfig object.

s2i-6

Every build generates a Docker image with the application and saves it in the Minishift internal registry, which is available at 172.30.1.1:5000. The list of available image streams can be found under the Builds -> Images section.

s2i-7

Every application is automatically exposed via services on ports 8080 (HTTP), 8443 (HTTPS) and 8778 (Jolokia). You can also expose these services outside Minishift by creating an OpenShift Route with the oc expose command.

s2i-8

5. Testing the sample system

To proceed with the tests, we should first expose our microservices outside Minishift. To do that, just run the following commands.

$ oc expose svc employee
$ oc expose svc department
$ oc expose svc organization

After that, we can access the applications at the address http://${APP_NAME}-${PROJ_NAME}.${MINISHIFT_IP}.nip.io as shown below.

s2i-9

Each microservice provides Swagger2 API documentation available on the swagger-ui.html page. Thanks to that, we can easily test every single endpoint exposed by the service.

s2i-10

It's worth noticing that every application makes use of three approaches to inject environment variables into the pod:

  1. The application stores its version number in the source code repository inside the .s2i/environment file. The S2I builder reads all the properties defined inside that file and sets them as environment variables for the builder pod, and then for the application pod. Our property name is VERSION, which is injected using Spring's @Value and set for the Swagger API (the code is visible below).
  2. I have already set the names of dependent services as env vars while executing the oc new-app command for the department and organization apps.
  3. I have also injected the MongoDB secret into every BuildConfig object using the oc set env command.

@Value("${VERSION}")
String version;

public static void main(String[] args) {
   SpringApplication.run(DepartmentApplication.class, args);
}

@Bean
public Docket swaggerApi() {
   return new Docket(DocumentationType.SWAGGER_2)
      .select()
         .apis(RequestHandlerSelectors.basePackage("pl.piomin.services.department.controller"))
         .paths(PathSelectors.any())
      .build()
      .apiInfo(new ApiInfoBuilder().version(version).title("Department API").description("Documentation Department API v" + version).build());
}

Conclusion

In this article I showed you that deploying your applications on OpenShift can be very simple. You don't have to create any YAML descriptor files or build Docker images by yourself to run your app; it is built directly from your source code. You can compare it with the deployment on Kubernetes described in one of my previous articles, Quick Guide to Microservices with Kubernetes, Spring Boot 2.0 and Docker.

The post Running Java Microservices on OpenShift using Source-2-Image appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2019/01/08/running-java-microservices-on-openshift-using-source-2-image/feed/ 1 6944
Reactive Microservices with Spring WebFlux and Spring Cloud https://piotrminkowski.com/2018/05/04/reactive-microservices-with-spring-webflux-and-spring-cloud/ https://piotrminkowski.com/2018/05/04/reactive-microservices-with-spring-webflux-and-spring-cloud/#comments Fri, 04 May 2018 09:32:45 +0000 https://piotrminkowski.wordpress.com/?p=6475 I have already described Spring reactive support about one year ago in the article Reactive microservices with Spring 5. At that time the project Spring WebFlux was under active development. Now after the official release of Spring 5 it is worth to take a look at the current version of it. Moreover, we will try […]

The post Reactive Microservices with Spring WebFlux and Spring Cloud appeared first on Piotr's TechBlog.

]]>
I have already described Spring's reactive support about one year ago in the article Reactive microservices with Spring 5. At that time the Spring WebFlux project was under active development. Now, after the official release of Spring 5, it is worth taking a look at its current version. Moreover, we will try to put our reactive microservices inside the Spring Cloud ecosystem, which contains such elements as service discovery with Eureka, load balancing with Spring Cloud Commons' @LoadBalanced, and an API gateway using Spring Cloud Gateway (also based on WebFlux and Netty). We will also check out Spring's reactive support for NoSQL databases using the example of the Spring Data Reactive Mongo project.

Here’s the figure that illustrates an architecture of our sample system consisting of two microservices, discovery server, gateway and MongoDB databases. The source code is as usual available on GitHub in sample-spring-cloud-webflux repository.

reactive-1

Let’s describe the further steps on the way to create the system illustrated above.

Step 1. Building a reactive application using Spring WebFlux

To enable the Spring WebFlux library for the project we should include the spring-boot-starter-webflux starter in the dependencies. It pulls in some dependent libraries like Reactor and the Netty server.

<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-webflux</artifactId>
</dependency>

The REST controller looks pretty similar to a controller defined for synchronous web services. The only difference is the type of the returned objects: instead of a single object we return an instance of class Mono, and instead of a list we return an instance of class Flux. Thanks to Spring Data Reactive Mongo, we don't have to do anything more than call the needed method on the repository bean.


@RestController
public class AccountController {

   private static final Logger LOGGER = LoggerFactory.getLogger(AccountController.class);

   @Autowired
   private AccountRepository repository;

   @GetMapping("/customer/{customer}")
   public Flux<Account> findByCustomer(@PathVariable("customer") String customerId) {
      LOGGER.info("findByCustomer: customerId={}", customerId);
      return repository.findByCustomerId(customerId);
   }

   @GetMapping
   public Flux<Account> findAll() {
      LOGGER.info("findAll");
      return repository.findAll();
   }

   @GetMapping("/{id}")
   public Mono<Account> findById(@PathVariable("id") String id) {
      LOGGER.info("findById: id={}", id);
      return repository.findById(id);
   }

   @PostMapping
   public Mono<Account> create(@RequestBody Account account) {
      LOGGER.info("create: {}", account);
      return repository.save(account);
   }

}
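Mono and Flux are Reactive Streams publishers (at most one element vs. a stream of elements). The JDK ships the same Publisher/Subscriber contract in java.util.concurrent.Flow, so the push-based delivery that WebFlux relies on can be illustrated without Reactor (a minimal stdlib sketch, not WebFlux itself):

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.SubmissionPublisher;

public class FlowDemo {
    public static void main(String[] args) {
        List<String> received = new CopyOnWriteArrayList<>();
        SubmissionPublisher<String> publisher = new SubmissionPublisher<>();
        // consume() subscribes and returns a future that completes after onComplete
        CompletableFuture<Void> done = publisher.consume(received::add);
        // Items are pushed to the subscriber asynchronously, like a Flux emitting elements
        publisher.submit("acc-1");
        publisher.submit("acc-2");
        publisher.close();
        done.join();
        System.out.println(received);
    }
}
```

A Flux plays the role of the publisher here, while the framework (Netty plus WebFlux) acts as the subscriber that writes each element to the HTTP response as it arrives.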

Step 2. Integrating the application with a database using Spring Data Reactive Mongo

The integration between the application and the database is also very simple to implement. First, we need to include the spring-boot-starter-data-mongodb-reactive starter in the project dependencies.

<dependency>
  <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-data-mongodb-reactive</artifactId>
</dependency>

The support for reactive Mongo repositories is automatically enabled after including the starter. The next step is to declare an entity with ORM mappings. The following class is also returned as a response by AccountController.

@Document
public class Account {

   @Id
   private String id;
   private String number;
   private String customerId;
   private int amount;

   ...

}

Finally, we may create a repository interface that extends ReactiveCrudRepository. It follows the patterns implemented by Spring Data JPA and provides some basic methods for CRUD operations. It also allows you to define methods with names that are automatically mapped to queries. The only difference in comparison with standard Spring Data JPA repositories is in the method signatures: the objects are wrapped in Mono and Flux.

public interface AccountRepository extends ReactiveCrudRepository<Account, String> {

   Flux<Account> findByCustomerId(String customerId);

}

In this example I used a Docker container to run MongoDB locally. Because I run Docker on Windows using Docker Toolbox, the default address of the Docker machine is 192.168.99.100. Here's the data source configuration in the application.yml file.

spring:
  data:
    mongodb:
      uri: mongodb://192.168.99.100/test

Step 3. Enabling service discovery using Eureka

Integration with Spring Cloud Eureka is pretty much the same as for synchronous REST microservices. To enable the discovery client, we should first include the spring-cloud-starter-netflix-eureka-client starter in the project dependencies.

<dependency>
   <groupId>org.springframework.cloud</groupId>
   <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
</dependency>

Then we have to enable it using the @EnableDiscoveryClient annotation.

@SpringBootApplication
@EnableDiscoveryClient
public class AccountApplication {

   public static void main(String[] args) {
      SpringApplication.run(AccountApplication.class, args);
   }

}

The microservice will automatically register itself in Eureka. Of course, we may run more than one instance of each service. Here's a screenshot of the Eureka Dashboard (http://localhost:8761) after running two instances of account-service and a single instance of customer-service. I won't go into the details of running the application with an embedded Eureka server; you may refer to my previous article for details: Quick Guide to Microservices with Spring Boot 2.0, Eureka and Spring Cloud. The Eureka server is available as the discovery-service module.

spring-reactive

Step 4. Inter-service communication between reactive microservices with WebClient

Inter-service communication is realized with the WebClient from the Spring WebFlux project. The same as for RestTemplate, you should annotate it with Spring Cloud Commons' @LoadBalanced. It enables integration with service discovery and load balancing using the Netflix OSS Ribbon client. So, the first step is to declare a client builder bean with the @LoadBalanced annotation.

@Bean
@LoadBalanced
public WebClient.Builder loadBalancedWebClientBuilder() {
   return WebClient.builder();
}

Then we may inject WebClient.Builder into the REST controller. Communication with account-service is implemented inside GET /{id}/with-accounts, where first we search for the customer entity using a reactive Spring Data repository. It returns a Mono, while the WebClient returns a Flux. Now, our main goal is to merge those two publishers and return a single Mono with the list of accounts taken from the Flux, without blocking the stream. The following fragment of code illustrates how I used WebClient to communicate with the other microservice, and then merged the response and the repository result into a single Mono. This merge could probably be done in a more "elegant" way, so feel free to create a pull request with your proposal.

@Autowired
private WebClient.Builder webClientBuilder;

@GetMapping("/{id}/with-accounts")
public Mono<Customer> findByIdWithAccounts(@PathVariable("id") String id) {
   LOGGER.info("findByIdWithAccounts: id={}", id);
   Flux<Account> accounts = webClientBuilder.build().get().uri("http://account-service/customer/{customer}", id).retrieve().bodyToFlux(Account.class);
   return accounts
      .collectList()
      .map(a -> new Customer(a))
      .mergeWith(repository.findById(id))
      .collectList()
      .map(CustomerMapper::map);
}

Step 5. Building API gateway using Spring Cloud Gateway

Spring Cloud Gateway is one of the newest Spring Cloud projects. It is built on top of Spring WebFlux, and thanks to that we may use it as a gateway to our sample system based on reactive microservices. Like Spring WebFlux applications, it runs on an embedded Netty server. To enable it for a Spring Boot application, just include the following dependency in your project.

<dependency>
   <groupId>org.springframework.cloud</groupId>
   <artifactId>spring-cloud-starter-gateway</artifactId>
</dependency>

We should also enable a discovery client so the gateway can fetch the list of registered microservices. However, there is no need to register the gateway application itself in Eureka. To disable registration, set the eureka.client.registerWithEureka property to false inside the application.yml file.

@SpringBootApplication
@EnableDiscoveryClient
public class GatewayApplication {

   public static void main(String[] args) {
      SpringApplication.run(GatewayApplication.class, args);
   }

}

By default, Spring Cloud Gateway does not enable integration with service discovery. To enable it, we should set the spring.cloud.gateway.discovery.locator.enabled property to true. Now, the last thing to do is the configuration of the routes. Spring Cloud Gateway provides two types of components that may be configured inside routes: filters and predicates. Predicates are used for matching HTTP requests to a route, while filters can be used to modify requests and responses before or after sending the downstream request. Here's the full configuration of the gateway. It enables the service discovery locator and defines two routes based on entries in the service registry. We use the Path Route Predicate factory for matching the incoming requests, and the RewritePath GatewayFilter factory for modifying the requested path to adapt it to the format exposed by the downstream services (endpoints are exposed under the path /, while the gateway exposes them under the paths /account and /customer).

spring:
  cloud:
    gateway:
      discovery:
        locator:
          enabled: true
      routes:
      - id: account-service
        uri: lb://account-service
        predicates:
        - Path=/account/**
        filters:
        - RewritePath=/account/(?<path>.*), /$\{path}
      - id: customer-service
        uri: lb://customer-service
        predicates:
        - Path=/customer/**
        filters:
        - RewritePath=/customer/(?<path>.*), /$\{path}
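The RewritePath filter is essentially a regular-expression replacement with a named capture group (the backslash in /$\{path} only protects the placeholder from property resolution in YAML). Stripped of Spring, the rewrite performed for the account-service route can be sketched in plain Java:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RewritePathDemo {

    // Same pattern as the gateway route: capture everything after /account/ into the "path" group
    static final Pattern ACCOUNT_ROUTE = Pattern.compile("/account/(?<path>.*)");

    static String rewrite(String requestPath) {
        Matcher m = ACCOUNT_ROUTE.matcher(requestPath);
        // Replace the whole path with the captured group, as RewritePath does with /${path}
        return m.matches() ? "/" + m.group("path") : requestPath;
    }

    public static void main(String[] args) {
        // /account/5aec1e86... on the gateway becomes /5aec1e86... on the downstream service
        System.out.println(rewrite("/account/5aec1e86fa656c11d4c655fb"));
    }
}
```

The same idea applies to the customer-service route with the /customer prefix.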

Step 6. Testing reactive microservices with Spring Boot

Before making some tests, let's just recap our sample system. We have two microservices, account-service and customer-service, that use MongoDB as a database. The microservice customer-service calls the endpoint GET /customer/{customer} exposed by account-service. The URL of account-service is taken from Eureka. The whole sample system is hidden behind the gateway, which is available under the address localhost:8090.
Now, the first step is to run MongoDB in a Docker container. After executing the following command, Mongo is available under the address 192.168.99.100:27017.

$ docker run -d --name mongo -p 27017:27017 mongo

Then we may proceed to running discovery-service. Eureka is available under its default address localhost:8761. You may run it using your IDE or just by executing the command java -jar target/discovery-service-1.0-SNAPSHOT.jar. The same rule applies to our sample microservices. However, account-service needs to be run in two instances, so you need to override the default HTTP port when running the second instance using the -Dserver.port VM argument, for example java -jar -Dserver.port=2223 target/account-service-1.0-SNAPSHOT.jar. Finally, after running gateway-service we may add some test data.

$ curl --header "Content-Type: application/json" --request POST --data '{"firstName": "John","lastName": "Scott","age": 30}' http://localhost:8090/customer
{"id": "5aec1debfa656c0b38b952b4","firstName": "John","lastName": "Scott","age": 30,"accounts": null}
$ curl --header "Content-Type: application/json" --request POST --data '{"number": "1234567890","amount": 5000,"customerId": "5aec1debfa656c0b38b952b4"}' http://localhost:8090/account
{"id": "5aec1e86fa656c11d4c655fb","number": "1234567890","customerId": "5aec1debfa656c0b38b952b4","amount": 5000}
$ curl --header "Content-Type: application/json" --request POST --data '{"number": "1234567891","amount": 12000,"customerId": "5aec1debfa656c0b38b952b4"}' http://localhost:8090/account
{"id": "5aec1e91fa656c11d4c655fc","number": "1234567891","customerId": "5aec1debfa656c0b38b952b4","amount": 12000}
$ curl --header "Content-Type: application/json" --request POST --data '{"number": "1234567892","amount": 2000,"customerId": "5aec1debfa656c0b38b952b4"}' http://localhost:8090/account
{"id": "5aec1e99fa656c11d4c655fd","number": "1234567892","customerId": "5aec1debfa656c0b38b952b4","amount": 2000}

To test inter-service communication just call the endpoint GET /customer/{id}/with-accounts on gateway-service. It forwards the request to customer-service, which then calls the endpoint exposed by account-service using the reactive WebClient. The result is visible below.

(Screenshot: reactive-2 – the response returned by GET /customer/{id}/with-accounts)

Conclusion

Since Spring 5 and Spring Boot 2.0 there is a full range of available ways to build a microservices-based architecture. We can build a standard synchronous system using one-to-one communication with the Spring Cloud Netflix project, messaging microservices based on a message broker and the publish/subscribe communication model with Spring Cloud Stream, and finally asynchronous, reactive microservices with Spring WebFlux. The main goal of this article was to show you how to use Spring WebFlux together with Spring Cloud projects in order to provide mechanisms like service discovery, load balancing, or an API gateway for reactive microservices built on top of Spring Boot. Before Spring 5, the lack of support for reactive microservices was one of the drawbacks of the Spring framework, but with Spring WebFlux this is no longer the case. Not only that, we may leverage Spring's reactive support for the most popular NoSQL databases like MongoDB or Cassandra, and easily place our reactive microservices inside one system together with synchronous REST microservices.

The post Reactive Microservices with Spring WebFlux and Spring Cloud appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2018/05/04/reactive-microservices-with-spring-webflux-and-spring-cloud/feed/ 4 6475
Asynchronous Microservices with Vertx https://piotrminkowski.com/2017/08/24/asynchronous-microservices-with-vert-x/ https://piotrminkowski.com/2017/08/24/asynchronous-microservices-with-vert-x/#respond Thu, 24 Aug 2017 10:57:02 +0000 https://piotrminkowski.wordpress.com/?p=5625 Preface I must admit that as soon as I saw Vertx documentation I liked this concept. This may have happened because I had previously used a very similar framework which I used to create simple and lightweight applications exposing REST APIs – Node.js. It is a really fine framework, but has one big disadvantage for […]

The post Asynchronous Microservices with Vertx appeared first on Piotr's TechBlog.

]]>
Preface

I must admit that as soon as I saw the Vert.x documentation I liked this concept. This may have happened because I had previously used a very similar framework for creating simple and lightweight applications exposing REST APIs – Node.js. It is a really fine framework, but it has one big disadvantage for me – it is a JavaScript runtime. It is worth mentioning that Vert.x is polyglot and asynchronous. It supports all the most popular JVM-based languages like Java, Scala, Groovy, Kotlin, and even JavaScript. These are not all of its advantages. It's lightweight, fast, and modular. I was pleasantly surprised when I added the main Vert.x dependencies to my pom.xml and they did not pull in many other dependencies, as is often the case when using the Spring Boot framework.

Well, I will not elaborate on the advantages and key concepts of this toolkit. I think you can read more about it in other articles. The most important thing for us is that using Vert.x we can create high-performance, asynchronous microservices based on the Netty framework. In addition, we can use standardized microservices mechanisms such as service discovery, a configuration server, or circuit breaking.

The sample application source code is available on GitHub. It consists of two modules, account-vertx-service and customer-vertx-service. The customer service retrieves the account service's location from the Consul registry and invokes the account service API. The architecture of the sample solution is visible in the figure below.

(Diagram: vertx – the architecture of the sample solution)

Building Vertx asynchronous services

To be able to create an HTTP service exposing a REST API we need to include the following dependency in pom.xml.


<dependency>
   <groupId>io.vertx</groupId>
   <artifactId>vertx-web</artifactId>
   <version>${vertx.version}</version>
</dependency>

Here's the fragment from the account service where I defined all the API methods. The first step (1) is to declare a Router, which is one of the core concepts of Vert.x-Web. A router takes an HTTP request, finds the first matching route for that request, and passes the request to that route. The next steps (2), (3) add some handlers, for example BodyHandler, which allows you to retrieve request bodies and has been added to the POST method. Then we can begin to define the API methods (4), (5), (6), (7), (8). And finally (9) we start the HTTP server on the port retrieved from the configuration.

Router router = Router.router(vertx); // (1)
router.route("/account/*").handler(ResponseContentTypeHandler.create()); // (2)
router.route(HttpMethod.POST, "/account").handler(BodyHandler.create()); // (3)
router.get("/account/:id").produces("application/json").handler(rc -> { // (4)
   repository.findById(rc.request().getParam("id"), res -> {
      Account account = res.result();
      LOGGER.info("Found: {}", account);
      rc.response().end(account.toString());
   });
});
router.get("/account/customer/:customer").produces("application/json").handler(rc -> { // (5)
   repository.findByCustomer(rc.request().getParam("customer"), res -> {
      List<Account> accounts = res.result();
      LOGGER.info("Found: {}", accounts);
      rc.response().end(Json.encodePrettily(accounts));
   });
});
router.get("/account").produces("application/json").handler(rc -> { // (6)
   repository.findAll(res -> {
      List<Account> accounts = res.result();
      LOGGER.info("Found all: {}", accounts);
      rc.response().end(Json.encodePrettily(accounts));
   });
});
router.post("/account").produces("application/json").handler(rc -> { // (7)
   Account a = Json.decodeValue(rc.getBodyAsString(), Account.class);
   repository.save(a, res -> {
      Account account = res.result();
      LOGGER.info("Created: {}", account);
      rc.response().end(account.toString());
   });
});
router.delete("/account/:id").handler(rc -> { // (8)
   repository.remove(rc.request().getParam("id"), res -> {
      LOGGER.info("Removed: {}", rc.request().getParam("id"));
      rc.response().setStatusCode(200).end(); // end() is required to actually send the response
   });
});
...
vertx.createHttpServer().requestHandler(router::accept).listen(conf.result().getInteger("port")); // (9)

All API methods use a repository object to communicate with the data source. In this case, I decided to use Mongo. Vert.x has a module for interacting with that database, which we need to include as a new dependency.


<dependency>
   <groupId>io.vertx</groupId>
   <artifactId>vertx-mongo-client</artifactId>
   <version>${vertx.version}</version>
</dependency>

The Mongo client, like all other Vert.x modules, works asynchronously. That's why we need to use an AsyncResult handler to pass results from the repository object. To be able to pass a custom object as an AsyncResult we have to annotate it with @DataObject and add a toJson method.

public AccountRepositoryImpl(final MongoClient client) {
   this.client = client;
}

@Override
public AccountRepository save(Account account, Handler<AsyncResult<Account>> resultHandler) {
   JsonObject json = JsonObject.mapFrom(account);
   client.save(Account.DB_TABLE, json, res -> {
      if (res.succeeded()) {
         LOGGER.info("Account created: {}", res.result());
         account.setId(res.result());
         resultHandler.handle(Future.succeededFuture(account));
      } else {
         LOGGER.error("Account not created", res.cause());
         resultHandler.handle(Future.failedFuture(res.cause()));
      }
   });
   return this;
}

@Override
public AccountRepository findAll(Handler<AsyncResult<List<Account>>> resultHandler) {
   client.find(Account.DB_TABLE, new JsonObject(), res -> {
      if (res.succeeded()) {
         List<Account> accounts = res.result().stream().map(it -> new Account(it.getString("_id"), it.getString("number"), it.getInteger("balance"), it.getString("customerId"))).collect(Collectors.toList());
         resultHandler.handle(Future.succeededFuture(accounts));
      } else {
         LOGGER.error("Account not found", res.cause());
         resultHandler.handle(Future.failedFuture(res.cause()));
      }
   });
   return this;
}

Here’s Account model class.

@DataObject
public class Account {

   public static final String DB_TABLE = "account";

   private String id;
   private String number;
   private int balance;
   private String customerId;

   public Account() {

   }

   public Account(String id, String number, int balance, String customerId) {
      this.id = id;
      this.number = number;
      this.balance = balance;
      this.customerId = customerId;
   }

   public Account(JsonObject json) {
      this.id = json.getString("id");
      this.number = json.getString("number");
      this.balance = json.getInteger("balance");
      this.customerId = json.getString("customerId");
   }

   public String getId() {
      return id;
   }

   public void setId(String id) {
      this.id = id;
   }

   public String getNumber() {
      return number;
   }

   public void setNumber(String number) {
      this.number = number;
   }

   public int getBalance() {
      return balance;
   }

   public void setBalance(int balance) {
      this.balance = balance;
   }

   public String getCustomerId() {
      return customerId;
   }

   public void setCustomerId(String customerId) {
      this.customerId = customerId;
   }

   public JsonObject toJson() {
      return JsonObject.mapFrom(this);
   }

   @Override
   public String toString() {
      return Json.encodePrettily(this);
   }

}

Verticles

It is worth saying a few words about running an application written in Vert.x. It is based on verticles. Verticles are chunks of code that get deployed and run by Vert.x. A Vert.x instance maintains a pool of event loop threads (by default, twice the number of available cores). When creating a verticle we have to extend the abstract class AbstractVerticle.


public class AccountServer extends AbstractVerticle {

   @Override
   public void start() throws Exception {
      ...
   }
}

I created two verticles per microservice: the first for the HTTP server and the second for communication with Mongo. Here's the main application method where I deploy both verticles.

public static void main(String[] args) throws Exception {
   Vertx vertx = Vertx.vertx();
   vertx.deployVerticle(new MongoVerticle());
   vertx.deployVerticle(new AccountServer());
}

Well, now we need to obtain, inside the AccountServer verticle, a reference to the service running on MongoVerticle. To achieve it we have to generate proxy classes using the vertx-codegen module.

<dependency>
   <groupId>io.vertx</groupId>
   <artifactId>vertx-service-proxy</artifactId>
   <version>${vertx.version}</version>
</dependency>
<dependency>
   <groupId>io.vertx</groupId>
   <artifactId>vertx-codegen</artifactId>
   <version>${vertx.version}</version>
   <scope>provided</scope>
</dependency>

First, annotate the repository interface with @ProxyGen and all its public methods with @Fluent.

@ProxyGen
public interface AccountRepository {

   @Fluent
   AccountRepository save(Account account, Handler<AsyncResult<Account>> resultHandler);

   @Fluent
   AccountRepository findAll(Handler<AsyncResult<List<Account>>> resultHandler);

   @Fluent
   AccountRepository findById(String id, Handler<AsyncResult<Account>> resultHandler);

   @Fluent
   AccountRepository findByCustomer(String customerId, Handler<AsyncResult<List<Account>>> resultHandler);

   @Fluent
   AccountRepository remove(String id, Handler<AsyncResult<Void>> resultHandler);

   static AccountRepository createProxy(Vertx vertx, String address) {
      return new AccountRepositoryVertxEBProxy(vertx, address);
   }

   static AccountRepository create(MongoClient client) {
      return new AccountRepositoryImpl(client);
   }

}

The generator needs additional configuration inside the pom.xml file. After running the command mvn clean install on the parent project, all generated classes should be available under the src/main/generated directory for every microservice module.

<plugin>
   <groupId>org.apache.maven.plugins</groupId>
   <artifactId>maven-compiler-plugin</artifactId>
   <version>3.6.2</version>
   <configuration>
      <encoding>${project.build.sourceEncoding}</encoding>
      <source>${java.version}</source>
      <target>${java.version}</target>
      <useIncrementalCompilation>false</useIncrementalCompilation>
      <annotationProcessors>      
         <annotationProcessor>io.vertx.codegen.CodeGenProcessor</annotationProcessor>
      </annotationProcessors>
      <generatedSourcesDirectory>${project.basedir}/src/main/generated</generatedSourcesDirectory>
      <compilerArgs>
         <arg>-AoutputDirectory=${project.basedir}/src/main</arg>
      </compilerArgs>
   </configuration>
</plugin>

Now we are able to obtain an AccountRepository reference by calling createProxy with the "account-service" address.


AccountRepository repository = AccountRepository.createProxy(vertx, "account-service");

Service Discovery with Consul

To use Vert.x service discovery, we have to add the following dependencies to pom.xml. The first of them provides the mechanisms for built-in Vert.x discovery, which is rather not usable if we would like to invoke microservices running on different hosts. Fortunately, there are also some additional bridges available, for example the Consul bridge.

<dependency>
   <groupId>io.vertx</groupId>
   <artifactId>vertx-service-discovery</artifactId>
   <version>${vertx.version}</version>
</dependency>
<dependency>
   <groupId>io.vertx</groupId>
   <artifactId>vertx-service-discovery-bridge-consul</artifactId>
   <version>${vertx.version}</version>
</dependency>

Great, now we only have to declare the service discovery and register the service importer. We can retrieve configuration from Consul, but I assume we would also like to register our service. Unfortunately, problems start here… As the toolkit authors say: "It (Vert.x) does not export to Consul and does not support service modification." Maybe somebody will explain why this library cannot also export data to Consul – I just do not understand it. I had the same problem with Apache Camel some months ago, and I will use the same solution I developed at that time. Fortunately, Consul has a simple API for service registration and deregistration. To use it in our application we need to include the Vert.x asynchronous HTTP client in our dependencies.

<dependency>
   <groupId>io.vertx</groupId>
   <artifactId>vertx-web-client</artifactId>
   <version>${vertx.version}</version>
</dependency>

Then, using the declared WebClient, while starting the application we can register the service by invoking the Consul registration endpoint with the PUT method.


WebClient client = WebClient.create(vertx);
...
JsonObject json = new JsonObject().put("ID", "account-service-1").put("Name", "account-service").put("Address", "127.0.0.1").put("Port", 2222).put("Tags", new JsonArray().add("http-endpoint"));
client.put(discoveryConfig.getInteger("port"), discoveryConfig.getString("host"), "/v1/agent/service/register").sendJsonObject(json, res -> {
   LOGGER.info("Consul registration status: {}", res.result().statusCode());
});
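For reference, the JsonObject built above corresponds to the following registration payload, as accepted by the Consul agent endpoint PUT /v1/agent/service/register:

```json
{
  "ID": "account-service-1",
  "Name": "account-service",
  "Address": "127.0.0.1",
  "Port": 2222,
  "Tags": ["http-endpoint"]
}
```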

Once account-service has registered itself on the discovery server, we can invoke it from another microservice – in this case from customer-service. We only have to create a ServiceDiscovery object and register the Consul service importer.


ServiceDiscovery discovery = ServiceDiscovery.create(vertx);
...
discovery.registerServiceImporter(new ConsulServiceImporter(), new JsonObject().put("host", discoveryConfig.getString("host")).put("port", discoveryConfig.getInteger("port")).put("scan-period", 2000));

Here's an AccountClient fragment, which is responsible for invoking GET /account/customer/{customerId} on account-service. It obtains a service reference from the discovery object and casts it to a WebClient instance. I don't know if you have noticed that, apart from the standard fields such as ID, Name, or Port, I also set the Tags field to the type of service that we register. In this case it is http-endpoint. Whenever Vert.x reads that record from Consul, it will be able to automatically assign a service reference of the matching type, so it can be cast to a WebClient object.

public AccountClient findCustomerAccounts(String customerId, Handler<AsyncResult<List<Account>>> resultHandler) {
   discovery.getRecord(r -> r.getName().equals("account-service"), res -> {
      LOGGER.info("Result: {}", res.result().getType());
      ServiceReference ref = discovery.getReference(res.result());
      WebClient client = ref.getAs(WebClient.class);
      client.get("/account/customer/" + customerId).send(res2 -> {
         LOGGER.info("Response: {}", res2.result().bodyAsString());
         List<Account> accounts = res2.result().bodyAsJsonArray().stream().map(it -> Json.decodeValue(it.toString(), Account.class)).collect(Collectors.toList());
         resultHandler.handle(Future.succeededFuture(accounts));
      });
   });
   return this;
}

Configuration

The Vert.x Config module is responsible for configuration management within the application.


<dependency>
   <groupId>io.vertx</groupId>
   <artifactId>vertx-config</artifactId>
   <version>${vertx.version}</version>
</dependency>

There are many configuration stores, which can be used as configuration data location:

  • File
  • Environment Variables
  • HTTP
  • Event Bus
  • Git
  • Redis
  • Consul
  • Kubernetes
  • Spring Cloud Config Server

I selected the simplest one – a file. But it can easily be changed just by defining another type on the ConfigStoreOptions object. ConfigRetriever is responsible for loading configuration data from the store. It reads the configuration as a JsonObject.

ConfigStoreOptions file = new ConfigStoreOptions().setType("file").setConfig(new JsonObject().put("path", "application.json"));
ConfigRetriever retriever = ConfigRetriever.create(vertx, new ConfigRetrieverOptions().addStore(file));
retriever.getConfig(conf -> {
   JsonObject discoveryConfig = conf.result().getJsonObject("discovery");
   vertx.createHttpServer().requestHandler(router::accept).listen(conf.result().getInteger("port"));
   JsonObject json = new JsonObject().put("ID", "account-service-1").put("Name", "account-service").put("Address", "127.0.0.1").put("Port", 2222).put("Tags", new JsonArray().add("http-endpoint"));
   client.put(discoveryConfig.getInteger("port"), discoveryConfig.getString("host"), "/v1/agent/service/register").sendJsonObject(json, res -> {
      LOGGER.info("Consul registration status: {}", res.result().statusCode());
   });
});

The configuration file application.json is available under src/main/resources and it contains the application port, service discovery, and datasource addresses.

{
   "port" : 2222,
   "discovery" : {
      "host" : "192.168.99.100",
      "port" : 8500
   },
   "datasource" : {
      "host" : "192.168.99.100",
      "port" : 27017,
      "db_name" : "test"
   }
}

Final thoughts

The Vert.x authors prefer not to define their solution as a framework but as a toolkit. They don't tell you what the correct way to write an application is, but only give you a lot of useful bricks to help create your app. With Vert.x you can create fast and lightweight APIs based on non-blocking, asynchronous I/O. It gives you a lot of possibilities, as you can see in the Config module example, where you can even use Spring Cloud Config Server as a configuration store. But it is also not free from drawbacks, as I showed in the service registration with Consul example. Vert.x also allows you to create reactive microservices with RxJava, which seems to be an interesting option that I hope to describe in the future.

The post Asynchronous Microservices with Vertx appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2017/08/24/asynchronous-microservices-with-vert-x/feed/ 0 5625
Reactive microservices with Spring 5 https://piotrminkowski.com/2017/02/16/reactive-microservices-with-spring-5/ https://piotrminkowski.com/2017/02/16/reactive-microservices-with-spring-5/#comments Thu, 16 Feb 2017 15:51:34 +0000 https://piotrminkowski.wordpress.com/?p=855 Spring team has announced support for reactive programming model from 5.0 release. New Spring version will probably be released on March. Fortunately, milestone and snapshot versions with these changes are now available on public spring repositories. There is new Spring Web Reactive project with support for reactive @Controller and also new WebClient with client-side reactive support. […]

The post Reactive microservices with Spring 5 appeared first on Piotr's TechBlog.

]]>
The Spring team has announced support for the reactive programming model starting with the 5.0 release. The new Spring version will probably be released in March. Fortunately, milestone and snapshot versions with these changes are already available in the public Spring repositories. There is a new Spring Web Reactive project with support for a reactive @Controller, and also a new WebClient with client-side reactive support. Today I'm going to take a closer look at the solutions suggested by the Spring team.

Following the Spring WebFlux documentation, the Spring Framework uses Reactor internally for its own reactive support. Reactor is a Reactive Streams implementation that further extends the basic Reactive Streams Publisher contract with the Flux and Mono composable API types, providing declarative operations on data sequences of 0..N and 0..1. On the server side Spring supports annotation-based and functional programming models. The annotation model uses @Controller and the other annotations also supported with Spring MVC. A reactive controller is very similar to a standard REST controller for synchronous services, except that it uses Flux, Mono, and Publisher objects. Today I'm going to show you how to develop simple reactive microservices using the annotation model and the MongoDB reactive module. The sample application source code is available on GitHub.
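Reactor implements the Reactive Streams specification, and the same Publisher/Subscriber contract later landed in the JDK itself as the java.util.concurrent.Flow API. As a framework-free illustration of that contract (plain JDK code, not Reactor – SubmissionPublisher plays the role of a Flux-like source here):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class FlowDemo {

    // Publish the given items and collect everything a subscriber receives.
    static List<String> collect(String... items) throws InterruptedException {
        List<String> received = new ArrayList<>();
        CountDownLatch done = new CountDownLatch(1);
        try (SubmissionPublisher<String> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<String>() {
                @Override
                public void onSubscribe(Flow.Subscription subscription) {
                    subscription.request(Long.MAX_VALUE); // unbounded demand
                }
                @Override
                public void onNext(String item) { received.add(item); }
                @Override
                public void onError(Throwable throwable) { done.countDown(); }
                @Override
                public void onComplete() { done.countDown(); }
            });
            for (String item : items) {
                publisher.submit(item); // a 0..N sequence, the Flux case
            }
        } // closing the publisher signals onComplete
        done.await();
        return received;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(collect("account-1", "account-2")); // prints [account-1, account-2]
    }
}
```

A source that submits at most one element before completing corresponds to the Mono case.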

For our example we need to use snapshots of Spring Boot 2.0.0 and Spring Web Reactive 0.1.0. Here's the main pom.xml fragment, with a single microservice's pom.xml below. In our microservices we use Netty instead of the default Tomcat server.

[code language=”xml”]
<parent>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-parent</artifactId>
   <version>2.0.0.BUILD-SNAPSHOT</version>
</parent>
<dependencyManagement>
   <dependencies>
      <dependency>
         <groupId>org.springframework.boot.experimental</groupId>
         <artifactId>spring-boot-dependencies-web-reactive</artifactId>
         <version>0.1.0.BUILD-SNAPSHOT</version>
         <type>pom</type>
         <scope>import</scope>
      </dependency>
   </dependencies>
</dependencyManagement>
[/code]
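Since these are BUILD-SNAPSHOT artifacts, the build also has to know where to find them; a pom.xml fragment like the following should do (a sketch, assuming the standard Spring snapshot and milestone repositories):

```xml
<repositories>
   <repository>
      <id>spring-snapshots</id>
      <url>https://repo.spring.io/snapshot</url>
      <snapshots><enabled>true</enabled></snapshots>
   </repository>
   <repository>
      <id>spring-milestones</id>
      <url>https://repo.spring.io/milestone</url>
   </repository>
</repositories>
```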

[code language=”xml”]
<dependencies>
   <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-data-mongodb-reactive</artifactId>
   </dependency>
   <dependency>
      <groupId>org.springframework.boot.experimental</groupId>
      <artifactId>spring-boot-starter-web-reactive</artifactId>
      <exclusions>
         <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-tomcat</artifactId>
         </exclusion>
      </exclusions>
   </dependency>
   <dependency>
      <groupId>io.projectreactor.ipc</groupId>
      <artifactId>reactor-netty</artifactId>
   </dependency>
   <dependency>
      <groupId>io.netty</groupId>
      <artifactId>netty-all</artifactId>
   </dependency>
   <dependency>
      <groupId>pl.piomin.services</groupId>
      <artifactId>common</artifactId>
      <version>${project.version}</version>
   </dependency>
   <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-test</artifactId>
      <scope>test</scope>
   </dependency>
   <dependency>
      <groupId>io.projectreactor.addons</groupId>
      <artifactId>reactor-test</artifactId>
      <scope>test</scope>
   </dependency>
</dependencies>
[/code]

We have two microservices: account-service and customer-service. Each of them has its own MongoDB database and exposes a simple reactive API for searching and saving data. Additionally, customer-service interacts with account-service to get all customer accounts and return them from a customer-service method. Here's our account controller code.

[code language=”java”]
@RestController
public class AccountController {

   @Autowired
   private AccountRepository repository;

   @GetMapping(value = "/account/customer/{customer}")
   public Flux<Account> findByCustomer(@PathVariable("customer") Integer customerId) {
      return repository.findByCustomerId(customerId)
         .map(a -> new Account(a.getId(), a.getCustomerId(), a.getNumber(), a.getAmount()));
   }

   @GetMapping(value = "/account")
   public Flux<Account> findAll() {
      return repository.findAll()
         .map(a -> new Account(a.getId(), a.getCustomerId(), a.getNumber(), a.getAmount()));
   }

   @GetMapping(value = "/account/{id}")
   public Mono<Account> findById(@PathVariable("id") Integer id) {
      return repository.findById(id)
         .map(a -> new Account(a.getId(), a.getCustomerId(), a.getNumber(), a.getAmount()));
   }

   @PostMapping("/account")
   public Mono<Account> create(@RequestBody Publisher<Account> accountStream) {
      return repository
         .save(Mono.from(accountStream)
            .map(a -> new pl.piomin.services.account.model.Account(a.getNumber(), a.getCustomerId(),
               a.getAmount())))
         .map(a -> new Account(a.getId(), a.getCustomerId(), a.getNumber(), a.getAmount()));
   }

}
[/code]

In all API methods we also perform mapping from the Account entity (a MongoDB @Document) to the Account DTO available in our common module. Here's the account repository class. It uses ReactiveMongoTemplate for interacting with Mongo collections.

[code language=”java”]
@Repository
public class AccountRepository {

   @Autowired
   private ReactiveMongoTemplate template;

   public Mono<Account> findById(Integer id) {
      return template.findById(id, Account.class);
   }

   public Flux<Account> findAll() {
      return template.findAll(Account.class);
   }

   public Flux<Account> findByCustomerId(Integer customerId) {
      // requires static imports of Query.query and Criteria.where
      return template.find(query(where("customerId").is(customerId)), Account.class);
   }

   public Mono<Account> save(Mono<Account> account) {
      return template.insert(account);
   }

}
[/code]

In our Spring Boot main or @Configuration class we should declare the Spring beans for MongoDB with connection settings.

[code language=”java”]
@SpringBootApplication
public class Application {

   public static void main(String[] args) {
      SpringApplication.run(Application.class, args);
   }

   public @Bean MongoClient mongoClient() {
      return MongoClients.create("mongodb://192.168.99.100");
   }

   public @Bean ReactiveMongoTemplate reactiveMongoTemplate() {
      return new ReactiveMongoTemplate(mongoClient(), "account");
   }

}
[/code]

I used a MongoDB Docker container while working on this sample.

docker run -d --name mongo -p 27017:27017 mongo

In the customer service we call the endpoint /account/customer/{customer} from the account service. I declared a WebClient @Bean in our main class.

[code language=”java”]
public @Bean WebClient webClient() {
   return WebClient.builder().clientConnector(new ReactorClientHttpConnector()).baseUrl("http://localhost:2222").build();
}
[/code]

Here's a customer controller fragment. The autowired WebClient calls the account service after getting the customer from MongoDB.

[code language=”java”]
@Autowired
private WebClient webClient;

@GetMapping(value = "/customer/accounts/{pesel}")
public Mono<Customer> findByPeselWithAccounts(@PathVariable("pesel") String pesel) {
   return repository.findByPesel(pesel)
      .flatMap(customer -> webClient.get().uri("/account/customer/{customer}", customer.getId())
         .accept(MediaType.APPLICATION_JSON).exchange()
         .flatMap(response -> response.bodyToFlux(Account.class)))
      .collectList()
      .map(l -> new Customer(pesel, l));
}
[/code]

We can test GET calls using a web browser or REST clients. With POST it's not so simple. Here are two simple test cases for adding a new customer and getting a customer with accounts. The test getCustomerAccounts needs the account service running on port 2222.

[code language=”java”]
@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
public class CustomerTest {

   private static final Logger logger = Logger.getLogger("CustomerTest");

   private WebClient webClient;

   @LocalServerPort
   private int port;

   @Before
   public void setup() {
      this.webClient = WebClient.create("http://localhost:" + this.port);
   }

   @Test
   public void getCustomerAccounts() {
      Customer customer = this.webClient.get().uri("/customer/accounts/234543647565")
         .accept(MediaType.APPLICATION_JSON).exchange()
         .then(response -> response.bodyToMono(Customer.class))
         .block();
      logger.info("Customer: " + customer);
   }

   @Test
   public void addCustomer() {
      Customer customer = new Customer(null, "Adam", "Kowalski", "123456787654");
      customer = webClient.post().uri("/customer").accept(MediaType.APPLICATION_JSON)
         .exchange(BodyInserters.fromObject(customer))
         .then(response -> response.bodyToMono(Customer.class))
         .block();
      logger.info("Customer: " + customer);
   }

}
[/code]

Conclusion

The Spring initiative with support for reactive programming seems promising, but it is now at an early stage of development. It is not yet possible to use it together with popular projects from Spring Cloud like Eureka, Ribbon, or Hystrix. When I tried to add these dependencies to pom.xml my service failed to start. I hope that in the near future such functionalities as service discovery and load balancing will be available for reactive microservices the same as for synchronous REST microservices. Spring also has support for the reactive model in the Spring Cloud Stream project. It's more stable than the WebFlux framework. I'll try to use it in the future.

The post Reactive microservices with Spring 5 appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2017/02/16/reactive-microservices-with-spring-5/feed/ 2 855