JUnit5 Archives - Piotr's TechBlog
https://piotrminkowski.com/tag/junit5/
Java, Spring, Kotlin, microservices, Kubernetes, containers

Pact with Quarkus 3
https://piotrminkowski.com/2024/04/19/pact-with-quarkus-3/
Fri, 19 Apr 2024 09:42:36 +0000

The post Pact with Quarkus 3 appeared first on Piotr's TechBlog.

This article will teach you how to write contract tests with Pact for an app built on top of version 3 of the Quarkus framework. It is an update of the topic previously covered in the “Contract Testing with Quarkus and Pact” article. Therefore, we will not focus on the details of the integration between Pact and Quarkus, but rather on the migration from version 2 to 3 of the Quarkus framework. There are some issues worth discussing.

You can find several other articles about Quarkus on my blog. For example, you can read about advanced testing techniques with Quarkus here. There is also an interesting article about Quarkus Testcontainers support for local development with Kafka.

Source Code

If you would like to try it yourself, you may always take a look at my source code. It contains three microservices written in Quarkus. I migrated them from Quarkus 2 to 3, to the latest version of the Pact Quarkus extension, and from Java 17 to 21. In order to proceed with the exercise, you need to clone my GitHub repository. Then you should just follow my instructions.

Let’s do a quick recap before proceeding. We are implementing several contract tests with Pact to verify interactions between our three microservices: employee-service, department-service, and organization-service. We use the Pact Broker to store and share contract definitions between the microservices. Here’s the diagram that illustrates the described architecture.

pact-quarkus-3-arch

Update to Java 21

There are no issues with the migration to Java 21 in Quarkus. We just need to change the version of Java used in the Maven compilation inside the pom.xml file. However, the situation is more complicated with the CircleCI build. Firstly, we use the ubuntu-2204 machine in the builds to access the Docker daemon. We need Docker to run the container with the Pact broker. Although CircleCI provides an image for OpenJDK 21, the latest version of ubuntu-2204 still ships with Java 17. This situation will probably change in the coming months, but for now, we need to install OpenJDK 21 on that machine. After that, we may run the Pact broker and the JUnit tests using the latest Java LTS version. Here’s the CircleCI config.yml file:

version: 2.1

jobs:
  analyze:
    executor:
      name: docker/machine
      image: ubuntu-2204:2024.01.2
    steps:
      - checkout
      - run:
          name: Install OpenJDK 21
          command: |
            java -version
            sudo apt-get update && sudo apt-get install -y openjdk-21-jdk
            sudo update-alternatives --set java /usr/lib/jvm/java-21-openjdk-amd64/bin/java
            sudo update-alternatives --set javac /usr/lib/jvm/java-21-openjdk-amd64/bin/javac
            java -version
            # persist JAVA_HOME for the following steps (a plain export would not survive this step)
            echo 'export JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64' >> "$BASH_ENV"
      - docker/install-docker-compose
      - maven/with_cache:
          steps:
            - run:
                name: Build Images
                command: mvn package -DskipTests -Dquarkus.container-image.build=true
      - run:
          name: Run Pact Broker
          command: docker-compose up -d
      - maven/with_cache:
          steps:
            - run:
                name: Run Tests
                command: mvn package pact:publish -Dquarkus.container-image.build=false
      - maven/with_cache:
          steps:
            - run:
                name: Sonar Analysis
                command: mvn package sonar:sonar -DskipTests -Dquarkus.container-image.build=false


orbs:
  maven: circleci/maven@1.4.1
  docker: circleci/docker@2.6.0

workflows:
  maven_test:
    jobs:
      - analyze:
          context: SonarCloud

Here’s the root Maven pom.xml. It declares the Maven plugin responsible for publishing contracts to the Pact broker. Each time the Pact JUnit tests finish successfully, the plugin tries to publish the JSON pacts to the broker. The ordering of the Maven modules is not random: the organization-service generates and publishes the pacts used to verify the contracts with department-service and employee-service, so it has to run first. As you can see, we use the latest version of Quarkus at the time of writing – 3.9.3.

<properties>
  <java.version>21</java.version>
  <surefire-plugin.version>3.2.5</surefire-plugin.version>
  <quarkus.version>3.9.3</quarkus.version>
  <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  <maven.compiler.source>${java.version}</maven.compiler.source>
  <maven.compiler.target>${java.version}</maven.compiler.target>
</properties>

<modules>
  <module>organization-service</module>
  <module>department-service</module>
  <module>employee-service</module>
</modules>

<build>
  <plugins>
    <plugin>
      <groupId>au.com.dius.pact.provider</groupId>
      <artifactId>maven</artifactId>
      <version>4.6.9</version>
      <configuration>
        <pactBrokerUrl>http://localhost:9292</pactBrokerUrl>
      </configuration>
    </plugin>
  </plugins>
</build>

Here’s the part of the docker-compose.yml responsible for running a Pact broker. It requires a Postgres database.

version: "3.7"
services:
  postgres:
    container_name: postgres
    image: postgres
    environment:
      POSTGRES_USER: pact
      POSTGRES_PASSWORD: pact123
      POSTGRES_DB: pact
    ports:
      - "5432"
  pact-broker:
    container_name: pact-broker
    image: pactfoundation/pact-broker
    ports:
      - "9292:9292"
    depends_on:
      - postgres
    links:
      - postgres
    environment:
      PACT_BROKER_DATABASE_USERNAME: pact
      PACT_BROKER_DATABASE_PASSWORD: pact123
      PACT_BROKER_DATABASE_HOST: postgres
      PACT_BROKER_DATABASE_NAME: pact

Update Quarkus and Pact

Dependencies

Firstly, let’s take a look at the list of dependencies. With the latest versions of Quarkus, we need to align the REST server and client extensions used in the app. For example, if we use the quarkus-resteasy-jackson module to expose REST services, we should also use the quarkus-resteasy-client-jackson module to call them. On the other hand, if we use quarkus-rest-jackson on the server side, we should use quarkus-rest-client-jackson on the client side. In order to implement Pact tests in our app, we need to include the quarkus-pact-consumer module on the contract consumer side and quarkus-pact-provider on the contract provider side. Finally, we will use WireMock as a replacement for the built-in Pact mock server.

<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-resteasy-jackson</artifactId>
</dependency>
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-resteasy-client-jackson</artifactId>
</dependency>
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-junit5</artifactId>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-junit5-mockito</artifactId>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>io.rest-assured</groupId>
  <artifactId>rest-assured</artifactId>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>io.quarkiverse.pact</groupId>
  <artifactId>quarkus-pact-consumer</artifactId>
  <version>1.3.0</version>
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>io.quarkiverse.pact</groupId>
  <artifactId>quarkus-pact-provider</artifactId>
  <version>1.3.0</version>
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>com.github.tomakehurst</groupId>
  <artifactId>wiremock-jre8</artifactId>
  <version>3.0.1</version>
  <scope>test</scope>
</dependency>

Tests Implementation with Quarkus 3 and Pact Consumer

In this exercise, I’m simplifying the tests as much as possible. Therefore, we will use the REST client directly to verify the contract on the consumer side. However, if you are looking for more advanced examples, please go to this repository. Coming back to our exercise, let’s take a look at the declarative REST client used in the department-service.

@ApplicationScoped
@Path("/employees")
@RegisterRestClient(configKey = "employee")
public interface EmployeeClient {

    @GET
    @Path("/department/{departmentId}")
    @Produces(MediaType.APPLICATION_JSON)
    List<Employee> findByDepartment(@PathParam("departmentId") Long departmentId);

}

There are some significant changes in the Pact tests on the consumer side. Some of them were forced by an error related to the migration to Quarkus 3, described here. I found a smart workaround for that problem proposed by one of the contributors (1). The workaround replaces the Pact built-in mock server with WireMock. We start WireMock on a dynamic port (2). We also need a QuarkusTestResourceLifecycleManager registered via @QuarkusTestResource to start the WireMock server before the tests and shut it down afterward (3). Then, we can switch to the latest version of the Pact API by returning the V4Pact object (4) in the @Pact method and updating the @PactTestFor annotation accordingly (5). Finally, instead of the Pact MockServer, we use our PactMockServer wrapper dedicated to WireMock (6).

@QuarkusTest
@ExtendWith(PactConsumerTestExt.class)
@ExtendWith(PactMockServerWorkaround.class) // (1)
@MockServerConfig(port = "0") // (2)
@QuarkusTestResource(WireMockQuarkusTestResource.class) // (3)
public class EmployeeClientContractTests extends PactConsumerTestBase {

    @Pact(provider = "employee-service", consumer = "department-service")
    public V4Pact callFindDepartment(PactDslWithProvider builder) { // (4)
        DslPart body = PactDslJsonArray.arrayEachLike()
                .integerType("id")
                .stringType("name")
                .stringType("position")
                .numberType("age")
                .closeObject();
        return builder.given("findByDepartment")
                .uponReceiving("findByDepartment")
                    .path("/employees/department/1")
                    .method("GET")
                .willRespondWith()
                    .status(200)
                    .body(body).toPact(V4Pact.class);
    }

    @Test
    // (5)
    @PactTestFor(providerName = "employee-service", pactVersion = PactSpecVersion.V4)
    public void verifyFindDepartmentPact(final PactMockServer mockServer) { // (6)
        EmployeeClient client = RestClientBuilder.newBuilder()
                .baseUri(URI.create(mockServer.getUrl()))
                .build(EmployeeClient.class);
        List<Employee> employees = client.findByDepartment(1L);
        System.out.println(employees);
        assertNotNull(employees);
        assertTrue(employees.size() > 0);
        assertNotNull(employees.get(0).getId());
    }
}

Here’s our PactMockServer wrapper:

public class PactMockServer {

    private final String url;
    private final int port;

    public PactMockServer(String url, int port) {
        this.url = url;
        this.port = port;
    }

    public String getUrl() {
        return url;
    }

    public int getPort() {
        return port;
    }
}

Implement Mock Server with Wiremock

In the first step, we need to provide an implementation of QuarkusTestResourceLifecycleManager that starts the WireMock server during the tests.

public class WireMockQuarkusTestResource implements 
        QuarkusTestResourceLifecycleManager {
        
    private static final Logger LOGGER = Logger
       .getLogger(WireMockQuarkusTestResource.class);

    private WireMockServer wireMockServer;

    @Override
    public Map<String, String> start() {
        final HashMap<String, String> result = new HashMap<>();

        this.wireMockServer = new WireMockServer(options()
                .dynamicPort()
                .notifier(createNotifier(true)));
        this.wireMockServer.start();

        return result;
    }

    @Override
    public void stop() {
        if (this.wireMockServer != null) {
            this.wireMockServer.stop();
            this.wireMockServer = null;
        }
    }

    @Override
    public void inject(final TestInjector testInjector) {
        testInjector.injectIntoFields(wireMockServer,
          new TestInjector.AnnotatedAndMatchesType(InjectWireMock.class, 
                                                   WireMockServer.class));
    }

    private static Notifier createNotifier(final boolean verbose) {
        final String prefix = "[WireMock] ";
        return new Notifier() {

            @Override
            public void info(final String s) {
                if (verbose) {
                    LOGGER.info(prefix + s);
                }
            }

            @Override
            public void error(final String s) {
                LOGGER.warn(prefix + s);
            }

            @Override
            public void error(final String s, final Throwable throwable) {
                LOGGER.warn(prefix + s, throwable);
            }
        };
    }
}

Let’s create the marker annotation used to inject the WireMock server:

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
public @interface InjectWireMock {
}

I’m not sure whether it is required, but here’s the test base class extended by our tests.

public class PactConsumerTestBase {

   @InjectWireMock
   protected WireMockServer wiremock;

   @BeforeEach
   void initWiremockBeforeEach() {
      wiremock.resetAll();
      configureFor(new WireMock(this.wiremock));
   }

   protected void forwardToPactServer(final PactMockServer wrapper) {
      wiremock.resetAll();  
      stubFor(any(anyUrl())
         .atPriority(1)
         .willReturn(aResponse().proxiedFrom(wrapper.getUrl()))
      );
   }

}
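The forwardToPactServer() helper above relies on WireMock's proxiedFrom() stub: WireMock accepts every incoming request and forwards it to the Pact mock server, so the code under test can keep a fixed WireMock base URL while Pact still sees the interactions. To make the mechanism concrete, here is a self-contained sketch using only JDK classes — the two HttpServer instances and the JSON body are stand-ins for illustration, not the real WireMock or Pact servers:

```java
import com.sun.net.httpserver.HttpServer;

import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class ProxyDemo {

    public static void main(String[] args) throws Exception {
        // Stand-in for the Pact mock server: answers every request with a fixed JSON body
        HttpServer pactLike = HttpServer.create(new InetSocketAddress(0), 0);
        pactLike.createContext("/", exchange -> {
            byte[] body = "[{\"id\":1,\"name\":\"Test\"}]".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        pactLike.start();
        String pactUrl = "http://localhost:" + pactLike.getAddress().getPort();

        // Stand-in for WireMock: a catch-all handler that proxies every request to the "Pact" server
        HttpClient http = HttpClient.newHttpClient();
        HttpServer wireMockLike = HttpServer.create(new InetSocketAddress(0), 0);
        wireMockLike.createContext("/", exchange -> {
            try {
                HttpRequest forwarded = HttpRequest.newBuilder()
                        .uri(URI.create(pactUrl + exchange.getRequestURI()))
                        .GET()
                        .build();
                HttpResponse<byte[]> resp = http.send(forwarded, HttpResponse.BodyHandlers.ofByteArray());
                exchange.sendResponseHeaders(resp.statusCode(), resp.body().length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(resp.body());
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new IOException(e);
            }
        });
        wireMockLike.start();

        // The client under test only ever talks to the proxy address
        String proxyUrl = "http://localhost:" + wireMockLike.getAddress().getPort();
        HttpResponse<String> viaProxy = http.send(
                HttpRequest.newBuilder().uri(URI.create(proxyUrl + "/employees/department/1")).build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println(viaProxy.body());

        pactLike.stop(0);
        wireMockLike.stop(0);
    }
}
```

Running it prints the JSON answered by the backend even though the client only knows the proxy’s address — which is exactly what lets the Pact verification work behind a stable WireMock URL.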

Here’s the workaround implementation, a JUnit 5 ParameterResolver registered with the @ExtendWith annotation:

public class PactMockServerWorkaround implements ParameterResolver {
    
  @Override
  public boolean supportsParameter(ParameterContext parameterContext, 
                                   ExtensionContext extensionContext)
      throws ParameterResolutionException {

     return parameterContext.getParameter().getType() == PactMockServer.class;
  }

  @Override
  @SuppressWarnings("unchecked")
  public Object resolveParameter(ParameterContext parameterContext, 
                                 ExtensionContext extensionContext)
      throws ParameterResolutionException {

      final ExtensionContext.Store store = extensionContext
           .getStore(ExtensionContext.Namespace.create("pact-jvm"));

      if (store.get("providers") == null) {
         return null;
      }

      final List<Pair<ProviderInfo, List<String>>> providers = store
         .get("providers", List.class);
      var pair = providers.get(0);
      final ProviderInfo providerInfo = pair.getFirst();

      var mockServer = store.get("mockServer:" + providerInfo.getProviderName(),
                MockServer.class);

      return new PactMockServer(mockServer.getUrl(), mockServer.getPort());
   }
}

I intentionally do not comment on this workaround in detail. Perhaps it could be improved somehow. I wish everything worked out of the box after migrating the Pact extension to Quarkus 3, without any workarounds. However, thanks to this workaround, I was able to run my Pact tests successfully and then update all the required dependencies to their latest versions.

Final Thoughts

This article guides you through the changes required to migrate your microservices and Pact contract tests from Quarkus 2 to 3. For me, it is important to keep all the dependencies in my demo projects up-to-date automatically, as described here. I’m using Renovate to automatically scan and update the Maven pom.xml dependencies. Once it updates a dependency version, it runs all the JUnit tests for verification. The process runs automatically on CircleCI. You can view the build history of the sample repository used in this article.
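For reference, enabling Renovate mostly boils down to a single renovate.json file in the repository root. This is a minimal sketch assuming the current config:recommended preset; the actual configuration in the repository may differ:

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"]
}
```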

Testing Java Apps on Kubernetes with Testkube
https://piotrminkowski.com/2023/11/27/testing-java-apps-on-kubernetes-with-testkube/
Mon, 27 Nov 2023 09:32:12 +0000

The post Testing Java Apps on Kubernetes with Testkube appeared first on Piotr's TechBlog.

In this article, you will learn how to automatically test Java apps on Kubernetes with Testkube. We will build the tests for a typical Spring REST-based app. In the first scenario, Testkube runs the JUnit tests using its Maven support. After that, we will run load tests against a running instance of our app using the Grafana k6 tool. Once again, Testkube provides a standard mechanism for that, no matter which tool we use for testing.

If you are interested in testing on Kubernetes, you can also read my article about integration tests with JUnit. There is also a post about contract testing on Kubernetes with Microcks, available here.

Introduction

Testkube is a Kubernetes-native test orchestration and execution framework. It allows us to run automated tests inside a Kubernetes cluster. It supports several popular testing and build tools like JMeter, Grafana k6, and Maven, and it integrates easily with CI/CD pipelines or GitOps workflows. We can manage Testkube using the CRD objects directly, with the CLI, or through the UI dashboard. Let’s check how it works.

Source Code

If you would like to try it yourself, you may always take a look at my source code. In order to do that, you need to clone my GitHub repository. It contains only a single app. Once you clone it, go to the src/test directory. There you will find both the JUnit tests written in Kotlin and the k6 tests written in JavaScript. After that, you should just follow my instructions. Let’s begin.

Run Testkube on Kubernetes

In the first step, we are going to install Testkube on Kubernetes using its Helm chart. Let’s add the kubeshop Helm repository and fetch the latest chart info:

$ helm repo add kubeshop https://kubeshop.github.io/helm-charts
$ helm repo update

Then, we can install Testkube in the testkube namespace by executing the following helm command:

$ helm install testkube kubeshop/testkube \
    --create-namespace --namespace testkube

This will add custom resource definitions (CRD), RBAC roles, and role bindings to the Kubernetes cluster. This installation requires having cluster administrative rights.

Once the installation is finished, we can verify the list of pods running in the testkube namespace. The testkube-api-server and testkube-dashboard are the most important components. However, there are also some additional tools installed, like a MongoDB database or MinIO.

$ kubectl get po -n testkube
NAME                                                    READY   STATUS    RESTARTS        AGE
testkube-api-server-d4d7f9f8b-xpxc9                     1/1     Running   1 (6h17m ago)   6h18m
testkube-dashboard-64578877c7-xghsz                     1/1     Running   0               6h18m
testkube-minio-testkube-586877d8dd-8pmmj                1/1     Running   0               6h18m
testkube-mongodb-dfd8c7878-wzkbp                        1/1     Running   0               6h18m
testkube-nats-0                                         3/3     Running   0               6h18m
testkube-nats-box-567d94459d-6gc4d                      1/1     Running   0               6h18m
testkube-operator-controller-manager-679b998f58-2sv2x   2/2     Running   0               6h18m

We can also install the testkube CLI on our laptop. It is not required, but we will use it during the exercise just to try the full spectrum of options. You can find the CLI installation instructions here. I’m installing it on macOS:

$ brew install testkube

Once the installation is finished, you can run the testkube version command to see that warm “Hello” screen 🙂

testkube-kubernetes-cli

Run Maven Tests with Testkube

Firstly, let’s take a look at the JUnit tests inside our sample Spring Boot app. We are using the TestRestTemplate bean to call all the REST endpoints exposed by the app. There are three JUnit tests, for adding, updating, and deleting the Person objects.

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@TestMethodOrder(MethodOrderer.OrderAnnotation::class)
class PersonControllerTests {

   @Autowired
   lateinit var template: TestRestTemplate

   @Test
   @Order(1)
   fun shouldAddPerson() {
      var person = Instancio.of(Person::class.java)
         .ignore(Select.field("id"))
         .create()
      person = template
         .postForObject("/persons", person, Person::class.java)
      Assertions.assertNotNull(person)
      Assertions.assertNotNull(person.id)
      Assertions.assertEquals(1001, person.id)
   }

   @Test
   @Order(2)
   fun shouldUpdatePerson() {
      var person = Instancio.of(Person::class.java)
         .set(Select.field("id"), 1)
         .create()
      template.put("/persons", person)
      var personRemote = template
         .getForObject("/persons/{id}", Person::class.java, 1)
      Assertions.assertNotNull(personRemote)
      Assertions.assertEquals(person.age, personRemote.age)
   }

   @Test
   @Order(3)
   fun shouldDeletePerson() {
      template.delete("/persons/{id}", 1)
      val person = template
         .getForObject("/persons/{id}", Person::class.java, 1)
      Assertions.assertNull(person)
   }

}

We are using Maven as a build tool. The current version of Spring Boot is 3.2.0. The version of JDK used for the compilation is 17. Here’s the fragment of our pom.xml in the repository root directory:

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>3.2.0</version>
  </parent>
  <groupId>pl.piomin.services</groupId>
  <artifactId>sample-spring-kotlin-microservice</artifactId>
  <version>1.5.3</version>

  <properties>
    <java.version>17</java.version>
    <kotlin.version>1.9.21</kotlin.version>
  </properties>

  <dependencies>
    ...   
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-test</artifactId>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.instancio</groupId>
      <artifactId>instancio-junit</artifactId>
      <version>3.6.0</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>

Testkube provides the Executor CRD for defining a way of running each test. There are several default executors, one per type of supported build or test tool. We can display the list of provided executors by running the testkube get executor command. You will see all the tools supported by Testkube. Of course, the most interesting executors for us are k6-executor and maven-executor.

$ testkube get executor

Context:  (1.16.8)   Namespace: testkube
----------------------------------------

  NAME                 | URI | LABELS
-----------------------+-----+-----------------------------------
  artillery-executor   |     |
  curl-executor        |     |
  cypress-executor     |     |
  ginkgo-executor      |     |
  gradle-executor      |     |
  jmeter-executor      |     |
  jmeterd-executor     |     |
  k6-executor          |     |
  kubepug-executor     |     |
  maven-executor       |     |
  playwright-executor  |     |
  postman-executor     |     |
  soapui-executor      |     |
  tracetest-executor   |     |
  zap-executor         |     |

By default, maven-executor uses JDK 11 for running Maven tests. Moreover, Testkube still doesn’t provide images for running tests on JDK 19+. For me, this is quite a big drawback, since the latest LTS version of Java is 21. The maven-executor-jdk17 Executor contains the name of the image to run (1) and a list of supported test types (2).

apiVersion: executor.testkube.io/v1
kind: Executor
metadata:
  name: maven-executor-jdk17
  namespace: testkube
spec:
  args:
    - '--settings'
    - <settingsFile>
    - <goalName>
    - '-Duser.home'
    - <mavenHome>
  command:
    - mvn
  content_types:
    - git-dir
    - git
  executor_type: job
  features:
    - artifacts
  # (1)
  image: kubeshop/testkube-maven-executor:jdk17 
  meta:
    docsURI: https://kubeshop.github.io/testkube/test-types/executor-maven
    iconURI: maven
  # (2)
  types:
    - maven:jdk17/project
    - maven:jdk17/test
    - maven:jdk17/integration-test

Then, we just need to define the Test object that references maven-executor-jdk17 through the type parameter. Of course, we also need to set the address of the Git repository and the name of the branch.

apiVersion: tests.testkube.io/v3
kind: Test
metadata:
  name: sample-spring-kotlin
  namespace: testkube
spec:
  content:
    repository:
      branch: master
      type: git
      uri: https://github.com/piomin/sample-spring-kotlin-microservice.git
    type: git
  type: maven:jdk17/test

Finally, we can run the sample-spring-kotlin test using the following command:

$ testkube run test sample-spring-kotlin

Using UI Dashboard

First of all, let’s expose the Testkube UI dashboard on a local port. The dashboard also requires a connection to the testkube-api-server from the web browser. After exposing both services with the following port-forward commands, we can access the dashboard at the http://localhost:8080 address:

$ kubectl port-forward svc/testkube-dashboard 8080 -n testkube
$ kubectl port-forward svc/testkube-api-server 8088 -n testkube

Once we access the Testkube dashboard we will see a list of all defined tests:

testkube-kubernetes-ui

Then, we can click a test tile to see its details. You will be redirected to the history of previous executions, available in the “Recent executions” tab. There are six previous executions of our sample-spring-kotlin test. Two of them finished successfully, while the other four failed.

Let’s take a look at the logs of the latest execution. As you can see, all three JUnit tests passed.

testkube-kubernetes-test-logs

Run Load Tests with Testkube and Grafana k6

In this section, we will create the tests for an instance of our sample app running on Kubernetes. So, in the first step, we need to deploy the app. Here’s the Deployment manifest. We can apply it to the default namespace. The manifest uses the latest image of the sample app available in the registry under the quay.io/pminkows/sample-kotlin-spring:1.5.3 address.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-kotlin-spring
  labels:
    app: sample-kotlin-spring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-kotlin-spring
  template:
    metadata:
      labels:
        app: sample-kotlin-spring
    spec:
      containers:
      - name: sample-kotlin-spring
        image: quay.io/pminkows/sample-kotlin-spring:1.5.3
        ports:
        - containerPort: 8080

Let’s also create the Kubernetes Service that exposes app pods internally:

apiVersion: v1
kind: Service
metadata:
  name: sample-kotlin-spring
spec:
  selector:
    app: sample-kotlin-spring
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080

After that, we can proceed to the Test manifest. This time, we don’t have to override the default executor, since the k6 version is not important. The test source is located inside the sample Git repository in the src/test/resources/k6/load-tests-get.js file (1) in the master branch. In that case, the repository type is git (2). The k6 test should run for 10 seconds using 5 virtual users (3). We also need to set the address of the target service as the PERSONS_URI environment variable (4). Of course, we are testing through the Kubernetes Service, visible internally under the sample-kotlin-spring.default.svc host and port 8080. The type of the test is k6/script (5).

apiVersion: tests.testkube.io/v3
kind: Test
metadata:
  labels:
    executor: k6-executor
    test-type: k6-script
  name: load-tests-gets
  namespace: testkube
spec:
  content:
    repository:
      branch: master
      # (1)
      path: src/test/resources/k6/load-tests-get.js
      # (2) 
      type: git
      uri: https://github.com/piomin/sample-spring-kotlin-microservice.git
    type: git
  executionRequest:
    # (3)
    args:
      - '-u'
      - '5'
      - '-d'
      - 10s
    # (4)
    variables:
      PERSONS_URI:
        name: PERSONS_URI
        type: basic
        value: http://sample-kotlin-spring.default.svc:8080
        valueFrom: {}
  # (5)
  type: k6/script

Let’s take a look at the k6 test file written in JavaScript. As I mentioned before, you can find it in the src/test/resources/k6/load-tests-get.js file. The test calls the GET /persons/{id} endpoint. It sets a random number between 1 and 1000 as the id path parameter and reads the target service URL from the PERSONS_URI environment variable.

import http from 'k6/http';
import { check } from 'k6';
import { randomIntBetween } from 'https://jslib.k6.io/k6-utils/1.2.0/index.js';

export default function () {
  const id = randomIntBetween(1, 1000);
  const res = http.get(`${__ENV.PERSONS_URI}/persons/${id}`);
  check(res, {
    'is status 200': (res) => res.status === 200,
    'body size is > 0': (r) => r.body.length > 0,
  });
}

Finally, we can run the load-tests-gets test with the following command:

$ testkube run test load-tests-gets

Just as with the Maven test, we can verify the execution history in the Testkube dashboard:

We can also display all the logs from the test:

Final Thoughts

Testkube provides a unified way to run tests on Kubernetes for several of the most popular testing tools. It may be a part of your CI/CD pipeline or a GitOps process. Honestly, I’m still not convinced that I need a dedicated Kubernetes-native solution for automated tests instead of, e.g., a stage in my pipeline that runs test commands. However, you can also use Testkube to execute load or integration tests against an app running on Kubernetes, and it is possible to schedule them periodically. Thanks to that, you can verify your apps continuously using a single, central tool.

Integration Testing on Kubernetes with JUnit5
https://piotrminkowski.com/2020/09/01/integration-testing-on-kubernetes-with-junit5/
Tue, 01 Sep 2020 08:40:11 +0000

The post Integration Testing on Kubernetes with JUnit5 appeared first on Piotr's TechBlog.

]]>
With Hoverfly you can easily mock HTTP traffic during automated tests. Kubernetes is also based on the REST API. Today, I’m going to show you how to use both these tools together to improve integration testing on Kubernetes.
In the first step, we will build an application that uses the fabric8 Kubernetes Client. We don't have to use it directly, because I'm going to include Spring Cloud Kubernetes, which uses the fabric8 client for integration with the Kubernetes API. Moreover, the fabric8 client provides a mock server for integration tests. We will use it in the beginning, but then I'm going to replace it with Hoverfly. Let's begin!

Source code

The source code is available on GitHub. If you want to clone the repository or just give me a star, go here 🙂

Building applications with Spring Cloud Kubernetes

Spring Cloud Kubernetes provides implementations of well-known Spring Cloud components based on the Kubernetes API. It includes a discovery client, a load balancer, and property source support. We should add the following Maven dependency to enable it in our project.

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-kubernetes-all</artifactId>
</dependency>

Our application connects to a Mongo database, exposes a REST API, and communicates with other applications over HTTP. Therefore, we need to include some additional dependencies.

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-openfeign</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-mongodb</artifactId>
</dependency>

The overview of our system is visible in the picture below. We need to mock communication between applications and with Kubernetes API. We will also run an embedded in-memory Mongo database during tests. For more details about building microservices with Spring Cloud Kubernetes read the following article.

(Figure: integration testing on Kubernetes – architecture)

Testing API with Kubernetes MockServer

First, we need to include the Spring Boot Test starter, which contains the basic dependencies used for implementing JUnit tests. Since our application connects to Mongo and the Kubernetes API, we should also mock them during the tests. Here's the full list of required dependencies.

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>de.flapdoodle.embed</groupId>
    <artifactId>de.flapdoodle.embed.mongo</artifactId>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>io.fabric8</groupId>
    <artifactId>kubernetes-server-mock</artifactId>
    <version>4.10.3</version>
    <scope>test</scope>
</dependency>

Let’s discuss what exactly is happening during our test.
(1) First, we are enabling fabric8 Kubernetes Client JUnit5 extension in CRUD mode. It means that we can create a Kubernetes object on the mocked server.
(2) Then the KubernetesClient is injected to the test by the JUnit5 extension.
(3) TestRestTemplate is able to call endpoints exposed by the application that is started during the test.
(4) We need to set the basic properties for KubernetesClient like a default namespace name, master URL.
(5) We are creating ConfigMap that contains application.properties file. ConfigMap with name employee is automatically read by the application employee.
(6) In the test method we are using TestRestTemplate to call REST endpoints. We are mocking Kubernetes API and running Mongo database in the embedded mode.

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@EnableKubernetesMockClient(crud = true) // (1)
@TestMethodOrder(MethodOrderer.Alphanumeric.class)
class EmployeeAPITest {

    static KubernetesClient client; // (2)

    @Autowired
    TestRestTemplate restTemplate; // (3)

    @BeforeAll
    static void init() {
        System.setProperty(Config.KUBERNETES_MASTER_SYSTEM_PROPERTY,
            client.getConfiguration().getMasterUrl());
        System.setProperty(Config.KUBERNETES_TRUST_CERT_SYSTEM_PROPERTY,
            "true");
        System.setProperty(
            Config.KUBERNETES_AUTH_TRYKUBECONFIG_SYSTEM_PROPERTY, "false");
        System.setProperty(
            Config.KUBERNETES_AUTH_TRYSERVICEACCOUNT_SYSTEM_PROPERTY, "false");
        System.setProperty(Config.KUBERNETES_HTTP2_DISABLE, "true");
        System.setProperty(Config.KUBERNETES_NAMESPACE_SYSTEM_PROPERTY,
            "default"); // (4)
        client.configMaps().inNamespace("default").createNew()
            .withNewMetadata().withName("employee").endMetadata()
            .addToData("application.properties",
                "spring.data.mongodb.uri=mongodb://localhost:27017/test")
            .done(); // (5)
    }

    @Test // (6)
    void addEmployeeTest() {
        Employee employee = new Employee(1L, 1L, "Test", 30, "test");
        employee = restTemplate.postForObject("/", employee, Employee.class);
        Assertions.assertNotNull(employee);
        Assertions.assertNotNull(employee.getId());
    }

    @Test
    void addAndThenFindEmployeeByIdTest() {
        Employee employee = new Employee(1L, 2L, "Test2", 20, "test2");
        employee = restTemplate.postForObject("/", employee, Employee.class);
        Assertions.assertNotNull(employee);
        Assertions.assertNotNull(employee.getId());
        employee = restTemplate
            .getForObject("/{id}", Employee.class, employee.getId());
        Assertions.assertNotNull(employee);
        Assertions.assertNotNull(employee.getId());
    }

    @Test
    void findAllEmployeesTest() {
        Employee[] employees =
            restTemplate.getForObject("/", Employee[].class);
        Assertions.assertEquals(2, employees.length);
    }

    @Test
    void findEmployeesByDepartmentTest() {
        Employee[] employees =
            restTemplate.getForObject("/department/1", Employee[].class);
        Assertions.assertEquals(1, employees.length);
    }

    @Test
    void findEmployeesByOrganizationTest() {
        Employee[] employees =
            restTemplate.getForObject("/organization/1", Employee[].class);
        Assertions.assertEquals(2, employees.length);
    }

}

Integration Testing on Kubernetes with Hoverfly

To test HTTP communication between applications, we usually need an additional tool for mocking the API. Hoverfly is an ideal solution for such a use case. It is a lightweight, open-source API simulation tool, and not only for REST-based applications. It allows you to write tests in Java and Python, and it also supports JUnit5. You need to include the following dependencies to enable it in your project.

<dependency>
	<groupId>io.specto</groupId>
	<artifactId>hoverfly-java-junit5</artifactId>
	<version>0.13.0</version>
	<scope>test</scope>
</dependency>
<dependency>
	<groupId>io.specto</groupId>
	<artifactId>hoverfly-java</artifactId>
	<version>0.13.0</version>
	<scope>test</scope>
</dependency>

You can enable Hoverfly in your tests with the @ExtendWith annotation. It automatically starts a Hoverfly proxy during a test. Our main goal is to mock the Kubernetes client. To do that, we still need to set some properties inside the @BeforeAll method. The default URL used by the KubernetesClient is kubernetes.default.svc. In the first step, we mock the configmaps endpoint and return a predefined Kubernetes ConfigMap with application.properties. The name of the ConfigMap is the same as the application name. We are testing communication from the department application to the employee application.

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@ExtendWith(HoverflyExtension.class)
public class DepartmentAPIAdvancedTest {

    @Autowired
    KubernetesClient client;

    @BeforeAll
    static void setup(Hoverfly hoverfly) {
        System.setProperty(Config.KUBERNETES_TRUST_CERT_SYSTEM_PROPERTY, "true");
        System.setProperty(Config.KUBERNETES_AUTH_TRYKUBECONFIG_SYSTEM_PROPERTY, "false");
        System.setProperty(Config.KUBERNETES_AUTH_TRYSERVICEACCOUNT_SYSTEM_PROPERTY,
            "false");
        System.setProperty(Config.KUBERNETES_HTTP2_DISABLE, "true");
        System.setProperty(Config.KUBERNETES_NAMESPACE_SYSTEM_PROPERTY, "default");
        hoverfly.simulate(dsl(service("kubernetes.default.svc")
            .get("/api/v1/namespaces/default/configmaps/department")
            .willReturn(success().body(json(buildConfigMap())))));
    }

    private static ConfigMap buildConfigMap() {
        return new ConfigMapBuilder().withNewMetadata()
            .withName("department").withNamespace("default")
            .endMetadata()
            .addToData("application.properties",
                "spring.data.mongodb.uri=mongodb://localhost:27017/test")
            .build();
    }
	
    // TESTS ...
	
}

After application startup, we may use TestRestTemplate to call a test endpoint. The endpoint GET /organization/{organizationId}/with-employees retrieves data from the employee application. It finds the department by organization id and then finds all employees assigned to the department. We need to mock the target endpoint using Hoverfly. But before that, we mock the Kubernetes APIs responsible for getting a service and its endpoints by name. The address and port returned by the mocked endpoints must be the same as the address of the target application endpoint.

@Autowired
TestRestTemplate restTemplate;

private final String EMPLOYEE_URL = "employee.default:8080";

@Test
void findByOrganizationWithEmployees(Hoverfly hoverfly) {
    Department department = new Department(1L, "Test");
    department = restTemplate.postForObject("/", department, Department.class);
    Assertions.assertNotNull(department);
    Assertions.assertNotNull(department.getId());

    hoverfly.simulate(
        dsl(service(prepareUrl())
            .get("/api/v1/namespaces/default/endpoints/employee")
            .willReturn(success().body(json(buildEndpoints())))),
        dsl(service(prepareUrl())
            .get("/api/v1/namespaces/default/services/employee")
            .willReturn(success().body(json(buildService())))),
        dsl(service(EMPLOYEE_URL)
            .get("/department/" + department.getId())
            .willReturn(success().body(json(buildEmployees())))));

    Department[] departments = restTemplate
        .getForObject("/organization/{organizationId}/with-employees", Department[].class, 1L);
    Assertions.assertEquals(1, departments.length);
    Assertions.assertEquals(1, departments[0].getEmployees().size());
}

private Service buildService() {
    return new ServiceBuilder().withNewMetadata().withName("employee")
            .withNamespace("default").withLabels(new HashMap<>())
            .withAnnotations(new HashMap<>()).endMetadata().withNewSpec().addNewPort()
            .withPort(8080).endPort().endSpec().build();
}

private Endpoints buildEndpoints() {
    return new EndpointsBuilder().withNewMetadata()
        .withName("employee").withNamespace("default")
        .endMetadata()
        .addNewSubset().addNewAddress()
        .withIp("employee.default").endAddress().addNewPort().withName("http")
        .withPort(8080).endPort().endSubset()
        .build();
}

private List<Employee> buildEmployees() {
    List<Employee> employees = new ArrayList<>();
    Employee employee = new Employee();
    employee.setId("abc123");
    employee.setAge(30);
    employee.setName("Test");
    employee.setPosition("test");
    employees.add(employee);
    return employees;
}

private String prepareUrl() {
    return client.getConfiguration().getMasterUrl()
        .replace("/", "")
        .replace("https:", "");
}
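The prepareUrl method above strips the scheme and slashes from the master URL manually, which is fragile if the URL format ever changes. A sturdier variant (a suggestion, not code from the repository) parses the URL with java.net.URI and reads its authority component:

```java
import java.net.URI;

public class MasterUrl {
    // Returns "host:port" for a master URL such as "https://localhost:8443/",
    // using URI parsing instead of character stripping.
    static String hostAndPort(String masterUrl) {
        return URI.create(masterUrl).getAuthority();
    }

    public static void main(String[] args) {
        System.out.println(hostAndPort("https://localhost:8443/")); // localhost:8443
    }
}
```

Inside the test class, the body of prepareUrl would then shrink to a single URI.create(...).getAuthority() call on the client's master URL.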

Conclusion

The approach described in this article allows you to create integration tests without running a Kubernetes instance. On the other hand, you could start a single-node Kubernetes instance like Microk8s and deploy your application there. You could just as well use an existing cluster and implement your tests with Arquillian Cube, which is able to communicate directly with the Kubernetes API.
Another key point is testing communication between applications. In my opinion, Hoverfly is the best tool for that, since it is able to mock all HTTP traffic in a single test. With Hoverfly, fabric8, and Spring Cloud you can improve your integration testing on Kubernetes.

The post Integration Testing on Kubernetes with JUnit5 appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2020/09/01/integration-testing-on-kubernetes-with-junit5/feed/ 0 8688
Part 1: Testing Kafka Microservices With Micronaut https://piotrminkowski.com/2019/10/09/part-1-testing-kafka-microservices-with-micronaut/ https://piotrminkowski.com/2019/10/09/part-1-testing-kafka-microservices-with-micronaut/#respond Wed, 09 Oct 2019 09:08:26 +0000 https://piotrminkowski.wordpress.com/?p=7305 I have already described how to build microservices architecture entirely based on message-driven communication through Apache Kafka in one of my previous articles Kafka In Microservices With Micronaut. As you can see in the article title the sample applications and integration with Kafka has been built on top of Micronaut Framework. I described some interesting […]

The post Part 1: Testing Kafka Microservices With Micronaut appeared first on Piotr's TechBlog.

]]>
I have already described how to build a microservices architecture entirely based on message-driven communication through Apache Kafka in one of my previous articles, Kafka In Microservices With Micronaut. As you can see from the article title, the sample applications and the integration with Kafka have been built on top of the Micronaut framework. I described some interesting features of Micronaut that can be used for building message-driven microservices, but I didn't specifically write anything about testing. In this article I'm going to show you examples of testing your Kafka microservices using Micronaut Test core features (component tests), Testcontainers (integration tests), and Pact (contract tests).

Generally, automated testing is one of the biggest challenges related to microservices architecture. Therefore, the most popular microservice frameworks, like Micronaut or Spring Boot, provide some useful features for it. There are also some dedicated tools that help you use Docker containers in your tests or provide mechanisms for verifying contracts between different applications. For the demo applications in the current article, I'm using the same repository as for the previous one: https://github.com/piomin/sample-kafka-micronaut-microservices.git.

Sample Architecture

The architecture of the sample applications has been described in the previous article, but let me do a quick recap. We have four microservices: order-service, trip-service, driver-service, and passenger-service. The implementation of these applications is very simple. All of them have in-memory storage and connect to the same Kafka instance.
The primary goal of our system is to arrange a trip for customers. The order-service application also acts as a gateway: it receives requests from customers, saves history, and sends events to the orders topic. All the other microservices listen on this topic and process orders sent by order-service. Each microservice has its own dedicated topic, where it sends events with information about changes. Such events are received by some of the other microservices. The architecture is presented in the picture below.

(Figure: architecture of the sample Kafka microservices)

Embedded Kafka – Component Testing with Micronaut

After a short description of the architecture, we may proceed to the key point of this article: testing. Micronaut allows you to start an embedded Kafka instance for testing purposes. To do that, you should first add the following dependencies to your Maven pom.xml:

<dependency>
   <groupId>org.apache.kafka</groupId>
   <artifactId>kafka-clients</artifactId>
   <version>2.3.0</version>
   <classifier>test</classifier>
</dependency>
<dependency>
   <groupId>org.apache.kafka</groupId>
   <artifactId>kafka_2.12</artifactId>
   <version>2.3.0</version>
</dependency>
<dependency>
   <groupId>org.apache.kafka</groupId>
   <artifactId>kafka_2.12</artifactId>
   <version>2.3.0</version>
   <classifier>test</classifier>
</dependency>

To enable embedded Kafka for a test class, we have to set the property kafka.embedded.enabled to true. Because I normally run Kafka in a Docker container, which is by default available at the address 192.168.99.100, I also need to dynamically change the value of the kafka.bootstrap.servers property to localhost:9092 for a given test. The test class uses embedded Kafka for testing three basic scenarios for order-service: sending orders with a new trip, and receiving orders for trip cancellation and completion from other microservices. Here's the full code of my OrderKafkaEmbeddedTest:

@MicronautTest
@Property(name = "kafka.embedded.enabled", value = "true")
@Property(name = "kafka.bootstrap.servers", value = "localhost:9092")
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
@TestInstance(TestInstance.Lifecycle.PER_CLASS)
public class OrderKafkaEmbeddedTest {

    private static final Logger LOGGER = LoggerFactory.getLogger(OrderKafkaEmbeddedTest.class);

    @Inject
    OrderClient client;
    @Inject
    OrderInMemoryRepository repository;
    @Inject
    OrderHolder orderHolder;
    @Inject
    KafkaEmbedded kafkaEmbedded;

    @BeforeAll
    public void init() {
        LOGGER.info("Topics: {}", kafkaEmbedded.getKafkaServer().get().zkClient().getAllTopicsInCluster());
    }

    @Test
    @org.junit.jupiter.api.Order(1)
    public void testAddNewTripOrder() throws InterruptedException {
        Order order = new Order(OrderType.NEW_TRIP, 1L, 50, 30);
        order = repository.add(order);
        client.send(order);
        Order orderSent = waitForOrder();
        Assertions.assertNotNull(orderSent);
        Assertions.assertEquals(order.getId(), orderSent.getId());
    }

    @Test
    @org.junit.jupiter.api.Order(2)
    public void testCancelTripOrder() throws InterruptedException {
        Order order = new Order(OrderType.CANCEL_TRIP, 1L, 50, 30);
        client.send(order);
        Order orderReceived = waitForOrder();
        Optional<Order> oo = repository.findById(1L);
        Assertions.assertTrue(oo.isPresent());
        Assertions.assertEquals(OrderStatus.REJECTED, oo.get().getStatus());
    }

    @Test
    @org.junit.jupiter.api.Order(3)
    public void testPaymentTripOrder() throws InterruptedException {
        Order order = new Order(OrderType.PAYMENT_PROCESSED, 1L, 50, 30);
        order.setTripId(1L);
        order = repository.add(order);
        client.send(order);
        Order orderSent = waitForOrder();
        Optional<Order> oo = repository.findById(order.getId());
        Assertions.assertTrue(oo.isPresent());
        Assertions.assertEquals(OrderStatus.COMPLETED, oo.get().getStatus());
    }

    private Order waitForOrder() throws InterruptedException {
        Order orderSent = null;
        for (int i = 0; i < 10; i++) {
            orderSent = orderHolder.getCurrentOrder();
            if (orderSent != null)
                break;
            Thread.sleep(1000);
        }
        orderHolder.setCurrentOrder(null);
        return orderSent;
    }

}
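The waitForOrder helper above polls a holder bean in a Thread.sleep loop. The same pattern can be extracted into a small JDK-only utility (a sketch of mine, not part of the original repository) that any of the polling tests in this article could reuse:

```java
import java.util.function.Supplier;

public class Poller {
    // Polls the supplier until it returns a non-null value or the timeout elapses.
    // Returns null on timeout, mirroring the behavior of waitForOrder above.
    static <T> T awaitNonNull(Supplier<T> supplier, long timeoutMs, long intervalMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            T value = supplier.get();
            if (value != null) {
                return value;
            }
            Thread.sleep(intervalMs);
        }
        return null;
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // Simulate a holder bean that becomes non-null after ~300 ms.
        String result = awaitNonNull(
            () -> System.currentTimeMillis() - start > 300 ? "order-1" : null,
            5000, 50);
        System.out.println(result); // order-1
    }
}
```

With such a helper, waitForOrder would reduce to a single call like awaitNonNull(orderHolder::getCurrentOrder, 10000, 1000) followed by resetting the holder.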

At that stage some things require clarification, especially the mechanism for verifying sent and received messages. I'll describe it using the example of driver-service. When a message arrives on the orders topic, it is received by OrderListener, which is annotated with @KafkaListener as shown below. It reads the order type and forwards a NEW_TRIP request to the DriverService bean.

@KafkaListener(groupId = "driver")
public class OrderListener {

    private static final Logger LOGGER = LoggerFactory.getLogger(OrderListener.class);

    private DriverService service;

    public OrderListener(DriverService service) {
        this.service = service;
    }

    @Topic("orders")
    public void receive(@Body Order order) {
        LOGGER.info("Received: {}", order);
        switch (order.getType()) {
            case NEW_TRIP -> service.processNewTripOrder(order);
        }
    }
}

The DriverService processes the order. It tries to find the driver located closest to the customer, changes the found driver's status to unavailable, and sends an event with the current driver state.

@Singleton
public class DriverService {

    private static final Logger LOGGER = LoggerFactory.getLogger(DriverService.class);

    private DriverClient client;
    private OrderClient orderClient;
    private DriverInMemoryRepository repository;

    public DriverService(DriverClient client, OrderClient orderClient, DriverInMemoryRepository repository) {
        this.client = client;
        this.orderClient = orderClient;
        this.repository = repository;
    }

    public void processNewTripOrder(Order order) {
        LOGGER.info("Processing: {}", order);
        Optional<Driver> driver = repository.findNearestDriver(order.getCurrentLocationX(), order.getCurrentLocationY());
        if (driver.isPresent()) {
            Driver driverLocal = driver.get();
            driverLocal.setStatus(DriverStatus.UNAVAILABLE);
            repository.updateDriver(driverLocal);
            client.send(driverLocal, String.valueOf(order.getId()));
            LOGGER.info("Message sent: {}", driverLocal);
        }
    }
   
   // OTHER METHODS ...
}
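The repository's findNearestDriver method is not shown above. As a rough illustration of what "nearest" could mean here, this self-contained sketch picks the driver with the smallest Euclidean distance to the order's location (an assumed implementation; the actual repository code may differ):

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

public class NearestDriverExample {
    record Driver(Long id, int locationX, int locationY) {}

    // Returns the driver with the smallest Euclidean distance
    // to the customer's current location (x, y).
    static Optional<Driver> findNearestDriver(List<Driver> drivers, int x, int y) {
        return drivers.stream()
            .min(Comparator.comparingDouble((Driver d) ->
                Math.hypot(d.locationX() - x, d.locationY() - y)));
    }

    public static void main(String[] args) {
        List<Driver> drivers = List.of(
            new Driver(1L, 10, 10),
            new Driver(2L, 45, 25),
            new Driver(3L, 80, 80));
        // The order location (50, 30) used in the tests above is closest to driver 2.
        System.out.println(findNearestDriver(drivers, 50, 30).get().id()); // 2
    }
}
```

In the real service the candidate list would additionally be filtered by DriverStatus, so that unavailable drivers are skipped.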

To verify that a final message with the change notification has been sent to the drivers topic, we have to create our own listener for test purposes. It receives the message and stores it in a @Singleton holder class, which is then accessed by the single-threaded test class. The described process is visualized in the picture below.
(Figure: verification of messages sent by driver-service during tests)
Here's the implementation of the test listener, which is responsible only for receiving messages sent to the drivers topic and writing them to the DriverHolder bean.

@KafkaListener(groupId = "driverTest")
public class DriverConfirmListener {

   private static final Logger LOGGER = LoggerFactory.getLogger(DriverConfirmListener.class);

   @Inject
   DriverHolder driverHolder;

   @Topic("drivers")
   public void receive(@Body Driver driver) {
      LOGGER.info("Confirmed: {}", driver);
      driverHolder.setCurrentDriver(driver);
   }

}

Here’s the implementation of DriverHolder class.

@Singleton
public class DriverHolder {

   private Driver currentDriver;

   public Driver getCurrentDriver() {
      return currentDriver;
   }

   public void setCurrentDriver(Driver currentDriver) {
      this.currentDriver = currentDriver;
   }

}

No matter whether you are using embedded Kafka, Testcontainers, or a manually started Docker container, you can use the verification mechanism described above.

Kafka with Testcontainers

We will use the Testcontainers framework to run Docker containers with Zookeeper and Kafka during JUnit tests. Testcontainers is a Java library that provides lightweight, throwaway instances of common databases, Selenium web browsers, or anything else that can run in a Docker container. To use it in your project together with JUnit 5, which is already used by our sample Micronaut application, you have to add the following dependencies to your Maven pom.xml:

<dependency>
   <groupId>org.testcontainers</groupId>
   <artifactId>kafka</artifactId>
   <version>1.12.2</version>
   <scope>test</scope>
</dependency>
<dependency>
   <groupId>org.testcontainers</groupId>
   <artifactId>junit-jupiter</artifactId>
   <version>1.12.2</version>
   <scope>test</scope>
</dependency>

The declared library org.testcontainers:kafka:1.12.2 provides the KafkaContainer class, which allows you to define and start a Kafka container with embedded Zookeeper in your tests. However, I decided to use the GenericContainer class and run two containers: wurstmeister/zookeeper and wurstmeister/kafka. Because Kafka needs to communicate with Zookeeper, both containers should run in the same network. We also have to override the Zookeeper container's name and hostname to allow Kafka to reach it by hostname.
When running the Kafka container we need to set some important environment variables. The variable KAFKA_ADVERTISED_HOST_NAME sets the hostname under which Kafka is visible to external clients, and KAFKA_ZOOKEEPER_CONNECT sets the Zookeeper lookup address. Although it is not recommended, we disable dynamic port exposure by binding a static port number equal to the container port 9092. That helps us avoid some problems with setting the Kafka advertised port and injecting it into the Micronaut configuration.

@MicronautTest
@Testcontainers
@TestInstance(TestInstance.Lifecycle.PER_CLASS)
public class OrderKafkaContainerTest {

    private static final Logger LOGGER = LoggerFactory.getLogger(OrderKafkaContainerTest.class);

    static Network network = Network.newNetwork();

   @Container
   public static final GenericContainer ZOOKEEPER = new GenericContainer("wurstmeister/zookeeper")
      .withCreateContainerCmdModifier(it -> ((CreateContainerCmd) it).withName("zookeeper").withHostName("zookeeper"))
      .withExposedPorts(2181)
      .withNetworkAliases("zookeeper")
      .withNetwork(network);

   @Container
   public static final GenericContainer KAFKA_CONTAINER = new GenericContainer("wurstmeister/kafka")
      .withCreateContainerCmdModifier(it -> ((CreateContainerCmd) it).withName("kafka").withHostName("kafka")
         .withPortBindings(new PortBinding(Ports.Binding.bindPort(9092), new ExposedPort(9092))))
      .withExposedPorts(9092)
      .withNetworkAliases("kafka")
      .withEnv("KAFKA_ADVERTISED_HOST_NAME", "192.168.99.100")
      .withEnv("KAFKA_ZOOKEEPER_CONNECT", "zookeeper:2181")
      .withNetwork(network);
      
   // TESTS ...
   
}

The test scenarios may be the same as for embedded Kafka, or we may attempt to define some more advanced integration tests. To do that, we first create a Docker image of every microservice during the build. We can use io.fabric8:docker-maven-plugin for that. Here's the example for driver-service.

<plugin>
   <groupId>io.fabric8</groupId>
   <artifactId>docker-maven-plugin</artifactId>
   <version>0.31.0</version>
   <configuration>
      <images>
         <image>
            <name>piomin/driver-service:${project.version}</name>
            <build>
               <dockerFile>${project.basedir}/Dockerfile</dockerFile>
               <tags>
                  <tag>latest</tag>
                  <tag>${project.version}</tag>
               </tags>
            </build>
         </image>
      </images>
   </configuration>
   <executions>
      <execution>
         <id>start</id>
         <phase>pre-integration-test</phase>
         <goals>
            <goal>build</goal>
            <goal>start</goal>
         </goals>
      </execution>
      <execution>
         <id>stop</id>
         <phase>post-integration-test</phase>
         <goals>
            <goal>stop</goal>
         </goals>
      </execution>
   </executions>
</plugin>

If we have a Docker image of every microservice, we can easily run it with Testcontainers during our integration tests. In the fragment of the test class visible below, I'm running the driver-service container in addition to the Kafka and Zookeeper containers. The test is implemented inside order-service. We are building the same scenario as in the test with embedded Kafka: sending the NEW_TRIP order. But this time we are verifying whether the message has been received and processed by driver-service. This verification is performed by listening for notification events sent by the driver-service running in a Docker container to the drivers topic. Normally, order-service does not listen for messages incoming on the drivers topic, but we created such an integration just for integration-test purposes.

@Container
public static final GenericContainer DRIVER_CONTAINER = new GenericContainer("piomin/driver-service")
   .withNetwork(network);

@Inject
OrderClient client;
@Inject
OrderInMemoryRepository repository;
@Inject
DriverHolder driverHolder;

@Test
@org.junit.jupiter.api.Order(1)
public void testNewTrip() throws InterruptedException {
   Order order = new Order(OrderType.NEW_TRIP, 1L, 50, 30);
   order = repository.add(order);
   client.send(order);
   Driver driverReceived = null;
   for (int i = 0; i < 10; i++) {
      driverReceived = driverHolder.getCurrentDriver();
      if (driverReceived != null)
         break;
      Thread.sleep(1000);
   }
   driverHolder.setCurrentDriver(null);
   Assertions.assertNotNull(driverReceived);
}

Summary

In this article, I have described an approach to component testing with embedded Kafka and Micronaut, as well as integration tests with Docker and Testcontainers. This is the first part of the article; in the second, I'm going to show you how to build contract tests for Micronaut applications with Pact.

The post Part 1: Testing Kafka Microservices With Micronaut appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2019/10/09/part-1-testing-kafka-microservices-with-micronaut/feed/ 0 7305
Micronaut Tutorial: Server Application https://piotrminkowski.com/2019/04/23/micronaut-tutorial-server-application/ https://piotrminkowski.com/2019/04/23/micronaut-tutorial-server-application/#comments Tue, 23 Apr 2019 07:30:34 +0000 https://piotrminkowski.wordpress.com/?p=7115 In this part of my tutorial to Micronaut framework we are going to create a simple HTTP server-side application running on Netty. We have already discussed the most interesting core features of Micronaut like beans, scopes or unit testing in the first part of that tutorial. For more details you may refer to my article […]

The post Micronaut Tutorial: Server Application appeared first on Piotr's TechBlog.

]]>
In this part of my tutorial on the Micronaut framework, we are going to create a simple HTTP server-side application running on Netty. We have already discussed the most interesting core features of Micronaut, like beans, scopes, and unit testing, in the first part of this tutorial. For more details, you may refer to my article Micronaut Tutorial: Beans and Scopes.

Assuming we have basic knowledge of the core mechanisms of Micronaut, we may proceed to the key part of the framework and discuss how to build a simple microservice application exposing a REST API over HTTP.

Embedded Micronaut HTTP Server

First, we need to add a dependency to our pom.xml responsible for running an embedded server during application startup. By default, Micronaut starts on a Netty server, so we only need to include the following dependency:

<dependency>
   <groupId>io.micronaut</groupId>
   <artifactId>micronaut-http-server-netty</artifactId>
</dependency>

Assuming we have the following main class defined, we only need to run it:

public class MainApp {

    public static void main(String[] args) {
        Micronaut.run(MainApp.class);
    }

}

By default, the Netty server runs on port 8080. You may override this and force the server to run on a specific port by setting the following property in your application.yml or bootstrap.yml. You can also set the value of this property to -1 to run the server on a randomly generated port.

micronaut:
  server:
    port: 8100

Creating HTTP Web Application

If you are already familiar with Spring Boot, you should not have any problems building a simple REST server-side application with Micronaut. The approach is almost identical. We just have to create a controller class and annotate it with @Controller. Micronaut supports all HTTP method types; you will probably use @Get, @Post, @Delete, @Put, or @Patch. Here's our sample controller class that implements methods for adding a new Person object and finding all persons or a single person by id:

@Controller("/persons")
public class PersonController {

    List<Person> persons = new ArrayList<>();

    @Post
    public Person add(Person person) {
        person.setId(persons.size() + 1);
        persons.add(person);
        return person;
    }

    @Get("/{id}")
    public Optional<Person> findById(Integer id) {
        return persons.stream()
                .filter(it -> it.getId().equals(id))
                .findFirst();
    }

    @Get
    public List<Person> findAll() {
        return persons;
    }

}

Request variables are resolved automatically and bound to the method argument with the same name. Micronaut populates method arguments from URI variables like /{variableName} and GET query parameters like ?paramName=paramValue. If the request contains a JSON body, you should annotate the corresponding argument with @Body. Our sample controller is very simple. It does not perform any input data validation. Let's change that.
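
At the HTTP level, calls to this controller might look as follows (a sketch; the JSON fields match the Person class introduced in the next section):

```
POST /persons HTTP/1.1
Content-Type: application/json

{"firstName": "John", "lastName": "Smith", "age": 33, "gender": "MALE"}

GET /persons/1 HTTP/1.1
```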

Validation

To be able to validate HTTP requests we should first add the following dependencies to our pom.xml:

<dependency>
   <groupId>io.micronaut</groupId>
   <artifactId>micronaut-validation</artifactId>
</dependency>
<dependency>
   <groupId>io.micronaut.configuration</groupId>
   <artifactId>micronaut-hibernate-validator</artifactId>
</dependency>

Validation in Micronaut is based on JSR-380, also known as Bean Validation 2.0. We can use javax.validation annotations such as @NotNull, @Min or @Max. Micronaut uses the implementation provided by Hibernate Validator, so even if you don't use JPA in your project, you have to add micronaut-hibernate-validator to your dependencies. After that we may add validation to our model class using javax.validation annotations. Here's the Person model class with validation. All the fields are required: firstName and lastName cannot be blank, id cannot be greater than 10000, and age cannot be lower than 0.

public class Person {

    @Max(10000)
    private Integer id;
    @NotBlank
    private String firstName;
    @NotBlank
    private String lastName;
    @PositiveOrZero
    private int age;
    @NotNull
    private Gender gender;

    public Integer getId() {
        return id;
    }

    public void setId(Integer id) {
        this.id = id;
    }

    public String getFirstName() {
        return firstName;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    public String getLastName() {
        return lastName;
    }

    public void setLastName(String lastName) {
        this.lastName = lastName;
    }

    public int getAge() {
        return age;
    }

    public void setAge(int age) {
        this.age = age;
    }

    public Gender getGender() {
        return gender;
    }

    public void setGender(Gender gender) {
        this.gender = gender;
    }
   
}

Now, we need to modify the code of our controller. First, it needs to be annotated with @Validated. Also, the @Body parameter of the POST method should be annotated with @Valid. REST method arguments may also be validated using JSR-380 annotations. Alternatively, we may configure validation using URI templates. The annotation @Get("/{id:4}") indicates that the variable can contain at most 4 characters (i.e. the value is lower than 10000), while @Get("{?max,offset}") indicates that the query parameters are optional.
Here's the current implementation of our controller. Besides validation, I have also implemented pagination for findAll based on the optional max and offset parameters:

@Controller("/persons")
@Validated
public class PersonController {

    List<Person> persons = new ArrayList<>();

    @Post
    public Person add(@Body @Valid Person person) {
        person.setId(persons.size() + 1);
        persons.add(person);
        return person;
    }

    @Get("/{id:4}")
    public Optional<Person> findById(@NotNull Integer id) {
        return persons.stream()
                .filter(it -> it.getId().equals(id))
                .findFirst();
    }

    @Get("{?max,offset}")
    public List<Person> findAll(@Nullable Integer max, @Nullable Integer offset) {
        return persons.stream()
                .skip(offset == null ? 0 : offset)
                .limit(max == null ? 10000 : max)
                .collect(Collectors.toList());
    }

}
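
The skip/limit pagination used in findAll can be isolated and checked with plain Java collections. Here's a simplified sketch mirroring the controller's logic outside Micronaut (class and method names are mine, not part of the sample project):

```java
import java.util.List;
import java.util.stream.Collectors;

public class PaginationSketch {

    // Mirrors the controller's findAll: skip `offset` items, return at most `max`.
    // Null parameters fall back to the same defaults as in the controller.
    static <T> List<T> page(List<T> items, Integer max, Integer offset) {
        return items.stream()
                .skip(offset == null ? 0 : offset)
                .limit(max == null ? 10000 : max)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Integer> all = List.of(1, 2, 3, 4, 5);
        System.out.println(page(all, 2, 1));       // [2, 3]
        System.out.println(page(all, null, null)); // [1, 2, 3, 4, 5]
    }
}
```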

Since we have finished the implementation of our controller, it is the right time to test it.

Testing Micronaut with embedded HTTP server

We have already discussed testing with Micronaut in the first part of my tutorial. The only difference in comparison to those tests is the necessity of running an embedded server and calling endpoints via HTTP. To do that we have to include the dependency with the Micronaut HTTP client:

<dependency>
   <groupId>io.micronaut</groupId>
   <artifactId>micronaut-http-client</artifactId>
   <scope>test</scope>
</dependency>

We should also inject an instance of the embedded server in order to be able to detect its address (for example, if the port number is generated automatically):

@MicronautTest
public class PersonControllerTests {

    @Inject
    EmbeddedServer server;
   
   // tests implementation ...
   
}

We build the Micronaut HTTP client programmatically by calling the static method create. It is also possible to obtain a reference to HttpClient by annotating a field with @Client.
The following test implementation is based on JUnit 5. I have provided a positive test for all the exposed endpoints and one negative scenario with invalid input data (the age field lower than zero). The Micronaut HTTP client can be used in both asynchronous non-blocking mode and synchronous blocking mode. In this case we force it to work in blocking mode.

@MicronautTest
public class PersonControllerTests {

    @Inject
    EmbeddedServer server;

    @Test
    public void testAdd() throws MalformedURLException {
        HttpClient client = HttpClient.create(new URL("http://" + server.getHost() + ":" + server.getPort()));
        Person person = new Person();
        person.setFirstName("John");
        person.setLastName("Smith");
        person.setAge(33);
        person.setGender(Gender.MALE);
        person = client.toBlocking().retrieve(HttpRequest.POST("/persons", person), Person.class);
        Assertions.assertNotNull(person);
        Assertions.assertEquals(Integer.valueOf(1), person.getId());
    }

    @Test
    public void testAddNotValid() throws MalformedURLException {
        HttpClient client = HttpClient.create(new URL("http://" + server.getHost() + ":" + server.getPort()));
        final Person person = new Person();
        person.setFirstName("John");
        person.setLastName("Smith");
        person.setAge(-1);
        person.setGender(Gender.MALE);

        Assertions.assertThrows(HttpClientResponseException.class,
                () -> client.toBlocking().retrieve(HttpRequest.POST("/persons", person), Person.class),
                "person.age: must be greater than or equal to 0");
    }

    @Test
    public void testFindById() throws MalformedURLException {
        HttpClient client = HttpClient.create(new URL("http://" + server.getHost() + ":" + server.getPort()));
        Person person = client.toBlocking().retrieve(HttpRequest.GET("/persons/1"), Person.class);
        Assertions.assertNotNull(person);
    }

    @Test
    public void testFindAll() throws MalformedURLException {
        HttpClient client = HttpClient.create(new URL("http://" + server.getHost() + ":" + server.getPort()));
        Person[] persons = client.toBlocking().retrieve(HttpRequest.GET("/persons"), Person[].class);
        Assertions.assertEquals(1, persons.length);
    }

}

We have already built a simple web application that exposes some methods over a REST API, validates input data, and includes JUnit API tests. Now, we may discuss some more advanced, interesting Micronaut features. The first of them is built-in support for API versioning.

API versioning

Since version 1.1, Micronaut supports API versioning via a dedicated @Version annotation. To test this feature we will add a new version of the findAll method to our controller class. The new version of this method requires the input parameters max and offset, which were optional in the first version of the method.

@Version("1")
@Get("{?max,offset}")
public List<Person> findAll(@Nullable Integer max, @Nullable Integer offset) {
   return persons.stream()
         .skip(offset == null ? 0 : offset)
         .limit(max == null ? 10000 : max)
         .collect(Collectors.toList());
}

@Version("2")
@Get("{?max,offset}")
public List<Person> findAllV2(@NotNull Integer max, @NotNull Integer offset) {
   return persons.stream()
         .skip(offset == null ? 0 : offset)
         .limit(max == null ? 10000 : max)
         .collect(Collectors.toList());
}

The versioning feature is not enabled by default. To enable it, you need to set the property micronaut.router.versioning.enabled to true in application.yml. We will also set the default version to 1, which keeps compatibility with the tests created in the previous section that do not use the versioning feature:

micronaut:
  router:
    versioning:
      enabled: true
      default-version: 1
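
To call a specific version, the client passes the version with the request. Assuming the header-based strategy (per the Micronaut docs the default header name is X-API-VERSION, and the strategy may need to be enabled via micronaut.router.versioning.header.enabled), a request against version 2 could look like this sketch:

```
GET /persons?max=10&offset=0 HTTP/1.1
Host: localhost:8100
X-API-VERSION: 2
```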

Micronaut's automatic versioning is also supported by the declarative HTTP client. To create such a client we need to define an interface that contains the signature of the target server-side method and is annotated with @Client. Here's a declarative client interface responsible only for communicating with version 2 of the findAll method:

@Client("/persons")
public interface PersonClient {

    @Version("2")
    @Get("{?max,offset}")
    List<Person> findAllV2(Integer max, Integer offset);

}

The PersonClient declared above may be injected into the test and used to call the API method exposed by the server-side application:


@Inject
PersonClient client;

@Test
public void testFindAllV2() {
   List<Person> persons = client.findAllV2(10, 0);
   Assertions.assertEquals(1, persons.size());
}

API Documentation with Swagger

Micronaut provides built-in support for generating OpenAPI / Swagger YAML documentation at compilation time. We can customize the produced documentation using standard Swagger annotations. To enable this support for our application we should add the following swagger-annotations dependency to pom.xml, and enable annotation processing for the micronaut-openapi module inside the Maven compiler plugin configuration:

<dependency>
   <groupId>io.swagger.core.v3</groupId>
   <artifactId>swagger-annotations</artifactId>
</dependency>
...
<plugin>
   <groupId>org.apache.maven.plugins</groupId>
   <artifactId>maven-compiler-plugin</artifactId>
   <version>3.7.0</version>
   <configuration>
      <source>${jdk.version}</source>
      <target>${jdk.version}</target>
      <compilerArgs>
         <arg>-parameters</arg>
      </compilerArgs>
      <annotationProcessorPaths>
         <path>
            <groupId>io.micronaut</groupId>
            <artifactId>micronaut-inject-java</artifactId>
            <version>${micronaut.version}</version>
         </path>
         <path>
            <groupId>io.micronaut.configuration</groupId>
            <artifactId>micronaut-openapi</artifactId>
            <version>${micronaut.version}</version>
         </path>
      </annotationProcessorPaths>
   </configuration>
   ...
</plugin>

We can include some basic information in the generated Swagger YAML, like the application name, description, version number or author name, using the @OpenAPIDefinition annotation:

@OpenAPIDefinition(
   info = @Info(
      title = "Sample Application",
      version = "1.0",
      description = "Sample API",
      contact = @Contact(url = "https://piotrminkowski.wordpress.com", name = "Piotr Mińkowski", email = "piotr.minkowski@gmail.com")
   )
)
public class MainApp {

    public static void main(String[] args) {
        Micronaut.run(MainApp.class);
    }

}

Micronaut generates the Swagger manifest based on the title and version fields inside the @Info annotation. In this case our YAML definition file is available under the name sample-application-1.0.yml, and will be generated into the META-INF/swagger directory. We can expose it outside the application using an HTTP endpoint. Here's the appropriate configuration provided inside the application.yml file.

micronaut:
  static-resources:
    swagger:
     paths: classpath:META-INF/swagger
     mapping: /swagger/**

Assuming our application is running on port 8100, the Swagger definition is available under the path http://localhost:8100/swagger/sample-application-1.0.yml. You can call this endpoint and copy the response to any Swagger editor.


Management and Monitoring Endpoints

Micronaut provides some built-in HTTP endpoints used for management and monitoring. To enable them for the application we first need to include the following dependency:

<dependency>
   <groupId>io.micronaut</groupId>
   <artifactId>micronaut-management</artifactId>
</dependency>

No endpoints are exposed outside the application by default. If you would like to expose them all, you should set the property endpoints.all.enabled to true. Alternatively, you can enable or disable a single endpoint by using its id instead of all in the property name. Also, some of the built-in endpoints require authentication and some do not; you may enable or disable it for all endpoints using the property endpoints.all.sensitive. The following configuration inside application.yml enables all built-in endpoints except the stop endpoint used for shutting down the application, and disables authentication for all the enabled endpoints:

endpoints:
  all:
    enabled: true
    sensitive: false
  stop:
    enabled: false

You may use one of the following:

  • GET /beans – returns information about the loaded bean definitions
  • GET /info – returns static information from the state of the application
  • GET /health – exposes “healthcheck”
  • POST /refresh – refreshes the application state; all the beans annotated with @Refreshable will be reloaded
  • GET /routes – returns information about URIs exposed by the application
  • GET /loggers – returns information about the available loggers
  • GET /caches – returns information about the caches
  • POST /stop – shuts down the application server
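
Instead of enabling everything, you can also switch on only selected endpoints by id. A sketch consistent with the per-endpoint property pattern described above:

```yaml
endpoints:
  health:
    enabled: true
    sensitive: false
  routes:
    enabled: true
    sensitive: false
```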

Summary

In this tutorial you have learned how to:

  • Build a simple application that exposes some HTTP endpoints
  • Validate input data inside controller
  • Test your controller with JUnit 5 on embedded Netty using Micronaut HTTP client
  • Use built-in API versioning
  • Generate Swagger API documentation automatically
  • Use built-in management and monitoring endpoints

The first part of my tutorial is available here: https://piotrminkowski.wordpress.com/2019/04/15/micronaut-tutorial-beans-and-scopes/. It uses the same repository as the current part: https://github.com/piomin/sample-micronaut-applications.git.

Micronaut Tutorial: Beans and Scopes https://piotrminkowski.com/2019/04/15/micronaut-tutorial-beans-and-scopes/ https://piotrminkowski.com/2019/04/15/micronaut-tutorial-beans-and-scopes/#comments Mon, 15 Apr 2019 21:04:00 +0000 https://piotrminkowski.wordpress.com/?p=7108 Micronaut is a relatively new JVM-based framework. It is especially designed for building modular, easy testable microservice applications. Micronaut is heavily inspired by Spring and Grails frameworks, which is not a surprise, if we consider it has been developed by the creators of Grails framework. It is based on Java’s annotation processing, IoC (Inversion of […]

The post Micronaut Tutorial: Beans and Scopes appeared first on Piotr's TechBlog.

]]>
Micronaut is a relatively new JVM-based framework. It is designed especially for building modular, easily testable microservice applications. Micronaut is heavily inspired by the Spring and Grails frameworks, which is not a surprise if we consider that it has been developed by the creators of the Grails framework. It is based on Java's annotation processing, IoC (Inversion of Control) and DI (Dependency Injection).

Micronaut implements the JSR-330 (javax.inject) specification for dependency injection. It supports constructor injection, field injection, JavaBean property injection and method parameter injection. In this part of the tutorial I'm going to give some tips on how to:

  • define and register beans in the application context
  • use built-in scopes
  • inject configuration to your application
  • automatically test your beans during application build with JUnit 5

Prerequisites

Before we proceed to development we need to create a sample project with dependencies. Here's the list of Maven dependencies used in the application created for this tutorial:

<dependencies>
   <dependency>
      <groupId>io.micronaut</groupId>
      <artifactId>micronaut-inject</artifactId>
   </dependency>
   <dependency>
      <groupId>io.micronaut</groupId>
      <artifactId>micronaut-runtime</artifactId>
   </dependency>
   <dependency>
      <groupId>io.micronaut</groupId>
      <artifactId>micronaut-inject-java</artifactId>
      <scope>provided</scope>
   </dependency>
   <dependency>
      <groupId>ch.qos.logback</groupId>
      <artifactId>logback-classic</artifactId>
      <version>1.2.3</version>
      <scope>runtime</scope>
   </dependency>
   <dependency>
      <groupId>io.micronaut.test</groupId>
      <artifactId>micronaut-test-junit5</artifactId>
      <scope>test</scope>
   </dependency>
</dependencies>

We will use the newest stable version of Micronaut – 1.1.0:

<dependencyManagement>
   <dependencies>
      <dependency>
         <groupId>io.micronaut</groupId>
         <artifactId>micronaut-bom</artifactId>
         <version>1.1.0</version>
         <type>pom</type>
         <scope>import</scope>
      </dependency>
   </dependencies>
</dependencyManagement>

The sample application source code is available on Github in the repository https://github.com/piomin/sample-micronaut-applications.git.

Scopes

Micronaut provides 6 built-in scopes for beans. Following JSR-330, additional scopes can be added by defining a @Singleton bean that implements the CustomScope interface. Here's the list of built-in scopes:

    • Singleton – the singleton pattern for a bean
    • Prototype – a new instance of the bean is created each time it is injected; this is the default scope
    • ThreadLocal – a custom scope that associates a bean with each thread via a ThreadLocal
    • Context – a bean created at the same time as the ApplicationContext
    • Infrastructure – a @Context bean that cannot be replaced
    • Refreshable – a custom scope that allows a bean's state to be refreshed via the /refresh endpoint

Thread Local

Two of those scopes are really interesting. Let's begin with the @ThreadLocal scope. That's something that is not available for beans in Spring: we can associate a bean with a thread using a single annotation.
How does it work? First, let's define a bean with the @ThreadLocal scope. It holds a single value in the field correlationId. The main function of this bean is to pass the same id between different singleton beans within a single thread. Here's our sample bean:

@ThreadLocal
public class MiddleService {

    private String correlationId;

    public String getCorrelationId() {
        return correlationId;
    }

    public void setCorrelationId(String correlationId) {
        this.correlationId = correlationId;
    }

}

Each singleton bean injects the bean annotated with @ThreadLocal. There are two sample singleton beans defined:

@Singleton
public class BeginService {

    @Inject
    MiddleService service;

    public void start(String correlationId) {
        service.setCorrelationId(correlationId);
    }

}

@Singleton
public class FinishService {

    @Inject
    MiddleService service;

    public String finish() {
        return service.getCorrelationId();
    }

}
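
The per-thread semantics this scope relies on can be illustrated with plain java.lang.ThreadLocal. This is a simplified sketch, not Micronaut code, and the names are mine:

```java
public class ThreadLocalSketch {

    // Each thread sees its own copy of the value — the same guarantee a
    // @ThreadLocal-scoped bean gives to the singletons that inject it.
    static final ThreadLocal<String> CORRELATION_ID = new ThreadLocal<>();

    // Sets the id on a fresh thread and returns what that thread observed.
    static String observeInNewThread(String id) throws InterruptedException {
        final String[] seen = new String[1];
        Thread worker = new Thread(() -> {
            CORRELATION_ID.set(id);
            seen[0] = CORRELATION_ID.get();
        });
        worker.start();
        worker.join();
        return seen[0];
    }

    public static void main(String[] args) throws InterruptedException {
        CORRELATION_ID.set("main-123");
        System.out.println(observeInNewThread("worker-456")); // worker-456
        // The main thread's value is untouched by the worker thread.
        System.out.println(CORRELATION_ID.get());             // main-123
    }
}
```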
Testing

Testing with Micronaut and JUnit 5 is very simple. We have already added the micronaut-test-junit5 dependency to our pom.xml. Now, we only have to annotate the test class with @MicronautTest.
Here's our test. I submit 20 tasks to a pool of 10 threads; the tasks use the @ThreadLocal bean through the BeginService and FinishService singletons. Each task sets a randomly generated correlation id and checks that those two singleton beans see the same correlationId.

@MicronautTest
public class ScopesTests {

    @Inject
    BeginService begin;
    @Inject
    FinishService finish;

    @Test
    public void testThreadLocalScope() throws InterruptedException {
        final Random r = new Random();
        ExecutorService executor = Executors.newFixedThreadPool(10);
        for (int i = 0; i < 20; i++) {
            executor.execute(() -> {
                String correlationId = "abc" + r.nextInt(10000);
                begin.start(correlationId);
                Assertions.assertEquals(correlationId, finish.finish());
            });
        }
        executor.shutdown();
        // Wait for completion instead of busy-waiting on isTerminated()
        executor.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println("Finished all threads");
    }
   
}
Refreshable

@Refreshable is another interesting scope offered by Micronaut. You can refresh the state of such a bean by calling the HTTP endpoint /refresh or by publishing a RefreshEvent to the application context. Because we don't use an HTTP server, the second option is the right choice for us. First, let's define a bean with the @Refreshable scope. It injects a value from a configuration property and returns it:

@Refreshable
public class RefreshableService {

    @Property(name = "test.property")
    String testProperty;

    @PostConstruct
    public void init() {
        System.out.println("Property: " + testProperty);
    }

    public String getTestProperty() {
        return testProperty;
    }

}

To test it we should first replace the value of test.property. After injecting the ApplicationContext into the test, we may add a new property source programmatically by calling the method addPropertySource. Because this type of property source has a higher loading priority than the properties from application.yml, the original value will be overridden. Now, we just need to publish a new refresh event to the context and call the method from the sample bean one more time:

@Inject
ApplicationContext context;
@Inject
RefreshableService refreshable;

@Test
public void testRefreshableScope() {
   String testProperty = refreshable.getTestProperty();
   Assertions.assertEquals("hello", testProperty);
   context.getEnvironment().addPropertySource(PropertySource.of(CollectionUtils.mapOf("test.property", "hi")));
   context.publishEvent(new RefreshEvent());
   try {
      Thread.sleep(1000);
   } catch (InterruptedException e) {
      e.printStackTrace();
   }
   testProperty = refreshable.getTestProperty();
   Assertions.assertEquals("hi", testProperty);
}

Beans

In the previous section we have already defined simple beans with different scopes. Micronaut provides some more advanced features that can be used while defining new beans. You can create conditional beans, define a replacement for an existing bean, or use different methods of injecting configuration into a bean.

Conditions and Replacements

In order to define conditions for a newly created bean we need to annotate it with @Requires. Micronaut offers many ways of defining configuration requirements. You will always use the same annotation, but a different field for each option. You can require:

  • the presence of one or more classes – @Requires(classes=...)
  • the absence of one or more classes – @Requires(missing=...)
  • the presence of one or more beans – @Requires(beans=...)
  • the absence of one or more beans – @Requires(missingBeans=...)
  • a property with an optional value – @Requires(property="...")
  • a property not to be part of the configuration – @Requires(missingProperty="...")
  • the presence of one or more files in the file system – @Requires(resources="file:...")
  • the presence of one or more classpath resources – @Requires(resources="classpath:...")

And a few others. Now, let's consider a simple example including some of the conditional strategies. Here's a class that requires the property test.property to be available in the environment.

@Prototype
@Requires(property = "test.property")
public class TestPropertyRequiredService {

    @Property(name = "test.property")
    String testProperty;

    public String getTestProperty() {
        return testProperty;
    }

}

Here’s another bean definition. It requires that property test.property2 is not available in the environment. The following bean is being replaced by the other bean through annotation @Replaces(bean = TestPropertyRequiredValueService.class).

@Prototype
@Requires(missingProperty = "test.property2")
@Replaces(bean = TestPropertyRequiredValueService.class)
public class TestPropertyNotRequiredService {

    public String getTestProperty() {
        return "None";
    }
    
}

Here’s the last sample bean declaration. There is one interesting option related to conditional beans depending from property. You can require the property to be a certain value, not be a certain value, and use a default in those checks if its not set. Also, the following bean is replacing TestPropertyNotRequiredService bean.

@Prototype
@Requires(property = "test.property", value = "hello", defaultValue = "Hi!")
public class TestPropertyRequiredValueService {

    @Property(name = "test.property")
    String testProperty;

    public String getTestProperty() {
        return testProperty;
    }

}
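
These beans assume that test.property is set. The assertions in the tests below expect the following entry, so the assumed application.yml would contain:

```yaml
test:
  property: hello
```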

The result of the following test is predictable:

@Inject
TestPropertyRequiredService service1;
@Inject
TestPropertyNotRequiredService service2;
@Inject
TestPropertyRequiredValueService service3;

@Test
public void testPropertyRequired() {
   String testProperty = service1.getTestProperty();
   Assertions.assertNotNull(testProperty);
   Assertions.assertEquals("hello", testProperty);
}

@Test
public void testPropertyNotRequired() {
   String testProperty = service2.getTestProperty();
   Assertions.assertNotNull(testProperty);
   Assertions.assertEquals("None", testProperty);
}

@Test
public void testPropertyValueRequired() {
   String testProperty = service3.getTestProperty();
   Assertions.assertNotNull(testProperty);
   Assertions.assertEquals("hello", testProperty);
}
Application Configuration

Configuration in Micronaut takes inspiration from both Spring Boot and Grails, integrating configuration properties from multiple sources directly into the core IoC container. By default, configuration can be provided in Java properties, YAML, JSON or Groovy files. There are 7 levels of priority for property sources (for comparison, Spring Boot provides 17 levels):

  1. Command line arguments
  2. Properties from SPRING_APPLICATION_JSON
  3. Properties from MICRONAUT_APPLICATION_JSON
  4. Java System Properties
  5. OS environment variables
  6. Environment-specific properties from application-{environment}.{extension}
  7. Application-specific properties from application.{extension}

Among the more interesting features related to Micronaut configuration are @EachProperty and @EachBean. Both annotations are used for defining multiple instances of a bean, each with its own distinct configuration.
To show a sample use case for those annotations, let's imagine that we are building a simple client-side load balancer that connects to multiple instances of a service. The configuration is available under the property test.url.* and contains only the target URL:

@EachProperty("test.url")
public class ClientConfig {

    private String name;
    private String url;

    public ClientConfig(@Parameter String name) {
        this.name = name;
    }

    public String getUrl() {
        return url;
    }

    public void setUrl(String url) {
        this.url = url;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}

Assuming we have the following configuration properties, Micronaut creates three instances of our configuration under the names: client1, client2 and client3.

test:
  url:
    client1.url: http://localhost:8080
    client2.url: http://localhost:8090
    client3.url: http://localhost:8100

Using the @EachProperty annotation was only the first step. We also need a ClientService responsible for performing interaction with the target service.

public class ClientService {

    private String url;

    public ClientService(String url) {
        this.url = url;
    }

    public String connect() {
        return url;
    }
}

The ClientService is still not registered as a bean, since it is not annotated. Our goal is to inject three ClientConfig beans containing the distinct configurations, and register three instances of the ClientService bean. That's why we define a bean factory with a method annotated with @EachBean. In Micronaut, a factory usually allows you to register a bean which is not a part of your codebase, but it is also useful in this case.

@Factory
public class ClientFactory {

    @EachBean(ClientConfig.class)
    ClientService client(ClientConfig config) {
        String url = config.getUrl();
        return new ClientService(url);
    }
}
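
Stripped of the container, the factory's behavior reduces to building one service per named configuration entry. Here's a plain-Java sketch (class and method names are mine, not Micronaut API) of what @EachBean effectively wires up:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class EachBeanSketch {

    // Stand-in for the ClientService bean from the article.
    static class ClientService {
        private final String url;
        ClientService(String url) { this.url = url; }
        String connect() { return url; }
    }

    // What the @EachBean factory effectively produces: one ClientService
    // per named configuration entry, addressable by the same name.
    static Map<String, ClientService> buildServices(Map<String, String> urlsByName) {
        Map<String, ClientService> services = new LinkedHashMap<>();
        urlsByName.forEach((name, url) -> services.put(name, new ClientService(url)));
        return services;
    }

    public static void main(String[] args) {
        Map<String, String> config = new LinkedHashMap<>();
        config.put("client1", "http://localhost:8080");
        config.put("client2", "http://localhost:8090");
        config.put("client3", "http://localhost:8100");
        Map<String, ClientService> services = buildServices(config);
        System.out.println(services.get("client2").connect()); // http://localhost:8090
    }
}
```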

Finally, we may proceed to the test. We have injected all three instances of ClientService. Each of them contains configuration injected from a different instance of the ClientConfig bean. If you don't set any qualifier, Micronaut injects the bean with the configuration defined first. For injecting the other instances of the bean we should use a qualifier, which is the name of the configuration property.

@Inject
ClientService client;
@Inject
@Named("client2")
ClientService client2;
@Inject
@Named("client3")
ClientService client3;

@Test
public void testClient() {
   String url = client.connect();
   Assertions.assertEquals("http://localhost:8080", url);
   url = client2.connect();
   Assertions.assertEquals("http://localhost:8090", url);
   url = client3.connect();
   Assertions.assertEquals("http://localhost:8100", url);
}
