Spring Cloud Contract Archives - Piotr's TechBlog (https://piotrminkowski.com/tag/spring-cloud-contract/)

Spring Boot Best Practices for Microservices
https://piotrminkowski.com/2019/12/06/spring-boot-best-practices-for-microservices/ – Fri, 06 Dec 2019

In this article I'm going to propose my list of "golden rules" for building Spring Boot applications that are part of a microservices-based system. I base this on my experience migrating monolithic SOAP applications running on JEE servers into small, REST-based applications built on top of Spring Boot. This list of Spring Boot best practices assumes you are running many microservices in production under heavy incoming traffic. Let's begin.

1. Collect metrics

It is amazing how metrics visualization can change the approach to systems monitoring in an organization. After setting up monitoring in Grafana, we are able to recognize more than 90% of major problems in our systems before customers report them to our support team. Thanks to those dashboards full of diagrams and alerts, we can react much faster than before. In a microservices-based architecture, metrics become even more important than in a monolith.
The good news is that Spring Boot comes with a built-in mechanism for collecting the most important metrics. In fact, we just need to set a few configuration properties to expose the predefined set of metrics provided by Spring Boot Actuator. To use it, we need to include the Actuator starter as a dependency:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

To enable the metrics endpoint, we have to set the property management.endpoint.metrics.enabled to true. Then you can check out the full list of generated metrics by calling the GET /actuator/metrics endpoint. One of the most important metrics for us is http.server.requests, which provides statistics about the number of incoming requests and response times. It is automatically tagged with the method type (POST, GET, etc.), HTTP status, and URI.
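As a sketch (assuming Spring Boot 2.x, where Actuator endpoints also need to be exposed over HTTP), the corresponding application.yml could look like this:

```yaml
management:
  endpoint:
    metrics:
      enabled: true
  endpoints:
    web:
      exposure:
        include: metrics
```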
Metrics have to be stored somewhere. The most popular tools for that are InfluxDB and Prometheus. They represent two different models of collecting data: Prometheus periodically scrapes data from an endpoint exposed by the application, while InfluxDB provides a REST API that has to be called by the application. The integration with these two tools, and several others, is realized with the Micrometer library. To enable support for InfluxDB, we have to include the following dependency.

<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-registry-influx</artifactId>
</dependency>

We also have to provide at least the URL and the Influx database name inside the application.yml file.

management:
  metrics:
    export:
      influx:
        db: springboot
        uri: http://192.168.99.100:8086

To enable the Prometheus HTTP endpoint, we first need to include the appropriate Micrometer module and set the property management.endpoint.prometheus.enabled to true.

<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-registry-prometheus</artifactId>
</dependency>
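A minimal application.yml sketch for the Prometheus case (again assuming Spring Boot 2.x endpoint exposure rules):

```yaml
management:
  endpoint:
    prometheus:
      enabled: true
  endpoints:
    web:
      exposure:
        include: prometheus
```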

By default, Prometheus tries to collect data from the defined target endpoint once a minute. The rest of the configuration has to be provided on the Prometheus side. The scrape_configs section is responsible for specifying the set of targets and the parameters describing how to connect to them.

scrape_configs:
  - job_name: 'springboot'
    metrics_path: '/actuator/prometheus'
    static_configs:
    - targets: ['person-service:2222']

Sometimes it is useful to add extra tags to metrics, especially if many instances of a single microservice log to the same Influx database. Here's a sample of tagging for applications running on Kubernetes.

@Configuration
class ConfigurationMetrics {

    @Value("\${spring.application.name}")
    lateinit var appName: String
    @Value("\${NAMESPACE:default}")
    lateinit var namespace: String
    @Value("\${HOSTNAME:default}")
    lateinit var hostname: String

    @Bean
    fun tags(): MeterRegistryCustomizer<InfluxMeterRegistry> {
        return MeterRegistryCustomizer { registry ->
            registry.config().commonTags("appName", appName).commonTags("namespace", namespace).commonTags("pod", hostname)
        }
    }

}

Here's a Grafana dashboard created for the http.server.requests metric of a single application.

spring-boot-best-practices-metrics

2. Don’t forget about logging

Logging may not seem very important during development, but it is a key point during maintenance. It is worth remembering that in an organization your application will be judged by the quality of its logs. Usually an application is maintained by a support team, so your logs should be meaningful. Don't try to put everything there; only the most important events should be logged.
It is also important to use the same logging standard across all the microservices. For example, if you log information in JSON format, do the same for every single application. If you use the tag appName to indicate the application name, or instanceId to distinguish different instances of the same application, do it everywhere. Why? You usually want to store the logs collected from all microservices in a single, central place. The most popular tool for that (or rather a collection of tools) is the Elastic Stack (ELK). To take advantage of storing logs in a central place, you should ensure that the query criteria and the response structure are the same for all applications, especially since you will correlate logs between different microservices. How? By using an external library. I can recommend my library for Spring Boot logging. To use it, include it in your dependencies.

<dependency>
  <groupId>com.github.piomin</groupId>
  <artifactId>logstash-logging-spring-boot-starter</artifactId>
  <version>1.2.2.RELEASE</version>
</dependency>

This library will force you to use some good logging practices and automatically integrates with Logstash (the ELK component responsible for collecting logs). Its main features are:

  • logging all incoming HTTP requests and outgoing HTTP responses with full body, and sending those logs to Logstash with proper tags indicating the calling method name or the response HTTP status
  • calculating and storing the execution time of each request
  • generating and propagating a correlationId for downstream service calls made with Spring RestTemplate

To enable sending logs to Logstash, we should at least provide its address and set the property logging.logstash.enabled to true.

logging.logstash:
  enabled: true
  url: 192.168.99.100:5000

After including the logstash-logging-spring-boot-starter library, you may take advantage of log tagging in Logstash. Here's a screen from Kibana for a single response log entry.

logstash-2

We may also add the Spring Cloud Sleuth library to our dependencies.

<dependency> 
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>

Spring Cloud Sleuth propagates headers compatible with Zipkin – a popular tool for distributed tracing. Its main features are:

  • adding trace (request-correlating) and span IDs to the Slf4J MDC
  • recording timing information to aid in latency analysis
  • modifying the log entry pattern to add extra information, such as additional MDC fields
  • integrating with other Spring components like OpenFeign, RestTemplate or Spring Cloud Netflix Zuul
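With Sleuth on the classpath, the default log pattern gains a bracketed section containing the application name, trace id, span id and an export flag. An illustrative log line (the service name, ids and message here are hypothetical) might look like this:

```
2019-12-06 11:14:24.123  INFO [person-service,96f95a0dd012fe82,5e8eeec48b08e26f,true] 12034 --- [nio-8080-exec-1] p.p.s.person.PersonController : Finding person by id=1
```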

3. Make your API usable

In most cases, your application will be called by other applications through a REST-based API. Therefore, it is worth taking care of proper and clear documentation, generated along with the code. Of course, there are tools for that. One of the most popular is Swagger. You can easily integrate Swagger 2 with your Spring Boot application using the SpringFox project. In order to expose a Swagger HTML site with the API documentation, we need to include the following dependencies. The first library is responsible for generating a Swagger descriptor from the Spring MVC controller code, while the second embeds the Swagger UI that displays a representation of the Swagger descriptor in your web browser.

<dependency>
   <groupId>io.springfox</groupId>
   <artifactId>springfox-swagger2</artifactId>
   <version>2.9.2</version>
</dependency>
<dependency>
   <groupId>io.springfox</groupId>
   <artifactId>springfox-swagger-ui</artifactId>
   <version>2.9.2</version>
</dependency>

That's not all. We also have to provide some beans to customize the default Swagger generation behaviour. It should document only the methods implemented inside our controllers, and not, for example, the endpoints provided automatically by Spring Boot like /actuator/*. We may also customize the UI appearance by defining a UiConfiguration bean.

@Configuration
@EnableSwagger2
public class ConfigurationSwagger {

    @Autowired
    Optional<BuildProperties> build;

    @Bean
    public Docket api() {
        String version = "1.0.0";
        if (build.isPresent())
            version = build.get().getVersion();
        return new Docket(DocumentationType.SWAGGER_2)
                .apiInfo(apiInfo(version))
                .select()
                .apis(RequestHandlerSelectors.any())
                .paths(PathSelectors.regex("(/components.*)"))
                .build()
                .useDefaultResponseMessages(false)
                .forCodeGeneration(true);
    }

    @Bean
    public UiConfiguration uiConfig() {
        return UiConfigurationBuilder.builder().docExpansion(DocExpansion.LIST).build();
    }

    private ApiInfo apiInfo(String version) {
        return new ApiInfoBuilder()
                .title("API - Components Service")
                .description("Managing Components.")
                .version(version)
                .build();
    }
}

Here’s an example of Swagger 2 UI for a single microservice.

spring-boot-best-practices-swagger

The next step is to define the same REST API guideline for all microservices. If you build the APIs of your microservices consistently, it is much simpler for both external and internal clients to integrate with them. The guideline should contain instructions on how to build your API: which headers need to be set on the request and response, how to generate error codes, etc. Such a guideline should be shared with all developers and vendors in your organization. For a more detailed explanation of generating Swagger documentation for Spring Boot microservices, including exposing it for all applications on an API gateway, you may refer to my article Microservices API Documentation with Swagger2.

4. Don't be afraid of using a circuit breaker

If you are using Spring Cloud for communication between microservices, you may leverage Spring Cloud Netflix Hystrix or Spring Cloud Circuit Breaker to implement circuit breaking. However, the first solution has already been moved to maintenance mode by the Pivotal team, since Netflix no longer develops Hystrix. The recommended solution is the new Spring Cloud Circuit Breaker built on top of the Resilience4j project.

<dependency>
   <groupId>org.springframework.cloud</groupId>
   <artifactId>spring-cloud-starter-circuitbreaker-resilience4j</artifactId>
</dependency>

Then we need to configure the required settings for the circuit breaker by defining a Customizer bean that is passed a Resilience4JCircuitBreakerFactory. We use the default values, as shown below.

@Bean
public Customizer<Resilience4JCircuitBreakerFactory> defaultCustomizer() {
    return factory -> factory.configureDefault(id -> new Resilience4JConfigBuilder(id)
            .timeLimiterConfig(TimeLimiterConfig.custom().timeoutDuration(Duration.ofSeconds(5)).build())
            .circuitBreakerConfig(CircuitBreakerConfig.ofDefaults())
            .build());
}
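With the defaults above in place, the factory can be injected and used to wrap remote calls. The snippet below is only a sketch: the service name, URL and the empty-object fallback are hypothetical, but the create/run API comes from Spring Cloud Circuit Breaker.

```java
@Service
public class PersonService {

    private final CircuitBreaker circuitBreaker;
    private final RestTemplate restTemplate;

    public PersonService(CircuitBreakerFactory circuitBreakerFactory, RestTemplate restTemplate) {
        // each logical remote operation gets its own circuit breaker instance
        this.circuitBreaker = circuitBreakerFactory.create("find-person");
        this.restTemplate = restTemplate;
    }

    public Person findPersonById(Integer id) {
        // if the call fails or the circuit is open, the fallback function is invoked
        return circuitBreaker.run(
                () -> restTemplate.getForObject("http://person-service/persons/{id}", Person.class, id),
                throwable -> new Person());
    }
}
```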

For more details about integrating the Hystrix circuit breaker with a Spring Boot application, you may refer to my article Part 3: Creating Microservices: Circuit Breaker, Fallback and Load Balancing with Spring Cloud.

5. Make your application transparent

Another important rule amongst Spring Boot best practices is transparency. We should not forget that one of the most important reasons for migrating to a microservices architecture is the requirement of continuous delivery. Today, the ability to deliver changes fast gives you an advantage on the market; you should be able to deliver changes even several times a day. Therefore, it is important to know what the current version is, where it has been released, and what changes it includes.
When working with Spring Boot and Maven, we may easily publish information like the date of the last change, the Git commit id, or the current version of the application. To achieve that, we just need to include the following Maven plugins in our pom.xml.

<plugins>
   <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
      <executions>
         <execution>
            <goals>
               <goal>build-info</goal>
            </goals>
         </execution>
      </executions>
   </plugin>
   <plugin>
      <groupId>pl.project13.maven</groupId>
      <artifactId>git-commit-id-plugin</artifactId>
      <configuration>
         <failOnNoGitDirectory>false</failOnNoGitDirectory>
      </configuration>
   </plugin>
</plugins>

Assuming you have already included Spring Boot Actuator (see Section 1), you just have to enable the /info endpoint to display all that interesting data.


management.endpoint.info.enabled: true
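With both plugins configured and the endpoint enabled, GET /actuator/info returns the build and Git metadata. A sample response (the values are hypothetical, and the exact fields depend on the plugin versions):

```json
{
  "build": {
    "artifact": "person-service",
    "name": "person-service",
    "group": "pl.piomin.services",
    "version": "1.1-SNAPSHOT"
  },
  "git": {
    "branch": "master",
    "commit": {
      "id": "9b2a8d1",
      "time": "2019-12-06T10:30:00Z"
    }
  }
}
```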

Of course, our system consists of many microservices, and there are several running instances of every single microservice. It is desirable to monitor our instances in a single, central place – the same as with collecting metrics and logs. Fortunately, there is a tool dedicated to Spring Boot applications that is able to collect data from all Actuator endpoints and display it in a UI: Spring Boot Admin, developed by Codecentric. The most convenient way to run it is to create a dedicated Spring Boot application that includes the Spring Boot Admin dependencies and integrates with a discovery server, for example Spring Cloud Netflix Eureka.

<dependency>
    <groupId>de.codecentric</groupId>
    <artifactId>spring-boot-admin-starter-server</artifactId>
    <version>2.1.6</version>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
</dependency>

Then we should enable it for Spring Boot application by annotating the main class with @EnableAdminServer.

@SpringBootApplication
@EnableDiscoveryClient
@EnableAdminServer
public class Application {
 
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
 
}

With Spring Boot Admin we may easily browse the list of applications registered in the discovery server and check the version or commit info for each of them.

boot-admin-2

We can expand details to see all elements retrieved from /info endpoint and much more data collected from other Actuator endpoints.

boot-admin-3-details

6. Write contract tests

Consumer Driven Contract (CDC) testing is one of the methods that allows you to verify integration between applications within your system. The number of such interactions may be really large, especially if you maintain a microservices-based architecture. It is relatively easy to start with contract testing in Spring Boot thanks to the Spring Cloud Contract project. There are other frameworks designed especially for CDC, like Pact, but since we are using Spring Boot, Spring Cloud Contract is probably the first choice.
To use it on the producer side, we need to include the Spring Cloud Contract Verifier.

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-contract-verifier</artifactId>
    <scope>test</scope>
</dependency>

On the consumer side, we should include the Spring Cloud Contract Stub Runner.


<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-contract-stub-runner</artifactId>
    <scope>test</scope>
</dependency>

The first step is to define a contract. One of the options is to write it in Groovy. The contract is verified on both the producer and the consumer side. Here's a sample definition.


import org.springframework.cloud.contract.spec.Contract
Contract.make {
    request {
        method 'GET'
        urlPath('/persons/1')
    }
    response {
        status OK()
        body([
            id: 1,
            firstName: 'John',
            lastName: 'Smith',
            address: ([
                city: $(regex(alphaNumeric())),
                country: $(regex(alphaNumeric())),
                postalCode: $(regex('[0-9]{2}-[0-9]{3}')),
                houseNo: $(regex(positiveInt())),
                street: $(regex(nonEmpty()))
            ])
        ])
        headers {
            contentType(applicationJson())
        }
    }
}
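The $(regex(...)) matchers above constrain the values returned by the generated stub rather than fixing them to literals. For instance, the postalCode pattern accepts Polish-style codes such as 02-660. A quick plain-Java check of that exact pattern (the class name is just for illustration):

```java
import java.util.regex.Pattern;

public class PostalCodeRegexDemo {

    // the same pattern as in the contract: two digits, a dash, three digits
    private static final Pattern POSTAL_CODE = Pattern.compile("[0-9]{2}-[0-9]{3}");

    public static boolean matches(String value) {
        return POSTAL_CODE.matcher(value).matches();
    }

    public static void main(String[] args) {
        System.out.println(matches("02-660")); // true
        System.out.println(matches("2-660"));  // false: only one leading digit
    }
}
```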

The contract is packaged inside a JAR together with the stubs. It may be published to a repository manager like Artifactory or Nexus, and consumers may then download it from there during JUnit tests. The generated JAR file name is suffixed with stubs.

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = WebEnvironment.NONE)
@AutoConfigureStubRunner(ids = {"pl.piomin.services:person-service:+:stubs:8090"}, consumerName = "letter-consumer",  stubsPerConsumer = true, stubsMode = StubsMode.REMOTE, repositoryRoot = "http://192.168.99.100:8081/artifactory/libs-snapshot-local")
@DirtiesContext
public class PersonConsumerContractTest {
 
    @Autowired
    private PersonClient personClient;
     
    @Test
    public void verifyPerson() {
        Person p = personClient.findPersonById(1);
        Assert.assertNotNull(p);
        Assert.assertEquals(1, p.getId().intValue());
        Assert.assertNotNull(p.getFirstName());
        Assert.assertNotNull(p.getLastName());
        Assert.assertNotNull(p.getAddress());
        Assert.assertNotNull(p.getAddress().getCity());
        Assert.assertNotNull(p.getAddress().getCountry());
        Assert.assertNotNull(p.getAddress().getPostalCode());
        Assert.assertNotNull(p.getAddress().getStreet());
        Assert.assertNotEquals(0, p.getAddress().getHouseNo());
    }
     
}

Contract testing will not verify sophisticated use cases in your microservices-based system. However, it is the first phase of testing interactions between microservices. Once you ensure the API contracts between applications are valid, you may proceed to more advanced integration or end-to-end tests. For a more detailed explanation of continuous integration with Spring Cloud Contract, you may refer to my article Continuous Integration with Jenkins, Artifactory and Spring Cloud Contract.

7. Be up-to-date

Spring Boot and Spring Cloud release new versions of their frameworks relatively often. Assuming that your microservices have a small codebase, it is easy to bump the versions of the libraries they use. Spring Cloud releases new versions of its projects using the release train pattern to simplify dependency management and avoid conflicts between incompatible library versions.
Moreover, Spring Boot systematically improves the startup time and memory footprint of applications, so it is worth updating for that reason alone. Here are the current stable releases of Spring Boot and Spring Cloud.

<parent>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-parent</artifactId>
   <version>2.2.1.RELEASE</version>
</parent>
<dependencyManagement>
   <dependencies>
      <dependency>
         <groupId>org.springframework.cloud</groupId>
         <artifactId>spring-cloud-dependencies</artifactId>
         <version>Hoxton.RELEASE</version>
         <type>pom</type>
         <scope>import</scope>
      </dependency>
   </dependencies>
</dependencyManagement>

Conclusion

I have shown that it is not hard to follow best practices thanks to Spring Boot features and some additional libraries that are part of Spring Cloud. These Spring Boot best practices will make it easier for you to migrate to a microservices-based architecture and to run your applications in containers.

Continuous Integration with Jenkins, Artifactory and Spring Cloud Contract
https://piotrminkowski.com/2018/07/04/continuous-integration-with-jenkins-artifactory-and-spring-cloud-contract/ – Wed, 04 Jul 2018

Consumer Driven Contract (CDC) testing is one of the methods that allows you to verify integration between applications within your system. The number of such interactions may be really large, especially if you maintain a microservices-based architecture. Assuming that every microservice is developed by a different team, or sometimes even a different vendor, it is important to automate the whole testing process. As usual, we can use a Jenkins server to run contract tests within our Continuous Integration (CI) process.

The sample scenario is visualized in the picture below. We have one application (person-service) that exposes an API leveraged by three different applications, each implemented by a different development team. Consequently, every application is stored in a separate Git repository and has a dedicated Jenkins pipeline for building, testing and deploying.

contracts-3 (1)

The source code of the sample applications is available on GitHub in the repository sample-spring-cloud-contract-ci (https://github.com/piomin/sample-spring-cloud-contract-ci.git). I placed all the sample microservices in a single Git repository only to simplify the demo. We will still treat them as separate microservices, developed and built independently.

In this article I use Spring Cloud Contract for the CDC implementation. It is the first-choice solution for JVM applications written in Spring Boot. Contracts can be defined using Groovy or YAML notation. During the build on the producer side, Spring Cloud Contract generates a special JAR file with a stubs suffix that contains all the defined contracts and JSON mappings. Such a JAR file can be built on Jenkins and then published to Artifactory. Contract consumers use the same Artifactory server, so they can always use the latest version of the stubs file. Because every application expects a different response from person-service, we have to define three different contracts between person-service and each target consumer.

contracts-1

Let's analyze the sample scenario. Assuming we have made some changes in the API exposed by person-service and modified the contracts on the producer side, we would like to publish them on a shared server. First, we need to verify the contracts against the producer (1), and in case of success publish the artifact with stubs to Artifactory (2). All the pipelines defined for applications that use this contract are able to trigger a build on a new version of the JAR file with stubs (3). Then, the newest version of the contract is verified against the consumer (4). If contract testing fails, the pipeline is able to notify the responsible team about the failure.

contracts-2

1. Pre-requirements

Before implementing and running any samples, we need to prepare our environment: Jenkins and Artifactory servers launched on the local machine. The most suitable way to do this is with Docker containers. Here are the commands required to run them.

$ docker run --name artifactory -d -p 8081:8081 docker.bintray.io/jfrog/artifactory-oss:latest
$ docker run --name jenkins -d -p 8080:8080 -p 50000:50000 jenkins/jenkins:lts

After starting Artifactory and Jenkins, we need to configure a few things. First, you need to initialize the Maven repositories in Artifactory; you will be prompted for that just after the first launch. It also automatically adds one remote repository, JCenter Bintray (https://bintray.com/bintray/jcenter), which is enough for our build. Jenkins comes with a default set of plugins, which you can install just after the first launch (Install suggested plugins). For this demo, you will also have to install the plugin for integration with Artifactory (https://wiki.jenkins.io/display/JENKINS/Artifactory+Plugin). If you need more details about Jenkins and Artifactory configuration, you can refer to my older article How to setup Continuous Delivery environment.

2. Building contracts

We begin the contract definition on the producer side. The producer exposes a single GET /persons/{id} method that returns a Person object. Here are the fields of the Person class.

public class Person {

   private Integer id;
   private String firstName;
   private String lastName;
   @JsonFormat(pattern = "yyyy-MM-dd")
   private Date birthDate;
   private Gender gender;
   private Contact contact;
   private Address address;
   private String accountNo;

   // ...
}

The following picture illustrates which fields of the Person object are used by which consumers. As you see, some fields are shared between consumers, while others are required only by a single consuming application.

contracts-4

Now we can take a look at the contract definition between person-service and bank-service.

import org.springframework.cloud.contract.spec.Contract

Contract.make {
   request {
      method 'GET'
      urlPath('/persons/1')
   }
   response {
      status OK()
      body([
         id: 1,
         firstName: 'Piotr',
         lastName: 'Minkowski',
         gender: $(regex('(MALE|FEMALE)')),
         contact: ([
            email: $(regex(email())),
            phoneNo: $(regex('[0-9]{9}$'))
         ])
      ])
      headers {
         contentType(applicationJson())
      }
   }
}

For comparison, here’s the definition of contract between person-service and letter-service.

import org.springframework.cloud.contract.spec.Contract

Contract.make {
   request {
      method 'GET'
      urlPath('/persons/1')
   }
   response {
      status OK()
      body([
         id: 1,
         firstName: 'Piotr',
         lastName: 'Minkowski',
         address: ([
            city: $(regex(alphaNumeric())),
            country: $(regex(alphaNumeric())),
            postalCode: $(regex('[0-9]{2}-[0-9]{3}')),
            houseNo: $(regex(positiveInt())),
            street: $(regex(nonEmpty()))
         ])
      ])
      headers {
         contentType(applicationJson())
      }
   }
}

3. Implementing Spring Cloud Contract tests on the producer side

Ok, we have three different contracts assigned to the single endpoint exposed by person-service. We need to publish them in such a way that they are easily available for consumers. In that case, Spring Cloud Contract comes with a handy solution: we may define contracts with different responses for the same request, and then choose the appropriate definition on the consumer side. All those contract definitions will be published within the same JAR file. Because we have three consumers, we define three different contracts placed in the directories bank-consumer, contact-consumer and letter-consumer.

contracts-5

All the contracts use a single base test class. To achieve this, we need to provide the fully qualified name of that class to the Spring Cloud Contract Maven plugin in pom.xml.

<plugin>
   <groupId>org.springframework.cloud</groupId>
   <artifactId>spring-cloud-contract-maven-plugin</artifactId>
   <extensions>true</extensions>
   <configuration>
      <baseClassForTests>pl.piomin.services.person.BasePersonContractTest</baseClassForTests>
   </configuration>
</plugin>

Here's the full definition of the base class for our contract tests. We mock the repository bean with an answer matching the rules defined inside the contract files.

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = WebEnvironment.DEFINED_PORT)
public abstract class BasePersonContractTest {

   @Autowired
   WebApplicationContext context;
   @MockBean
   PersonRepository repository;
   
   @Before
   public void setup() {
      RestAssuredMockMvc.webAppContextSetup(this.context);
      PersonBuilder builder = new PersonBuilder()
         .withId(1)
         .withFirstName("Piotr")
         .withLastName("Minkowski")
         .withBirthDate(new Date())
         .withAccountNo("1234567890")
         .withGender(Gender.MALE)
         .withPhoneNo("500070935")
         .withCity("Warsaw")
         .withCountry("Poland")
         .withHouseNo(200)
         .withStreet("Al. Jerozolimskie")
         .withEmail("piotr.minkowski@gmail.com")
         .withPostalCode("02-660");
      when(repository.findById(1)).thenReturn(builder.build());
   }
   
}

The Spring Cloud Contract Maven plugin shown above is responsible for generating stubs from the contract definitions. It is executed during the Maven build, after running the mvn clean install command. The build is performed on Jenkins. The Jenkins pipeline is responsible for checking out the remote Git repository, building binaries from source code, running automated tests, and finally publishing the JAR file containing the stubs to a remote artifact repository – Artifactory. Here's the Jenkins pipeline created for the contract producer side (person-service).

node {
  withMaven(maven:'M3') {
    stage('Checkout') {
      git url: 'https://github.com/piomin/sample-spring-cloud-contract-ci.git', credentialsId: 'piomin-github', branch: 'master'
    }
    stage('Publish') {
      def server = Artifactory.server 'artifactory'
      def rtMaven = Artifactory.newMavenBuild()
      rtMaven.tool = 'M3'
      rtMaven.resolver server: server, releaseRepo: 'libs-release', snapshotRepo: 'libs-snapshot'
      rtMaven.deployer server: server, releaseRepo: 'libs-release-local', snapshotRepo: 'libs-snapshot-local'
      rtMaven.deployer.artifactDeploymentPatterns.addInclude("*stubs*")
      def buildInfo = rtMaven.run pom: 'person-service/pom.xml', goals: 'clean install'
      rtMaven.deployer.deployArtifacts buildInfo
      server.publishBuildInfo buildInfo
    }
  }
}

We also need to include the spring-cloud-starter-contract-verifier dependency in the producer app to enable the Spring Cloud Contract Verifier.

<dependency>
   <groupId>org.springframework.cloud</groupId>
   <artifactId>spring-cloud-starter-contract-verifier</artifactId>
   <scope>test</scope>
</dependency>

4. Implementing Spring Cloud Contract tests on the consumer side

To enable Spring Cloud Contract on the consumer side we need to include the spring-cloud-starter-contract-stub-runner artifact in the project dependencies.

<dependency>
   <groupId>org.springframework.cloud</groupId>
   <artifactId>spring-cloud-starter-contract-stub-runner</artifactId>
   <scope>test</scope>
</dependency>

Then, the only thing left is to build a JUnit test that verifies our contract by calling it through an OpenFeign client. The configuration of that test is provided inside the @AutoConfigureStubRunner annotation. We select the latest version of the person-service stubs artifact by setting + in the version section of the ids parameter. Because we have multiple contracts defined inside person-service, we need to choose the right one for the current service by setting the consumerName parameter. All the contract definitions are downloaded from the Artifactory server, so we set the stubsMode parameter to REMOTE. The address of the Artifactory server has to be set using the repositoryRoot property.

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = WebEnvironment.NONE)
@AutoConfigureStubRunner(ids = {"pl.piomin.services:person-service:+:stubs:8090"}, consumerName = "letter-consumer",  stubsPerConsumer = true, stubsMode = StubsMode.REMOTE, repositoryRoot = "http://192.168.99.100:8081/artifactory/libs-snapshot-local")
@DirtiesContext
public class PersonConsumerContractTest {

   @Autowired
   private PersonClient personClient;
   
   @Test
   public void verifyPerson() {
      Person p = personClient.findPersonById(1);
      Assert.assertNotNull(p);
      Assert.assertEquals(1, p.getId().intValue());
      Assert.assertNotNull(p.getFirstName());
      Assert.assertNotNull(p.getLastName());
      Assert.assertNotNull(p.getAddress());
      Assert.assertNotNull(p.getAddress().getCity());
      Assert.assertNotNull(p.getAddress().getCountry());
      Assert.assertNotNull(p.getAddress().getPostalCode());
      Assert.assertNotNull(p.getAddress().getStreet());
      Assert.assertNotEquals(0, p.getAddress().getHouseNo());
   }
   
}
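The ids entry in the @AutoConfigureStubRunner annotation above follows the pattern groupId:artifactId:version:classifier:port. A quick split (plain Java, just for illustration) shows how the coordinates used in this test decompose; the + in the version position means "the latest available version":

```java
// Decomposing the stub coordinates used in @AutoConfigureStubRunner.
// The format is groupId:artifactId:version:classifier:port.
public class IdsSketch {
    public static void main(String[] args) {
        String ids = "pl.piomin.services:person-service:+:stubs:8090";
        String[] parts = ids.split(":");
        System.out.println("groupId=" + parts[0]);
        System.out.println("artifactId=" + parts[1]);
        System.out.println("version=" + parts[2] + " (+ means: resolve the latest)");
        System.out.println("classifier=" + parts[3]);
        System.out.println("port=" + parts[4]);
    }
}
```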

Here's the Feign client implementation responsible for calling the endpoint exposed by person-service.

@FeignClient("person-service")
public interface PersonClient {

   @GetMapping("/persons/{id}")
   Person findPersonById(@PathVariable("id") Integer id);
   
}

5. Setup of Continuous Integration process

Ok, we have already defined all the contracts required for our exercise. We have also built a pipeline responsible for building and publishing stubs with contracts on the producer side (person-service). It always publishes the newest version of the stubs generated from source code. Now, our goal is to launch the pipelines defined for the three consumer applications each time new stubs are published to the Artifactory server by the producer pipeline.
The best solution for that is to trigger a Jenkins build whenever an artifact is deployed. To achieve it we use a Jenkins plugin called URLTrigger, which can be configured to watch for changes at a certain URL – in this case the REST API endpoint exposed by Artifactory for the selected repository path.
After installing the URLTrigger plugin we have to enable it for all consumer pipelines. You can configure it to watch for changes in the JSON returned by the Artifactory File List REST API, which is accessed via the following URI: http://192.168.99.100:8081/artifactory/api/storage/[PATH_TO_FOLDER_OR_REPO]/. The file maven-metadata.xml changes every time you deploy a new version of the application to Artifactory, so we monitor the change of the response's content between the last two polls. The last field that has to be filled in is Schedule. If you set it to * * * * *, the plugin polls for a change every minute.
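The check the URLTrigger plugin performs boils down to comparing the content returned by that storage URI between two polls. A minimal sketch of that comparison (the two JSON responses are inlined here instead of being fetched from Artifactory):

```java
// Compare the last two polls of the Artifactory file-list endpoint; any
// difference in content means a new artifact was deployed.
public class PollSketch {
    public static void main(String[] args) {
        String previousPoll = "{\"children\":[{\"uri\":\"/1.0-SNAPSHOT\"}]}";
        String currentPoll  = "{\"children\":[{\"uri\":\"/1.0-SNAPSHOT\"},{\"uri\":\"/1.1-SNAPSHOT\"}]}";
        if (!currentPoll.equals(previousPoll)) {
            System.out.println("content changed - schedule consumer builds");
        } else {
            System.out.println("no change");
        }
    }
}
```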

contracts-6

Our three pipelines for consumer applications are ready. The first run was finished with success.

contracts-7

If you have already built the person-service application and published the stubs to Artifactory, you will see the following structure in the libs-snapshot-local repository. I have deployed three different versions of the API exposed by person-service. Each time I publish a new version of the contract, all the dependent pipelines are triggered to verify it.

contracts-8

The JAR file with contracts is published under classifier stubs.

contracts-9

Spring Cloud Contract Stub Runner tries to find the latest version of contracts.

[code]
2018-07-04 11:46:53.273  INFO 4185 --- [           main] o.s.c.c.stubrunner.AetherStubDownloader  : Desired version is [+] - will try to resolve the latest version
2018-07-04 11:46:54.752  INFO 4185 --- [           main] o.s.c.c.stubrunner.AetherStubDownloader  : Resolved version is [1.3-SNAPSHOT]
2018-07-04 11:46:54.823  INFO 4185 --- [           main] o.s.c.c.stubrunner.AetherStubDownloader  : Resolved artifact [pl.piomin.services:person-service:jar:stubs:1.3-SNAPSHOT] to /var/jenkins_home/.m2/repository/pl/piomin/services/person-service/1.3-SNAPSHOT/person-service-1.3-SNAPSHOT-stubs.jar
[/code]
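Conceptually, resolving + just means picking the newest of the deployed stub versions. A simplified sketch (the version list is an assumption matching the three versions deployed above, and the compare is naive lexicographic rather than full Maven version ordering):

```java
import java.util.List;

// Pick the newest deployed version - what the "+" placeholder resolves to.
public class LatestVersionSketch {
    public static void main(String[] args) {
        List<String> deployed = List.of("1.1-SNAPSHOT", "1.2-SNAPSHOT", "1.3-SNAPSHOT");
        String resolved = deployed.stream().max(String::compareTo).orElseThrow();
        System.out.println("Resolved version is [" + resolved + "]");
    }
}
```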

6. Testing changes in contract

Ok, we have already prepared the contracts and configured our CI environment. Now, let's make a change in the API exposed by person-service. We will just change the name of one field: accountNo to accountNumber.

contracts-12

This change requires a change in the contract definition created on the producer side. If you modify the field name there, person-service will build successfully and a new version of the contract will be published to Artifactory. Because all the other pipelines listen for changes in the latest version of the JAR files with stubs, their builds will be started automatically. The microservices letter-service and contact-service do not use the field accountNo, so their pipelines will not fail. Only the bank-service pipeline reports an error in the contract, as shown in the picture below.

contracts-10

Now, once you have been notified about the failed verification of the newest contract version between person-service and bank-service, you can perform the required changes on the consumer side.

contracts-11

The post Continuous Integration with Jenkins, Artifactory and Spring Cloud Contract appeared first on Piotr's TechBlog.

Testing REST API with Hoverfly https://piotrminkowski.com/2017/08/02/testing-rest-api-with-hoverfly/ https://piotrminkowski.com/2017/08/02/testing-rest-api-with-hoverfly/#respond Wed, 02 Aug 2017 10:33:02 +0000 https://piotrminkowski.wordpress.com/?p=5363 Hoverfly is an open source API simulation tool for automated tests. It is written in Go, but also has native support for Java and can be run inside JUnit test. Hoverfly can be used for testing REST API, but can also be useful for testing calls between microservices. We have two running modes available: simulating […]

The post Testing REST API with Hoverfly appeared first on Piotr's TechBlog.

Hoverfly is an open source API simulation tool for automated tests. It is written in Go, but it also has native support for Java and can be run inside a JUnit test. Hoverfly can be used for testing a REST API, and it can also be useful for testing calls between microservices. There are two running modes available: simulating and capturing. In simulating mode we just simulate interaction with another service using previously created response sources; in capturing mode requests are made to the real service as normal, but they are intercepted and recorded by Hoverfly.

In one of my previous articles, Testing Java Microservices, I described a competing testing tool – Spring Cloud Contract. In this article about Hoverfly I will use the same sample application based on Spring Boot, which I created for the needs of that previous article. The source code is available on GitHub in the hoverfly branch. We have some microservices that interact with each other, and based on this sample I'm going to show how to use Hoverfly for component testing.

To enable testing with Hoverfly we have to include the following dependency in the pom.xml file.

[code language=”xml”]
<dependency>
<groupId>io.specto</groupId>
<artifactId>hoverfly-java</artifactId>
<version>0.8.0</version>
<scope>test</scope>
</dependency>
[/code]

Hoverfly can be easily integrated with JUnit. We can orchestrate it using the JUnit @ClassRule. As I mentioned before, we can switch between two different modes. In the code fragment below I decided to use the mixed strategy inCaptureOrSimulationMode, where the Hoverfly rule is started in capture mode if the simulation file does not exist and in simulate mode if the file does exist. The default location of the output JSON file is src/test/resources/hoverfly. By calling printSimulationData on HoverflyRule we print all simulation data on the console.

[code language=”java”]
@RunWith(SpringRunner.class)
@SpringBootTest(classes = { Application.class }, webEnvironment = WebEnvironment.DEFINED_PORT)
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class AccountApiFullTest {

   protected Logger logger = Logger.getLogger(AccountApiFullTest.class.getName());

   @Autowired
   TestRestTemplate template;

   @ClassRule
   public static HoverflyRule hoverflyRule = HoverflyRule
      .inCaptureOrSimulationMode("account.json", HoverflyConfig.configs().proxyLocalHost())
      .printSimulationData();

   @Test
   public void addAccountTest() {
      Account a = new Account("1234567890", 1000, "1");
      ResponseEntity<Account> r = template.postForEntity("/accounts", a, Account.class);
      Assert.assertNotNull(r.getBody().getId());
      logger.info("New account: " + r.getBody().getId());
   }

   @Test
   public void findAccountByNumberTest() {
      Account a = template.getForObject("/accounts/number/{number}", Account.class, "1234567890");
      Assert.assertNotNull(a);
      logger.info("Found account: " + a.getId());
   }

   @Test
   public void findAccountByCustomerTest() {
      Account[] a = template.getForObject("/accounts/customer/{customer}", Account[].class, "1");
      Assert.assertTrue(a.length > 0);
      logger.info("Found accounts: " + a);
   }

}
[/code]

Now, let's run our JUnit test class twice. During the first attempt all requests are forwarded to the Spring @RestController, which connects to an embedded Mongo database. At the same time all requests and responses are recorded by Hoverfly and saved in the account.json file. A fragment of this file is visible below. During the second attempt all data is loaded from the source file; there is no interaction with AccountController.

[code language=”java”]
"request" : {
   "path" : {
      "exactMatch" : "/accounts/number/1234567890"
   },
   "method" : {
      "exactMatch" : "GET"
   },
   "destination" : {
      "exactMatch" : "localhost:2222"
   },
   "scheme" : {
      "exactMatch" : "http"
   },
   "query" : {
      "exactMatch" : ""
   },
   "body" : {
      "exactMatch" : ""
   }
},
"response" : {
   "status" : 200,
   "body" : "{\"id\":\"5980642bc96045216447023b\",\"number\":\"1234567890\",\"balance\":1000,\"customerId\":\"1\"}",
   "encodedBody" : false,
   "templated" : false,
   "headers" : {
      "Content-Type" : [ "application/json;charset=UTF-8" ],
      "Date" : [ "Tue, 01 Aug 2017 11:21:15 GMT" ],
      "Hoverfly" : [ "Was-Here" ]
   }
}
[/code]
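The exactMatch entries above mean that in simulate mode Hoverfly performs a lookup keyed on the request attributes and replays the recorded body. Stripped down to method and path only, the mechanism looks like this:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified replay: recorded pairs are keyed by request attributes (here just
// method + path; real Hoverfly also matches destination, scheme, query, body).
public class SimulationSketch {
    public static void main(String[] args) {
        Map<String, String> recorded = new HashMap<>();
        // entry captured during the first (capture-mode) run
        recorded.put("GET /accounts/number/1234567890",
                "{\"id\":\"5980642bc96045216447023b\",\"number\":\"1234567890\"}");

        // second run: the response comes from the simulation, not the controller
        String body = recorded.get("GET /accounts/number/1234567890");
        System.out.println(body == null ? "miss - forward to real service" : "replayed: " + body);
    }
}
```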

Now, let's take a look at the customer-service tests. Inside GET /customers/{id} we invoke the method GET /accounts/customer/{customerId} from account-service. This method is simulated by Hoverfly with a success response, as you can see below.

[code language=”java”]
@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = WebEnvironment.DEFINED_PORT)
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class CustomerControllerTest {

   @Autowired
   TestRestTemplate template;

   @ClassRule
   public static HoverflyRule hoverflyRule = HoverflyRule
      .inSimulationMode(dsl(service("account-service:2222").get(startsWith("/accounts/customer/"))
      .willReturn(success("[{\"id\":\"1\",\"number\":\"1234567890\"}]", "application/json"))))
      .printSimulationData();

   @Test
   public void addCustomerTest() {
      Customer c = new Customer("1234567890", "Jan Testowy", CustomerType.INDIVIDUAL);
      c = template.postForObject("/customers", c, Customer.class);
   }

   @Test
   public void findCustomerWithAccounts() {
      Customer c = template.getForObject("/customers/pesel/{pesel}", Customer.class, "1234567890");
      Customer cc = template.getForObject("/customers/{id}", Customer.class, c.getId());
      Assert.assertTrue(cc.getAccounts().size() > 0);
   }
}
[/code]

To run this test successfully we should override some properties from application.yml in src/test/resources/application.yml. Eureka discovery should be disabled for the Ribbon client, and the same goes for Hystrix in the @FeignClient. The Ribbon listOfServers property should have the same value as the service address inside the HoverflyRule.

[code]
eureka:
  client:
    enabled: false

ribbon:
  eureka:
    enabled: false
  listOfServers: account-service:2222

feign:
  hystrix:
    enabled: false
[/code]

Here's the @FeignClient implementation for invoking the API method from account-service.

[code language=”java”]
@FeignClient("account-service")
public interface AccountClient {

@RequestMapping(method = RequestMethod.GET, value = "/accounts/customer/{customerId}", consumes = {MediaType.APPLICATION_JSON_VALUE})
List<Account> getAccounts(@PathVariable("customerId") String customerId);

}
[/code]

When using simulation mode there is no need to start @SpringBootTest. Hoverfly also has some interesting capabilities like response templating – for example, basing the response on a path parameter, as in the fragment below.

[code language=”java”]
public class AccountApiTest {

   TestRestTemplate template = new TestRestTemplate();

   @ClassRule
   public static HoverflyRule hoverflyRule = HoverflyRule.inSimulationMode(dsl(service("http://account-service")
      .post("/accounts").anyBody().willReturn(success("{\"id\":\"1\"}", "application/json"))
      .get(startsWith("/accounts/")).willReturn(success("{\"id\":\"{{Request.Path.[1]}}\",\"number\":\"123456789\"}", "application/json"))));

   @Test
   public void addAccountTest() {
      Account a = new Account("1234567890", 1000, "1");
      ResponseEntity<Account> r = template.postForEntity("http://account-service/accounts", a, Account.class);
      System.out.println(r.getBody().getId());
   }

   @Test
   public void findAccountByIdTest() {
      Account a = template.getForObject("http://account-service/accounts/{id}", Account.class, new Random().nextInt(10));
      Assert.assertNotNull(a.getId());
   }

}
[/code]
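The {{Request.Path.[1]}} placeholder is filled with the second segment of the request path, so GET /accounts/7 returns an account with id 7. A plain-Java sketch of that substitution (a simplification of Hoverfly's templating, for illustration only):

```java
// Substitute the path-segment placeholder the way response templating does.
public class TemplatingSketch {
    static String render(String template, String path) {
        String[] segments = path.substring(1).split("/"); // drop the leading "/"
        return template.replace("{{Request.Path.[1]}}", segments[1]);
    }

    public static void main(String[] args) {
        String template = "{\"id\":\"{{Request.Path.[1]}}\",\"number\":\"123456789\"}";
        System.out.println(render(template, "/accounts/7"));
    }
}
```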

We can also simulate a fixed method delay using the DSL. A delay can be set for all requests or for a particular HTTP method. Our delayed @ClassRule for CustomerControllerTest will now look like the fragment below.

[code language=”java”]
@ClassRule
public static HoverflyRule hoverflyRule = HoverflyRule
   .inSimulationMode(dsl(service("account-service:2222").andDelay(3000, TimeUnit.MILLISECONDS).forMethod("GET")
   .get(startsWith("/accounts/customer/"))
   .willReturn(success("[{\"id\":\"1\",\"number\":\"1234567890\"}]", "application/json"))));
[/code]

And now you can add the ReadTimeout property to your Ribbon client configuration and run the JUnit test again. You should receive the following exception: java.net.SocketTimeoutException: Read timed out.

[code]
ribbon:
  eureka:
    enabled: false
  ReadTimeout: 1000
  listOfServers: account-service:2222
[/code]
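The failure mechanics here are plain socket behaviour: the client gives up reading after its ReadTimeout while the stubbed service is still sleeping. A self-contained illustration with raw sockets (no Ribbon or Hoverfly involved, just the 3000 ms delay versus the 1000 ms timeout from the configuration above):

```java
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

// A server that answers after 3 s vs. a client that waits at most 1 s.
public class TimeoutSketch {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            Thread slowService = new Thread(() -> {
                try (Socket s = server.accept()) {
                    Thread.sleep(3000);                    // simulated delay
                    s.getOutputStream().write("ok".getBytes());
                } catch (Exception ignored) { }
            });
            slowService.start();

            try (Socket client = new Socket("localhost", server.getLocalPort())) {
                client.setSoTimeout(1000);                 // ReadTimeout: 1000
                client.getInputStream().read();
                System.out.println("response received");
            } catch (SocketTimeoutException e) {
                System.out.println("java.net.SocketTimeoutException: " + e.getMessage());
            }
        }
    }
}
```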

Conclusion

In this post I showed you the most typical usage of the Hoverfly library in microservices tests. However, this library is not dedicated to microservice testing, as opposed to the previously described Spring Cloud Contract. For example, there are no mechanisms for sharing test stubs between different microservices like in Spring Cloud Contract (@AutoConfigureStubRunner). But there is an interesting feature for delaying responses, thanks to which we can simulate timeouts for a Ribbon client or a Hystrix fallback.


Testing Java Microservices https://piotrminkowski.com/2017/04/26/testing-java-microservices/ https://piotrminkowski.com/2017/04/26/testing-java-microservices/#respond Wed, 26 Apr 2017 16:58:03 +0000 https://piotrminkowski.wordpress.com/?p=2521 While developing a new application we should never forget about testing. This term seems to be particularly important when working with microservices. Microservices testing requires a different approach than test designing for monolithic applications. As far as monolithic testing is concerned, the main focus is put on unit testing and also in most cases integration […]

The post Testing Java Microservices appeared first on Piotr's TechBlog.

While developing a new application we should never forget about testing. This seems particularly important when working with microservices. Microservices testing requires a different approach than test design for monolithic applications. As far as monolithic testing is concerned, the main focus is put on unit testing and, in most cases, also integration tests with the database layer. In the case of microservices, the most important tests concern the interactions between those microservices. Although every microservice is independently developed and released, a change in one of them can affect all the services interacting with it. Interaction between them is realized by messages – usually messages sent via the REST or AMQP protocols.

We can distinguish five different layers of microservices tests. The first three of them are the same as for monolithic applications.

Unit tests – we are testing the smallest pieces of code, for example a single method or component, and mocking every call to other methods or components. There are many popular frameworks that support unit tests in Java, like JUnit, TestNG, and Mockito for mocking. The main task of this type of testing is to confirm that the implementation meets the requirements.

Integration tests – we are testing interaction and communication between components based on their interfaces, with external services mocked out.

End-to-end tests – also known as functional tests. The main goal of these tests is to verify that the system meets the external requirements. It means that we should design test scenarios which exercise all the microservices taking part in that process.

Contract tests – tests at the boundary of an external service verifying that it meets the contract expected by a consuming service.

Component tests – limit the scope of the exercised software to a portion of the system under test, manipulating the system through internal code interfaces and using test doubles to isolate the code under test from other components.
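The "test doubles" mentioned in the last layer are the common thread of component testing: a dependency is replaced with a hand-made or generated stand-in. A framework-free sketch of the idea (the CustomerRepositoryDouble and CustomerGreeter names here are illustrative, not the actual classes from the sample project):

```java
// Replace the real repository with a stub so the service logic can be
// exercised in isolation.
interface CustomerRepositoryDouble {
    String findNameById(String id);
}

class CustomerGreeter {
    private final CustomerRepositoryDouble repository;

    CustomerGreeter(CustomerRepositoryDouble repository) {
        this.repository = repository;
    }

    String greet(String id) {
        String name = repository.findNameById(id);
        return name == null ? "unknown customer" : "Hello, " + name;
    }
}

public class TestDoubleSketch {
    public static void main(String[] args) {
        // the test double: a lambda standing in for the database-backed repository
        CustomerRepositoryDouble stub = id -> "1".equals(id) ? "Jan Testowy" : null;
        CustomerGreeter greeter = new CustomerGreeter(stub);
        if (!greeter.greet("1").equals("Hello, Jan Testowy")) throw new AssertionError();
        if (!greeter.greet("2").equals("unknown customer")) throw new AssertionError();
        System.out.println("component test passed");
    }
}
```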

In the figure below we can see the component diagram of one sample microservice (the customer service). The architecture is similar for all the other sample microservices described in this post. The customer service interacts with a Mongo database and stores all its customers there. The mapping between objects and the database is realized by Spring Data's @Document. We also use a @Repository component as a DAO for the Customer entity. Communication with other microservices is realized by a @FeignClient REST client. The customer service collects all the customer's accounts and products from external microservices. The @Repository and @FeignClient clients are injected into the @Controller, which is exposed outside via a REST resource.

testingmicroservices1

In this article, I'll show you contract and component tests for the sample microservices architecture. In the figure below you can see the test strategy for the architecture shown in the previous picture. For our tests, we use an embedded in-memory Mongo database and RESTful stubs generated with the Spring Cloud Contract framework.

testingmicroservices2

Now, let's take a look at the big picture. We have four microservices interacting with each other, as we see in the figure below. Spring Cloud Contract uses WireMock in the background for recording and matching requests and responses. For testing purposes, Eureka discovery needs to be disabled on all microservices.

testingmicroservices3

The sample application source code is available on GitHub. All microservices are based on the Spring Boot and Spring Cloud (Eureka, Zuul, Feign, Ribbon) frameworks. Interaction with the Mongo database is realized with the Spring Data MongoDB library (the spring-boot-starter-data-mongodb dependency in pom.xml). The DAO is really simple. It extends the MongoRepository CRUD component. The @Repository and @FeignClient clients are injected into CustomerController.


public interface CustomerRepository extends MongoRepository<Customer, String> {

   public Customer findByPesel(String pesel);
   public Customer findById(String id);

}

Here’s the full controller code.

@RestController
public class CustomerController {

   @Autowired
   private AccountClient accountClient;
   @Autowired
   private ProductClient productClient;

   @Autowired
   CustomerRepository repository;

   protected Logger logger = Logger.getLogger(CustomerController.class.getName());

   @RequestMapping(value = "/customers/pesel/{pesel}", method = RequestMethod.GET)
   public Customer findByPesel(@PathVariable("pesel") String pesel) {
      logger.info(String.format("Customer.findByPesel(%s)", pesel));
      return repository.findByPesel(pesel);
   }

   @RequestMapping(value = "/customers", method = RequestMethod.GET)
   public List<Customer> findAll() {
      logger.info("Customer.findAll()");
      return repository.findAll();
   }

   @RequestMapping(value = "/customers/{id}", method = RequestMethod.GET)
   public Customer findById(@PathVariable("id") String id) {
      logger.info(String.format("Customer.findById(%s)", id));
      Customer customer = repository.findById(id);
      List<Account> accounts =  accountClient.getAccounts(id);
      logger.info(String.format("Customer.findById(): %s", accounts));
      customer.setAccounts(accounts);
      return customer;
   }

   @RequestMapping(value = "/customers/withProducts/{id}", method = RequestMethod.GET)
   public Customer findWithProductsById(@PathVariable("id") String id) {
      logger.info(String.format("Customer.findWithProductsById(%s)", id));
      Customer customer = repository.findById(id);
      List<Product> products =  productClient.getProducts(id);
      logger.info(String.format("Customer.findWithProductsById(): %s", products));
      customer.setProducts(products);
      return customer;
   }

   @RequestMapping(value = "/customers", method = RequestMethod.POST)
   public Customer add(@RequestBody Customer customer) {
      logger.info(String.format("Customer.add(%s)", customer));
      return repository.save(customer);
   }

   @RequestMapping(value = "/customers", method = RequestMethod.PUT)
   public Customer update(@RequestBody Customer customer) {
      logger.info(String.format("Customer.update(%s)", customer));
      return repository.save(customer);
   }

}

To replace the external Mongo database with an embedded in-memory instance during automated tests we only have to add the following dependency to pom.xml.

<dependency>
   <groupId>de.flapdoodle.embed</groupId>
   <artifactId>de.flapdoodle.embed.mongo</artifactId>
   <scope>test</scope>
</dependency>

If we use different addresses or connection credentials, application settings should also be overridden in src/test/resources. Here's the application.yml file for testing. At the bottom there is a configuration entry for disabling Eureka discovery.

server:
  port: ${PORT:3333}

spring:
  application:
    name: customer-service
  data:
    mongodb:
      host: localhost
      port: 27017
logging:
  level:
    org.springframework.cloud.contract: TRACE

eureka:
  client:
    enabled: false

The in-memory MongoDB instance is started automatically during the Spring Boot JUnit test. The next step is to add the Spring Cloud Contract dependencies.


<dependency>
   <groupId>org.springframework.cloud</groupId>
   <artifactId>spring-cloud-starter-contract-stub-runner</artifactId>
   <scope>test</scope>
</dependency>
<dependency>
   <groupId>org.springframework.cloud</groupId>
   <artifactId>spring-cloud-starter-contract-verifier</artifactId>
   <scope>test</scope>
</dependency>

To enable automated test generation by Spring Cloud Contract we also have to add the following plugin into pom.xml.

<plugin>
   <groupId>org.springframework.cloud</groupId>
   <artifactId>spring-cloud-contract-maven-plugin</artifactId>
   <version>1.1.0.RELEASE</version>
   <extensions>true</extensions>
   <configuration>
      <packageWithBaseClasses>pl.piomin.microservices.advanced.customer.api</packageWithBaseClasses>
   </configuration>
</plugin>

The packageWithBaseClasses property defines the package where base classes extended by the generated test classes are stored. Here's the base test class for the account service tests. In our sample architecture, the account service is only a producer; it does not consume any services.

@RunWith(SpringRunner.class)
@SpringBootTest(classes = {Application.class})
public class ApiScenario1Base {

   @Autowired
   private WebApplicationContext context;

   @Before
   public void setup() {
      RestAssuredMockMvc.webAppContextSetup(context);
   }

}

As opposed to the account service, the customer service consumes some services for collecting the customer's accounts and products. That's why the base test class for the customer service needs to define stub artifact data.

@RunWith(SpringRunner.class)
@SpringBootTest(classes = {Application.class})
@AutoConfigureStubRunner(ids = {"pl.piomin:account-service:+:stubs:2222"}, workOffline = true)
public class ApiScenario1Base {

   @Autowired
   private WebApplicationContext context;

   @Before
   public void setup() {
      RestAssuredMockMvc.webAppContextSetup(context);
   }

}

Test classes are generated on the basis of contracts defined in src/main/resources/contracts. Such contracts can be implemented using the Groovy language. Here's a sample contract for adding a new account.

org.springframework.cloud.contract.spec.Contract.make {
   request {
      method 'POST'
      url '/accounts'
      body([
         id: "1234567890",
         number: "12345678909",
         balance: 1234,
         customerId: "123456789"
      ])
      headers {
         contentType('application/json')
      }
   }
   response {
      status 200
      body([
         id: "1234567890",
         number: "12345678909",
         balance: 1234,
         customerId: "123456789"
      ])
      headers {
         contentType('application/json')
      }
   }
}

Test classes are generated under the target/generated-test-sources directory. Here's the generated class for the contract above.

@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class Scenario1Test extends ApiScenario1Base {

   @Test
   public void validate_1_postAccount() throws Exception {
      // given:
      MockMvcRequestSpecification request = given()
         .header("Content-Type", "application/json")
         .body("{\"id\":\"1234567890\",\"number\":\"12345678909\",\"balance\":1234,\"customerId\":\"123456789\"}");

      // when:
      ResponseOptions response = given().spec(request)
         .post("/accounts");

      // then:
      assertThat(response.statusCode()).isEqualTo(200);
      assertThat(response.header("Content-Type")).matches("application/json.*");
      // and:
      DocumentContext parsedJson = JsonPath.parse(response.getBody().asString());
      assertThatJson(parsedJson).field("id").isEqualTo("1234567890");
      assertThatJson(parsedJson).field("number").isEqualTo("12345678909");
      assertThatJson(parsedJson).field("balance").isEqualTo(1234);
      assertThatJson(parsedJson).field("customerId").isEqualTo("123456789");
   }

   @Test
   public void validate_2_postAccount() throws Exception {
      // given:
      MockMvcRequestSpecification request = given()
         .header("Content-Type", "application/json")
         .body("{\"id\":\"1234567891\",\"number\":\"12345678910\",\"balance\":4675,\"customerId\":\"123456780\"}");

      // when:
      ResponseOptions response = given().spec(request)
         .post("/accounts");

      // then:
      assertThat(response.statusCode()).isEqualTo(200);
      assertThat(response.header("Content-Type")).matches("application/json.*");
      // and:
      DocumentContext parsedJson = JsonPath.parse(response.getBody().asString());
      assertThatJson(parsedJson).field("id").isEqualTo("1234567891");
      assertThatJson(parsedJson).field("customerId").isEqualTo("123456780");
      assertThatJson(parsedJson).field("number").isEqualTo("12345678910");
      assertThatJson(parsedJson).field("balance").isEqualTo(4675);
   }

   @Test
   public void validate_3_getAccounts() throws Exception {
      // given:
      MockMvcRequestSpecification request = given();

      // when:
      ResponseOptions response = given().spec(request)
         .get("/accounts");

      // then:
      assertThat(response.statusCode()).isEqualTo(200);
      assertThat(response.header("Content-Type")).matches("application/json.*");
      // and:
      DocumentContext parsedJson = JsonPath.parse(response.getBody().asString());
      assertThatJson(parsedJson).array().contains("balance").isEqualTo(1234);
      assertThatJson(parsedJson).array().contains("customerId").isEqualTo("123456789");
      assertThatJson(parsedJson).array().contains("id").matches("[0-9]{10}");
      assertThatJson(parsedJson).array().contains("number").isEqualTo("12345678909");
   }

}

In the generated class there are three JUnit tests, because I used the scenario mechanism available in Spring Cloud Contract. There are three Groovy files inside the scenario1 directory, as we can see in the picture below. The number in every file's prefix defines the test order. The second scenario has only one definition file and is also used in the customer service (the find-by-id API method). The third scenario has four definition files and is used in the transfer service (the execute API method).

scenarios

As I mentioned before, interaction between microservices is realized by @FeignClient. WireMock, used by Spring Cloud Contract, records the requests/responses defined in scenario2 inside the account service. Then the recorded interaction is used by the @FeignClient during tests instead of calling the real service, which is not available.

@FeignClient("account-service")
public interface AccountClient {

   @RequestMapping(method = RequestMethod.GET, value = "/accounts/customer/{customerId}", consumes = {MediaType.APPLICATION_JSON_VALUE})
   List<Account> getAccounts(@PathVariable("customerId") String customerId);

}

All the tests are generated and run during the Maven build, for example by the mvn clean install command. If you are interested in more details and features of Spring Cloud Contract you can read about them here.

Finally, we can define the Continuous Integration pipeline for our microservices. Each of them should be built independently. More about the Continuous Integration / Continuous Delivery environment can be read in one of my previous posts, How to setup Continuous Delivery environment. Here's a sample pipeline created with the Jenkins Pipeline Plugin for the account service. In the Checkout stage, we are updating our working copy to the newest version from the repository. In the Build stage we start by reading the project version set inside pom.xml, then we build the application using the mvn clean install command. Finally, we record the unit test results using the junit pipeline step. The same pipelines can be configured for all the other microservices. In the described sample, all microservices are placed in the same Git repository with one Maven version, for simplicity. But we can imagine that every microservice could live in a different repository with an independent version in pom.xml. Tests will always be run against the newest version of the stubs, which is set with + in this fragment of the base test class: @AutoConfigureStubRunner(ids = {"pl.piomin:account-service:+:stubs:2222"}, workOffline = true)

node {

   withMaven(maven: 'Maven') {

      stage ('Checkout') {
         git url: 'https://github.com/piomin/sample-spring-microservices-advanced.git', credentialsId: 'github-piomin', branch: 'testing'
      }

      stage ('Build') {
         def pom = readMavenPom file: 'pom.xml'
         def version = pom.version.replace("-SNAPSHOT", ".${currentBuild.number}")
         env.pom_version = version
         print 'Build version: ' + version
         currentBuild.description = "v${version}"

         dir('account-service') {
            bat "mvn clean install -Dmaven.test.failure.ignore=true"
         }

         junit '**/target/surefire-reports/TEST-*.xml'
      }

   }

}
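The version manipulation in the Build stage is just a string replacement. Here is the Java equivalent of the Groovy expression, with a sample version and build number substituted for the values Jenkins would provide:

```java
// Equivalent of: pom.version.replace("-SNAPSHOT", ".${currentBuild.number}")
public class VersionSketch {
    public static void main(String[] args) {
        String pomVersion = "1.0-SNAPSHOT"; // sample value read from pom.xml
        int buildNumber = 7;                // sample currentBuild.number
        String version = pomVersion.replace("-SNAPSHOT", "." + buildNumber);
        System.out.println("Build version: " + version);
    }
}
```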

Here’s pipeline visualization on Jenkins Management Dashboard.

account-pipeline

