Spring Boot Actuator Archives - Piotr's TechBlog
https://piotrminkowski.com/tag/spring-boot-actuator/

SBOM with Spring Boot
https://piotrminkowski.com/2024/09/05/sbom-with-spring-boot/
Thu, 05 Sep 2024

This article will teach you how to leverage SBOM support in Spring Boot to implement security checks for your apps. A Software Bill of Materials (SBOM) lists all the open-source and third-party components in your app's codebase. As a result, it allows us to perform vulnerability scanning, license checks, and risk analysis. Spring Boot 3.3 introduces built-in support for generating SBOMs during the app build and exposing them through an actuator endpoint. In this article, we will analyze an app based on the latest version of Spring Boot and another one using outdated versions of the libraries. You will see how to use the snyk CLI to verify the generated SBOM files.

This is the first article on my blog after a long holiday break. I hope you enjoy it. If you are interested in Spring Boot, you can find several other posts about it on my blog. I can recommend my article about another fresh security-related Spring Boot feature, which describes SSL certificate hot reload on Kubernetes.

Source Code

If you would like to try this exercise yourself, you can always take a look at my source code. Today you will have to clone two sample Git repositories. The first one contains the automatically updated source code of microservices based on the latest version of the Spring Boot framework. The second repository contains an archived version of microservices based on an earlier, unsupported version of Spring Boot. Once you clone both of these repositories, you just need to follow my instructions.

By the way, you can verify SBOMs generated for your Spring Boot apps in various ways. I decided to use the snyk CLI for that. Alternatively, you can use the web version of the Snyk SBOM checker available here. In order to install the snyk CLI on your machine, follow its documentation. I used Homebrew to install it on my macOS:

$ brew tap snyk/tap
$ brew install snyk
ShellSession

Enable SBOM Support in Spring Boot

By default, Spring Boot supports the CycloneDX format for generating SBOMs. In order to enable it, we need to include the cyclonedx-maven-plugin in the project root pom.xml.

<build>
  <plugins>
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
    </plugin>
    <plugin>
      <groupId>org.cyclonedx</groupId>
      <artifactId>cyclonedx-maven-plugin</artifactId>
    </plugin>
  </plugins>
</build>
XML

There are several microservices defined in the same Git repository. They all use the same root pom.xml, and each defines its own list of dependencies. For this exercise, we need at least the Spring Boot Web and Actuator starters. However, let's take a look at the whole list of dependencies for the employee-service (one of our sample microservices):

<dependencies>
  <dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-config</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
  </dependency>
  <dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-tracing-bridge-otel</artifactId>
  </dependency>
  <dependency>
    <groupId>io.opentelemetry</groupId>
    <artifactId>opentelemetry-exporter-zipkin</artifactId>
  </dependency>
  <dependency>
    <groupId>io.github.openfeign</groupId>
    <artifactId>feign-micrometer</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springdoc</groupId>
    <artifactId>springdoc-openapi-starter-webmvc-api</artifactId>
    <version>2.6.0</version>
  </dependency>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <scope>test</scope>
  </dependency>
  <dependency>
    <groupId>org.instancio</groupId>
    <artifactId>instancio-junit</artifactId>
    <version>4.8.1</version>
    <scope>test</scope>
  </dependency>
</dependencies>
XML

After including the cyclonedx-maven-plugin, we need to execute the mvn package command in the repository root directory:

$ mvn clean package -DskipTests
ShellSession

The plugin will generate SBOM files for all the existing microservices and place them in the target/classes/META-INF/sbom directory for each Maven module.


The generated SBOM file will always be placed inside the JAR file as well. Let’s take a look at the location of the SBOM file inside the employee-service uber JAR.
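The exact contents depend on your dependencies, but a CycloneDX document is JSON and looks roughly like the illustrative fragment below (the component, version, and purl values are examples, not the actual employee-service output):

```json
{
  "bomFormat" : "CycloneDX",
  "specVersion" : "1.5",
  "version" : 1,
  "components" : [ {
    "type" : "library",
    "group" : "org.springframework.boot",
    "name" : "spring-boot-starter-web",
    "version" : "3.3.0",
    "purl" : "pkg:maven/org.springframework.boot/spring-boot-starter-web@3.3.0?type=jar"
  } ]
}
```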

In order to expose the SBOM actuator endpoint, we need to include the following configuration property. Since our configuration is stored on the Spring Cloud Config server, we put this property in the YAML files inside the config-service/src/main/resources/config directory.

management:
  endpoints:
    web:
      exposure:
        include: health,sbom
YAML

Then, let’s start the config-service with the following Maven command:

$ cd config-service
$ mvn clean spring-boot:run
ShellSession

After that, we can start our sample microservice. It loads its configuration properties from the config-service and listens on a dynamically generated port number; for me, it is 53498. In order to see the contents of the generated SBOM file, we need to call the GET /actuator/sbom/application endpoint.
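The endpoint can also be called programmatically. Here is a minimal sketch using the JDK's built-in HTTP client; port 53498 is just the dynamically assigned port from my run, so treat it as a placeholder:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SbomClient {

    public static void main(String[] args) throws Exception {
        // Port 53498 is the dynamically assigned port from this run;
        // replace it with the port your instance actually listens on.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:53498/actuator/sbom/application"))
                .GET()
                .build();
        System.out.println(request.uri());
        // With the service running, the lines below would fetch the CycloneDX JSON:
        // HttpClient client = HttpClient.newHttpClient();
        // HttpResponse<String> response =
        //         client.send(request, HttpResponse.BodyHandlers.ofString());
        // System.out.println(response.body());
    }
}
```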

Generate and Verify SBOMs with the Snyk CLI

The exact structure of the SBOM file is not very important from our perspective. We need a tool that allows us to verify components and dependencies published inside that file. As I mentioned before, we can use the snyk CLI for that. We will examine the file generated in the repository root directory. Here’s the snyk command that allows us to print all the detected vulnerabilities in the SBOM file:

$ snyk sbom test \
   --file=target/classes/META-INF/sbom/application.cdx.json \
   --experimental
ShellSession

Here’s the report created as the output of the command executed above. As you can see, there are two detected issues related to the included dependencies. Of course, I’m not including those dependencies directly in the Maven pom.xml. They were pulled in automatically by the Spring Boot starters used by the microservices. By the way, I was not even aware that Spring Boot includes kotlin-stdlib even though I’m not using any Kotlin library directly in the app.


Although there are two issues detected in the report, it doesn’t look very bad. Now, let’s try to analyze something more outdated. I have already mentioned my old repository with microservices: sample-spring-microservices. It is archived and uses Spring Boot 1.5. If we don’t want to modify anything there, we can use the snyk CLI to generate the SBOM instead of the Maven plugin. Since built-in SBOM support arrived in Spring Boot 3.3, there is no point in including the plugin in apps on version 1.5. Here’s the snyk command that generates an SBOM for all the projects inside the repository and exports it to the application.cdx.json file:

$ snyk sbom --format=cyclonedx1.4+json --all-projects > application.cdx.json
ShellSession

Then, let’s examine the SBOM file using the same command as before:

$ snyk sbom test --file=application.cdx.json --experimental
ShellSession

Now, the results are much more pessimistic. There are 211 detected issues, including 6 critical.

Final Thoughts

SBOMs allow organizations to identify and address potential security risks more effectively. Spring Boot’s support for generating SBOM files simplifies incorporating them into the organizational software development life cycle.

Spring Boot Autoscaling on Kubernetes
https://piotrminkowski.com/2020/11/05/spring-boot-autoscaling-on-kubernetes/
Thu, 05 Nov 2020

Autoscaling based on custom metrics is one of the features that may convince you to run your Spring Boot application on Kubernetes. By default, the horizontal pod autoscaler can scale a deployment based on CPU and memory usage. However, such an approach is not enough for more advanced scenarios. In that case, you can use Prometheus to collect metrics from your applications. Then, you can integrate Prometheus with the Kubernetes custom metrics mechanism.

Preface

In this article, you will learn how to run Prometheus on Kubernetes using the Helm package manager. You will use a chart that configures Prometheus to scrape a variety of Kubernetes resource types, so you won’t have to configure it yourself. In the next step, you will install the Prometheus Adapter, also using Helm. The adapter acts as a bridge between the Prometheus instance and the Custom Metrics API. Our Spring Boot application exposes metrics through an HTTP endpoint. You will learn how to configure autoscaling on Kubernetes based on the number of incoming requests.

Source code

If you would like to try it yourself, you can always take a look at my source code. In order to do that, you need to clone my repository sample-spring-boot-on-kubernetes. Then go to the k8s directory. There you can find all the Kubernetes manifests and configuration files required for this exercise.

Our Spring Boot application is ready to be deployed on Kubernetes with Skaffold. You can find the skaffold.yaml file in the project root directory. Skaffold uses Jib Maven Plugin for building a Docker image. It deploys not only the Spring Boot application but also the Mongo database.

apiVersion: skaffold/v2beta5
kind: Config
metadata:
  name: sample-spring-boot-on-kubernetes
build:
  artifacts:
    - image: piomin/sample-spring-boot-on-kubernetes
      jib:
        args:
          - -Pjib
  tagPolicy:
    gitCommit: {}
deploy:
  kubectl:
    manifests:
      - k8s/mongodb-deployment.yaml
      - k8s/deployment.yaml

The only thing you need to do to build and deploy the application is execute the following command. It also allows you to access the HTTP API through a local port.

$ skaffold dev --port-forward

For more information about Skaffold, Jib, and local development of Java applications on Kubernetes, refer to the article Local Java Development on Kubernetes.

Kubernetes Autoscaling with Spring Boot – Architecture

The picture below shows the architecture of our sample system. The horizontal pod autoscaler (HPA) automatically scales the number of pods based on CPU, memory, or other custom metrics. It obtains the value of the metric by querying the Custom Metrics API. In the beginning, we are running a single instance of our Spring Boot application on Kubernetes. Prometheus gathers and stores metrics from the application by calling the HTTP endpoint /actuator/prometheus. Consequently, the HPA scales up the number of pods if the value of the metric exceeds the assumed threshold.


Run Prometheus on Kubernetes

Let’s start by running the Prometheus instance on Kubernetes. In order to do that, you should use the official Prometheus Helm chart. Firstly, you need to add the required repository.

$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm repo add stable https://charts.helm.sh/stable
$ helm repo update

Then, you should execute the Helm install command and provide the name of your installation.

$ helm install prometheus prometheus-community/prometheus

In a moment, the Prometheus instance is ready to use. You can access it through the Kubernetes Service prometheus-server, which is available on port 80. By default, the type of the Service is ClusterIP. Therefore, you should execute the kubectl port-forward command to access it on a local port.

Deploy Spring Boot on Kubernetes

In order to enable Prometheus support in Spring Boot, you need to include Spring Boot Actuator and the Micrometer Prometheus library. The full list of required dependencies also contains the Spring Web and Spring Data MongoDB modules.

<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
   <groupId>io.micrometer</groupId>
   <artifactId>micrometer-registry-prometheus</artifactId>
</dependency>
<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-devtools</artifactId>
</dependency>
<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-data-mongodb</artifactId>
</dependency>

After enabling Prometheus support, the application exposes metrics through the /actuator/prometheus endpoint.

Our example application is very simple. It exposes the REST API for CRUD operations and connects to the Mongo database. Just to clarify, here’s the REST controller implementation.

@RestController
@RequestMapping("/persons")
public class PersonController {

   private PersonRepository repository;
   private PersonService service;

   PersonController(PersonRepository repository, PersonService service) {
      this.repository = repository;
      this.service = service;
   }

   @PostMapping
   public Person add(@RequestBody Person person) {
      return repository.save(person);
   }

   @PutMapping
   public Person update(@RequestBody Person person) {
      return repository.save(person);
   }

   @DeleteMapping("/{id}")
   public void delete(@PathVariable("id") String id) {
      repository.deleteById(id);
   }

   @GetMapping
   public Iterable<Person> findAll() {
      return repository.findAll();
   }

   @GetMapping("/{id}")
   public Optional<Person> findById(@PathVariable("id") String id) {
      return repository.findById(id);
   }

}

You may add a new person, then modify or delete it through the HTTP API. You can also find a person by id or fetch all available persons. Spring Boot Actuator generates HTTP traffic statistics per endpoint and exposes them in a form readable by Prometheus. The number of incoming requests is available in the http_server_requests_seconds_count metric. Consequently, we will use this metric for Spring Boot autoscaling on Kubernetes.
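To build intuition for what Actuator records, here is a toy model of a per-endpoint request counter in plain Java. This is an illustration only, not Micrometer’s actual implementation, which tracks full timers with many more label dimensions:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

public class RequestCounter {

    // One counter per (method, uri) pair, incremented on every handled request —
    // a simplified model of the http_server_requests_seconds_count series.
    private final Map<String, LongAdder> counts = new ConcurrentHashMap<>();

    void record(String method, String uri) {
        counts.computeIfAbsent(method + " " + uri, k -> new LongAdder()).increment();
    }

    long count(String method, String uri) {
        LongAdder adder = counts.get(method + " " + uri);
        return adder == null ? 0 : adder.sum();
    }

    public static void main(String[] args) {
        RequestCounter counter = new RequestCounter();
        counter.record("GET", "/persons");
        counter.record("GET", "/persons");
        counter.record("POST", "/persons");
        System.out.println(counter.count("GET", "/persons"));  // 2
        System.out.println(counter.count("POST", "/persons")); // 1
    }
}
```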

Prometheus collects a pretty large set of metrics from the whole cluster. However, by default, it does not gather metrics from your application pods. In order to force Prometheus to scrape particular pods, you must add annotations to the Deployment as shown below. The annotation prometheus.io/path indicates the context path of the metrics endpoint. Of course, you have to enable scraping for the application using the annotation prometheus.io/scrape. Finally, you need to set the HTTP port number with prometheus.io/port.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-spring-boot-on-kubernetes-deployment
spec:
  selector:
    matchLabels:
      app: sample-spring-boot-on-kubernetes
  template:
    metadata:
      annotations:
        prometheus.io/path: /actuator/prometheus
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
      labels:
        app: sample-spring-boot-on-kubernetes
    spec:
      containers:
      - name: sample-spring-boot-on-kubernetes
        image: piomin/sample-spring-boot-on-kubernetes
        ports:
        - containerPort: 8080
        env:
          - name: MONGO_DATABASE
            valueFrom:
              configMapKeyRef:
                name: mongodb
                key: database-name
          - name: MONGO_USERNAME
            valueFrom:
              secretKeyRef:
                name: mongodb
                key: database-user
          - name: MONGO_PASSWORD
            valueFrom:
              secretKeyRef:
                name: mongodb
                key: database-password

Install Prometheus Adapter on Kubernetes

The Prometheus Adapter pulls metrics from the Prometheus instance and exposes them through the Custom Metrics API. In this step, you will have to provide configuration for pulling a custom metric exposed by Spring Boot Actuator. The http_server_requests_seconds_count metric contains the number of requests received by a particular HTTP endpoint. To clarify, let’s take a look at the list of http_server_requests_seconds_count metrics for the multiple /persons endpoints.

You need to override some configuration settings for the Prometheus Adapter. Firstly, you should change the default address of the Prometheus instance. Since the name of the Prometheus Service is prometheus-server, you should change it to prometheus-server.default.svc. The HTTP port number is 80. Then, you have to define a custom rule for pulling the required metric from Prometheus. It is important to override the names of the Kubernetes pod and namespace used as metric tags by Prometheus. There are multiple entries for http_server_requests_seconds_count, so you must calculate the sum. The name of the custom Kubernetes metric is http_server_requests_seconds_count_sum.

prometheus:
  url: http://prometheus-server.default.svc
  port: 80
  path: ""

rules:
  default: true
  custom:
  - seriesQuery: '{__name__=~"^http_server_requests_seconds_.*"}'
    resources:
      overrides:
        kubernetes_namespace:
          resource: namespace
        kubernetes_pod_name:
          resource: pod
    name:
      matches: "^http_server_requests_seconds_count(.*)"
      as: "http_server_requests_seconds_count_sum"
    metricsQuery: sum(<<.Series>>{<<.LabelMatchers>>,uri=~"/persons.*"}) by (<<.GroupBy>>)
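To illustrate what the sum in metricsQuery does, here is a plain-Java sketch that aggregates a few made-up exposition lines for the counter. The label sets are simplified compared to real Micrometer output, and the numbers are invented for the example:

```java
import java.util.List;

public class SumSeries {

    public static void main(String[] args) {
        // Hypothetical Prometheus exposition lines: one series per
        // (method, uri, status) combination of the counter metric.
        List<String> lines = List.of(
            "http_server_requests_seconds_count{method=\"GET\",uri=\"/persons\",status=\"200\"} 42.0",
            "http_server_requests_seconds_count{method=\"POST\",uri=\"/persons\",status=\"200\"} 13.0",
            "http_server_requests_seconds_count{method=\"GET\",uri=\"/persons/{id}\",status=\"200\"} 45.0",
            "http_server_requests_seconds_count{method=\"GET\",uri=\"/actuator/prometheus\",status=\"200\"} 7.0"
        );
        // Sum only the series whose uri matches /persons.*, as the metricsQuery does.
        double sum = lines.stream()
            .filter(l -> l.contains("uri=\"/persons"))
            .mapToDouble(l -> Double.parseDouble(l.substring(l.lastIndexOf(' ') + 1)))
            .sum();
        System.out.println(sum); // 42 + 13 + 45 = 100.0
    }
}
```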

Now, you just need to execute the Helm install command with the location of the YAML configuration file.

$ helm install -f k8s/helm-config.yaml prometheus-adapter prometheus-community/prometheus-adapter

Finally, you can verify that the metrics have been successfully pulled by querying the Custom Metrics API, for example with kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1".

Create Kubernetes Horizontal Pod Autoscaler

In the last step of this tutorial, you will create a Kubernetes HorizontalPodAutoscaler. It automatically scales up the number of pods if the average value of the http_server_requests_seconds_count_sum metric exceeds 100. In other words, if an instance of the application receives more than 100 requests, the HPA automatically runs a new instance. Then, after another 100 requests, the average value of the metric exceeds 100 once again, so the HPA runs a third instance of the application. Consequently, after sending 1k requests you should have 10 pods. The definition of our HorizontalPodAutoscaler is visible below.

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: sample-hpa
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: sample-spring-boot-on-kubernetes-deployment
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Pods
      pods:
        metric:
          name: http_server_requests_seconds_count_sum
        target:
          type: AverageValue
          averageValue: 100
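The behavior described above follows the HPA’s general scaling rule, desiredReplicas = ceil(currentReplicas × currentMetricValue / targetValue). A minimal sketch of that formula:

```java
public class HpaFormula {

    // Simplified version of the horizontal pod autoscaler's scaling rule:
    // desiredReplicas = ceil(currentReplicas * currentAverageValue / targetAverageValue)
    static int desiredReplicas(int currentReplicas, double currentAverage, double targetAverage) {
        return (int) Math.ceil(currentReplicas * currentAverage / targetAverage);
    }

    public static void main(String[] args) {
        // One pod that has received 150 requests against a target of 100 per pod:
        System.out.println(desiredReplicas(1, 150, 100)); // 2
        // Two pods averaging 150 requests each:
        System.out.println(desiredReplicas(2, 150, 100)); // 3
    }
}
```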

Testing Kubernetes autoscaling with Spring Boot

After deploying all the required components, you may verify the list of running pods. As you can see, there is a single instance of our Spring Boot application, sample-spring-boot-on-kubernetes.

In order to check the current value of the http_server_requests_seconds_count_sum metric, you can display the details of the HorizontalPodAutoscaler. As you can see, I have already sent 15 requests to different HTTP endpoints.


Here’s the sequence of requests you may send to the application to test autoscaling behavior.

$ curl http://localhost:8080/persons -d "{\"firstName\":\"Test\",\"lastName\":\"Test\",\"age\":20,\"gender\":\"MALE\"}" -H "Content-Type: application/json"
{"id":"5fa334d149685f24841605a9","firstName":"Test","lastName":"Test","age":20,"gender":"MALE"}
$ curl http://localhost:8080/persons/5fa334d149685f24841605a9
{"id":"5fa334d149685f24841605a9","firstName":"Test","lastName":"Test","age":20,"gender":"MALE"}
$ curl http://localhost:8080/persons
[{"id":"5fa334d149685f24841605a9","firstName":"Test","lastName":"Test","age":20,"gender":"MALE"}]
$ curl -X DELETE http://localhost:8080/persons/5fa334d149685f24841605a9

After sending many HTTP requests to our application, you may verify the number of running pods. In this case, we have 5 instances.


You can also display a list of running pods.

Conclusion

In this article, I showed you a simple scenario of Kubernetes autoscaling with Spring Boot based on the number of incoming requests. You can easily create more advanced scenarios just by modifying the metric query in the Prometheus Adapter configuration file. I ran all my tests on Google Cloud with GKE. For more information about running JVM applications on GKE, please refer to Running Kotlin Microservice on Google Kubernetes Engine.

RabbitMQ Monitoring on Kubernetes
https://piotrminkowski.com/2020/09/29/rabbitmq-monitoring-on-kubernetes/
Tue, 29 Sep 2020

RabbitMQ monitoring can be a key point of your system management. Therefore, we should use the right tools for it. To enable them on RabbitMQ, we need to install some plugins. In this article, I will show you how to use Prometheus and Grafana to monitor the key metrics of RabbitMQ. Of course, we will build applications that send and receive messages. We will use Kubernetes as the target platform for our system. In the last step, we are going to enable the tracing plugin, which helps us collect the payloads of incoming messages.

Source code

If you would like to try it yourself, you can always take a look at my source code. In order to do that, you need to clone my repository sample-spring-amqp. Inside the k8s directory, you will find all the required deployment manifests. Both Spring Boot test applications are available inside the listener and producer directories. If you would like to test a RabbitMQ cluster, please refer to the article RabbitMQ in cluster.

Step 1 – Building a RabbitMQ image

In the first step, we are extending the official Docker image of RabbitMQ. In that case, we need to base it on the image with the 3-management tag and add two plugins. The rabbitmq_prometheus plugin adds a Prometheus exporter of core RabbitMQ metrics. The second one, rabbitmq_tracing, allows us to log the payloads of incoming messages. That’s all we need to define in our Dockerfile, which is visible below.

FROM rabbitmq:3-management
RUN rabbitmq-plugins enable --offline rabbitmq_prometheus rabbitmq_tracing

Then you just need to build the image defined above. Let’s say its name is piomin/rabbitmq-monitoring. After building it, I push it to my remote Docker registry.

$ docker build -t piomin/rabbitmq-monitoring .
$ docker push piomin/rabbitmq-monitoring

Step 2 – Deploying RabbitMQ on Kubernetes

Now, we are going to deploy our custom RabbitMQ image on Kubernetes. For the purpose of this article, we will run a standalone instance of RabbitMQ. In that case, the only thing we need to do is override some configuration properties in the rabbitmq.conf and enabled_plugins files. First, I’m enabling logging to the console at the debug level. Then, I’m enabling all the required plugins.

apiVersion: v1
kind: ConfigMap
metadata:
  name: rabbitmq
  labels:
    name: rabbitmq
data:
  rabbitmq.conf: |-
    loopback_users.guest = false
    log.console = true
    log.console.level = debug
    log.exchange = true
    log.exchange.level = debug
  enabled_plugins: |-
    [rabbitmq_management,rabbitmq_prometheus,rabbitmq_tracing].

Both the rabbitmq.conf and enabled_plugins files should be placed inside the /etc/rabbitmq directory. Therefore, I’m mounting them in a volume assigned to the RabbitMQ Deployment. Additionally, we are exposing three ports outside the container. Port 5672 is used for communication with applications through the AMQP protocol. The Prometheus plugin exposes metrics on the dedicated port 15692. In order to access the Management UI and HTTP endpoints, you should use port 15672.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: rabbitmq
  labels:
    app: rabbitmq
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      containers:
        - name: rabbitmq
          image: piomin/rabbitmq-monitoring:latest
          ports:
            - containerPort: 15672
              name: http
            - containerPort: 5672
              name: amqp
            - containerPort: 15692
              name: prometheus
          volumeMounts:
            - name: rabbitmq-config-map
              mountPath: /etc/rabbitmq/
      volumes:
        - name: rabbitmq-config-map
          configMap:
            name: rabbitmq
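Prometheus will later discover this target by the Service’s app label and port name. The article’s repository contains the actual manifest; a hypothetical Service matching the Deployment above and the scrape configuration shown later could look like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
  labels:
    app: rabbitmq
spec:
  selector:
    app: rabbitmq
  ports:
    - name: http
      port: 15672
    - name: amqp
      port: 5672
    - name: prometheus
      port: 15692
```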

Step 3 – Building Spring Boot listener application

Our sample listener application uses the Spring Boot AMQP project for integration with RabbitMQ. Thanks to the Spring Boot Actuator module, it also exposes metrics, including RabbitMQ-specific values. It is important to expose them in a format readable by Prometheus.

<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-amqp</artifactId>
</dependency>
<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
   <groupId>io.micrometer</groupId>
   <artifactId>micrometer-registry-prometheus</artifactId>
</dependency>

The listener application defines and creates two exchanges. The first of them, trx-events-topic, is used for multicast communication. On the other hand, trx-events-direct takes part in point-to-point communication. Both of our sample applications exchange messages in JSON format. Therefore, we have to override the default Spring Boot AMQP message converter with Jackson2JsonMessageConverter.

@SpringBootApplication
public class ListenerApplication {

   public static void main(String[] args) {
      SpringApplication.run(ListenerApplication.class, args);
   }

   @Bean
   public TopicExchange topic() {
      return new TopicExchange("trx-events-topic");
   }

   @Bean
   public DirectExchange queue() {
      return new DirectExchange("trx-events-direct");
   }

   @Bean
   public MessageConverter jsonMessageConverter() {
      return new Jackson2JsonMessageConverter();
   }
}

The listener application receives messages from both the topic and direct exchanges. Each running instance of the application creates a queue binding with a random name. With a direct exchange, only a single queue receives each incoming message. On the other hand, all the queues bound to a topic exchange receive incoming messages.

@Component
@Slf4j
public class ListenerComponent {

   @RabbitListener(bindings = {
      @QueueBinding(
         exchange = @Exchange(type = ExchangeTypes.TOPIC, name = "trx-events-topic"),
         value = @Queue("${topic.queue.name}")
      )
   })
   public void onTopicMessage(SampleMessage message) {
      log.info("Message received: {}", message);
   }

   @RabbitListener(bindings = {
      @QueueBinding(
         exchange = @Exchange(type = ExchangeTypes.DIRECT, name = "trx-events-direct"),
         value = @Queue("${direct.queue.name}")
      )
   })
   public void onDirectMessage(SampleMessage message) {
      log.info("Message received: {}", message);
   }

}

The names of the queues bound to the topic and direct exchanges are configured in the application.yml file.

topic.queue.name: t-${random.uuid}
direct.queue.name: d-${random.uuid}

Step 4 – Building Spring Boot producer application

The producer application also uses Spring Boot AMQP for integration with RabbitMQ. It sends messages to the exchanges with RabbitTemplate. Similarly to the listener application, it formats all messages as JSON strings.

@SpringBootApplication
@EnableScheduling
public class ProducerApplication {

   public static void main(String[] args) {
      SpringApplication.run(ProducerApplication.class, args);
   }

   @Bean
   public RabbitTemplate rabbitTemplate(final ConnectionFactory connectionFactory) {
      final RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory);
      rabbitTemplate.setMessageConverter(producerJackson2MessageConverter());
      return rabbitTemplate;
   }

   @Bean
   public Jackson2JsonMessageConverter producerJackson2MessageConverter() {
      return new Jackson2JsonMessageConverter();
   }

}

The producer application starts sending messages just after startup. Each message contains id, type, and message fields. Messages are sent to the topic exchange at 5-second intervals, and to the direct exchange at 2-second intervals.

@Component
@Slf4j
public class ProducerComponent {

   int index = 1;
   private RabbitTemplate rabbitTemplate;

   ProducerComponent(RabbitTemplate rabbitTemplate) {
      this.rabbitTemplate = rabbitTemplate;
   }

   @Scheduled(fixedRate = 5000)
   public void sendToTopic() {
      SampleMessage msg = new SampleMessage(index++, "abc", "topic");
      rabbitTemplate.convertAndSend("trx-events-topic", null, msg);
      log.info("Sending message: {}", msg);
   }

   @Scheduled(fixedRate = 2000)
   public void sendToDirect() {
      SampleMessage msg = new SampleMessage(index++, "abc", "direct");
      rabbitTemplate.convertAndSend("trx-events-direct", null, msg);
      log.info("Sending message: {}", msg);
   }
   
}
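The SampleMessage class used by both scheduled methods is not shown in the article. A minimal sketch consistent with the described fields (id, type, message) might look like the following; the actual class in the repository may differ:

```java
// Minimal POJO serialized to JSON by the Jackson2JsonMessageConverter
// configured on the RabbitTemplate.
public class SampleMessage {

    private Integer id;
    private String type;    // e.g. "abc"
    private String message; // "topic" or "direct", depending on the target exchange

    public SampleMessage() {
    }

    public SampleMessage(Integer id, String type, String message) {
        this.id = id;
        this.type = type;
        this.message = message;
    }

    public Integer getId() { return id; }
    public String getType() { return type; }
    public String getMessage() { return message; }

    @Override
    public String toString() {
        return "SampleMessage(id=" + id + ", type=" + type + ", message=" + message + ")";
    }
}
```

A no-args constructor and getters are required so that Jackson can serialize and deserialize the object on both sides.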

Step 5 – Deploying Prometheus and Grafana for RabbitMQ monitoring

We will use Prometheus for collecting metrics from RabbitMQ and from both of our Spring Boot applications. Prometheus detects endpoints with metrics by the Kubernetes Service app label and an HTTP port name. Of course, you can define different search criteria for that. Because Spring Boot and RabbitMQ metrics are exposed under different endpoints, we need to define two jobs. In the source code below you can see the ConfigMap that contains the Prometheus configuration file.

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus
  labels:
    name: prometheus
data:
  prometheus.yml: |-
    scrape_configs:
      - job_name: 'springboot'
        metrics_path: /actuator/prometheus
        scrape_interval: 5s
        kubernetes_sd_configs:
        - role: endpoints
          namespaces:
            names:
              - default

        relabel_configs:
          - source_labels: [__meta_kubernetes_service_label_app]
            separator: ;
            regex: (producer|listener)
            replacement: $1
            action: keep
          - source_labels: [__meta_kubernetes_endpoint_port_name]
            separator: ;
            regex: http
            replacement: $1
            action: keep
          # ...
      - job_name: 'rabbitmq'
        metrics_path: /metrics
        scrape_interval: 5s
        kubernetes_sd_configs:
        - role: endpoints
          namespaces:
            names:
              - default

        relabel_configs:
          - source_labels: [__meta_kubernetes_service_label_app]
            separator: ;
            regex: rabbitmq
            replacement: $1
            action: keep
          - source_labels: [__meta_kubernetes_endpoint_port_name]
            separator: ;
            regex: prometheus
            replacement: $1
            action: keep
          # ...
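The pair of keep rules in each job acts as a logical AND: a discovered target survives relabeling only if its service app label and its endpoint port name both match the configured regex. A plain-Java sketch of that filtering logic (an illustration, not Prometheus code):

```java
import java.util.Map;

public class RelabelFilter {

    // Simulates the two "keep" relabel rules of the 'springboot' job:
    // the scrape target is kept only when both conditions hold.
    static boolean keepSpringBootTarget(Map<String, String> labels) {
        String app = labels.getOrDefault("__meta_kubernetes_service_label_app", "");
        String port = labels.getOrDefault("__meta_kubernetes_endpoint_port_name", "");
        return app.matches("producer|listener") && port.matches("http");
    }

    public static void main(String[] args) {
        // Kept: a producer pod exposing a port named "http".
        System.out.println(keepSpringBootTarget(
                Map.of("__meta_kubernetes_service_label_app", "producer",
                       "__meta_kubernetes_endpoint_port_name", "http")));
        // Dropped: RabbitMQ is handled by the separate 'rabbitmq' job.
        System.out.println(keepSpringBootTarget(
                Map.of("__meta_kubernetes_service_label_app", "rabbitmq",
                       "__meta_kubernetes_endpoint_port_name", "http")));
    }
}
```

The 'rabbitmq' job applies the same pattern with the label regex `rabbitmq` and the port name `prometheus`.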

For the full version of the Prometheus deployment please refer to the source code. Prometheus tries to discover metric endpoints by the Kubernetes Service label and port name. Let's take a look at the Service for the listener application.

apiVersion: v1
kind: Service
metadata:
  name: listener-service
  labels:
    app: listener
spec:
  type: ClusterIP
  selector:
    app: listener
  ports:
  - port: 8080
    name: http

Similarly, we should create a Service for RabbitMQ.

apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-service
  labels:
    app: rabbitmq
spec:
  type: NodePort
  selector:
    app: rabbitmq
  ports:
  - port: 15672
    name: http
  - port: 5672
    name: amqp
  - port: 15692
    name: prometheus

Step 6 – Deploying stack for RabbitMQ monitoring

We can finally proceed to the deployment on Kubernetes. In summary, we have five running applications. Three of them, RabbitMQ with the management console, Prometheus, and Grafana, are part of the RabbitMQ monitoring stack. We also have a single instance of the Spring Boot AMQP producer application and two instances of the Spring Boot AMQP listener application. You can see the final list of pods in the picture below.

rabbitmq-monitoring-pods

If you deploy the Spring Boot applications with the skaffold dev --port-forward command, you can easily access them on local ports. The other applications can be accessed via the Kubernetes Service NodePort.

Step 7 – RabbitMQ monitoring with Prometheus metrics

We can easily verify the list of metrics generated by the Spring Boot applications by calling the /actuator/prometheus endpoint. First, let's take a look at the metrics returned by the listener application.

rabbitmq_not_acknowledged_published_total{name="rabbit",} 0.0
rabbitmq_unrouted_published_total{name="rabbit",} 0.0
rabbitmq_channels{name="rabbit",} 2.0
rabbitmq_consumed_total{name="rabbit",} 2432.0
rabbitmq_connections{name="rabbit",} 1.0
rabbitmq_acknowledged_total{name="rabbit",} 2432.0
spring_rabbitmq_listener_seconds_max{exception="none",listener_id="org.springframework.amqp.rabbit.RabbitListenerEndpointContainer#1",queue="d-ea28bd07-929d-4928-8d2c-5dceeec9950a",result="success",} 0.0025406
spring_rabbitmq_listener_seconds_max{exception="none",listener_id="org.springframework.amqp.rabbit.RabbitListenerEndpointContainer#0",queue="t-e8990fe4-7d8c-4a2c-96d7-fff4fe503265",result="success",} 0.0024175
spring_rabbitmq_listener_seconds_count{exception="none",listener_id="org.springframework.amqp.rabbit.RabbitListenerEndpointContainer#1",queue="d-ea28bd07-929d-4928-8d2c-5dceeec9950a",result="success",} 1712.0
spring_rabbitmq_listener_seconds_sum{exception="none",listener_id="org.springframework.amqp.rabbit.RabbitListenerEndpointContainer#1",queue="d-ea28bd07-929d-4928-8d2c-5dceeec9950a",result="success",} 0.992886413
spring_rabbitmq_listener_seconds_count{exception="none",listener_id="org.springframework.amqp.rabbit.RabbitListenerEndpointContainer#0",queue="t-e8990fe4-7d8c-4a2c-96d7-fff4fe503265",result="success",} 720.0
spring_rabbitmq_listener_seconds_sum{exception="none",listener_id="org.springframework.amqp.rabbit.RabbitListenerEndpointContainer#0",queue="t-e8990fe4-7d8c-4a2c-96d7-fff4fe503265",result="success",} 0.598468801
rabbitmq_acknowledged_published_total{name="rabbit",} 0.0
rabbitmq_failed_to_publish_total{name="rabbit",} 0.0
rabbitmq_rejected_total{name="rabbit",} 0.0
rabbitmq_published_total{name="rabbit",} 0.0
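The spring_rabbitmq_listener_seconds_sum and _count series pair up like a Prometheus summary: dividing them yields the average processing time per message. For the direct-exchange listener above that is roughly 0.9929 s / 1712 ≈ 0.58 ms. A quick sketch of that calculation:

```java
public class ListenerMetrics {

    // Average observation = sum of observed seconds / number of observations.
    static double averageSeconds(double sum, long count) {
        return count == 0 ? 0.0 : sum / count;
    }

    public static void main(String[] args) {
        // Values copied from the spring_rabbitmq_listener_seconds_* series above.
        double directAvg = averageSeconds(0.992886413, 1712);
        double topicAvg = averageSeconds(0.598468801, 720);
        System.out.printf("direct listener avg: %.6f s%n", directAvg);
        System.out.printf("topic listener avg: %.6f s%n", topicAvg);
    }
}
```

In Grafana you would express the same thing as a PromQL ratio of the two series, typically over a rate window.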

Similarly, we can verify the list of metrics from the producer application. In contrast to the listener application, it is generating rabbitmq_published_total instead of rabbitmq_consumed_total.

rabbitmq_acknowledged_published_total{name="rabbit",} 0.0
rabbitmq_unrouted_published_total{name="rabbit",} 0.0
rabbitmq_acknowledged_total{name="rabbit",} 0.0
rabbitmq_rejected_total{name="rabbit",} 0.0
rabbitmq_connections{name="rabbit",} 1.0
rabbitmq_not_acknowledged_published_total{name="rabbit",} 0.0
rabbitmq_consumed_total{name="rabbit",} 0.0
rabbitmq_failed_to_publish_total{name="rabbit",} 0.0
rabbitmq_published_total{name="rabbit",} 2553.0
rabbitmq_channels{name="rabbit",} 1.0

The list of metrics generated by the RabbitMQ Prometheus plugin is pretty impressive. I decided to use only some of them.

rabbitmq_channel_consumers 6
rabbitmq_channel_messages_published_total 2926
rabbitmq_channel_messages_delivered_total 2022
rabbitmq_channel_messages_acked_total 5922
rabbitmq_connections_opened_total 9
rabbitmq_connection_incoming_bytes_total 700878
rabbitmq_connection_outgoing_bytes_total 1388158
rabbitmq_connection_channels 5
rabbitmq_queue_messages 5917
rabbitmq_queue_consumers 6
rabbitmq_queues 10
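All of these series use the Prometheus text exposition format: a metric name, optional {label="value"} pairs, and a sample value. A simplified parser sketch that handles the plain lines above (not the full exposition grammar):

```java
public class PromLine {

    final String name;
    final double value;

    PromLine(String name, double value) {
        this.name = name;
        this.value = value;
    }

    // Splits "metric_name{labels} value" (or "metric_name value") into name and value.
    // Assumes no spaces inside label values, which holds for the samples above.
    static PromLine parse(String line) {
        int space = line.lastIndexOf(' ');
        String nameAndLabels = line.substring(0, space).trim();
        int brace = nameAndLabels.indexOf('{');
        String name = brace < 0 ? nameAndLabels : nameAndLabels.substring(0, brace);
        return new PromLine(name, Double.parseDouble(line.substring(space + 1)));
    }

    public static void main(String[] args) {
        PromLine queues = parse("rabbitmq_queues 10");
        PromLine consumed = parse("rabbitmq_consumed_total{name=\"rabbit\",} 2432.0");
        System.out.println(queues.name + " = " + queues.value);
        System.out.println(consumed.name + " = " + consumed.value);
    }
}
```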

We can visualize all the metrics on a Grafana dashboard. Grafana uses Prometheus as a data source. To recap, Prometheus collects data from the Spring Boot application endpoints and from RabbitMQ.

rabbitmq-monitoring-grafana

Step 8 – RabbitMQ monitoring with Tracing plugin

The tracing plugin allows us to log all incoming and outgoing messages. We can configure it in the RabbitMQ management console. In order to do that we need to switch to the "Admin" tab. Then, we can enable logging of all messages or just those incoming to a particular queue or exchange.

rabbitmq-monitoring-tracing

The logs are available inside a file that you can download. Each entry in the file contains detailed information about a message. You can verify the name of the node, exchange, or queue. Of course, it also contains the message payload, properties, and time of the event.

Conclusion

RabbitMQ monitoring tools allow you to verify general metrics of the node and detailed logs of every message. In addition, Spring Boot AMQP offers dedicated metrics for applications that interact with RabbitMQ. In this article, I described how to run the full monitoring stack on Kubernetes. Enjoy 🙂

The post RabbitMQ Monitoring on Kubernetes appeared first on Piotr's TechBlog.

Using Spring Cloud Kubernetes External Library https://piotrminkowski.com/2020/03/16/using-spring-cloud-kubernetes-external-library/ Mon, 16 Mar 2020 16:57:51 +0000

The post Using Spring Cloud Kubernetes External Library appeared first on Piotr's TechBlog.

In this article I'm going to introduce my newest library for registering Spring Boot applications running outside a Kubernetes cluster. The motivation for creating this library has already been described in detail in my article Spring Cloud Kubernetes for Hybrid Microservices Architecture. Since Spring Cloud Kubernetes doesn't implement registration in the service registry in any way, and just delegates it to the platform, it does not provide many benefits to applications running outside the Kubernetes cluster. To take advantage of Spring Cloud Kubernetes Discovery you may just include the library spring-cloud-kubernetes-discovery-ext-client in your Spring Boot application running externally.
The current stable version of this library is 1.0.1.RELEASE.


<dependency>
  <groupId>com.github.piomin</groupId>
  <artifactId>spring-cloud-kubernetes-discovery-ext-client</artifactId>
  <version>1.0.1.RELEASE</version>
</dependency>

The registration feature is disabled by default; to enable it we need to set the property spring.cloud.kubernetes.discovery.register to true.

spring:
  cloud:
    kubernetes:
      discovery:
        register: true

Before running an application you need to set a target namespace in Kubernetes, where it will be registered after startup. Here we are using a mechanism provided by Spring Cloud Kubernetes, which allows you to set a default namespace for the Fabric8 Kubernetes Client via the environment variable KUBERNETES_NAMESPACE.

The registration mechanism is based on two Kubernetes objects: Service and Endpoints. It creates a service with the name taken from the property spring.application.name. In the annotations field of the Service object it puts the path of the health check endpoint. By default it is /actuator/health. The new Service object is created only if it does not exist. The following screen shows the details of the Service created by the library for the application api-test.

spring-cloud-kubernetes-external-library-service

The path of the health check endpoint may be overridden using the property spring.cloud.kubernetes.discovery.healthUrl.

spring:
  cloud:
    kubernetes:
      discovery:
        healthUrl: /actuator/liveness 

The next step is to create an Endpoints object. Normally, you don't have much to do with Endpoints, since it just tracks the IP addresses of the pods the service sends traffic to. The name of the Endpoints object is the same as the name of the Service. The IP address of the application is stored in the Subsets section. To distinguish Endpoints created by the library for external applications from Endpoints registered automatically by the platform, each of them is labeled with the external flag set to true.

spring-cloud-kubernetes-external-library-endpoints

We can display the details of the selected Endpoints with the kubectl describe command to see the structure of this object.

spring-cloud-kubernetes-external-library-endpoints-describe

The IP address of the Spring Boot application is automatically detected by the library by calling the Java method InetAddress.getLocalHost().getHostAddress(). You may set a static IP address instead by using the property spring.cloud.kubernetes.discovery.ipAddress as shown below.

spring:
  cloud:
    kubernetes:
      discovery:
        ipAddress: 192.168.99.1
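Combining the two: the effective address resolution can be sketched as "use the configured static address when present, otherwise fall back to local host lookup". This is a simplification of what the library presumably does; the property plumbing here is hypothetical:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class AddressResolver {

    // Returns the configured static IP if provided (the value of
    // spring.cloud.kubernetes.discovery.ipAddress), otherwise auto-detects it.
    static String resolveIpAddress(String configuredIp) {
        if (configuredIp != null && !configuredIp.isEmpty()) {
            return configuredIp;
        }
        try {
            return InetAddress.getLocalHost().getHostAddress();
        } catch (UnknownHostException e) {
            return "127.0.0.1"; // fallback for illustration only
        }
    }

    public static void main(String[] args) {
        // With the property set, the configured address wins:
        System.out.println(resolveIpAddress("192.168.99.1"));
        // Without it, the library falls back to auto-detection:
        System.out.println(resolveIpAddress(null));
    }
}
```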

The library spring-cloud-kubernetes-discovery-ext-client is based on the Spring Cloud Kubernetes project. It uses the Kubernetes API client provided by this library. The Spring Cloud Release Train version used by the library is Hoxton.RELEASE.


<dependencies>
   <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-starter-kubernetes</artifactId>
   </dependency>
</dependencies>
<dependencyManagement>
   <dependencies>
      <dependency>
         <groupId>org.springframework.cloud</groupId>
         <artifactId>spring-cloud-dependencies</artifactId>
         <version>Hoxton.RELEASE</version>
         <type>pom</type>
         <scope>import</scope>
      </dependency>
   </dependencies>
</dependencyManagement>

Assuming you are running a local instance of a Kubernetes cluster, you should at least provide the address of the master API, and set the property spring.cloud.kubernetes.client.trustCerts to true just for development purposes. Here's bootstrap.yml for my Spring Boot demo application.

spring:
  application:
    name: api-test
  cloud:
    kubernetes:
      discovery:
        register: true
      client:
        masterUrl: 192.168.99.100:8443
        trustCerts: true

If you shut down your Spring Boot application gracefully, spring-cloud-kubernetes-discovery-ext-client will unregister it from the Kubernetes API. However, we always have to consider situations like a forceful kill of the application or network problems, which may leave stale instances registered in the Kubernetes API. Such situations should be handled on the platform side. Since Kubernetes Discovery does not provide any built-in mechanism for that (like a heartbeat for applications running outside the cluster), you may provide your own implementation within a Kubernetes Job, or you can just use my library spring-cloud-kubernetes-discovery-ext-watcher, which is responsible for detecting and removing inactive Endpoints.
The main idea behind that library is illustrated in the picture below. The module spring-cloud-kubernetes-discovery-ext-watcher is in fact a Spring Boot application that needs to be run on Kubernetes. It periodically queries the Kubernetes API in order to fetch the current list of external Endpoints registered by applications using the spring-cloud-kubernetes-discovery-ext-client library. Then it tries to call the health endpoint registered for each application, using the IP address and port taken from the master API. If it does not receive any response, or receives a response with HTTP status 5XX several times in a row, it removes the IP address from the Subsets section of the Endpoints object.

spring-cloud-kubernetes-external-library-diagram.png
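The eviction decision described above boils down to: call the health endpoint, count consecutive failures, and remove the address once the failure count reaches the retry limit. A sketch of that loop, with the actual HTTP health call abstracted behind a predicate (the real library talks HTTP to each registered instance):

```java
import java.util.function.Predicate;

public class EndpointWatcher {

    // Returns true when the address should be removed from the Endpoints subset:
    // the health check failed 'retries' times in a row without a single success.
    static boolean shouldEvict(String address, Predicate<String> healthCheck, int retries) {
        for (int attempt = 0; attempt < retries; attempt++) {
            if (healthCheck.test(address)) {
                return false; // one success keeps the instance registered
            }
        }
        return true;
    }

    public static void main(String[] args) {
        Predicate<String> alwaysDown = addr -> false;
        Predicate<String> alwaysUp = addr -> true;
        System.out.println(shouldEvict("192.168.99.1:8080", alwaysDown, 3));
        System.out.println(shouldEvict("192.168.99.1:8080", alwaysUp, 3));
    }
}
```

With the configuration shown later (retries: 5, retryTimeout: 5000), the watcher would tolerate four failed checks before evicting an instance on the fifth.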

By default, the spring-cloud-kubernetes-discovery-ext-watcher application checks endpoints registered in the same Kubernetes namespace as the application itself. This behaviour may be customized using configuration properties. We can set the target namespace with the property spring.cloud.kubernetes.watcher.targetNamespace, or enable watching for Endpoints labeled with external=true across all namespaces by setting the property spring.cloud.kubernetes.watcher.allNamespaces to true. We can also override some default retry properties for calling application health endpoints, like the number of retries (by default 3) or the connect timeout (by default 1000ms). The configuration settings need to be delivered to the watcher application as an application.yml or bootstrap.yml file.

spring:
  cloud:
    kubernetes:
      watcher:
        targetNamespace: test
        retries: 5
        retryTimeout: 5000

To deploy the spring-cloud-kubernetes-discovery-ext-watcher application on your Kubernetes cluster you just need to apply the following Deployment definition using the kubectl apply command.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: spring-cloud-discovery-watcher
spec:
  selector:
    matchLabels:
      app: spring-cloud-discovery-watcher
  template:
    metadata:
      labels:
        app: spring-cloud-discovery-watcher
    spec:
      containers:
      - name: watcher
        image: piomin/spring-cloud-discovery-watcher
        ports:
        - containerPort: 8080

Since that application uses Spring Cloud Kubernetes for accessing the master API, you need to grant it privileges to read objects like Service, Endpoints, or ConfigMap. For development purposes you can just assign the cluster-admin role to the default ServiceAccount in the target namespace.

$ kubectl create clusterrolebinding admin-external --clusterrole=cluster-admin --serviceaccount=external:default

In case you would like to override some default configuration settings, you should define an application.yml file and place it inside a ConfigMap. Since spring-cloud-kubernetes-discovery-ext-watcher uses Spring Cloud Kubernetes Config, we may take advantage of its integration with ConfigMap. To do that, just create a ConfigMap with the same metadata.name as the application name.

apiVersion: v1
kind: ConfigMap
metadata:
  name: api-test
data:
  application.yaml: |-
    spring:
      cloud:
        kubernetes:
          watcher:
            allNamespaces: true
            retries: 5
            retryTimeout: 5000

Here’s our sample deployment in the external namespace.

spring-boot-admin-on-kubernetes-watcher-deployment

Now, let's take a look at the logs generated by the spring-cloud-kubernetes-discovery-ext-watcher application. Before running it I started my sample application, which uses spring-cloud-kubernetes-discovery-ext-client, outside Kubernetes. It has been registered under the address 192.168.99.1:8080. As you can see in the following logs, in the beginning the watcher application was able to communicate with the sample application's Actuator endpoint. Then I killed the sample application. Since the watcher was unable to call the Actuator endpoint of the previously checked application, it finally removed that address from the Subsets section of the api-test Endpoints object.

spring-cloud-kubernetes-external-library-watcher-logs

Here's the Endpoints object after the removal of the address 192.168.99.1:8080.

spring-cloud-kubernetes-external-library-endpoint-after-remove

Summary

The repository with the client library and the watcher application is available on GitHub: https://github.com/piomin/spring-cloud-kubernetes-discovery-ext.git. The library spring-cloud-kubernetes-discovery-ext-client is available in the Maven Central Repository. The Docker image with the watcher application is available on Docker Hub: https://hub.docker.com/repository/docker/piomin/spring-cloud-discovery-watcher. You can also run it directly from the source code using Skaffold.


Best Practices For Microservices on Kubernetes https://piotrminkowski.com/2020/03/10/best-practices-for-microservices-on-kubernetes/ Tue, 10 Mar 2020 11:37:56 +0000

The post Best Practices For Microservices on Kubernetes appeared first on Piotr's TechBlog.

There are several best practices for building microservices architecture properly. You may find many articles about it online. One of them is my previous article Spring Boot Best Practices For Microservices. I focused there on the most important aspects that should be considered when running microservice applications built on top of Spring Boot on production. I didn’t assume there is any platform used for orchestration or management, but just a group of independent applications. In this article, I’m going to extend the list of already introduced best practices with some new rules dedicated especially to microservices deployed on the Kubernetes platform.
The first question is whether it makes any difference when you deploy your microservices on Kubernetes instead of running them independently without any platform. Well, actually yes and no… Yes, because now you have a platform that is responsible for running and monitoring your applications, and it introduces some rules of its own. No, because you still have a microservices architecture, a group of loosely coupled, independent applications, and you should not forget about it! In fact, many of the previously introduced best practices are still valid, while some of them need to be redefined a little. There are also some new, platform-specific rules, which should be mentioned.
One more thing needs to be explained before proceeding. This list of Kubernetes microservices best practices is built based on my experience in running microservices-based architecture on cloud platforms like Kubernetes. I didn't copy it from other articles or books. In my organization, we have already migrated our microservices from Spring Cloud (Eureka, Zuul, Spring Cloud Config) to OpenShift. We are continuously improving this architecture based on experience in maintaining it.

Example

The sample Spring Boot application that implements currently described Kubernetes microservices best practices is written in Kotlin. It is available on GitHub in repository sample-spring-kotlin-microservice under branch kubernetes: https://github.com/piomin/sample-spring-kotlin-microservice/tree/kubernetes.

1. Allow platform to collect metrics

I have also put a similar section in my article about best practices for Spring Boot. However, metrics are also one of the important Kubernetes microservices best practices. There, we were using InfluxDB as the target metrics store. Since our approach to gathering metrics data has changed after the migration to Kubernetes, I redefined the title of this point into Allow platform to collect metrics. The main difference between the current and previous approaches is in the way of collecting data. We now use Prometheus, because that process may be managed by the platform. InfluxDB is a push-based system, where your application actively pushes data into the monitoring system. Prometheus is a pull-based system, where the server periodically fetches the metrics values from the running application. So, our main responsibility at this point is to provide the endpoints on the application side for Prometheus.
Fortunately, it is very easy to provide metrics for Prometheus with Spring Boot. You need to include Spring Boot Actuator and the dedicated Micrometer library for integration with Prometheus.

<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
   <groupId>io.micrometer</groupId>
   <artifactId>micrometer-registry-prometheus</artifactId>
</dependency>

We should also enable exposing Actuator HTTP endpoints outside the application. You can enable a single endpoint dedicated to Prometheus or just expose all Actuator endpoints as shown below.

management.endpoints.web.exposure.include: '*'

After starting your application, the metrics endpoint is available by default under the path /actuator/prometheus.

best-practices-microservices-kubernetes-actuator

Assuming you run your application on Kubernetes, you need to deploy and configure Prometheus to scrape metrics from your pods. The configuration may be delivered as a Kubernetes ConfigMap. The prometheus.yml file should contain a scrape_configs section with the path of the endpoint serving metrics and the Kubernetes discovery settings. Prometheus tries to locate all application pods through Kubernetes Endpoints. The application should be labeled with app=sample-spring-kotlin-microservice and have a port named http exposed outside the container.

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus
  labels:
    name: prometheus
data:
  prometheus.yml: |-
    scrape_configs:
      - job_name: 'springboot'
        metrics_path: /actuator/prometheus
        scrape_interval: 5s
        kubernetes_sd_configs:
        - role: endpoints
          namespaces:
            names:
              - default

        relabel_configs:
          - source_labels: [__meta_kubernetes_service_label_app]
            separator: ;
            regex: sample-spring-kotlin-microservice
            replacement: $1
            action: keep
          - source_labels: [__meta_kubernetes_endpoint_port_name]
            separator: ;
            regex: http
            replacement: $1
            action: keep
          - source_labels: [__meta_kubernetes_namespace]
            separator: ;
            regex: (.*)
            target_label: namespace
            replacement: $1
            action: replace
          - source_labels: [__meta_kubernetes_pod_name]
            separator: ;
            regex: (.*)
            target_label: pod
            replacement: $1
            action: replace
          - source_labels: [__meta_kubernetes_service_name]
            separator: ;
            regex: (.*)
            target_label: service
            replacement: $1
            action: replace
          - source_labels: [__meta_kubernetes_service_name]
            separator: ;
            regex: (.*)
            target_label: job
            replacement: ${1}
            action: replace
          - separator: ;
            regex: (.*)
            target_label: endpoint
            replacement: http
            action: replace

The last step is to deploy Prometheus on Kubernetes. You should attach the ConfigMap with the Prometheus configuration to the Deployment via a Kubernetes mounted volume. After that you may set the location of the configuration file using the --config.file parameter: --config.file=/prometheus2/prometheus.yml.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus
  labels:
    app: prometheus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus:latest
          args:
            - "--config.file=/prometheus2/prometheus.yml"
            - "--storage.tsdb.path=/prometheus/"
          ports:
            - containerPort: 9090
              name: http
          volumeMounts:
            - name: prometheus-storage-volume
              mountPath: /prometheus/
            - name: prometheus-config-map
              mountPath: /prometheus2/
      volumes:
        - name: prometheus-storage-volume
          emptyDir: {}
        - name: prometheus-config-map
          configMap:
            name: prometheus

Now you can verify whether Prometheus has discovered your application running on Kubernetes by accessing the /targets endpoint.

best-practices-microservices-kubernetes-prometheus

2. Prepare logs in the right format

The approach to collecting logs is pretty similar to collecting metrics. Our application should not handle the process of sending logs by itself. It should just take care of properly formatting the logs sent to the output stream. Since Docker has a built-in logging driver for Fluentd, it is very convenient to use it as a log collector for applications running on Kubernetes. This means no additional agent is required in the container to push logs to Fluentd. Logs are shipped directly to the Fluentd service from STDOUT, and no additional log file or persistent storage is required. Fluentd tries to structure data as JSON to unify logging across different sources and destinations.
In order to format our logs as JSON readable by Fluentd, we may include the Logstash Logback Encoder library in our dependencies.

<dependency>
   <groupId>net.logstash.logback</groupId>
   <artifactId>logstash-logback-encoder</artifactId>
   <version>6.3</version>
</dependency>

Then we just need to set a default console log appender for our Spring Boot application in the file logback-spring.xml.

<configuration>
    <appender name="consoleAppender" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
    </appender>
    <logger name="jsonLogger" additivity="false" level="DEBUG">
        <appender-ref ref="consoleAppender"/>
    </logger>
    <root level="INFO">
        <appender-ref ref="consoleAppender"/>
    </root>
</configuration>

The logs are printed to STDOUT in the format visible below.

kubernetes-log-format
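LogstashEncoder emits one JSON object per line. A hand-rolled illustration of that shape follows; the field set and exact layout may differ between encoder versions, and the logger name is made up for the example:

```java
public class JsonLogLine {

    // Hand-written illustration of the single-line JSON shape that
    // logstash-logback-encoder produces; the real encoder adds more fields
    // (thread name, MDC entries, etc.) and proper JSON escaping.
    static String format(String timestamp, String level, String logger, String message) {
        return String.format(
                "{\"@timestamp\":\"%s\",\"message\":\"%s\",\"logger_name\":\"%s\",\"level\":\"%s\"}",
                timestamp, message, logger, level);
    }

    public static void main(String[] args) {
        // Logger name is illustrative, not taken from the sample project.
        System.out.println(format("2020-03-10T11:37:56.000Z", "INFO",
                "com.example.SampleService", "Sending message"));
    }
}
```

Because every line is a complete JSON document, Fluentd can parse it without any multiline handling and index each field separately in Elasticsearch.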

It is very simple to install Fluentd, Elasticsearch, and Kibana on Minikube. The disadvantage of this approach is that we are installing older versions of these tools.

$ minikube addons enable efk
* efk was successfully enabled
$ minikube addons enable logviewer
* logviewer was successfully enabled

After enabling the efk and logviewer addons, Kubernetes pulls and starts all the required pods as shown below.

best-practices-microservices-kubernetes-pods-logging

Thanks to the logstash-logback-encoder library we may automatically create logs compatible with Fluentd, including MDC fields. Here's a screen from Kibana that shows logs from our test application.

best-practices-microservices-kubernetes-kibana

Optionally, you can add my library for logging requests/responses in Spring Boot applications.

<dependency>
   <groupId>com.github.piomin</groupId>
   <artifactId>logstash-logging-spring-boot-starter</artifactId>
   <version>1.2.2.RELEASE</version>
</dependency>

3. Implement both liveness and readiness health checks

It is important to understand the difference between liveness and readiness probes in Kubernetes. If these probes are not implemented carefully, they can degrade the overall operation of a service, for example by causing unnecessary restarts. The liveness probe is used to decide whether to restart the container or not. If an application is unavailable for any reason, restarting the container can sometimes make sense. On the other hand, the readiness probe is used to decide if a container can handle incoming traffic. If a pod has been recognized as not ready, it is removed from load balancing. A failing readiness probe does not result in a pod restart. The most typical liveness or readiness probe for web applications is realized via an HTTP endpoint.
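In the Deployment manifest, the two probes then point at different HTTP paths. The sketch below shows what such a container-spec fragment could look like; all paths, ports, and timings here are illustrative, not taken from the sample project:

```yaml
# Illustrative probe settings for the container spec of a Deployment.
livenessProbe:
  httpGet:
    path: /actuator/liveness   # simple check, no external systems involved
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
  failureThreshold: 3
readinessProbe:
  httpGet:
    path: /actuator/health     # full Actuator check incl. DB and broker
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 5
```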
In a typical web application running outside a platform like Kubernetes, you won't distinguish between liveness and readiness health checks. That's why most web frameworks provide only a single built-in health check implementation. For a Spring Boot application you may easily enable a health check by including Spring Boot Actuator in your dependencies. The important thing about the Actuator health check is that it may behave differently depending on the integrations between your application and third-party systems. For example, if you define a Spring data source for connecting to a database, or declare a connection to a message broker, the health check may automatically include such validation through auto-configuration. Therefore, if you use the default Spring Actuator health check implementation as the liveness probe endpoint, it may result in unnecessary restarts if the application is unable to connect to the database or message broker. Since such behavior is not desired, I suggest implementing a very simple liveness endpoint that just verifies the availability of the application without checking connections to other external systems.
Adding a custom implementation of a health check is not very hard with Spring Boot. There are several ways to do that. One of them is visible below. We are using the mechanism provided by Spring Boot Actuator. It is worth noting that we don't override the default health check, but add another, custom implementation. The following implementation just checks whether the application is able to handle incoming requests.

import org.springframework.boot.actuate.endpoint.annotation.DeleteOperation
import org.springframework.boot.actuate.endpoint.annotation.Endpoint
import org.springframework.boot.actuate.endpoint.annotation.ReadOperation
import org.springframework.boot.actuate.endpoint.annotation.Selector
import org.springframework.boot.actuate.endpoint.annotation.WriteOperation
import org.springframework.boot.actuate.health.Health
import org.springframework.stereotype.Component

@Component
@Endpoint(id = "liveness")
class LivenessHealthEndpoint {

    // Always reports UP - reaching this endpoint proves the app can handle requests
    @ReadOperation
    fun health(): Health = Health.up().build()

    @ReadOperation
    fun name(@Selector name: String): String = "liveness"

    // Write and delete operations are intentionally no-ops
    @WriteOperation
    fun write(@Selector name: String) {
    }

    @DeleteOperation
    fun delete(@Selector name: String) {
    }

}

In turn, the default Spring Boot Actuator health check may be the right solution for a readiness probe. Assuming your application connects to a Postgres database and a RabbitMQ message broker, you should add the following dependencies to your Maven pom.xml.

<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-amqp</artifactId>
</dependency>
<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
   <groupId>org.postgresql</groupId>
   <artifactId>postgresql</artifactId>
   <scope>runtime</scope>
</dependency>

Now, add the following property to your application.yml. It enables displaying detailed information in the auto-configured Actuator /health endpoint.

management:
  endpoint:
    health:
      show-details: always

Finally, let's call /actuator/health to see the detailed result. As you can see in the picture below, the health check returns information about the Postgres and RabbitMQ connections.

best-practices-microservices-kubernetes-readiness
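
For reference, with show-details enabled the response of /actuator/health looks roughly like this (an illustrative payload; the exact structure and component names depend on the Spring Boot version and configured integrations):

```json
{
  "status": "UP",
  "components": {
    "db": {
      "status": "UP",
      "details": { "database": "PostgreSQL", "result": 1 }
    },
    "rabbit": {
      "status": "UP",
      "details": { "version": "3.8.2" }
    },
    "diskSpace": { "status": "UP" }
  }
}
```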

There is another aspect of using liveness and readiness probes in your web application, related to thread pooling. In a standard web container like Tomcat, each request is handled by a thread from the HTTP thread pool. If you process each request in the main thread and you have some long-running tasks in your application, you may block all available HTTP threads. If your liveness probe then fails several times in a row, the application pod will be restarted. Therefore, you should consider executing long-running tasks on a separate thread pool. Here's an example of an HTTP endpoint implementation with DeferredResult and Kotlin coroutines.

@PostMapping("/long-running")
fun addLongRunning(@RequestBody person: Person): DeferredResult<Person> {
   // The result is completed asynchronously, so the HTTP thread is released immediately
   val result: DeferredResult<Person> = DeferredResult()
   GlobalScope.launch {
      logger.info("Person long-running: {}", person)
      delay(10000L) // simulate a long-running task
      result.setResult(repository.save(person))
   }
   return result
}
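
The same idea – freeing the request thread and finishing the work on a dedicated pool – can be sketched in plain Java with an ExecutorService (names here are illustrative, not part of the application above):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class OffloadDemo {

    // A dedicated pool for long-running work, kept separate from request-handling threads
    static final ExecutorService longRunningPool = Executors.newFixedThreadPool(4);

    // Returns immediately with a Future, just as the endpoint above returns a DeferredResult
    static Future<String> handleRequest() {
        return longRunningPool.submit(() -> {
            Thread.sleep(200); // simulate a slow task, e.g. repository.save(...)
            return "saved";
        });
    }

    public static void main(String[] args) throws Exception {
        Future<String> result = handleRequest();
        // The calling thread is already free here; the task is still running
        System.out.println("handler returned immediately: " + !result.isDone());
        // Block only where the value is actually needed
        System.out.println(result.get());
        longRunningPool.shutdown();
    }
}
```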

4. Consider your integrations

Our applications can hardly ever exist without external systems like databases, message brokers, or other applications. There are two aspects of integration with third-party systems that should be carefully considered: connection settings and auto-creation of resources.
Let's start with connection settings. As you probably remember, in the previous section we used the default implementation of the Spring Boot Actuator /health endpoint as a readiness probe. However, if you leave the default connection settings for Postgres and RabbitMQ, each call of the readiness probe takes a long time when they are unavailable. That's why I suggest decreasing these timeouts to lower values, as shown below.

spring:
  application:
    name: sample-spring-kotlin-microservice
  datasource:
    url: jdbc:postgresql://postgres:5432/postgres
    username: postgres
    password: postgres123
    hikari:
      connection-timeout: 2000
      initialization-fail-timeout: 0
  jpa:
    database-platform: org.hibernate.dialect.PostgreSQLDialect
  rabbitmq:
    host: rabbitmq
    port: 5672
    connection-timeout: 2000

Besides properly configured connection timeouts, you should also guarantee the auto-creation of resources required by the application. For example, if you use a RabbitMQ queue for asynchronous messaging between two applications, you should guarantee that the queue is created on startup if it does not exist. To do that, first declare the queue – usually on the listener side.

@Configuration
class RabbitMQConfig {

    @Bean
    fun myQueue(): Queue {
        return Queue("myQueue", false)
    }

}

Here's a listener bean with the receiving method implementation.

@Component
class PersonListener {

    val logger: Logger = LoggerFactory.getLogger(PersonListener::class.java)

    @RabbitListener(queues = ["myQueue"])
    fun listen(msg: String) {
        logger.info("Received: {}", msg)
    }

}

A similar case is database integration. First, you should ensure that your application starts even if the connection to the database fails. That's why I declared PostgreSQLDialect explicitly – it is required when the application is not able to connect to the database on startup. Moreover, each change in the entity model should be applied to the tables before application startup.
Fortunately, Spring Boot has auto-configured support for the popular database schema migration tools Liquibase and Flyway. To enable Liquibase we just need to include the following dependency in the Maven pom.xml.

<dependency>
   <groupId>org.liquibase</groupId>
   <artifactId>liquibase-core</artifactId>
</dependency>

Then you just need to create a changelog and put it in the default location db/changelog/db.changelog-master.yaml. Here's a sample Liquibase YAML changelog for creating the person table.

databaseChangeLog:
  - changeSet:
      id: 1
      author: piomin
      changes:
        - createTable:
            tableName: person
            columns:
              - column:
                  name: id
                  type: int
                  autoIncrement: true
                  constraints:
                    primaryKey: true
                    nullable: false
              - column:
                  name: name
                  type: varchar(50)
                  constraints:
                    nullable: false
              - column:
                  name: age
                  type: int
                  constraints:
                    nullable: false
              - column:
                  name: gender
                  type: smallint
                  constraints:
                    nullable: false
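
For reference, the change set above corresponds roughly to the following Postgres DDL (a sketch – the SQL actually generated by Liquibase may differ in details such as constraint naming):

```sql
CREATE TABLE person (
    id SERIAL NOT NULL,
    name VARCHAR(50) NOT NULL,
    age INT NOT NULL,
    gender SMALLINT NOT NULL,
    CONSTRAINT pk_person PRIMARY KEY (id)
);
```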

5. Use Service Mesh

If you are building a microservices architecture outside Kubernetes, mechanisms like load balancing, circuit breaking, fallbacks, or retries are realized on the application side. Popular cloud-native frameworks like Spring Cloud simplify the implementation of these patterns in your application and reduce it to adding a dedicated library to your project. However, if you migrate your microservices to Kubernetes, you should no longer use these libraries for traffic management – doing so becomes an anti-pattern. Traffic management in communication between microservices should be delegated to the platform. On Kubernetes, this approach is known as a service mesh. One of the most important Kubernetes microservices best practices is to use dedicated software for building a service mesh.
Since Kubernetes was not originally designed with microservices in mind, it does not provide any built-in mechanism for advanced management of traffic between many applications. However, there are additional solutions dedicated to traffic management that can easily be installed on Kubernetes. One of the most popular of them is Istio. Besides traffic management, it also solves problems related to security, monitoring, tracing, and metrics collection.
Istio can easily be installed on your cluster or on a standalone development instance like Minikube. After downloading it, just run the following command.

$ istioctl manifest apply

Istio components need to be injected into a deployment manifest. After that, we can define traffic rules using YAML manifests. Istio gives many interesting configuration options. The following example shows how to inject faults into an existing route. A fault can be either a delay or an abort, and we can define the percentage of affected requests using the percent field for both types. In the Istio resource below I have defined a 2-second delay for every single request sent to the Service account-service.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: account-service
spec:
  hosts:
    - account-service
  http:
  - fault:
      delay:
        fixedDelay: 2s
        percent: 100
    route:
    - destination:
        host: account-service
        subset: v1

Besides the VirtualService, we also need to define a DestinationRule for account-service. It is really simple – we have just defined the version label of the target service.

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: account-service
spec:
  host: account-service
  subsets:
  - name: v1
    labels:
      version: v1

6. Be open for framework-specific solutions

There are many interesting tools and solutions around Kubernetes that may help you run and manage applications. However, you should also not forget about the interesting tools offered by the framework you use. Let me give you some examples. One of them is Spring Boot Admin. It is a useful tool designed for monitoring Spring Boot applications registered in a single discovery mechanism. Assuming you are running microservices on Kubernetes, you may also install Spring Boot Admin there.
There is another interesting project within Spring Cloud – Spring Cloud Kubernetes. It provides some useful features that simplify the integration between a Spring Boot application and Kubernetes. One of them is discovery across all namespaces. If you use that feature together with Spring Boot Admin, you may easily create a powerful tool that is able to monitor all Spring Boot microservices running on your Kubernetes cluster. For implementation details you may refer to my article Spring Boot Admin on Kubernetes.
Sometimes you may use Spring Boot integrations with third-party tools to easily deploy such a solution on Kubernetes without building a separate Deployment. You can even build a cluster of multiple instances. This approach may be used for products that can be embedded in a Spring Boot application, for example RabbitMQ or Hazelcast (a popular in-memory data grid). If you are interested in more details about running a Hazelcast cluster on Kubernetes using this approach, please refer to my article Hazelcast with Spring Boot on Kubernetes.

7. Be prepared for a rollback

Kubernetes provides a convenient way to roll back an application to an older version based on ReplicaSet and Deployment objects. By default, Kubernetes keeps 10 previous ReplicaSets and lets you roll back to any of them. However, one thing needs to be pointed out: a rollback does not include configuration stored inside ConfigMaps and Secrets. Sometimes it is desirable to roll back not only the application binaries, but also the configuration.
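The number of retained ReplicaSets can be tuned per Deployment with the spec.revisionHistoryLimit field:

```yaml
spec:
  revisionHistoryLimit: 10 # how many old ReplicaSets are kept for rollbacks (10 is the default)
```
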
Fortunately, Spring Boot gives us really great possibilities for managing externalized configuration. We may keep configuration files inside the application and also load them from an external location. On Kubernetes we may use ConfigMaps and Secrets for defining Spring configuration files. The following ConfigMap definition creates the application-rollbacktest.yml Spring configuration file containing only a single property. This configuration is loaded by the application only if the Spring profile rollbacktest is active.

apiVersion: v1
kind: ConfigMap
metadata:
  name: sample-spring-kotlin-microservice
data:
  application-rollbacktest.yml: |-
    property1: 123456

The ConfigMap is included in the application through a mounted volume.

spec:
  containers:
  - name: sample-spring-kotlin-microservice
    image: piomin/sample-spring-kotlin-microservice
    ports:
    - containerPort: 8080
      name: http
    volumeMounts:
    - name: config-map-volume
      mountPath: /config/
  volumes:
  - name: config-map-volume
    configMap:
      name: sample-spring-kotlin-microservice

We also have application.yml on the classpath. The first version contains only a single property.

property1: 123

In the second version we activate the rollbacktest profile. Since a profile-specific configuration file has higher priority than application.yml, the value of property1 is overridden with the value taken from application-rollbacktest.yml.

property1: 123
spring.profiles.active: rollbacktest

Let’s test the mechanism using a simple HTTP endpoint that prints the value of the property.

@RestController
@RequestMapping("/properties")
class TestPropertyController(@Value("\${property1}") val property1: String) {

    @GetMapping
    fun printProperty1(): String  = property1
    
}

Let's take a look at how to roll back a deployment. First, let's see how many revisions we have.

$ kubectl rollout history deployment/sample-spring-kotlin-microservice
deployment.apps/sample-spring-kotlin-microservice
REVISION  CHANGE-CAUSE
1         
2         
3         

Now, we call the /properties endpoint of the current deployment, which returns the value of property1. Since the rollbacktest profile is active, it returns the value from application-rollbacktest.yml.

$ curl http://localhost:8080/properties
123456

Let's roll back to the previous revision.

$ kubectl rollout undo deployment/sample-spring-kotlin-microservice --to-revision=2
deployment.apps/sample-spring-kotlin-microservice rolled back

As you can see below, revision 2 is no longer listed – it has been redeployed as the newest revision 4.

$ kubectl rollout history deployment/sample-spring-kotlin-microservice
deployment.apps/sample-spring-kotlin-microservice
REVISION  CHANGE-CAUSE
1         
3         
4         

In this version of the application the rollbacktest profile wasn't active, so the value of property1 is taken from application.yml.

$ curl http://localhost:8080/properties
123

The post Best Practices For Microservices on Kubernetes appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2020/03/10/best-practices-for-microservices-on-kubernetes/feed/ 1 7798
Spring Boot Admin on Kubernetes https://piotrminkowski.com/2020/02/18/spring-boot-admin-on-kubernetes/ https://piotrminkowski.com/2020/02/18/spring-boot-admin-on-kubernetes/#comments Tue, 18 Feb 2020 07:43:26 +0000 http://piotrminkowski.com/?p=7723 The main goal of this article is to show how to monitor Spring Boot applications running on Kubernetes with Spring Boot Admin. I have already written about Spring Boot Admin more than two years ago in the article Monitoring Microservices With Spring Boot Admin. You can find there a detailed description of its main features. […]

The post Spring Boot Admin on Kubernetes appeared first on Piotr's TechBlog.

]]>
The main goal of this article is to show how to monitor Spring Boot applications running on Kubernetes with Spring Boot Admin. I have already written about Spring Boot Admin more than two years ago in the article Monitoring Microservices With Spring Boot Admin. You can find a detailed description of its main features there. During this time some new features have been added, and the application's look has been modernized. But the working principles have not changed, so you can still refer to my previous article to understand the main concepts behind Spring Boot Admin.
I was pretty surprised that there is no comprehensive article about running Spring Boot Admin on Kubernetes available online. That's why I decided to write this tutorial. Today I'm going to show you how to use Spring Cloud Kubernetes with Spring Boot Admin to enable monitoring for all Spring Boot applications running across the whole cluster. That's a challenging task, but fortunately it is not very hard with Spring Cloud Kubernetes.

Example

As usual, the source code with the sample applications is available on GitHub in the repository https://github.com/piomin/sample-spring-microservices-kubernetes.git. It has been used as the example for some other articles on my blog, which may be helpful to better understand the idea behind Spring Cloud Kubernetes. One of them is Microservices with Spring Cloud Kubernetes.
Anyway, I'm using three sample Spring Boot applications from this repository for demo purposes. Each application is deployed in a different namespace inside Minikube. Spring Boot Admin should monitor only Spring Boot applications, with the assumption that there are different types of applications and solutions running on Kubernetes. All three applications are optimized to be built with Skaffold and Jib, so the only thing you have to do is run skaffold dev in the directory of each application.

The picture below illustrates the architecture of the applications for this article. The application employee-service is deployed inside namespace a, department-service inside namespace b, and organization-service inside namespace c. Spring Boot Admin, also built with Spring Boot, is deployed in the default namespace.

spring-boot-admin-on-kubernetes.png

Dependencies

Let's begin with dependencies. Spring Boot Admin Server is available as a starter library; the current stable version is 2.2.2. We also need to include the Spring Boot Web starter. Because our Spring Boot Admin application integrates with Kubernetes discovery, we should include Spring Cloud Kubernetes Discovery. If you also use Spring Cloud Kubernetes for direct integration with ConfigMaps and Secrets, you may just include the spring-cloud-starter-kubernetes-all starter instead of the starter dedicated only to discovery. Here's the list of dependencies inside our sample module admin-server with embedded Spring Boot Admin.

<parent>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-parent</artifactId>
   <version>2.2.4.RELEASE</version>
</parent>
<dependencies>
   <dependency>
      <groupId>de.codecentric</groupId>
      <artifactId>spring-boot-admin-starter-server</artifactId>
      <version>2.2.2</version>
   </dependency>
   <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-web</artifactId>
   </dependency>
   <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-security</artifactId>
   </dependency>
   <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-starter-kubernetes-all</artifactId>
   </dependency>
</dependencies>
<dependencyManagement>
   <dependencies>
      <dependency>
         <groupId>org.springframework.cloud</groupId>
         <artifactId>spring-cloud-dependencies</artifactId>
         <version>Hoxton.RELEASE</version>
         <type>pom</type>
         <scope>import</scope>
      </dependency>
   </dependencies>
</dependencyManagement>

Spring Boot Admin Server and Kubernetes discovery

We need to add some annotations to the Spring Boot main class to enable Spring Boot Admin with Kubernetes discovery. The first of them is, of course, @EnableAdminServer (1). With Spring Cloud Kubernetes we still need to add the @EnableDiscoveryClient annotation to enable a DiscoveryClient based on the Kubernetes API (2). The last annotation, @EnableScheduling, is not so obvious, but also required (3). Without it, the discovery client won't periodically call the Kubernetes API to refresh the list of running services, but will do it only once on startup. Since we always want to have the current list of pods (for example after scaling up the number of application instances), we need to enable a scheduler responsible for watching the service catalog for changes and updating the DiscoveryClient implementation accordingly.

@SpringBootApplication
@EnableAdminServer // (1)
@EnableDiscoveryClient // (2)
@EnableScheduling // (3)
public class AdminApplication {

   public static void main(String[] args) {
      SpringApplication.run(AdminApplication.class, args);
   }

}

Now, the crucial part – configuration. The two properties visible below do the whole “magic”. The first of them, spring.cloud.kubernetes.discovery.all-namespaces, enables the DiscoveryClient for all namespaces (1). After enabling it, Spring Boot Admin would be able to monitor all the applications across the whole cluster! Great, but since we would like it to monitor only Spring Boot applications, we need to be able to filter them. Here Spring Cloud Kubernetes comes with another smart solution. The property spring.cloud.kubernetes.discovery.service-labels allows us to define a set of labels used for filtering the list of services fetched from the Kubernetes API (2). In simple words, we fetch only those Kubernetes Services that have the labels with the values defined on the Spring Boot Admin server side. Here's the application.yml for admin-service defined inside a ConfigMap.

kind: ConfigMap
apiVersion: v1
metadata:
  name: admin
data:
  application.yml: |-
    spring:
      cloud:
        kubernetes:
          discovery:
            all-namespaces: true # (1)
            service-labels:
              spring-boot: true # (2)

Implementation on the client side

Thanks to using a DiscoveryClient on the Spring Boot Admin Server, we don't have to include any additional Spring Boot Admin Client library in our sample applications. Of course, they also don't have to include Spring Cloud Kubernetes, since Kubernetes manages pod registration by itself. In fact, the only thing we have to do is add a label to the Kubernetes Service that matches the filtering condition set on the server side. Here's an example Service definition for department-service.

apiVersion: v1
kind: Service
metadata:
  name: department
  labels:
    app: department
    spring-boot: "true"
spec:
  ports:
  - port: 8080
    protocol: TCP
  selector:
    app: department

Spring Boot Admin is based on Spring Boot Actuator endpoints, so each application should at least include that project in its Maven dependencies.

<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

By default, not all the endpoints are exposed outside the application. Therefore, we need to expose them by setting the property management.endpoints.web.exposure.include to *. It is also worth showing application details in the health endpoint; that step is optional. Here's our application.yml for department-service.

spring:
  application:
    name: department
management:
  endpoints:
    web:
      exposure:
        include: "*"
  endpoint:
    health:
      show-details: ALWAYS

It's also worth generating a build-info.properties file during the build to provide more details for the Actuator /info endpoint.

<plugin>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-maven-plugin</artifactId>
   <executions>
      <execution>
         <goals>
            <goal>build-info</goal>
         </goals>
      </execution>
   </executions>
</plugin>
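
With the build-info goal configured, the /actuator/info endpoint returns the build metadata; an illustrative response (values here are made up for the example):

```json
{
  "build": {
    "artifact": "department",
    "name": "department",
    "group": "pl.piomin.services",
    "version": "1.0-SNAPSHOT",
    "time": "2020-02-18T07:43:26.000Z"
  }
}
```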

Running Spring Boot Admin on Kubernetes

First, we should start Minikube. Because we are running several applications and MongoDB, I suggest increasing the default memory limit to 4 GB.

$ minikube start --vm-driver=virtualbox --memory='4000mb'

Let's start by creating all the required namespaces on Minikube. Besides the default namespace, we also need namespaces a, b, and c, as described in the Example section.

$ kubectl create namespace a
namespace/a created
$ kubectl create namespace b
namespace/b created
$ kubectl create namespace c
namespace/c created

Spring Boot Admin uses Spring Cloud Kubernetes, which requires extra privileges in order to access the Kubernetes API. For development purposes only, let's bind the cluster-admin role to the default ServiceAccount.

$ kubectl create clusterrolebinding admin-default --clusterrole=cluster-admin --serviceaccount=default:default
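
For anything beyond local development, prefer a narrower role than cluster-admin. Here is a sketch of a ClusterRole granting only the read access discovery needs (the exact resource list may vary with the Spring Cloud Kubernetes version):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: spring-cloud-discovery
rules:
- apiGroups: [""]
  resources: ["services", "endpoints", "pods"]
  verbs: ["get", "list", "watch"]   # read-only access is enough for discovery
```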

Assuming we have already successfully deployed all the sample microservices and the Spring Boot Admin Server application on Minikube, you will get the following list of Services across all the namespaces. The option -L spring-boot prints the value of the spring-boot label for all services.

kubernetes-svc

We may also take a look at the list of pods. I have set two instances for the employee deployment.

kubernetes-pods

As you can see in the picture below, Spring Boot Admin manages only the Kubernetes Services labeled with spring-boot=true. For example, mongodb, kubernetes, and admin inside the default namespace were omitted.

spring-boot-admin-on-kubernetes-main-page

Spring Boot Admin provides some useful features for managing Spring Boot applications. It is worth considering using it on Kubernetes to monitor the whole set of your microservices distributed across many namespaces. The following picture illustrates the details page for a single application instance (pod) running on Minikube.

spring-boot-admin-on-kubernetes-details

The post Spring Boot Admin on Kubernetes appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2020/02/18/spring-boot-admin-on-kubernetes/feed/ 13 7723
Spring Boot Best Practices for Microservices https://piotrminkowski.com/2019/12/06/spring-boot-best-practices-for-microservices/ https://piotrminkowski.com/2019/12/06/spring-boot-best-practices-for-microservices/#comments Fri, 06 Dec 2019 11:14:24 +0000 http://piotrminkowski.com/?p=7514 In this article I’m going to propose my list of “golden rules” for building Spring Boot applications, which are a part of a microservices-based system. I’m basing on my experience in migrating monolithic SOAP applications running on JEE servers into REST-based small applications built on top of Spring Boot. This list of Spring Boot best […]

The post Spring Boot Best Practices for Microservices appeared first on Piotr's TechBlog.

]]>
In this article I'm going to propose my list of “golden rules” for building Spring Boot applications that are part of a microservices-based system. I'm basing it on my experience in migrating monolithic SOAP applications running on JEE servers to REST-based small applications built on top of Spring Boot. This list of Spring Boot best practices assumes you are running many microservices in production under heavy incoming traffic. Let's begin.

1. Collect metrics

It is just amazing how metrics visualization can change the approach to systems monitoring in an organization. After setting up monitoring in Grafana, we are able to recognize more than 90% of the bigger problems in our systems before they are reported by customers to our support team. Thanks to two monitors with plenty of diagrams and alerts, we may react much faster than before. In a microservices-based architecture, metrics become even more important than for monoliths.
The good news for us is that Spring Boot comes with a built-in mechanism for collecting the most important metrics. In fact, we just need to set some configuration properties to expose a predefined set of metrics provided by Spring Boot Actuator. To use it we need to include the Actuator starter as a dependency:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

To enable the metrics endpoint we have to set the property management.endpoint.metrics.enabled to true. Then you may check out the full list of generated metrics by calling the endpoint GET /actuator/metrics. One of the most important metrics for us is http.server.requests, which provides statistics on the number of incoming requests and the response time. It is automatically tagged with the method type (POST, GET, etc.), HTTP status, and URI.
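Calling GET /actuator/metrics/http.server.requests returns the aggregated measurements together with the available tags; here is a trimmed, illustrative response:

```json
{
  "name": "http.server.requests",
  "measurements": [
    { "statistic": "COUNT", "value": 1024 },
    { "statistic": "TOTAL_TIME", "value": 52.7 },
    { "statistic": "MAX", "value": 0.92 }
  ],
  "availableTags": [
    { "tag": "method", "values": ["GET", "POST"] },
    { "tag": "status", "values": ["200", "404"] },
    { "tag": "uri", "values": ["/persons", "/persons/{id}"] }
  ]
}
```
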
Metrics have to be stored somewhere. The most popular tools for that are InfluxDB and Prometheus. They represent two different models of collecting data: Prometheus periodically retrieves data from an endpoint exposed by the application, while InfluxDB provides a REST API that has to be called by the application. The integration with these two tools (and several others) is realized with the Micrometer library. To enable support for InfluxDB we have to include the following dependency.

<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-registry-influx</artifactId>
</dependency>

We also have to provide at least the URL and the Influx database name inside the application.yml file.

management:
  metrics:
    export:
      influx:
        db: springboot
        uri: http://192.168.99.100:8086

To enable the Prometheus HTTP endpoint we first need to include the appropriate Micrometer module and also set the property management.endpoint.prometheus.enabled to true.

<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-registry-prometheus</artifactId>
</dependency>

By default, Prometheus tries to collect data from the defined target endpoint once a minute. The rest of the configuration has to be provided on the Prometheus side. The scrape_configs section is responsible for specifying the set of targets and the parameters describing how to connect to them.

scrape_configs:
  - job_name: 'springboot'
    metrics_path: '/actuator/prometheus'
    static_configs:
    - targets: ['person-service:2222']

Sometimes it is useful to provide additional tags for metrics, especially if we have many instances of a single microservice logging to a single Influx database. Here's a sample of tagging for applications running on Kubernetes.

@Configuration
class ConfigurationMetrics {

    @Value("\${spring.application.name}")
    lateinit var appName: String
    @Value("\${NAMESPACE:default}")
    lateinit var namespace: String
    @Value("\${HOSTNAME:default}")
    lateinit var hostname: String

    @Bean
    fun tags(): MeterRegistryCustomizer<InfluxMeterRegistry> {
        return MeterRegistryCustomizer { registry ->
            registry.config().commonTags("appName", appName).commonTags("namespace", namespace).commonTags("pod", hostname)
        }
    }

}

Here's a diagram from Grafana created for the http.server.requests metric of a single application.

spring-boot-best-practices-metrics

2. Don’t forget about logging

Logging is something that is not very important during development, but is the key point during maintenance. It is worth remembering that within the organization your application will be judged by the quality of its logs. Usually, an application is maintained by the support team, so your logs should be meaningful. Don't try to put everything there; only the most important events should be logged.
It is also important to use the same logging standard for all the microservices. For example, if you are logging information in JSON format, do the same for every single application. If you use the tag appName for indicating the application name, or instanceId to distinguish different instances of the same application, do it everywhere. Why? You usually want to store the logs collected from all microservices in a single, central place. The most popular tool (or rather collection of tools) for that is the Elastic Stack (ELK). To take advantage of storing logs in a central place, you should ensure that the query criteria and response structure are the same for all the applications, especially since you will correlate the logs between different microservices. How? By using an external library. I can recommend my library for Spring Boot logging. To use it you should include it in your dependencies.

<dependency>
  <groupId>com.github.piomin</groupId>
  <artifactId>logstash-logging-spring-boot-starter</artifactId>
  <version>1.2.2.RELEASE</version>
</dependency>

This library will force you to use some good logging practices and automatically integrates with Logstash (one of the three ELK tools, responsible for collecting logs). Its main features are:

  • an ability to log all incoming HTTP requests and outgoing HTTP responses with full body, and send those logs to Logstash with the proper tags indicating calling method name or response HTTP status
  • it is able to calculate and store an execution time for each request
  • an ability to generate and propagate correlationId for downstream services calling with Spring RestTemplate
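
The correlationId idea from the last point can be sketched in plain Java, independently of the library (the header name and helper below are hypothetical): reuse the incoming id if present, otherwise generate a new one, and attach it to every outgoing call.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

public class CorrelationDemo {

    // Hypothetical header name; the starter uses its own convention
    static final String HEADER = "X-Correlation-Id";

    // Reuse the incoming id if present, otherwise start a new correlation chain
    static Map<String, String> withCorrelation(Map<String, String> headers, String incomingId) {
        Map<String, String> out = new HashMap<>(headers);
        out.put(HEADER, incomingId != null ? incomingId : UUID.randomUUID().toString());
        return out;
    }

    public static void main(String[] args) {
        // The first service receives no id and generates one
        Map<String, String> first = withCorrelation(new HashMap<>(), null);
        // The downstream call propagates the same id, so both log entries can be correlated
        Map<String, String> second = withCorrelation(new HashMap<>(), first.get(HEADER));
        System.out.println(first.get(HEADER).equals(second.get(HEADER)));
    }
}
```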

To enable sending logs to Logstash we should at least provide its address and set the property logging.logstash.enabled to true.

logging.logstash:
  enabled: true
  url: 192.168.99.100:5000

After including the logstash-logging-spring-boot-starter library you may take advantage of log tagging in Logstash. Here's a screenshot from Kibana showing a single response log entry.

logstash-2

We may also include the Spring Cloud Sleuth library in our dependencies.

<dependency> 
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>

Spring Cloud Sleuth propagates headers compatible with Zipkin – a popular tool for distributed tracing. Its main features are:

  • adding trace (for correlating requests) and span IDs to the Slf4j MDC
  • recording timing information to aid in latency analysis
  • modifying the log entry pattern to add information such as additional MDC fields
  • integrating with other Spring components like OpenFeign, RestTemplate, or Spring Cloud Netflix Zuul
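With Sleuth on the classpath, the default log pattern gains an extra section containing the application name, trace id, span id, and export flag, so a log entry looks roughly like this (values are illustrative):

```
2019-12-06 10:15:30.123  INFO [person-service,5df07b3fb9a2f2a7,9a2f2a75df07b3fb,true] 12345 --- [nio-8080-exec-1] p.p.s.PersonController : Person find: id=1
```

All entries produced while handling a single request share the same trace id, which makes it easy to correlate them across different microservices.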

3. Make your API usable

In most cases, your application will be called by other applications through a REST-based API. Therefore, it is worth taking care of proper and clear documentation. The documentation should be generated along with the code. Of course, there are some tools for that. One of the most popular of them is Swagger. You can easily integrate Swagger 2 with your Spring Boot application using the SpringFox project. In order to expose a Swagger HTML site with API documentation we need to include the following dependencies. The first library is responsible for generating the Swagger descriptor from the Spring MVC controller code, while the second embeds Swagger UI to display a representation of the Swagger descriptor in your web browser.

<dependency>
   <groupId>io.springfox</groupId>
   <artifactId>springfox-swagger2</artifactId>
   <version>2.9.2</version>
</dependency>
<dependency>
   <groupId>io.springfox</groupId>
   <artifactId>springfox-swagger-ui</artifactId>
   <version>2.9.2</version>
</dependency>

That's not all. We also have to provide some beans to customize the default Swagger generation behaviour. It should document only the methods implemented inside our controllers, and not, for example, the endpoints provided by Spring Boot automatically, like /actuator/*. We may also customize the UI appearance by defining a UiConfiguration bean.

@Configuration
@EnableSwagger2
public class ConfigurationSwagger {

    @Autowired
    Optional<BuildProperties> build;

    @Bean
    public Docket api() {
        String version = "1.0.0";
        if (build.isPresent())
            version = build.get().getVersion();
        return new Docket(DocumentationType.SWAGGER_2)
                .apiInfo(apiInfo(version))
                .select()
                .apis(RequestHandlerSelectors.any())
                .paths(PathSelectors.regex("(/components.*)"))
                .build()
                .useDefaultResponseMessages(false)
                .forCodeGeneration(true);
    }

    @Bean
    public UiConfiguration uiConfig() {
        return UiConfigurationBuilder.builder().docExpansion(DocExpansion.LIST).build();
    }

    private ApiInfo apiInfo(String version) {
        return new ApiInfoBuilder()
                .title("API - Components Service")
                .description("Managing Components.")
                .version(version)
                .build();
    }
}

Here’s an example of Swagger 2 UI for a single microservice.

spring-boot-best-practices-swagger

The next concern is to define the same REST API guideline for all microservices. If you build the APIs of your microservices consistently, it is much simpler to integrate with them for both external and internal clients. The guideline should contain instructions on how to build your API, which headers need to be set on the request and response, how to generate error codes, etc. Such a guideline should be shared with all developers and vendors in your organization. For a more detailed explanation of generating Swagger documentation for Spring Boot microservices, including exposing it for all the applications on an API gateway, you may refer to my article Microservices API Documentation with Swagger2.
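For instance, such a guideline could standardize the error response body returned by all services. Here's a purely hypothetical example of what a guideline entry may prescribe (field names are illustrative, not taken from any real standard):

```json
{
  "code": "PERSON_NOT_FOUND",
  "message": "Person with id=1 does not exist",
  "timestamp": "2019-12-06T10:15:30Z",
  "correlationId": "abc-123"
}
```

Once every service returns errors in the same shape, clients can implement error handling once instead of per service.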

4. Don't be afraid of using a circuit breaker

If you are using Spring Cloud for communication between microservices, you may leverage Spring Cloud Netflix Hystrix or Spring Cloud Circuit Breaker to implement circuit breaking. However, the first solution has already been moved to maintenance mode by the Pivotal team, since Netflix no longer develops Hystrix. The recommended solution is the new Spring Cloud Circuit Breaker built on top of the Resilience4j project.

<dependency>
   <groupId>org.springframework.cloud</groupId>
   <artifactId>spring-cloud-starter-circuitbreaker-resilience4j</artifactId>
</dependency>

Then we need to configure the required settings for the circuit breaker by defining a Customizer bean that is passed a Resilience4JCircuitBreakerFactory. We are using the default values as shown below.

@Bean
public Customizer<Resilience4JCircuitBreakerFactory> defaultCustomizer() {
    return factory -> factory.configureDefault(id -> new Resilience4JConfigBuilder(id)
            .timeLimiterConfig(TimeLimiterConfig.custom().timeoutDuration(Duration.ofSeconds(5)).build())
            .circuitBreakerConfig(CircuitBreakerConfig.ofDefaults())
            .build());
}
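Independently of the library used, the behaviour a circuit breaker adds can be illustrated with a minimal, self-contained sketch. This is a hypothetical class, not the Resilience4j implementation: after a number of consecutive failures the circuit opens, and subsequent calls go straight to the fallback instead of hitting the failing service again.

```java
import java.util.function.Function;
import java.util.function.Supplier;

// Minimal illustration of the circuit breaker pattern (hypothetical class,
// not the Resilience4j implementation).
class SimpleCircuitBreaker {
    private final int threshold;
    private int consecutiveFailures = 0;

    SimpleCircuitBreaker(int threshold) {
        this.threshold = threshold;
    }

    <T> T run(Supplier<T> action, Function<Throwable, T> fallback) {
        if (consecutiveFailures >= threshold) {
            // Circuit is open: short-circuit straight to the fallback.
            return fallback.apply(new IllegalStateException("circuit open"));
        }
        try {
            T result = action.get();
            consecutiveFailures = 0; // a success closes the circuit again
            return result;
        } catch (RuntimeException e) {
            consecutiveFailures++;
            return fallback.apply(e);
        }
    }
}
```

Spring Cloud Circuit Breaker wraps this idea in a much more sophisticated form (half-open state, sliding windows, time limits) behind the factory abstraction configured above.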

For more details about integrating the Hystrix circuit breaker with a Spring Boot application you may refer to my article Part 3: Creating Microservices: Circuit Breaker, Fallback and Load Balancing with Spring Cloud.

5. Make your application transparent

Another important rule amongst Spring Boot best practices is transparency. We should not forget that one of the most important reasons for migrating to a microservices architecture is the requirement of continuous delivery. Today, the ability to deliver changes fast gives you an advantage on the market. You should be able to deliver changes even several times a day. Therefore, it is important to know what the current version is, when it has been released, and what changes it includes.
When working with Spring Boot and Maven we may easily publish information like the date of the last changes, the Git commit id, or the version number of the application. To achieve that we just need to include the following Maven plugins in our pom.xml.

<plugins>
   <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
      <executions>
         <execution>
            <goals>
               <goal>build-info</goal>
            </goals>
         </execution>
      </executions>
   </plugin>
   <plugin>
      <groupId>pl.project13.maven</groupId>
      <artifactId>git-commit-id-plugin</artifactId>
      <configuration>
         <failOnNoGitDirectory>false</failOnNoGitDirectory>
      </configuration>
   </plugin>
</plugins>

Assuming you have already included Spring Boot Actuator (see Section 1), you have to enable the /info endpoint to be able to display all this interesting data.


management.endpoint.info.enabled: true
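With both plugins in place, Spring Boot merges the build and git metadata into the /info endpoint response, roughly like this (the field values are illustrative):

```json
{
  "build": {
    "version": "1.0.0",
    "artifact": "sample-service",
    "group": "pl.piomin.services",
    "time": "2019-12-06T10:15:30.000Z"
  },
  "git": {
    "branch": "master",
    "commit": {
      "id": "a1b2c3d",
      "time": "2019-12-06T09:00:00+0000"
    }
  }
}
```

This is exactly the data that monitoring tools like Spring Boot Admin read to show which version is deployed where.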

Of course, our system consists of many microservices, and there are a few running instances of every single microservice. It is desirable to monitor our instances in a single, central place – the same as with collecting metrics and logs. Fortunately, there is a tool dedicated to Spring Boot applications that is able to collect data from all Actuator endpoints and display it in a UI. It is Spring Boot Admin, developed by Codecentric. The most comfortable way to run it is by creating a dedicated Spring Boot application that includes the Spring Boot Admin dependencies and integrates with a discovery server, for example Spring Cloud Netflix Eureka.

<dependency>
    <groupId>de.codecentric</groupId>
    <artifactId>spring-boot-admin-starter-server</artifactId>
    <version>2.1.6</version>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
</dependency>

Then we should enable it for the Spring Boot application by annotating the main class with @EnableAdminServer.

@SpringBootApplication
@EnableDiscoveryClient
@EnableAdminServer
public class Application {
 
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
 
}

With Spring Boot Admin we may easily browse the list of applications registered in the discovery server and check out the version or commit info for each of them.

boot-admin-2

We can expand details to see all elements retrieved from /info endpoint and much more data collected from other Actuator endpoints.

boot-admin-3-details

6. Write contract tests

Consumer Driven Contract (CDC) testing is one of the methods that allow you to verify integration between applications within your system. The number of such interactions may be really large, especially if you maintain a microservices-based architecture. It is relatively easy to start with contract testing in Spring Boot thanks to the Spring Cloud Contract project. There are some other frameworks designed especially for CDC, like Pact, but Spring Cloud Contract would probably be the first choice since we are using Spring Boot.
To use it on the producer side we need to include Spring Cloud Contract Verifier.

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-contract-verifier</artifactId>
    <scope>test</scope>
</dependency>

On the consumer side we should include Spring Cloud Contract Stub Runner.


<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-contract-stub-runner</artifactId>
    <scope>test</scope>
</dependency>

The first step is to define a contract. One of the options is to write it using the Groovy language. The contract should be verified on both the producer and consumer side. Here's a sample contract for a GET endpoint returning a person object:


import org.springframework.cloud.contract.spec.Contract
Contract.make {
    request {
        method 'GET'
        urlPath('/persons/1')
    }
    response {
        status OK()
        body([
            id: 1,
            firstName: 'John',
            lastName: 'Smith',
            address: ([
                city: $(regex(alphaNumeric())),
                country: $(regex(alphaNumeric())),
                postalCode: $(regex('[0-9]{2}-[0-9]{3}')),
                houseNo: $(regex(positiveInt())),
                street: $(regex(nonEmpty()))
            ])
        ])
        headers {
            contentType(applicationJson())
        }
    }
}

The contract is packaged inside a JAR together with the stubs. It may be published to a repository manager like Artifactory or Nexus, and then consumers may download it from there during JUnit tests. The generated JAR file is suffixed with stubs.

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = WebEnvironment.NONE)
@AutoConfigureStubRunner(ids = {"pl.piomin.services:person-service:+:stubs:8090"}, consumerName = "letter-consumer",  stubsPerConsumer = true, stubsMode = StubsMode.REMOTE, repositoryRoot = "http://192.168.99.100:8081/artifactory/libs-snapshot-local")
@DirtiesContext
public class PersonConsumerContractTest {
 
    @Autowired
    private PersonClient personClient;
     
    @Test
    public void verifyPerson() {
        Person p = personClient.findPersonById(1);
        Assert.assertNotNull(p);
        Assert.assertEquals(1, p.getId().intValue());
        Assert.assertNotNull(p.getFirstName());
        Assert.assertNotNull(p.getLastName());
        Assert.assertNotNull(p.getAddress());
        Assert.assertNotNull(p.getAddress().getCity());
        Assert.assertNotNull(p.getAddress().getCountry());
        Assert.assertNotNull(p.getAddress().getPostalCode());
        Assert.assertNotNull(p.getAddress().getStreet());
        Assert.assertNotEquals(0, p.getAddress().getHouseNo());
    }
     
}

Contract testing will not verify sophisticated use cases in your microservices-based system. However, it is the first phase of testing interactions between microservices. Once you ensure the API contracts between applications are valid, you may proceed to more advanced integration or end-to-end tests. For a more detailed explanation of continuous integration with Spring Cloud Contract you may refer to my article Continuous Integration with Jenkins, Artifactory and Spring Cloud Contract.

7. Be up-to-date

Spring Boot and Spring Cloud release new versions of their frameworks relatively often. Assuming that your microservices have a small codebase, it is easy to bump the versions of the libraries they use. Spring Cloud releases new versions of its projects using the release train pattern to simplify dependency management and avoid problems with conflicts between incompatible versions of libraries.
Moreover, Spring Boot systematically improves the startup time and memory footprint of applications, so it is worth updating just because of that. Here are the current stable releases of Spring Boot and Spring Cloud.

<parent>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-parent</artifactId>
   <version>2.2.1.RELEASE</version>
</parent>
<dependencyManagement>
   <dependencies>
      <dependency>
         <groupId>org.springframework.cloud</groupId>
         <artifactId>spring-cloud-dependencies</artifactId>
         <version>Hoxton.RELEASE</version>
         <type>pom</type>
         <scope>import</scope>
      </dependency>
   </dependencies>
</dependencyManagement>

Conclusion

I have shown you that it is not hard to follow best practices using Spring Boot features and some additional libraries that are a part of Spring Cloud. These Spring Boot best practices will make it easier for you to migrate to a microservices-based architecture and also to run your applications in containers.

The post Spring Boot Best Practices for Microservices appeared first on Piotr's TechBlog.

Deploying Spring Boot Application on OpenShift with Dekorate https://piotrminkowski.com/2019/10/01/deploying-spring-boot-application-on-openshift-with-dekorate/ https://piotrminkowski.com/2019/10/01/deploying-spring-boot-application-on-openshift-with-dekorate/#comments Tue, 01 Oct 2019 07:51:25 +0000 https://piotrminkowski.wordpress.com/?p=7254 More advanced deployments to Kubernetes or OpenShift are a bit troublesome for developers. In comparison to Kubernetes OpenShift provides S2I (Source-2-Image) mechanism, which may help reduce the time required for preparation of application deployment descriptors. Although S2I is quite useful for developers, it solves only simple use cases and does not provide a unified approach […]

The post Deploying Spring Boot Application on OpenShift with Dekorate appeared first on Piotr's TechBlog.

More advanced deployments to Kubernetes or OpenShift are a bit troublesome for developers. In comparison to Kubernetes, OpenShift provides the S2I (Source-to-Image) mechanism, which may help reduce the time required for the preparation of application deployment descriptors. Although S2I is quite useful for developers, it solves only simple use cases and does not provide a unified approach to building deployment configuration from source code. Dekorate (https://dekorate.io), a recently created open-source project, tries to solve that problem. This project seems to be very interesting, which appears to be confirmed by Red Hat, which has already announced a decision on including Dekorate in Red Hat OpenShift Application Runtimes as a “Tech Preview”.

You can read more about it in this article: https://developers.redhat.com/blog/2019/08/15/how-to-use-dekorate-to-create-kubernetes-manifests.

How does it work?

Dekorate is a library that defines a set of annotation processors used for generating and decorating Kubernetes or OpenShift manifests. In fact, you just need to annotate your application's main class properly and Dekorate will take care of everything else. To use this library you only have to include it in your Maven pom.xml as shown below.

<dependency>
   <groupId>io.dekorate</groupId>
   <artifactId>openshift-spring-starter</artifactId>
   <version>0.8.2</version>
</dependency>

The starter contains not only the annotations and annotation processors that may be used in your application, but also support for generation during the Maven build, which is executed during the compile phase. To enable Dekorate during the build you need to set the property dekorate.build to true. You can also enable deployment by setting the property dekorate.deploy to true as shown below.

$ mvn clean install -Ddekorate.build=true -Ddekorate.deploy=true

Dekorate supports OpenShift S2I. It generates an ImageStream for the builder and target application, and also a BuildConfig resource. If you enable deploy mode it also generates a deployment config with the required resources. Here's a screenshot with logs from a Maven build executed on my local machine.

spring-boot-decorate-openshift-1

In this case Dekorate generates the OpenShift manifest files, saves them inside the directory target/classes/META-INF/dekorate/, and then performs the deployment on my instance of Minishift available at the virtual address 192.168.99.100.
Unfortunately, we have to modify the generated BuildConfig for our needs. Instead of the source type Binary we will just declare the Git source repository address.

apiVersion: "build.openshift.io/v1"
kind: "BuildConfig"
metadata:
  labels:
    app: "sample-app"
    version: "1.1.0"
    group: "minkowp"
  name: "sample-app"
spec:
  output:
    to:
      kind: "ImageStreamTag"
      name: "sample-app:1.1.0"
  source:
    git:
      uri: 'https://github.com/piomin/sample-app.git'
    type: Git
  strategy:
    sourceStrategy:
      from:
        kind: "ImageStreamTag"
        name: "s2i-java:2.3"

In order to apply the changes execute the following commands:

$ oc delete bc sample-app
$ oc apply -f build-config-dekorate.yaml

Customization

If you have Spring Boot applications it is possible to completely bypass annotations by utilizing already-existing, framework-specific metadata. To customize the generated manifests you can add dekorate properties to your application.yml or application.properties descriptors. I had some problems running this mode with Dekorate, so I avoided it. Besides, I prefer using annotations in the code, so I prepared the following configuration, which has been successfully processed:

@SpringBootApplication
@OpenshiftApplication(replicas = 2, expose = true, envVars = {
        @Env(name="sample-app-config", configmap = "sample-app-config")
})
@JvmOptions(xms = 128, xmx = 256, heapDumpOnOutOfMemoryError = true)
@EnableSwagger2
public class SampleApp {

    public static void main(String[] args) {
        SpringApplication.run(SampleApp.class, args);
    }
   
    // ... REST OF THE CODE
}

In the fragment of code visible above we have declared some useful settings for the application. First, it should be run in two pods (replicas=2). It should also be exposed outside the cluster using an OpenShift route (expose=true). The application uses a Kubernetes ConfigMap as a source of dynamically managed configuration settings. By annotating the main class with @JvmOptions we may customize the behavior of the JVM running in the container, for example by setting the maximum heap memory consumption. Here's the definition of the ConfigMap created for test purposes:

apiVersion: v1
kind: ConfigMap
metadata:
  name: sample-app-config
  namespace: myproject
data:
  showAddress: 'true'
  showContactInfo: 'true'
  showSocial: 'false'

Sample Application

The sample application source code is available on GitHub in the repository https://github.com/piomin/sample-app.git. This very basic Spring Boot web application exposes a simple REST API with some monitoring endpoints included with Spring Boot Actuator and API documentation generated using Swagger.

<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
   <groupId>io.springfox</groupId>
   <artifactId>springfox-swagger2</artifactId>
   <version>2.9.2</version>
</dependency>
<dependency>
   <groupId>io.springfox</groupId>
   <artifactId>springfox-swagger-ui</artifactId>
   <version>2.9.2</version>
</dependency>

Dekorate is able to automatically detect the existence of the Spring Boot Actuator dependency and, based on that, generate OpenShift readiness and liveness healthcheck definitions as shown below. By default it sets the timeout to 10 seconds and the period to 30 seconds. We can override the default behaviour using the liveness and readiness fields of @OpenshiftApplication.

dekorate-3
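For reference, such probe definitions in the generated deployment manifest look roughly like this (a sketch based on the defaults mentioned above; the health path and port are assumed, and the exact fields may differ between Dekorate versions):

```yaml
livenessProbe:
  httpGet:
    path: /actuator/health
    port: 8080
    scheme: HTTP
  periodSeconds: 30
  timeoutSeconds: 10
readinessProbe:
  httpGet:
    path: /actuator/health
    port: 8080
    scheme: HTTP
  periodSeconds: 30
  timeoutSeconds: 10
```

OpenShift uses the readiness probe to decide when a pod may receive traffic, and the liveness probe to restart pods that stop responding.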

The controller class provides an implementation of some basic CRUD REST operations. It also injects some environment variables taken from the ConfigMap sample-app-config. Based on their values it decides whether or not to show additional person attributes like address, contact information, or social links. Here's the implementation of PersonsController:

@RestController
@RequestMapping("/persons")
public class PersonsController {

    private static final Logger LOGGER = LoggerFactory.getLogger(PersonsController.class);

    @Autowired
    PersonRepository repository;

    @Value(value = "${showAddress:false}")
    boolean showAddress;
    @Value(value = "${showContactInfo:false}")
    boolean showContactInfo;
    @Value(value = "${showSocial:false}")
    boolean showSocial;

    @PostMapping
    public Person add(@RequestBody Person person) {
        LOGGER.info("Person add: {}", person);
        return repository.add(person);
    }

    @GetMapping("/{id}")
    public Person findById(@PathVariable("id") Integer id) {
        LOGGER.info("Person find: id={}", id);
        return hidePersonParams(repository.findById(id));
    }

    @GetMapping
    public List<Person> findAll() {
        LOGGER.info("Person find");
        return repository.findAll().stream().map(this::hidePersonParams).collect(Collectors.toList());
    }

    private Person hidePersonParams(Person person) {
        if (!showAddress)
            person.setAddress(null);
        if (!showContactInfo)
            person.setContact(null);
        if (!showSocial)
            person.setSocial(null);
        return person;
    }
}

Here's the implementation of the model class. I used the Lombok library to generate getters, setters, and constructors.

@Getter
@Setter
@NoArgsConstructor
@AllArgsConstructor
@JsonInclude(JsonInclude.Include.NON_NULL)
public class Person {

    private Integer id;
    private String name;
    private int age;
    private Gender gender;
    private Address address;
    private Contact contact;
    private Social social;

}

Here’s a definition of ConfigMap sample-app-config on my instance of Minishift.

spring-boot-decorate-openshift-4

Deployment

The current version of Dekorate uses fabric/s2i-java in version 2.3 as a builder image. It creates an ImageStream without any tags, so we have to import the Docker image from docker.io manually by executing the following command:


$ oc import-image fabric/s2i-java:2.3

This command forces the download of some additional versions of the builder image, including the newest version for JDK 11. The library still uses the older version of the s2i-java image although we have declared JDK 11 as the default version in pom.xml. In this example we can see that Dekorate still has some things to improve 🙂

dekorate-5

Assuming we have already performed all the required steps described in the previous sections of this article, we may start the OpenShift build by running the following command:

$ oc start-build sample-app

The build should be finished successfully as shown below.

dekorate-6

After a successful build the new image is pushed to the OpenShift registry. This triggers a new deployment of our application. As you can see in the picture below, our application is started in two instances, and the route is created automatically.

dekorate-7

We can also take a look at the environment settings. The JVM options have been set, and the ConfigMap has been assigned to the deployment config.

spring-boot-decorate-openshift-8

Finally, we may test the sample application. The Swagger API documentation is available at http://sample-app-myproject.192.168.99.100.nip.io/swagger-ui.html.

spring-boot-decorate-openshift-9

Conclusion

I think it is a great idea to use Java annotations for customizing application deployment on Kubernetes/OpenShift. Using Dekorate with frameworks based on annotation processing, like Spring Boot, greatly simplifies the deployment process for developers. Although the library is at an early stage of development and still has some things to improve, I definitely recommend using it. In this article I have tried to give you some tips for a quick start. Enjoy 🙂


Kotlin Microservice with Spring Boot https://piotrminkowski.com/2019/01/15/kotlin-microservice-with-spring-boot/ https://piotrminkowski.com/2019/01/15/kotlin-microservice-with-spring-boot/#respond Tue, 15 Jan 2019 10:34:08 +0000 https://piotrminkowski.wordpress.com/?p=6960 In this tutorial I will show you step-by-step how to implement Spring Boot Kotlin microservices. You may find many examples of microservices built with Spring Boot on my blog, but the most of them is written in Java. With the rise in popularity of Kotlin language it is more often used with Spring Boot for […]

The post Kotlin Microservice with Spring Boot appeared first on Piotr's TechBlog.

In this tutorial I will show you step-by-step how to implement Spring Boot Kotlin microservices. You may find many examples of microservices built with Spring Boot on my blog, but most of them are written in Java. With the rise in popularity of the Kotlin language, it is more often used with Spring Boot for building backend services. Starting with version 5, Spring Framework has introduced first-class support for Kotlin. In this article I'm going to show you an example of a microservice built with Kotlin and Spring Boot 2. I'll describe some interesting features of Spring Boot, which can be treated as a set of good practices when building backend, REST-based microservices.

1. Spring Boot Kotlin configuration and dependencies

To use Kotlin in your Maven project you have to include the kotlin-maven-plugin plugin, and add the /src/main/kotlin and /src/test/kotlin directories to the build configuration. We will also set the -Xjsr305 compiler flag to strict. This option is responsible for checking support for JSR-305 annotations (for example the @NotNull annotation).

<build>
   <sourceDirectory>${project.basedir}/src/main/kotlin</sourceDirectory>
   <testSourceDirectory>${project.basedir}/src/test/kotlin</testSourceDirectory>
   <plugins>
      <plugin>
         <groupId>org.jetbrains.kotlin</groupId>
         <artifactId>kotlin-maven-plugin</artifactId>
         <configuration>
            <args>
               <arg>-Xjsr305=strict</arg>
            </args>
            <compilerPlugins>
               <plugin>spring</plugin>
            </compilerPlugins>
         </configuration>
         <dependencies>
            <dependency>
               <groupId>org.jetbrains.kotlin</groupId>
               <artifactId>kotlin-maven-allopen</artifactId>
               <version>${kotlin.version}</version>
            </dependency>
         </dependencies>
      </plugin>
   </plugins>
</build>

We should also include some core Kotlin libraries like kotlin-stdlib-jdk8 and kotlin-reflect. They are provided by default for a Kotlin project on start.spring.io. For REST-based applications you will also need the Jackson library used for JSON serialization/deserialization. Of course, we have to include the Spring starters for a web application together with Actuator, which is responsible for providing management endpoints.

<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
   <groupId>com.fasterxml.jackson.module</groupId>
   <artifactId>jackson-module-kotlin</artifactId>
</dependency>
<dependency>
   <groupId>org.jetbrains.kotlin</groupId>
   <artifactId>kotlin-reflect</artifactId>
</dependency>
<dependency>
   <groupId>org.jetbrains.kotlin</groupId>
   <artifactId>kotlin-stdlib-jdk8</artifactId>
</dependency>

We use the latest stable version of Spring Boot with Kotlin 1.2.71.

<parent>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-parent</artifactId>
   <version>2.1.2.RELEASE</version>
</parent>
<properties>
   <java.version>1.8</java.version>
   <kotlin.version>1.2.71</kotlin.version>
</properties>

2. Building Spring Boot Kotlin microservice application

Let's begin with the basics. If you are familiar with Spring Boot and Java, the biggest difference is in the main class declaration. You call the runApplication method outside the Spring Boot application class. The main class, the same as in Java, is annotated with @SpringBootApplication.

@SpringBootApplication
class SampleSpringKotlinMicroserviceApplication

fun main(args: Array<String>) {
    runApplication<SampleSpringKotlinMicroserviceApplication>(*args)
}

Our sample application is very simple. It exposes some REST endpoints providing CRUD operations for a model object. Even in this fragment of code illustrating the controller implementation you can see some nice Kotlin features. We may use a shortened function declaration with an inferred return type. The annotation @PathVariable does not require any arguments; the input parameter name is considered to be the same as the variable name. Of course, we are using the same annotations as with Java. In Kotlin, every property declared as having a non-null type must be initialized in the constructor, so if you are initializing it using dependency injection it has to be declared as lateinit. Here's the implementation of PersonController.

@RestController
@RequestMapping("/persons")
class PersonController {

    @Autowired
    lateinit var repository: PersonRepository

    @GetMapping("/{id}")
    fun findById(@PathVariable id: Int): Person? = repository.findById(id)

    @GetMapping
    fun findAll(): List<Person> = repository.findAll()

    @PostMapping
    fun add(@RequestBody person: Person): Person = repository.save(person)

    @PutMapping
    fun update(@RequestBody person: Person): Person = repository.update(person)

    @DeleteMapping("/{id}")
    fun remove(@PathVariable id: Int): Boolean = repository.removeById(id)

}

Kotlin automatically generates getters and setters for class properties declared as var. Also, if you declare a model as a data class, it generates the equals, hashCode, and toString methods. The declaration of our model class Person is very concise, as shown below.


data class Person(var id: Int?, var name: String, var age: Int, var gender: Gender)

I have implemented my own in-memory repository class. I use Kotlin collection extension functions for manipulating the list of elements. This built-in Kotlin feature is similar to Java streams, with the difference that you don't have to perform any conversion between Collection and Stream.

@Repository
class PersonRepository {
    val persons: MutableList<Person> = ArrayList()

    fun findById(id: Int): Person? {
        return persons.singleOrNull { it.id == id }
    }

    fun findAll(): List<Person> {
        return persons
    }

    fun save(person: Person): Person {
        person.id = (persons.maxBy { it.id!! }?.id ?: 0) + 1
        persons.add(person)
        return person
    }

    fun update(person: Person): Person {
        val index = persons.indexOfFirst { it.id == person.id }
        if (index >= 0) {
            persons[index] = person
        }
        return person
    }

    fun removeById(id: Int): Boolean {
        return persons.removeIf { it.id == id }
    }

}
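The collection extensions used above (singleOrNull, maxBy, indexOfFirst) can be tried standalone. A minimal sketch, using a plain list of ids instead of the Person model:

```kotlin
fun main() {
    val ids = listOf(3, 7, 2)
    // No conversion between Collection and Stream is needed —
    // the extensions operate directly on List
    println(ids.singleOrNull { it == 7 })             // 7
    println((ids.maxOrNull() ?: 0) + 1)               // 8 — the "next id" pattern
    println(ids.filter { it > 2 }.sortedDescending()) // [7, 3]
}
```

Note that maxOrNull is the modern (Kotlin 1.4+) equivalent of the nullable maxBy call used in the repository above.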

The sample application source code is available on GitHub in the repository https://github.com/piomin/sample-spring-kotlin-microservice.git.

3. Enabling Spring Boot Actuator endpoints

Since we have already included the Spring Boot starter with Actuator in the application code, we can take advantage of its production-ready features. Spring Boot Actuator gives you very powerful tools for monitoring and managing your apps. You can provide advanced health checks and info endpoints, or send metrics to numerous monitoring systems like InfluxDB. After including the Actuator artifacts, the only thing we have to do is enable all its endpoints for our application via HTTP.


management.endpoints.web.exposure.include: '*'

We can customize the Actuator endpoints to provide more details about our app. A good practice is to expose information about the version and git commit through the info endpoint. As usual, Spring Boot provides auto-configuration for such features, so the only thing we need to do is add some Maven plugins to the build configuration in pom.xml. The build-info goal set for spring-boot-maven-plugin forces it to generate a properties file with basic information about the version. The file is located in the directory META-INF/build-info.properties. The git-commit-id-plugin generates a git.properties file in the root directory.

<plugin>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-maven-plugin</artifactId>
   <executions>
      <execution>
         <goals>
            <goal>build-info</goal>
         </goals>
      </execution>
   </executions>
</plugin>
<plugin>
   <groupId>pl.project13.maven</groupId>
   <artifactId>git-commit-id-plugin</artifactId>
   <configuration>
      <failOnNoGitDirectory>false</failOnNoGitDirectory>
   </configuration>
</plugin>

Now you should just build your application using the mvn clean install command and then run it.

$ java -jar target/sample-spring-kotlin-microservice-1.0-SNAPSHOT.jar

The info endpoint is available at http://localhost:8080/actuator/info. It exposes all the information interesting for us.

{
   "git":{
      "commit":{
         "time":"2019-01-14T16:20:31Z",
         "id":"f7cb437"
      },
      "branch":"master"
   },
   "build":{
      "version":"1.0-SNAPSHOT",
      "artifact":"sample-spring-kotlin-microservice",
      "name":"sample-spring-kotlin-microservice",
      "group":"pl.piomin.services",
      "time":"2019-01-15T09:18:48.836Z"
   }
}

4. Enabling Swagger API documentation

Build info and git properties may be easily injected into the application code. This can be useful in some cases, for example if you have enabled auto-generated API documentation. The most popular tool for this is Swagger. You can easily integrate Swagger2 with Spring Boot using the SpringFox Swagger project. First, you need to include the following dependencies in your pom.xml.

<dependency>
   <groupId>io.springfox</groupId>
   <artifactId>springfox-swagger2</artifactId>
   <version>2.9.2</version>
</dependency>
<dependency>
   <groupId>io.springfox</groupId>
   <artifactId>springfox-swagger-ui</artifactId>
   <version>2.9.2</version>
</dependency>

Then, you should enable Swagger by annotating the configuration class with @EnableSwagger2. The required information is available inside the beans BuildProperties and GitProperties. We just have to inject them into the Swagger configuration class, as shown below. We set them as optional to prevent an application startup failure in case they are not present on the classpath.

@Configuration
@EnableSwagger2
class SwaggerConfig {

    @Autowired
    lateinit var build: Optional<BuildProperties>
    @Autowired
    lateinit var git: Optional<GitProperties>

    @Bean
    fun api(): Docket {
        var version = "1.0"
        if (build.isPresent && git.isPresent) {
            val buildInfo = build.get()
            val gitInfo = git.get()
            version = "${buildInfo.version}-${gitInfo.shortCommitId}-${gitInfo.branch}"
        }
        return Docket(DocumentationType.SWAGGER_2)
                .apiInfo(apiInfo(version))
                .select()
                .apis(RequestHandlerSelectors.any())
                .paths { it.equals("/persons") }
                .build()
                .useDefaultResponseMessages(false)
                .forCodeGeneration(true)
    }

    @Bean
    fun uiConfig(): UiConfiguration {
        return UiConfiguration(java.lang.Boolean.TRUE, java.lang.Boolean.FALSE, 1, 1, ModelRendering.MODEL, java.lang.Boolean.FALSE, DocExpansion.LIST, java.lang.Boolean.FALSE, null, OperationsSorter.ALPHA, java.lang.Boolean.FALSE, TagsSorter.ALPHA, UiConfiguration.Constants.DEFAULT_SUBMIT_METHODS, null)
    }

    private fun apiInfo(version: String): ApiInfo {
        return ApiInfoBuilder()
                .title("API - Person Service")
                .description("Persons Management")
                .version(version)
                .build()
    }

}

The documentation is available under the context path /swagger-ui.html. Besides the API documentation, it displays full information about the application version, git commit id, and branch name.

kotlin-microservices-1.PNG

5. Choosing your app server

A Spring Boot web application can be run on three different embedded servers: Tomcat, Jetty, or Undertow. By default, it uses Tomcat. To change the default server, you just need to include the suitable Spring Boot starter and exclude spring-boot-starter-tomcat. A good practice may be to enable switching between servers during the application build. You can achieve it by declaring Maven profiles, as shown below.

<profiles>
   <profile>
      <id>tomcat</id>
      <activation>
         <activeByDefault>true</activeByDefault>
      </activation>
      <dependencies>
         <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
         </dependency>
      </dependencies>
   </profile>
   <profile>
      <id>jetty</id>
      <dependencies>
         <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
            <exclusions>
               <exclusion>
                  <groupId>org.springframework.boot</groupId>
                  <artifactId>spring-boot-starter-tomcat</artifactId>
               </exclusion>
            </exclusions>
         </dependency>
         <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-jetty</artifactId>
         </dependency>
      </dependencies>
   </profile>
   <profile>
      <id>undertow</id>
      <dependencies>
         <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
            <exclusions>
               <exclusion>
                  <groupId>org.springframework.boot</groupId>
                  <artifactId>spring-boot-starter-tomcat</artifactId>
               </exclusion>
            </exclusions>
         </dependency>
         <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-undertow</artifactId>
         </dependency>
      </dependencies>
   </profile>
</profiles>

Now, if you would like to enable a server other than Tomcat for your application, you should activate the appropriate profile during the Maven build.

$ mvn clean install -Pjetty

Conclusion

Development of microservices using Kotlin and Spring Boot is nice and simple. Based on the sample application, I have introduced the main Spring Boot features for Kotlin. I also described some good practices you may apply to your microservices when building them with Spring Boot and Kotlin. You can compare the described approach with some other micro-frameworks used with Kotlin, for example Ktor, described in one of my previous articles, Kotlin Microservices with Ktor.

The post Kotlin Microservice with Spring Boot appeared first on Piotr's TechBlog.

Spring Boot Autoscaler https://piotrminkowski.com/2018/09/18/spring-boot-autoscaler/ Tue, 18 Sep 2018 11:20:30 +0000

One of the more important reasons we decide to use tools like Kubernetes, Pivotal Cloud Foundry, or HashiCorp’s Nomad is the out-of-the-box auto-scaling of our applications. Of course, those tools provide many other useful mechanisms, but auto-scaling is something we can also implement by ourselves. At first glance it seems difficult, but assuming we use Spring Boot as the framework for building our applications and Jenkins as the CI server, it finally does not require a lot of work.
Today, I’m going to show you how to implement such a solution using the following frameworks/tools:

  • Spring Boot
  • Spring Boot Actuator
  • Spring Cloud Netflix Eureka
  • Jenkins CI

How does it work?

Every Spring Boot application that contains the Spring Boot Actuator library can expose metrics under the endpoint /actuator/metrics. There are many valuable metrics that give you detailed information about the application status. Some of them may be especially important when talking about Spring Boot microservices autoscaling: JVM and CPU metrics, the number of running threads, and the number of incoming HTTP requests. A dedicated Jenkins pipeline is responsible for monitoring the application’s metrics by polling the endpoint /actuator/metrics periodically. If any monitored metric is below or above the target range, the pipeline starts a new instance or shuts down a running instance of the application using another Actuator endpoint, /actuator/shutdown. Before that, it needs to fetch the current list of running instances of the application in order to get the address of an existing instance selected for shutting down, or the address of the server with the smallest number of running instances for a new instance of the application.

spring-autoscaler-1

After discussing the architecture of our system, we may proceed to the development. Our application needs to meet some requirements: it has to expose metrics and an endpoint for graceful shutdown, it needs to register in Eureka after startup and deregister on shutdown, and finally it should dynamically allocate a running port randomly from the pool of free ports. Thanks to Spring Boot, we may easily implement all these mechanisms in five minutes 🙂

Dynamic port allocation

Since it is possible to run many instances of the application on a single machine, we have to guarantee that there won’t be conflicts in port numbers. Fortunately, Spring Boot provides such a mechanism. We just need to set the port number to 0 inside the application.yml file using the server.port property. Because our application registers itself in Eureka, it also needs to send a unique instanceId, which is by default generated as a concatenation of the fields spring.cloud.client.hostname, spring.application.name, and server.port.
Here’s the current configuration of our sample application. I have changed the template of the instanceId field by replacing the port number with a randomly generated number.

spring:
  application:
    name: example-service
server:
  port: ${PORT:0}
eureka:
  instance:
    instanceId: ${spring.cloud.client.hostname}:${spring.application.name}:${random.int[1,999999]}

Enabling Actuator metrics

To enable Spring Boot Actuator we need to include the following dependency to pom.xml.

<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

We also have to enable the exposure of Actuator endpoints via the HTTP API by setting the property management.endpoints.web.exposure.include to '*'. Now, the list of all available metric names is available under the context path /actuator/metrics, while the detailed information for each metric is available under the path /actuator/metrics/{metricName}.
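For illustration, a response from /actuator/metrics/{metricName} looks roughly like this (the values below are made up, and the exact set of fields may vary slightly between Spring Boot versions):

```json
{
  "name": "tomcat.threads.busy",
  "measurements": [
    { "statistic": "VALUE", "value": 12.0 }
  ],
  "availableTags": [
    { "tag": "name", "values": ["http-nio-auto-1"] }
  ]
}
```

The measurements[0].value field is exactly what the autoscaling pipeline reads later to decide whether to scale up or down.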

Graceful shutdown

Besides metrics, Spring Boot Actuator also provides an endpoint for shutting down an application. However, in contrast to other endpoints, this endpoint is not available by default. We have to set the property management.endpoint.shutdown.enabled to true. After that, we are able to stop our application by sending a POST request to the /actuator/shutdown endpoint.
This method of stopping the application guarantees that the service will unregister itself from the Eureka server before shutdown.
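A minimal application.yml fragment combining metrics exposure with the shutdown endpoint could look like this (a sketch based on the properties named above):

```yaml
management:
  endpoints:
    web:
      exposure:
        include: '*'   # expose all Actuator endpoints over HTTP
  endpoint:
    shutdown:
      enabled: true    # shutdown is disabled by default
```

The instance can then be stopped with a POST request, e.g. curl -X POST http://localhost:8080/actuator/shutdown.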

Enabling Eureka discovery

Eureka is the most popular discovery server used for building microservices-based architectures with Spring Cloud. So, if you already have microservices and want to provide auto-scaling mechanisms for them, Eureka is a natural choice. It stores the IP address and port number of every registered application instance. To enable Eureka on the client side, you just need to include the following dependency in your pom.xml.

<dependency>
   <groupId>org.springframework.cloud</groupId>
   <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
</dependency>

As I have mentioned before, we also have to guarantee the uniqueness of the instanceId sent to the Eureka server by the client-side application. This has been described in the step “Dynamic port allocation”.
The next step is to create an application with an embedded Eureka server. To achieve it, we first need to include the following dependency in pom.xml.

<dependency>
   <groupId>org.springframework.cloud</groupId>
   <artifactId>spring-cloud-starter-netflix-eureka-server</artifactId>
</dependency>

The main class should be annotated with @EnableEurekaServer.

@SpringBootApplication
@EnableEurekaServer
public class DiscoveryApp {

    public static void main(String[] args) {
        new SpringApplicationBuilder(DiscoveryApp.class).run(args);
    }

}

Client-side applications by default try to connect with the Eureka server on localhost under port 8761. We only need a single, standalone Eureka node, so we will disable registration and attempts to fetch a list of services from other instances of the server.

spring:
  application:
    name: discovery-service
server:
  port: ${PORT:8761}
eureka:
  instance:
    hostname: localhost
  client:
    registerWithEureka: false
    fetchRegistry: false
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/

The tests of the sample Spring Boot microservices autoscaling system will be performed using Docker containers, so we need to prepare and build an image with the Eureka server. Here’s the Dockerfile with the image definition. It can be built with the command docker build -t piomin/discovery-server:2.0 . (note the trailing dot indicating the build context).

FROM openjdk:8-jre-alpine
ENV APP_FILE discovery-service-1.0-SNAPSHOT.jar
ENV APP_HOME /usr/apps
EXPOSE 8761
COPY target/$APP_FILE $APP_HOME/
WORKDIR $APP_HOME
ENTRYPOINT ["sh", "-c"]
CMD ["exec java -jar $APP_FILE"]

Building Jenkins pipeline for autoscaling

The first step is to prepare the Jenkins pipeline responsible for autoscaling. We will create a Jenkins Declarative Pipeline, which runs every minute. Periodic execution may be configured with the triggers directive, which defines the automated ways in which the pipeline should be re-triggered. Our pipeline will communicate with the Eureka server and with the metrics endpoints exposed by every microservice using Spring Boot Actuator.
The test service name is EXAMPLE-SERVICE, which is the upper-cased value of the property spring.application.name defined inside the application.yml file. The monitored metric is the number of HTTP listener threads running on the Tomcat container. These threads are responsible for processing incoming HTTP requests.

pipeline {
    agent any
    triggers {
        cron('* * * * *')
    }
    environment {
        SERVICE_NAME = "EXAMPLE-SERVICE"
        METRICS_ENDPOINT = "/actuator/metrics/tomcat.threads.busy?tag=name:http-nio-auto-1"
        SHUTDOWN_ENDPOINT = "/actuator/shutdown"
    }
    stages { ... }
}

Integrating Jenkins pipeline with Eureka

The first stage of our pipeline is responsible for fetching the list of services registered in the service discovery server. Eureka exposes an HTTP API with several endpoints. One of them is GET /eureka/apps/{serviceName}, which returns a list of all instances of the application with the given name. We save the number of running instances and the URL of the metrics endpoint of every single instance. These values will be accessed during the next stages of the pipeline.
Here’s the fragment of the pipeline responsible for fetching the list of running instances of the application. The name of the stage is Calculate. We use the HTTP Request Plugin for HTTP connections.

stage('Calculate') {
   steps {
      script {
         def response = httpRequest "http://192.168.99.100:8761/eureka/apps/${env.SERVICE_NAME}"
         def app = printXml(response.content)
         def index = 0
         env["INSTANCE_COUNT"] = app.instance.size()
         app.instance.each {
            if (it.status == 'UP') {
               def address = "http://${it.ipAddr}:${it.port}"
               env["INSTANCE_${index++}"] = address 
            }
         }
      }
   }
}

@NonCPS
def printXml(String text) {
    return new XmlSlurper(false, false).parseText(text)
}

Here’s a sample response from Eureka API for our microservice. The response content type is XML.

spring-autoscaler-2
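The fields our pipeline relies on (status, ipAddr, port) sit inside the instance elements. A heavily trimmed sketch of such a response — the real payload contains many more fields, and the values below are made up:

```xml
<application>
  <name>EXAMPLE-SERVICE</name>
  <instance>
    <instanceId>piomin:example-service:651203</instanceId>
    <ipAddr>192.168.99.100</ipAddr>
    <port enabled="true">40521</port>
    <status>UP</status>
  </instance>
  <!-- one <instance> element per running instance -->
</application>
```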

Integrating Jenkins pipeline with Spring Boot Actuator metrics

Spring Boot Actuator exposes the metrics endpoint, which allows us to find a metric by name and, optionally, by tag. In the fragment of the pipeline visible below, I’m trying to find an instance with a metric value below or above the defined threshold. If there is such an instance, we stop the loop in order to proceed to the next stage, which performs scaling down or up. The IP addresses of the running applications are taken from the pipeline environment variables with the prefix INSTANCE_, which were saved in the previous stage.

stage('Metrics') {
   steps {
      script {
         def count = env.INSTANCE_COUNT as Integer // env values are stored as strings
         for (def i = 0; i < count; i++) {
            def ip = env["INSTANCE_${i}"] + env.METRICS_ENDPOINT
            if (ip == null)
               break;
            def response = httpRequest ip
            def objRes = printJson(response.content)
            env.SCALE_TYPE = returnScaleType(objRes)
            if (env.SCALE_TYPE != "NONE")
               break
         }
      }
   }
}

@NonCPS
def printJson(String text) {
    return new JsonSlurper().parseText(text)
}

def returnScaleType(objRes) {
    def value = objRes.measurements[0].value
    if (value.toInteger() > 100)
        return "UP"
    else if (value.toInteger() < 20)
        return "DOWN"
    else
        return "NONE"
}

Shutdown application instance

In the last stage of our pipeline, we shut down a running instance or start a new one, depending on the result saved in the previous stage. The shutdown may be easily performed by calling the Spring Boot Actuator endpoint. In the following fragment of the pipeline, we pick the first instance returned by Eureka and send a POST request to its address.
If we need to scale up our application, we call another pipeline responsible for building a fat JAR and launching it on our machine.

stage('Scaling') {
   steps {
      script {
         if (env.SCALE_TYPE == 'DOWN') {
            def ip = env["INSTANCE_0"] + env.SHUTDOWN_ENDPOINT
            httpRequest url:ip, contentType:'APPLICATION_JSON', httpMode:'POST'
         } else if (env.SCALE_TYPE == 'UP') {
            build job:'spring-boot-run-pipeline'
         }
         currentBuild.description = env.SCALE_TYPE
      }
   }
}

Here’s the full definition of the pipeline spring-boot-run-pipeline responsible for starting a new instance of the application. It clones the repository with the application source code, builds the binaries using Maven commands, and finally runs the application using the java -jar command, passing the address of the Eureka server as a parameter.

pipeline {
    agent any
    tools {
        maven 'M3'
    }
    stages {
        stage('Checkout') {
            steps {
                git url: 'https://github.com/piomin/sample-spring-boot-autoscaler.git', credentialsId: 'github-piomin', branch: 'master'
            }
        }
        stage('Build') {
            steps {
                dir('example-service') {
                    sh 'mvn clean package'
                }
            }
        }
        stage('Run') {
            steps {
                dir('example-service') {
                    sh 'nohup java -jar -DEUREKA_URL=http://192.168.99.100:8761/eureka target/example-service-1.0-SNAPSHOT.jar 1>/dev/null 2>logs/runlog &'
                }
            }
        }
    }
}

Remote extension

The algorithm discussed in the previous sections will work fine only for microservices launched on a single machine. If we would like to extend it to work with many machines, we have to modify our architecture as shown below. Each machine has a Jenkins agent running and communicating with the Jenkins master. If we would like to start a new instance of a microservice on a selected machine, we have to run the pipeline using an agent running on that machine. This agent is responsible only for building the application from source code and launching it on the target machine. The shutdown of an instance is still performed just by calling the HTTP endpoint.

spring-autoscaler-3

You can find more information about running Jenkins agents and connecting them with the Jenkins master via the JNLP protocol in my article Jenkins nodes on Docker containers. Assuming we have successfully launched some agents on the target machines, we need to parametrize our pipelines in order to be able to select agents (and therefore the target machines) dynamically.
When we are scaling up our application, we have to pass the agent label to the downstream pipeline.


build job:'spring-boot-run-pipeline', parameters:[string(name: 'agent', value:"slave-1")]

The downstream pipeline will then be run by the agent labelled with the given parameter value.

pipeline {
    agent {
        label "${params.agent}"
    }
    stages { ... }
}

If we have more than one agent connected to the master node, we can map their addresses to the labels. Thanks to that, you are able to map the IP address of a microservice instance fetched from Eureka to the target machine with the Jenkins agent.

pipeline {
    agent any
    triggers {
        cron('* * * * *')
    }
    environment {
        SERVICE_NAME = "EXAMPLE-SERVICE"
        METRICS_ENDPOINT = "/actuator/metrics/tomcat.threads.busy?tag=name:http-nio-auto-1"
        SHUTDOWN_ENDPOINT = "/actuator/shutdown"
        AGENT_192_168_99_102 = "slave-1" // dots are not allowed in environment keys
        AGENT_192_168_99_103 = "slave-2"
    }
    stages { ... }
}

Summary

In this article, I have demonstrated how to use Spring Boot Actuator metrics to scale your Spring Boot applications up and down. Using the basic mechanisms provided by Spring Boot together with Spring Cloud Netflix Eureka and Jenkins, you can implement Spring Boot microservices autoscaling without any additional third-party tools. The case described in this article assumes using Jenkins agents on the remote machines to launch new instances of the application, but you may as well use a tool like Ansible for that. If you decide to run Ansible playbooks from Jenkins, you will not have to launch Jenkins agents on the remote machines. The source code with the sample applications is available on GitHub: https://github.com/piomin/sample-spring-boot-autoscaler.git.

The post Spring Boot Autoscaler appeared first on Piotr's TechBlog.
