Prometheus Archives - Piotr's TechBlog
https://piotrminkowski.com/tag/prometheus/

Multi-node Kubernetes Cluster with Minikube
https://piotrminkowski.com/2024/07/09/multi-node-kubernetes-cluster-with-minikube/
This article will teach you how to run and manage a multi-node Kubernetes cluster locally with Minikube. We will run this cluster on Docker. After that, we will enable some useful add-ons, install Kubernetes-native tools for monitoring and observability, and run a sample app that requires storage. You can compare this article with a similar post about the Azure Kubernetes Service.

Prerequisites

Before you begin, you need to install Docker on your local machine. Then you need to download and install Minikube. On macOS, we can do it using the Homebrew command as shown below:

$ brew install minikube
ShellSession

Once we successfully installed Minikube, we can use its CLI. Let’s verify the version used in this article:

$ minikube version
minikube version: v1.33.1
commit: 5883c09216182566a63dff4c326a6fc9ed2982ff
ShellSession

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that, you need to clone my GitHub repository. This time, we won’t work much with the source code, but the repository contains the sample Spring Boot app that uses storage exposed on the Kubernetes cluster. Once you clone the repository, go to the volumes/files-app directory and then follow my instructions.

Create a Multi-node Kubernetes Cluster with Minikube

In order to create a multi-node Kubernetes cluster with Minikube, we need to use the --nodes or -n parameter in the minikube start command. Additionally, we can increase the default value of memory and CPUs reserved for the cluster with the --memory and --cpus parameters. Here’s the required command to execute:

$ minikube start --memory='12g' --cpus='4' -n 3
ShellSession

By the way, if you increase the resources assigned to the Minikube instance, you should also take care of resource reservations for Docker.
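One quick way to check how much CPU and memory the Docker engine currently has available is shown below; a minimal sketch (on Docker Desktop these values are configured in the Resources settings):

$ docker info --format 'CPUs: {{.NCPU}}, Memory: {{.MemTotal}} bytes'
ShellSession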

Once we run the minikube start command, the cluster creation begins. You should see a similar output, if everything goes fine.

(screenshot: minikube-kubernetes-create)

Now, we can use Minikube with the kubectl tool:

$ kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:52879
CoreDNS is running at https://127.0.0.1:52879/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
ShellSession

We can display a list of running nodes:

$ kubectl get nodes
NAME           STATUS   ROLES           AGE     VERSION
minikube       Ready    control-plane   4h10m   v1.30.0
minikube-m02   Ready    <none>          4h9m    v1.30.0
minikube-m03   Ready    <none>          4h9m    v1.30.0
ShellSession

Sample Spring Boot App

Our Spring Boot app is simple. It exposes some REST endpoints for file-based operations on the target directory attached as a mounted volume. In order to expose REST endpoints, we need to include the Spring Boot Web starter. We will build the image using the Jib Maven plugin.

<properties>
  <spring-boot.version>3.3.1</spring-boot.version>
</properties>

<dependencies>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
  </dependency>
</dependencies>

<build>
  <plugins>
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
    </plugin>
    <plugin>
      <groupId>com.google.cloud.tools</groupId>
      <artifactId>jib-maven-plugin</artifactId>
      <version>3.4.3</version>
    </plugin>
  </plugins>
</build>

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-dependencies</artifactId>
      <version>${spring-boot.version}</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
XML

Let’s take a look at the main @RestController in our app. It exposes endpoints for listing all the files inside the target directory (GET /files/all), another one for creating a new file (POST /files/{name}), and also for adding a new string line to the existing file (POST /files/{name}/line).

package pl.piomin.services.files.controller;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.web.bind.annotation.*;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.List;

import static java.nio.file.Files.list;
import static java.nio.file.Files.writeString;

@RestController
@RequestMapping("/files")
public class FilesController {

    private static final Logger LOG = LoggerFactory.getLogger(FilesController.class);

    @Value("${MOUNT_PATH:/mount/data}")
    String root;

    @GetMapping("/all")
    public List<String> files() throws IOException {
        return list(Path.of(root)).map(Path::toString).toList();
    }

    @PostMapping("/{name}")
    public String createFile(@PathVariable("name") String name) throws IOException {
        return Files.createFile(Path.of(root + "/" + name)).toString();
    }

    @PostMapping("/{name}/line")
    public void addLine(@PathVariable("name") String name,
                        @RequestBody String line) {
        try {
            writeString(Path.of(root + "/" + name), line, StandardOpenOption.APPEND);
        } catch (IOException e) {
            LOG.error("Error while writing to file", e);
        }
    }
}
Java

Usually, I deploy apps on Kubernetes with Skaffold. But this time, there are some issues with the integration between the multi-node Minikube cluster and Skaffold. You can find a detailed description of those issues here. Therefore, we build the image directly with the Jib Maven plugin and then just deploy the app with the kubectl CLI.

Install Addons and Tools

Minikube comes with a set of predefined add-ons for Kubernetes. We can enable each of them with a single minikube addons enable <ADDON_NAME> command. Although several add-ons are available, we still need to install some useful Kubernetes-native tools, like Prometheus, in another way, for example with a Helm chart. In order to list all available add-ons, we should execute the following command:

$ minikube addons list
ShellSession

Install Addon for Storage

The default storage provisioner in Minikube doesn’t support multi-node clusters. It also doesn’t implement the CSI interface and is not able to handle volume snapshots. Fortunately, Minikube offers the csi-hostpath-driver addon for deploying the “CSI Hostpath Driver”. Since this add-on is disabled by default, we need to enable it.

$ minikube addons enable csi-hostpath-driver
ShellSession

Then, we can set csi-hostpath-sc as the default storage class for dynamic volume claims.

$ kubectl patch storageclass csi-hostpath-sc -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
$ kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
ShellSession

Install Monitoring Stack with Helm

The monitoring stack is not available as an add-on. However, we can easily install it using a Helm chart. We will use the official community kube-prometheus-stack chart for that. Firstly, let’s add the required repository.

$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
ShellSession

Then, we can install the Prometheus monitoring stack in the monitoring namespace by executing the following command:

$ helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  -n monitoring --create-namespace
ShellSession

Once you install Prometheus in your Minikube, you can take advantage of several default metrics exposed by this tool. For example, the Lens IDE automatically integrates with Prometheus metrics and displays graphs with a cluster overview.

(screenshot: minikube-kubernetes-cluster-metrics)

We can also see the visualization of resource usage for all running pods, deployments, or stateful sets.

(screenshot: minikube-kubernetes-pod-metrics)
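If you prefer to query Prometheus directly instead of through Lens, you can port-forward its service and use the HTTP API. A minimal sketch, assuming the default service name created by the kube-prometheus-stack chart:

$ kubectl port-forward svc/kube-prometheus-stack-prometheus 9090 -n monitoring
$ curl 'http://localhost:9090/api/v1/query?query=up'
ShellSession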

Install Postgres with Helm

We will also install the Postgres database for multi-node cluster testing purposes. Once again, there is a Helm chart that simplifies Postgres installation on Kubernetes. It is published in the Bitnami repository. Firstly, let’s add the required repository:

$ helm repo add bitnami https://charts.bitnami.com/bitnami
ShellSession

Then, we can install Postgres in the db namespace. We increase the default number of instances to 3.

$ helm install postgresql bitnami/postgresql \
  --set readReplicas.replicaCount=3 \
  -n db --create-namespace
ShellSession

The chart creates the StatefulSet object with 3 replicas.

$ kubectl get statefulset -n db
NAME         READY   AGE
postgresql   3/3     55m
ShellSession

We can display a list of running pods. As you see, Kubernetes scheduled 2 pods on the minikube-m02 node, and a single pod on the minikube node.

$ kubectl get po -n db -o wide
NAME           READY   STATUS    RESTARTS   AGE   IP            NODE 
postgresql-0   1/1     Running   0          56m   10.244.1.9    minikube-m02
postgresql-1   1/1     Running   0          23m   10.244.1.10   minikube-m02
postgresql-2   1/1     Running   0          23m   10.244.0.4    minikube
ShellSession

Under the hood, there are 3 persistent volumes created. They use the default csi-hostpath-sc storage class and the RWO access mode.

$ kubectl get pvc -n db
NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      VOLUMEATTRIBUTESCLASS   AGE
data-postgresql-0   Bound    pvc-e9b55ce8-978a-44ae-8fab-d5d6f911f1f9   8Gi        RWO            csi-hostpath-sc   <unset>                 65m
data-postgresql-1   Bound    pvc-d93af9ad-a034-4fbb-8377-f39005cddc99   8Gi        RWO            csi-hostpath-sc   <unset>                 32m
data-postgresql-2   Bound    pvc-b683f1dc-4cd9-466c-9c99-eb0d356229c3   8Gi        RWO            csi-hostpath-sc   <unset>                 32m
ShellSession

Build and Deploy Sample Spring Boot App on Minikube

In the first step, we build the app image. We use the Jib Maven plugin for that. I’m pushing the image to my own Docker registry account under the piomin name, so you should change it to your own registry account.

$ cd volumes/files-app
$ mvn clean compile jib:build -Dimage=piomin/files-app:latest
ShellSession

The image is successfully pushed to the remote registry and is available under the piomin/files-app:latest tag.

Let’s create a new namespace on Minikube. We will run our app in the demo namespace.

$ kubectl create ns demo
ShellSession

Then, let’s create the PersistentVolumeClaim. Since we will run multiple app pods distributed across all the Kubernetes nodes, and the same volume is shared between all the instances, we need the ReadWriteMany mode.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
  namespace: demo
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
YAML

Let’s apply the manifest and verify that the claim has been bound.
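A quick sketch, assuming the manifest above is saved as pvc.yaml (a hypothetical file name):

$ kubectl apply -f pvc.yaml
ShellSession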

$ kubectl get pvc -n demo
NAME   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      VOLUMEATTRIBUTESCLASS   AGE
data   Bound    pvc-08fe242a-6599-4282-b03c-ee38e092431e   1Gi        RWX            csi-hostpath-sc
ShellSession

After that, we can deploy our app. In order to spread the pods across all the cluster nodes, we define a PodAntiAffinity rule (1). It allows only a single app pod to run on each node. The deployment also mounts the data volume into all the app pods (2) (3).

apiVersion: apps/v1
kind: Deployment
metadata:
  name: files-app
  namespace: demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: files-app
  template:
    metadata:
      labels:
        app: files-app
    spec:
      # (1)
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - files-app
              topologyKey: "kubernetes.io/hostname"
      containers:
      - name: files-app
        image: piomin/files-app:latest
        imagePullPolicy: Always
        resources:
          requests:
            memory: 200Mi
            cpu: 100m
        ports:
        - containerPort: 8080
        env:
          - name: MOUNT_PATH
            value: /mount/data
        # (2)
        volumeMounts:
          - name: data
            mountPath: /mount/data
      # (3)
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: data
YAML

Let’s verify the list of pods running the app.

$ kubectl get po -n demo
NAME                         READY   STATUS    RESTARTS   AGE
files-app-84897d9b57-5qqdr   0/1     Pending   0          36m
files-app-84897d9b57-7gwgp   1/1     Running   0          36m
files-app-84897d9b57-bjs84   0/1     Pending   0          36m
ShellSession

Although we created the RWX volume, only a single pod is running. As you see, the CSI Hostpath Driver doesn’t fully support the read-write-many mode on Minikube.

In order to solve that problem, we can enable the Storage Provisioner Gluster addon in Minikube.

$ minikube addons enable storage-provisioner-gluster
ShellSession

After enabling it, several new pods are running in the storage-gluster namespace.

$ kubectl -n storage-gluster get pods
NAME                                       READY   STATUS    RESTARTS   AGE
glusterfile-provisioner-79cf7f87d5-87p57   1/1     Running   0          5m25s
glusterfs-d8pfp                            1/1     Running   0          5m25s
glusterfs-mp2qx                            1/1     Running   0          5m25s
glusterfs-rlnxz                            1/1     Running   0          5m25s
heketi-778d755cd-jcpqb                     1/1     Running   0          5m25s
ShellSession

Also, there is a new default StorageClass with the glusterfile name.

$ kubectl get sc
NAME                    PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
csi-hostpath-sc         hostpath.csi.k8s.io        Delete          Immediate           false                  20h
glusterfile (default)   gluster.org/glusterfile    Delete          Immediate           false                  19s
standard                k8s.io/minikube-hostpath   Delete          Immediate           false                  21h
ShellSession
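Recreating the data PVC with the new default storage class might look like this; a sketch only (since glusterfile is now the default StorageClass, the explicit storageClassName could also be omitted):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
  namespace: demo
spec:
  storageClassName: glusterfile
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
YAML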

Once we redeploy our app and recreate the PVC using a new default storage class, we can expose our sample Spring Boot app as a Kubernetes service:

apiVersion: v1
kind: Service
metadata:
  name: files-app
spec:
  selector:
    app: files-app
  ports:
  - port: 8080
    protocol: TCP
    name: http
  type: ClusterIP
YAML

Then, let’s enable port forwarding for that service to access it over the localhost:8080:

$ kubectl port-forward svc/files-app 8080 -n demo
ShellSession

Finally, we can run some tests to list and create some files on the target volume:

$ curl http://localhost:8080/files/all
[]

$ curl http://localhost:8080/files/test1.txt -X POST
/mount/data/test1.txt

$ curl http://localhost:8080/files/test2.txt -X POST
/mount/data/test2.txt

$ curl http://localhost:8080/files/all
["/mount/data/test1.txt","/mount/data/test2.txt"]

$ curl http://localhost:8080/files/test1.txt/line -X POST -d "hello1"
$ curl http://localhost:8080/files/test1.txt/line -X POST -d "hello2"
ShellSession

Finally, we can verify the content of a particular file inside the volume.
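One way to do that is to exec into one of the app pods and read the file from the mounted path; a sketch (the deployment name matches the manifest above):

$ kubectl exec deploy/files-app -n demo -- cat /mount/data/test1.txt
ShellSession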

Final Thoughts

In this article, I wanted to share my experience working with the multi-node Kubernetes cluster simulation on Minikube. It was a very quick introduction. I hope it helps 🙂

Backstage on Kubernetes
https://piotrminkowski.com/2024/06/28/backstage-on-kubernetes/
In this article, you will learn how to integrate Backstage with Kubernetes. We will run Backstage in two different ways. Firstly, it will run outside the cluster and connect with Kubernetes via the API. In the second scenario, we will deploy it directly on the cluster using the official Helm chart. Our instance of Backstage will connect to Argo CD and Prometheus deployed on Kubernetes to visualize the status of Argo CD synchronization and basic metrics related to the app.

This exercise continues the work described in my previous article about Backstage. So, before you start, you should read that article to understand the whole concept. In many places, I will refer to things that were described and done in the previous article. There, I describe how to configure and run Backstage, and also how to build a basic template for a sample Spring Boot app. You should be familiar with all those basic terms to fully understand what happens in the current exercise.

Source Code

If you would like to try it by yourself, you may always take a look at my source code. Our sample GitHub repository contains software templates written for the Backstage Scaffolder. In this article, we will analyze a template dedicated to Kubernetes, available in the templates/spring-boot-basic-on-kubernetes directory. After cloning this repository, you should just follow my instructions.

Here’s the structure of our repository. Besides the template, it also contains the Argo CD skeleton with YAML deployment manifests to apply on Kubernetes.

.
├── skeletons
│   └── argocd
│       └── manifests
│           ├── deployment.yaml
│           └── service.yaml
├── templates
│   └── spring-boot-basic-on-kubernetes
│       ├── skeleton
│       │   ├── README.md
│       │   ├── catalog-info.yaml
│       │   ├── k8s
│       │   │   ├── deployment.yaml
│       │   │   └── kind-cluster-test.yaml
│       │   ├── pom.xml
│       │   ├── renovate.json
│       │   ├── skaffold.yaml
│       │   └── src
│       │       ├── main
│       │       │   ├── java
│       │       │   │   └── ${{values.javaPackage}}
│       │       │   │       ├── Application.java
│       │       │   │       ├── controller
│       │       │   │       │   └── ${{values.domainName}}Controller.java
│       │       │   │       └── domain
│       │       │   │           └── ${{values.domainName}}.java
│       │       │   └── resources
│       │       │       └── application.yml
│       │       └── test
│       │           ├── java
│       │           │   └── ${{values.javaPackage}}
│       │           │       └── ${{values.domainName}}ControllerTests.java
│       │           └── resources
│       │               └── k6
│       │                   └── load-tests-add.js
│       └── template.yaml
└── templates.yaml
ShellSession

There is also another Git repository related to this article. It contains the modified source code of Backstage with several plugins installed and configured. The process of extending Backstage with plugins is described in detail in this article, so you can start from scratch and apply my instructions step by step. Alternatively, you can clone the final version of the code committed inside that repo and run it on your laptop.

Run and Prepare Kubernetes

Before we start with Backstage, we need to run and configure our instance of the Kubernetes cluster. It can be, for example, Minikube. Once you have a running cluster, you can obtain its control plane URL by executing the following command. As you see, my Minikube is available under the https://127.0.0.1:55782 address, so I will have to set it in the Backstage configuration later.

$ kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:55782
...
$ export K8S_URL=https://127.0.0.1:55782
ShellSession

We need to install Prometheus and Argo CD on our Kubernetes cluster. In order to install Prometheus, we will use the kube-prometheus-stack Helm chart. Firstly, we should add the Prometheus chart repository with the following command:

$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
ShellSession

Then, we can run the following command to install Prometheus in the monitoring namespace:

$ helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  --version 60.3.0 \
  -n monitoring --create-namespace
ShellSession

As with Prometheus, for Argo CD we first need to add the chart repository:

$ helm repo add argo https://argoproj.github.io/argo-helm
ShellSession

For Argo CD, we need to provide additional configuration inside the values.yaml file. We have to create a user for Backstage with privileges to call the HTTP API using apiKey authentication. It is required to automatically create an Argo CD Application from the Scaffolder template.

configs:
  cm:
    accounts.backstage: apiKey,login
  rbac:
    policy.csv: |
      p, backstage, applications, *, */*, allow
YAML

Let’s install Argo CD in the argocd namespace using the settings from the values.yaml file:

$ helm install argo-cd argo/argo-cd \
  --version 7.2.0 \
  -f values.yaml \
  -n argocd --create-namespace
ShellSession

That’s not all. We still need to generate the apiKey for the backstage user. Firstly, let’s enable port forwarding for both Argo CD and Prometheus services to access their APIs over localhost.

$ kubectl port-forward svc/argo-cd-argocd-server 8443:443 -n argocd
$ kubectl port-forward svc/kube-prometheus-stack-prometheus 9090 -n monitoring
ShellSession
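Signing in to Argo CD as the admin user might look like this; a sketch, assuming the initial admin password generated by the chart is stored in the argocd-initial-admin-secret Secret (the Argo CD Helm chart default):

$ kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath='{.data.password}' | base64 -d
$ argocd login localhost:8443 --username admin --password <admin_password> --insecure
ShellSession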

Once signed in as admin, we can generate the apiKey for the backstage user. We need to run the following command for the backstage account and export the generated token as the ARGOCD_TOKEN env variable:

$ argocd account generate-token --account backstage
$ export ARGOCD_TOKEN='argocd.token=<generated_token>'
ShellSession

Finally, let’s obtain the long-lived API token for Kubernetes by creating a secret:

apiVersion: v1
kind: Secret
metadata:
  name: default-token
  namespace: default
  annotations:
    kubernetes.io/service-account.name: default
type: kubernetes.io/service-account-token
YAML

Then, we can copy and export it as the K8S_TOKEN environment variable with the following command:

$ export K8S_TOKEN=$(kubectl get secret default-token -o go-template='{{.data.token | base64decode}}')
ShellSession

Just for testing purposes, we add the cluster-admin role to the default ServiceAccount.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: default-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
YAML

Modify App Source Code Skeleton for Kubernetes

First of all, we will modify several things in the application source code skeleton. In order to build the container image, we include the jib-maven-plugin in the Maven pom.xml. This plugin will be activated under the jib Maven profile.

<profiles>
  <profile>
    <id>jib</id>
    <activation>
      <activeByDefault>false</activeByDefault>
    </activation>
    <build>
      <plugins>
        <plugin>
          <groupId>com.google.cloud.tools</groupId>
          <artifactId>jib-maven-plugin</artifactId>
          <version>3.4.3</version>
          <configuration>
            <from>
              <image>eclipse-temurin:21-jdk-ubi9-minimal</image>
            </from>
          </configuration>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>
XML

Our source code repository will also contain the Skaffold configuration file. With Skaffold, we can easily build an image and deploy an app to Kubernetes in a single step. The address of the image depends on the orgName and appName parameters in the Scaffolder template. During the image build, we skip the tests and activate the Maven jib profile.

apiVersion: skaffold/v4beta5
kind: Config
metadata:
  name: ${{ values.appName }}
build:
  artifacts:
    - image: ${{ values.orgName }}/${{ values.appName }}
      jib:
        args:
          - -Pjib
          - -DskipTests
manifests:
  rawYaml:
    - k8s/deployment.yaml
deploy:
  kubectl: {}
YAML

In order to deploy the app on Kubernetes, Skaffold looks for the k8s/deployment.yaml manifest. We will use this deployment manifest only for development and automated test purposes. In “production”, we will keep the YAML manifests in a separate Git repository and apply them through Argo CD. Once we push a change to the source repository, CircleCI will try to deploy the app on a temporary Kind cluster. Therefore, our Service is exposed as a NodePort under the 30000 port (a sketch of the Kind cluster configuration is shown after the manifest below).

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ${{ values.appName }}
spec:
  selector:
    matchLabels:
      app: ${{ values.appName }}
  template:
    metadata:
      annotations:
        prometheus.io/path: /actuator/prometheus
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
      labels:
        app: ${{ values.appName }}
    spec:
      containers:
        - name: ${{ values.appName }}
          image: ${{ values.orgName }}/${{ values.appName }}
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              port: 8080
              path: /actuator/health/readiness
              scheme: HTTP
            timeoutSeconds: 1
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          resources:
            limits:
              memory: 1024Mi
---
apiVersion: v1
kind: Service
metadata:
  name: ${{ values.appName }}
spec:
  type: NodePort
  selector:
    app: ${{ values.appName }}
  ports:
    - port: 8080
      nodePort: 30000
YAML
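The k8s/kind-cluster-test.yaml file listed in the repository structure is not shown in this article. A minimal sketch of what such a configuration might look like, mapping the NodePort to the host so the app can be reached during tests:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 30000
        hostPort: 30000
YAML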

Let’s switch to the CircleCI configuration file. It also contains several changes related to Kubernetes. We need to include the image-push job responsible for building and pushing the app image to the target registry using Jib. We also include the deploy-k8s job to perform a test deployment to the Kind cluster. In this job, we have to install the Skaffold and Kind tools on the CircleCI executor machine. Once the Kind cluster is up and ready, we deploy the app there by executing the skaffold run command.

version: 2.1

jobs:
  analyze:
    docker:
      - image: 'cimg/openjdk:21.0.2'
    steps:
      - checkout
      - run:
          name: Analyze on SonarCloud
          command: mvn verify sonar:sonar -DskipTests
  test:
    executor: machine_executor_amd64
    steps:
      - checkout
      - run:
          name: Install OpenJDK 21
          command: |
            java -version
            sudo apt-get update && sudo apt-get install openjdk-21-jdk
            sudo update-alternatives --set java /usr/lib/jvm/java-21-openjdk-amd64/bin/java
            sudo update-alternatives --set javac /usr/lib/jvm/java-21-openjdk-amd64/bin/javac
            java -version
            export JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64
      - run:
          name: Maven Tests
          command: mvn test
  deploy-k8s:
    executor: machine_executor_amd64
    steps:
      - checkout
      - run:
          name: Install Kubectl
          command: |
            curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
            chmod +x kubectl
            sudo mv ./kubectl /usr/local/bin/kubectl
      - run:
          name: Install Skaffold
          command: |
            curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64
            chmod +x skaffold
            sudo mv skaffold /usr/local/bin
      - run:
          name: Install Kind
          command: |
            [ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64
            chmod +x ./kind
            sudo mv ./kind /usr/local/bin/kind
      - run:
          name: Install OpenJDK 21
          command: |
            java -version
            sudo apt-get update && sudo apt-get install openjdk-21-jdk
            sudo update-alternatives --set java /usr/lib/jvm/java-21-openjdk-amd64/bin/java
            sudo update-alternatives --set javac /usr/lib/jvm/java-21-openjdk-amd64/bin/javac
            java -version
            export JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64
      - run:
          name: Create Kind Cluster
          command: |
            kind create cluster --name c1 --config k8s/kind-cluster-test.yaml
      - run:
          name: Deploy to K8s
          command: |
            export JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64
            skaffold run
      - run:
          name: Delete Kind Cluster
          command: |
            kind delete cluster --name c1
  image-push:
    docker:
      - image: 'cimg/openjdk:21.0.2'
    steps:
      - checkout
      - run:
          name: Build and push image to DockerHub
          command: mvn compile jib:build -Pjib -Djib.to.image=${{ values.orgName }}/${{ values.appName }}:latest -Djib.to.auth.username=${DOCKER_LOGIN} -Djib.to.auth.password=${DOCKER_PASSWORD} -DskipTests

executors:
  machine_executor_amd64:
    machine:
      image: ubuntu-2204:2023.10.1
    environment:
      architecture: "amd64"
      platform: "linux/amd64"

workflows:
  maven_test:
    jobs:
      - test
      - analyze:
          context: SonarCloud
      - deploy-k8s:
          requires:
            - test
      - image-push:
          context: Docker
          requires:
            - deploy-k8s
YAML

Install Backstage Plugins for Kubernetes

In the previous article about Backstage, we learned how to install plugins for GitHub, CircleCI, and Sonarqube integration. We will still use those plugins but also extend our Backstage instance with some additional plugins dedicated mostly to the Kubernetes-native environment. We will install the following plugins: Kubernetes (backend + frontend), HTTP Request Action (backend), Argo CD (frontend), and Prometheus (frontend). Let’s begin with the Kubernetes plugin.

Install the Kubernetes Plugin

In the first step, we install the Kubernetes frontend plugin. It allows us to view the app pods running on Kubernetes in the Backstage UI. In order to install it, we need to execute the following yarn command:

$ yarn --cwd packages/app add @backstage/plugin-kubernetes
ShellSession

Then, we have to make some changes in the packages/app/src/components/catalog/EntityPage.tsx file. We should import the EntityKubernetesContent component, and then include it in the serviceEntityPage object as a new route on the frontend.

import { EntityKubernetesContent } from '@backstage/plugin-kubernetes';

const serviceEntityPage = (
  <EntityLayout>
    ...
    <EntityLayout.Route path="/kubernetes" title="Kubernetes">
      <EntityKubernetesContent refreshIntervalMs={30000} />
    </EntityLayout.Route>
    ...
  </EntityLayout>
);
TypeScript

We also need to install the Kubernetes backend plugin to make it work on the frontend side. Here’s the required yarn command:

$ yarn --cwd packages/backend add @backstage/plugin-kubernetes-backend
ShellSession

Then, we should register the plugin-kubernetes-backend module in the packages/backend/src/index.ts file.

import { createBackend } from '@backstage/backend-defaults';

const backend = createBackend();

backend.add(import('@backstage/plugin-app-backend/alpha'));
backend.add(import('@backstage/plugin-proxy-backend/alpha'));
backend.add(import('@backstage/plugin-scaffolder-backend/alpha'));
backend.add(import('@backstage/plugin-techdocs-backend/alpha'));
backend.add(import('@backstage/plugin-auth-backend'));
backend.add(import('@backstage/plugin-auth-backend-module-guest-provider'));
backend.add(import('@backstage/plugin-catalog-backend/alpha'));
backend.add(
  import('@backstage/plugin-catalog-backend-module-scaffolder-entity-model'),
);
backend.add(import('@backstage/plugin-permission-backend/alpha'));
backend.add(import('@backstage/plugin-permission-backend-module-allow-all-policy'));
backend.add(import('@backstage/plugin-search-backend/alpha'));
backend.add(import('@backstage/plugin-search-backend-module-catalog/alpha'));
backend.add(import('@backstage/plugin-search-backend-module-techdocs/alpha'));

backend.add(import('@backstage/plugin-scaffolder-backend-module-github'));
backend.add(import('@backstage-community/plugin-sonarqube-backend'));
backend.add(import('@backstage/plugin-kubernetes-backend/alpha'));

backend.start();
TypeScript

Install the Argo CD Plugin

We also integrate our instance of Backstage with Argo CD running on Kubernetes. Firstly, we should execute the following yarn command:

$ yarn --cwd packages/app add @roadiehq/backstage-plugin-argo-cd
ShellSession

Then, we need to update the EntityPage.tsx file. We will add the EntityArgoCDOverviewCard component inside the overviewContent object.

import {
  EntityArgoCDOverviewCard,
  isArgocdAvailable
} from '@roadiehq/backstage-plugin-argo-cd';

const overviewContent = (
  <Grid container spacing={3} alignItems="stretch">
  ...
    <EntitySwitch>
      <EntitySwitch.Case if={e => Boolean(isArgocdAvailable(e))}>
        <Grid item sm={4}>
          <EntityArgoCDOverviewCard />
        </Grid>
      </EntitySwitch.Case>
    </EntitySwitch>
  ...
  </Grid>
);
TSX

Install Prometheus Plugin

The steps for the Prometheus Plugin are pretty similar to those for the Argo CD Plugin. Firstly, we should execute the following yarn command:

$ yarn --cwd packages/app add @roadiehq/backstage-plugin-prometheus
ShellSession

Then, we need to update the EntityPage.tsx file. We will add the EntityPrometheusContent component inside the serviceEntityPage object.

import {
  EntityPrometheusContent,
} from '@roadiehq/backstage-plugin-prometheus';

const serviceEntityPage = (
  <EntityLayout>
    ...
    <EntityLayout.Route path="/kubernetes" title="Kubernetes">
      <EntityKubernetesContent refreshIntervalMs={30000} />
    </EntityLayout.Route>
    <EntityLayout.Route path="/prometheus" title="Prometheus">
      <EntityPrometheusContent />
    </EntityLayout.Route>
    ...
  </EntityLayout>
);
TSX

Install HTTP Request Action Plugin

This plugin is not related to Kubernetes. It allows us to integrate with third-party solutions through their HTTP APIs. As you probably remember, we have already integrated with Sonarcloud and CircleCI in the Backstage UI. However, we didn’t create any projects there. We could just view the history of builds or scans for previously created projects in Sonarcloud or CircleCI. It’s time to change that in our template! Thanks to the HTTP Request Action plugin, we will create the Argo CD Application through the REST API. As always, we need to execute the yarn add command to install the backend plugin:

$ yarn --cwd packages/backend add @roadiehq/scaffolder-backend-module-http-request
ShellSession

Then, we will register it in the index.ts file:

import { createBackend } from '@backstage/backend-defaults';

const backend = createBackend();

backend.add(import('@backstage/plugin-app-backend/alpha'));
backend.add(import('@backstage/plugin-proxy-backend/alpha'));
backend.add(import('@backstage/plugin-scaffolder-backend/alpha'));
backend.add(import('@backstage/plugin-techdocs-backend/alpha'));
backend.add(import('@backstage/plugin-auth-backend'));
backend.add(import('@backstage/plugin-auth-backend-module-guest-provider'));
backend.add(import('@backstage/plugin-catalog-backend/alpha'));
backend.add(
  import('@backstage/plugin-catalog-backend-module-scaffolder-entity-model'),
);
backend.add(import('@backstage/plugin-permission-backend/alpha'));
backend.add(
  import('@backstage/plugin-permission-backend-module-allow-all-policy'),
);
backend.add(import('@backstage/plugin-search-backend/alpha'));
backend.add(import('@backstage/plugin-search-backend-module-catalog/alpha'));
backend.add(import('@backstage/plugin-search-backend-module-techdocs/alpha'));

backend.add(import('@backstage/plugin-scaffolder-backend-module-github'));
backend.add(import('@backstage-community/plugin-sonarqube-backend'));
backend.add(import('@backstage/plugin-kubernetes-backend/alpha'));
backend.add(import('@roadiehq/scaffolder-backend-module-http-request/new-backend'));

backend.start();
TypeScript

After that, we can modify the Scaffolder template used in the previous article with some additional steps.

Prepare Backstage Template for Kubernetes

Once we have all the things in place, we can modify a previous template for the standard Spring Boot app to adapt it to the Kubernetes requirements.

Create Scaffolder Template

First of all, we add a single input parameter that indicates the target namespace in Kubernetes for running our app (1). Then, we include some additional action steps. In the first one, we generate the repository with the YAML configuration manifests for Argo CD (2). Then, we publish that repository on GitHub under the ${{parameters.appName}}-config name (3).

After that, we use the HTTP Request Action plugin to automatically follow the new repository in CircleCI (5). Once we create such a repository in the previous step, CircleCI automatically starts a build after detecting it. We also use the HTTP Request Action plugin to create a new project on Sonarcloud based on the ${{parameters.appName}} parameter (4). Finally, we integrate with Argo CD through the API to create a new Application responsible for applying the app Deployment to Kubernetes (6). This Argo CD Application accesses the previously published config repository with the -config suffix in the name and applies the manifests inside the manifests directory.

apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
  name: spring-boot-basic-on-kubernetes-template
  title: Create a Spring Boot app for Kubernetes
  description: Create a Spring Boot app for Kubernetes
  tags:
    - spring-boot
    - java
    - maven
    - circleci
    - renovate
    - sonarqube
    - kubernetes
    - argocd
spec:
  owner: piomin
  system: microservices
  type: service

  parameters:
    - title: Provide information about the new component
      required:
        - orgName
        - appName
        - domainName
        - repoBranchName
        - groupId
        - javaPackage
        - apiPath
        - namespace
        - description
      properties:
        orgName:
          title: Organization name
          type: string
          default: piomin
        appName:
          title: App name
          type: string
          default: sample-spring-boot-app-k8s
        domainName:
          title: Name of the domain object
          type: string
          default: Person
        repoBranchName:
          title: Name of the branch in the Git repository
          type: string
          default: master
        groupId:
          title: Maven Group ID
          type: string
          default: pl.piomin.services
        javaPackage:
          title: Java package directory
          type: string
          default: pl/piomin/services
        apiPath:
          title: REST API path
          type: string
          default: /api/v1
        # (1)
        namespace:
          title: The target namespace on Kubernetes
          type: string
          default: demo
        description:
          title: Description
          type: string
          default: Spring Boot App Generated by Backstage
  steps:
    - id: sourceCodeTemplate
      name: Generating the Source Code Component
      action: fetch:template
      input:
        url: ./skeleton
        values:
          orgName: ${{ parameters.orgName }}
          appName: ${{ parameters.appName }}
          domainName: ${{ parameters.domainName }}
          groupId: ${{ parameters.groupId }}
          javaPackage: ${{ parameters.javaPackage }}
          apiPath: ${{ parameters.apiPath }}

    - id: publish
      name: Publishing to the Source Code Repository
      action: publish:github
      input:
        allowedHosts: ['github.com']
        description: ${{ parameters.description }}
        repoUrl: github.com?owner=${{ parameters.orgName }}&repo=${{ parameters.appName }}
        defaultBranch: ${{ parameters.repoBranchName }}
        repoVisibility: public

    - id: register
      name: Registering the Catalog Info Component
      action: catalog:register
      input:
        repoContentsUrl: ${{ steps.publish.output.repoContentsUrl }}
        catalogInfoPath: /catalog-info.yaml

    # (2)
    - id: configCodeTemplate
      name: Generating the Config Code Component
      action: fetch:template
      input:
        url: ../../skeletons/argocd
        values:
          orgName: ${{ parameters.orgName }}
          appName: ${{ parameters.appName }}
        targetPath: ./gitops

    # (3)
    - id: publish
      name: Publishing to the Config Code Repository
      action: publish:github
      input:
        allowedHosts: ['github.com']
        description: ${{ parameters.description }}
        repoUrl: github.com?owner=${{ parameters.orgName }}&repo=${{ parameters.appName }}-config
        defaultBranch: ${{ parameters.repoBranchName }}
        sourcePath: ./gitops
        repoVisibility: public

    # (4)
    - id: sonarqube
      name: Follow new project on Sonarcloud
      action: http:backstage:request
      input:
        method: 'POST'
        path: '/proxy/sonarqube/projects/create?name=${{ parameters.appName }}&organization=${{ parameters.orgName }}&project=${{ parameters.orgName }}_${{ parameters.appName }}'
        headers:
          content-type: 'application/json'

    # (5)
    - id: circleci
      name: Follow new project on CircleCI
      action: http:backstage:request
      input:
        method: 'POST'
        path: '/proxy/circleci/api/project/gh/${{ parameters.orgName }}/${{ parameters.appName }}/follow'
        headers:
          content-type: 'application/json'

    # (6)
    - id: argocd
      name: Create New Application in Argo CD
      action: http:backstage:request
      input:
        method: 'POST'
        path: '/proxy/argocd/api/applications'
        headers:
          content-type: 'application/json'
        body:
          metadata:
            name: ${{ parameters.appName }}
            namespace: argocd
          spec:
            project: default
            source:
              # (7)
              repoURL: https://github.com/${{ parameters.orgName }}/${{ parameters.appName }}-config.git
              targetRevision: master
              path: manifests
            destination:
              server: https://kubernetes.default.svc
              namespace: ${{ parameters.namespace }}
            syncPolicy:
              automated:
                prune: true
                selfHeal: true
              syncOptions:
                - CreateNamespace=true

  output:
    links:
      - title: Open the Source Code Repository
        url: ${{ steps.publish.output.remoteUrl }}
      - title: Open the Catalog Info Component
        icon: catalog
        entityRef: ${{ steps.register.output.entityRef }}
YAML

Create Catalog Component

Our catalog-info.yaml file should contain several additional annotations related to the plugins installed in the previous section. The argocd/app-name annotation indicates the name of the target Argo CD Application responsible for deployment on Kubernetes. The backstage.io/kubernetes-id annotation contains the value of the label used to find the app pods on Kubernetes, which are then displayed in the Backstage UI. Finally, the prometheus.io/rule annotation contains a comma-separated list of Prometheus queries. We will create graphs displaying app pod CPU and memory usage.

apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: ${{ values.appName }}
  title: ${{ values.appName }}
  annotations:
    circleci.com/project-slug: github/${{ values.orgName }}/${{ values.appName }}
    github.com/project-slug: ${{ values.orgName }}/${{ values.appName }}
    sonarqube.org/project-key: ${{ values.orgName }}_${{ values.appName }}
    backstage.io/kubernetes-id: ${{ values.appName }}
    argocd/app-name: ${{ values.appName }}
    prometheus.io/rule: container_memory_usage_bytes{pod=~"${{ values.appName }}-.*"}|pod,rate(container_cpu_usage_seconds_total{pod=~"${{ values.appName }}-.*"}[5m])|pod
  tags:
    - spring-boot
    - java
    - maven
    - circleci
    - renovate
    - sonarqube
spec:
  type: service
  owner: piotr.minkowski@gmail.com
  lifecycle: experimental
YAML

Provide Configuration Settings

We need to include several configuration settings inside the app-config.yaml file. It includes the proxy section, which should contain all APIs required by the HTTP Request Action plugin and the frontend plugins. We should include proxy addresses for CircleCI (1), Sonarcloud (2), Argo CD (3), and Prometheus (4). After that, we include the address of our Scaffolder template (5). We also have to include the kubernetes section with the address of the Minikube cluster and the previously generated service account token (6).

app:
  title: Scaffolded Backstage App
  baseUrl: http://localhost:3000

organization:
  name: piomin

backend:
  baseUrl: http://localhost:7007
  listen:
    port: 7007
  csp:
    connect-src: ["'self'", 'http:', 'https:']
  cors:
    origin: http://localhost:3000
    methods: [GET, HEAD, PATCH, POST, PUT, DELETE]
    credentials: true
  database:
    client: better-sqlite3
    connection: ':memory:'

integrations:
  github:
    - host: github.com
      token: ${GITHUB_TOKEN}

proxy:
  # (1)
  '/circleci/api':
    target: https://circleci.com/api/v1.1
    headers:
      Circle-Token: ${CIRCLECI_TOKEN}
  # (2)
  '/sonarqube':
    target: https://sonarcloud.io/api
    allowedMethods: [ 'GET', 'POST' ]
    auth: "${SONARCLOUD_TOKEN}:"
  # (3)
  '/argocd/api':
    target: https://localhost:8443/api/v1/
    changeOrigin: true
    secure: false
    headers:
      Cookie:
        $env: ARGOCD_TOKEN
  # (4)
  '/prometheus/api':
    target: http://localhost:9090/api/v1/

auth:
  providers:
    guest: {}

catalog:
  import:
    entityFilename: catalog-info.yaml
    pullRequestBranchName: backstage-integration
  rules:
    - allow: [Component, System, API, Resource, Location]
  locations:
    - type: file
      target: ../../examples/entities.yaml

    - type: file
      target: ../../examples/template/template.yaml
      rules:
        - allow: [Template]
    
    # (5)
    - type: url
      target: https://github.com/piomin/backstage-templates/blob/master/templates/spring-boot-basic-on-kubernetes/template.yaml
      rules:
        - allow: [ Template ]

    - type: file
      target: ../../examples/org.yaml
      rules:
        - allow: [User, Group]


sonarqube:
  baseUrl: https://sonarcloud.io
  apiKey: ${SONARCLOUD_TOKEN}

# (6)
kubernetes:
  serviceLocatorMethod:
    type: 'multiTenant'
  clusterLocatorMethods:
    - type: 'config'
      clusters:
        - url: ${K8S_URL}
          name: minikube
          authProvider: 'serviceAccount'
          skipTLSVerify: false
          skipMetricsLookup: true
          serviceAccountToken: ${K8S_TOKEN}
          dashboardApp: standard
          caFile: '/Users/pminkows/.minikube/ca.crt'
YAML

Build Backstage Image

Our source code repository with Backstage contains all the required plugins and the configuration. Now, we will build it using the yarn tool. Here’s a list of required commands to perform a build.

$ yarn clean
$ yarn install
$ yarn tsc
$ yarn build:backend 
ShellSession

The repository with Backstage already contains the Dockerfile. You can find it in the packages/backend directory.

FROM node:18-bookworm-slim

RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,target=/var/lib/apt,sharing=locked \
    apt-get update && \
    apt-get install -y --no-install-recommends python3 g++ build-essential && \
    yarn config set python /usr/bin/python3

RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,target=/var/lib/apt,sharing=locked \
    apt-get update && \
    apt-get install -y --no-install-recommends libsqlite3-dev

USER node

WORKDIR /app

ENV NODE_ENV production

COPY --chown=node:node yarn.lock package.json packages/backend/dist/skeleton.tar.gz ./
RUN tar xzf skeleton.tar.gz && rm skeleton.tar.gz

RUN --mount=type=cache,target=/home/node/.cache/yarn,sharing=locked,uid=1000,gid=1000 \
    yarn install --frozen-lockfile --production --network-timeout 300000

COPY --chown=node:node packages/backend/dist/bundle.tar.gz app-config*.yaml ./
RUN tar xzf bundle.tar.gz && rm bundle.tar.gz

CMD ["node", "packages/backend", "--config", "app-config.yaml"]
Dockerfile

In order to build the image using the Dockerfile from the packages/backend directory, we need to run the following command from the project root directory.

$ yarn build-image
ShellSession

If the command finishes without errors, the build was successful.

The image is available locally as backstage:latest. We can run it on Docker with the following command:

$ docker run -it -p 7007:7007 \
  -e GITHUB_TOKEN=${GITHUB_TOKEN} \
  -e SONARCLOUD_TOKEN=${SONARCLOUD_TOKEN} \
  -e CIRCLECI_TOKEN=${CIRCLECI_TOKEN} \
  -e ARGOCD_TOKEN=${ARGOCD_TOKEN} \
  -e K8S_TOKEN=${K8S_TOKEN} \
  -e K8S_URL=${K8S_URL} \
  -e NODE_ENV=development \
  backstage:latest
ShellSession

However, our main goal today is to run it directly on Kubernetes. You can find our custom Backstage image in my Docker registry: piomin/backstage:latest.

Deploy Backstage on Kubernetes

We will use the official Helm chart for installing Backstage on Kubernetes. In the first step, let’s add the following chart repository:

$ helm repo add backstage https://backstage.github.io/charts
ShellSession

Here’s our values.yaml file for the Helm installation. We need to set all the required tokens as extra environment variables inside the Backstage pod. We also change the default image used in the installation to the previously built custom image. To simplify the exercise, we can disable the external database and use the internal SQLite instance. It is also possible to pass extra configuration files by defining them in a ConfigMap (my-app-config), without rebuilding the Docker image.

backstage:
  extraEnvVars:
    - name: NODE_ENV
      value: development
    - name: GITHUB_TOKEN
      value: ${GITHUB_TOKEN}
    - name: SONARCLOUD_TOKEN
      value: ${SONARCLOUD_TOKEN}
    - name: CIRCLECI_TOKEN
      value: ${CIRCLECI_TOKEN}
    - name: ARGOCD_TOKEN
      value: ${ARGOCD_TOKEN}
  image:
    registry: docker.io
    repository: piomin/backstage
  extraAppConfig:
    - filename: app-config.extra.yaml
      configMapRef: my-app-config
postgresql:
  enabled: false
YAML

We will change the addresses of the Kubernetes cluster, Argo CD, and Prometheus to the internal cluster locations by modifying the app-config.yaml file.

proxy:
  ...
  '/argocd/api':
    target: https://argo-cd-argocd-server.argocd.svc/api/v1/
    changeOrigin: true
    secure: false
    headers:
      Cookie:
        $env: ARGOCD_TOKEN
  '/prometheus/api':
    target: http://kube-prometheus-stack-prometheus.monitoring.svc:9090/api/v1/

catalog:
  locations:
    ...
    - type: url
      target: https://github.com/piomin/backstage-templates/blob/master/templates/spring-boot-basic-on-kubernetes/template.yaml
      rules:
        - allow: [ Template ]
            
kubernetes:
  serviceLocatorMethod:
    type: 'multiTenant'
  clusterLocatorMethods:
    - type: 'config'
      clusters:
        - url: https://kubernetes.default.svc
          name: minikube
          authProvider: 'serviceAccount'
          skipTLSVerify: false
          skipMetricsLookup: true
app-config-kubernetes.yaml

Then, we will create the backstage namespace and an extra ConfigMap that contains the new configuration for the Backstage instance running inside the Kubernetes cluster.

$ kubectl create ns backstage
$ kubectl create configmap my-app-config \
  --from-file=app-config.extra.yaml=app-config-kubernetes.yaml -n backstage
ShellSession

Finally, let’s install our custom instance of Backstage in the backstage namespace by executing the following command:

$ envsubst < values.yaml | helm install backstage backstage/backstage \
  --values - -n backstage
ShellSession

As a result, there is a running Backstage pod on Kubernetes:

$ kubectl get po -n backstage
NAME                         READY   STATUS    RESTARTS   AGE
backstage-7bfbc55647-8cj5d   1/1     Running   0          16m
ShellSession

Let’s enable port forwarding to access the Backstage UI on the http://localhost:7007:

$ kubectl port-forward svc/backstage 7007 -n backstage
ShellSession

This time, we increase the privileges of the default ServiceAccount in the backstage namespace, which is used by our instance of Backstage:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: default-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: backstage
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
YAML

Final Test

After accessing Backstage UI we can create a new Spring Boot app from the template. Choose the “Create a Spring Boot app for Kubernetes” template as shown below:

(screenshot: backstage-kubernetes-create)

If you would like to try it by yourself, you need to change the organization name to your GitHub account name. Then click “Review” and “Create” on the next page.

There are two GitHub repositories created. The first one contains the sample app source code.

(screenshot: backstage-kubernetes-repo)

The second one contains YAML manifests with Deployment for Argo CD.

The Argo CD Application is automatically created. We can verify the synchronization status in the Backstage UI.

(screenshot: backstage-kubernetes-argocd)

Our application is running in the demo namespace. We can display a list of pods in the “KUBERNETES” tab.

(screenshot: backstage-kubernetes-pod)

We can also verify the detailed status of each pod.

(screenshot: backstage-kubernetes-pod-status)

Or take a look at the logs.

Final Thoughts

In this article, we learned how to install and integrate Backstage with Kubernetes-native services like Argo CD or Prometheus. We built the customized image with Backstage and then deployed it on Kubernetes using the Helm chart.

The post Backstage on Kubernetes appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2024/06/28/backstage-on-kubernetes/feed/ 0 15291
Getting Started with Azure Kubernetes Service https://piotrminkowski.com/2024/02/05/getting-started-with-azure-kubernetes-service/ https://piotrminkowski.com/2024/02/05/getting-started-with-azure-kubernetes-service/#respond Mon, 05 Feb 2024 11:12:47 +0000 https://piotrminkowski.com/?p=14887 In this article, you will learn how to create and manage a Kubernetes cluster on Azure and run your apps on it. We will focus on the Azure features that simplify Kubernetes adoption. We will discuss such topics as enabling monitoring based on Prometheus or exposing an app outside of the cluster using the Ingress […]

The post Getting Started with Azure Kubernetes Service appeared first on Piotr's TechBlog.

In this article, you will learn how to create and manage a Kubernetes cluster on Azure and run your apps on it. We will focus on the Azure features that simplify Kubernetes adoption. We will discuss such topics as enabling monitoring based on Prometheus or exposing an app outside of the cluster using the Ingress object and Azure mechanisms. To proceed with that article, you don’t need to have a deep knowledge of Kubernetes. However, you may find a lot of articles about Kubernetes and cloud-native development on my blog. For example, if you are developing Java apps and running them on Kubernetes you may read the following article about best practices.

On the other hand, if you are interested in Azure and looking for some other approaches for running Java apps there, you can also refer to some previous posts on my blog. I have already described how to use such services as Azure Spring Apps or Azure Function for Java. For example, in that article, you can read how to integrate Spring Boot with Azure services using the Spring Cloud Azure project. For more information about Azure Function and Spring Cloud refer to that article.

Source Code

This time we won’t work much with the source code. However, if you would like to try it by yourself, you may always take a look at my source code. In order to do that, you need to clone my GitHub repository. After that, you should follow my further instructions.

Create Cluster with Azure Kubernetes Service

After signing in to the Azure Portal, we can create a resource group for our cluster. The name of my resource group is aks. Then we need to find Azure Kubernetes Service (AKS) in the marketplace. We create the AKS instance in the aks resource group.

We will be redirected to the first page of the creation wizard. I will just put in the name of my cluster and leave all the other settings at the recommended values. The name of my cluster is piomin. The default cluster preset configuration is “Dev/Test”, which is enough for our exercise. However, if you choose e.g. the “Production Standard” preset, it will set 3 availability zones and change the pricing tier for your cluster. Let’s click the “Next” button to proceed to the next page.

azure-kubernetes-install-general

We also won’t change anything in the “Node pools” section. On the “Networking” page, we choose “Azure CNI” instead of "Kubenet" as the network configuration, and “Azure” instead of “Calico” as the network policy. Compared to Kubenet, Azure CNI simplifies integration between Kubernetes and the Azure Application Gateway.

We will also make some changes in the Monitoring section. The main goal here is to enable the managed Prometheus service for our cluster. In order to do that, we need to create a new workspace in Azure Monitor. The name of my workspace is prometheus.

That’s all that we needed. Finally, we can create our first AKS cluster.

After a few minutes, our Kubernetes cluster is ready. We can display a list of resources created inside the aks group. As you can see, there are some resources related to Prometheus and Azure Monitor, and a single Kubernetes service “piomin”. It is our Kubernetes cluster. We can click it to see the details.

azure-kubernetes-resources-aks

Of course, we can manage the cluster using Azure Portal. However, we can also easily switch to the kubectl CLI. Here’s the Kubernetes API server address for our cluster: piomin-xq30re6n.hcp.eastus.azmk8s.io.

Manage AKS with CLI

We can easily import the AKS cluster credentials into our local Kubeconfig file with the following az command (piomin is the name of the cluster, while aks is the name of the resource group):

$ az aks get-credentials -n piomin -g aks

Once you execute the command visible above, it will add a new kubeconfig context or override the existing one.
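
Just to be sure, we can check which context is currently active:

$ kubectl config current-context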

After that, we can switch to the kubectl CLI. For example, we can display a list of Deployments across all the namespaces:

$ kubectl get deploy -A
NAMESPACE     NAME                 READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   ama-logs-rs          1/1     1            1           56m
kube-system   ama-metrics          1/1     1            1           52m
kube-system   ama-metrics-ksm      1/1     1            1           52m
kube-system   coredns              2/2     2            2           58m
kube-system   coredns-autoscaler   1/1     1            1           58m
kube-system   konnectivity-agent   2/2     2            2           58m
kube-system   metrics-server       2/2     2            2           58m

Deploy Sample Apps on the AKS Cluster

Once we can interact with the Kubernetes cluster on Azure through the kubectl CLI, we can run our first app there. In order to do it, first go to the callme-service directory. It contains a simple Spring Boot app that exposes REST endpoints. The Kubernetes manifests are located inside the k8s directory. Let’s take a look at the deployment YAML manifest. It contains the Kubernetes Deployment and Service objects.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: callme-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: callme-service
  template:
    metadata:
      labels:
        app: callme-service
    spec:
      containers:
        - name: callme-service
          image: piomin/callme-service
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          env:
            - name: VERSION
              value: "v1"
---
apiVersion: v1
kind: Service
metadata:
  name: callme-service
  labels:
    app: callme-service
spec:
  type: ClusterIP
  ports:
  - port: 8080
    name: http
  selector:
    app: callme-service

In order to simplify deployment on Kubernetes we can use Skaffold. It integrates with the kubectl CLI. We just need to execute the following command to build the app from the source code and run it on AKS:

$ cd callme-service
$ skaffold run

After that, we will deploy a second app on the cluster. Go to the caller-service directory. Here’s the YAML manifest with Kubernetes Service and Deployment.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: caller-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: caller-service
  template:
    metadata:
      name: caller-service
      labels:
        app: caller-service
    spec:
      containers:
      - name: caller-service
        image: piomin/caller-service
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
        env:
          - name: VERSION
            value: "v1"
---
apiVersion: v1
kind: Service
metadata:
  name: caller-service
  labels:
    app: caller-service
spec:
  type: ClusterIP
  ports:
    - port: 8080
      name: http
  selector:
    app: caller-service

The caller-service app invokes an endpoint exposed by the callme-service app. Here’s the implementation of Spring @RestController responsible for that:

@RestController
@RequestMapping("/caller")
public class CallerController {

   private static final Logger LOGGER = LoggerFactory
      .getLogger(CallerController.class);

   @Autowired
   Optional<BuildProperties> buildProperties;
   @Autowired
   RestTemplate restTemplate;
   @Value("${VERSION}")
   private String version;

   @GetMapping("/ping")
   public String ping() {
      LOGGER.info("Ping: name={}, version={}", 
          buildProperties.or(Optional::empty), version);
      String response = restTemplate.getForObject(
         "http://callme-service:8080/callme/ping", String.class);
      LOGGER.info("Calling: response={}", response);
      return "I'm caller-service " + version + ". Calling... " + response;
   }

}

Once again, let’s build and deploy the app on Kubernetes with the Skaffold CLI:

$ cd caller-service
$ skaffold run

Let’s switch to the Azure Portal. On the Azure Kubernetes Service page, go to the Workloads section. As you can see, there are two Deployments: callme-service and caller-service.

We can switch to the pods view.

Monitoring with Managed Prometheus

In order to access Prometheus metrics for our AKS cluster, we need to go to the prometheus Azure Monitor workspace. In the first step, let’s take a look at the list of clusters assigned to that workspace.

Then, we can switch to the “Prometheus explorer” section. It allows us to provide a PromQL query to see a diagram illustrating the selected metric. You will find a full list of metrics collected for the AKS cluster in the following article. For example, we can visualize the RAM usage of both our apps running in the default namespace. In order to do that, we should use the node_namespace_pod_container:container_memory_working_set_bytes metric as shown below.
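
For reference, the raw PromQL expression may look like the one below. The namespace filter is my assumption, since both apps run in the default namespace:

node_namespace_pod_container:container_memory_working_set_bytes{namespace="default"}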

azure-kubernetes-prometheus

Exposing App Outside Azure Kubernetes

Install Azure Application Gateway on Kubernetes

In order to expose the service outside of AKS, we need to create an Ingress object. However, we must have an ingress controller installed on the cluster to satisfy an Ingress. Since we are running the cluster on Azure, our natural choice is the AKS Application Gateway Ingress Controller that configures the Azure Application Gateway. We can install it through the Azure Portal. Go to your AKS cluster page and then switch to the “Networking” section. After that, just select the “Enable ingress controller” checkbox. The new ingress-appgateway object will be created and assigned to the AKS cluster.

azure-kubernetes-gateway

Once it is ready, you can display its details. The ingress-appgateway object exists in the same virtual network as the Azure Kubernetes Service. There is a dedicated resource group – in my case MC_aks_piomin_eastus. The gateway has a public IP address assigned. For me, it is 20.253.111.153, as shown below.

After installing the Azure Application Gateway addon on AKS, there is a new Deployment ingress-appgw-deployment responsible for integration between the cluster and Azure Application Gateway service. It is our ingress controller.
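
We can verify it with kubectl, assuming the add-on installs the Deployment in the kube-system namespace:

$ kubectl get deploy -n kube-system ingress-appgw-deployment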

Create Kubernetes Ingress

There is also a default IngressClass object installed on the cluster. We can display a list of available ingress classes by executing the command visible below. Our IngressClass object is available under the azure-application-gateway name.

$ kubectl get ingressclass
NAME                        CONTROLLER                  PARAMETERS   AGE
azure-application-gateway   azure/application-gateway   <none>       18m

Let’s take a look at the Ingress manifest. It contains several standard fields inside the spec.rules.* section. It exposes the callme-service Kubernetes Service on port 8080. Our Ingress object needs to refer to the azure-application-gateway IngressClass. The Azure Application Gateway Ingress Controller (AGIC) will watch such an object. Once we apply the manifest, AGIC will automatically configure the Application Gateway instance. The Application Gateway performs health checks to verify the status of the backend. Since the Spring Boot app exposes a liveness endpoint at the /actuator/health/liveness path on port 8080, we need to override the default settings. In order to do it, we need to use the appgw.ingress.kubernetes.io/health-probe-path and appgw.ingress.kubernetes.io/health-probe-port annotations.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: callme-ingress
  namespace: default
  annotations:
    appgw.ingress.kubernetes.io/health-probe-hostname: localhost
    appgw.ingress.kubernetes.io/health-probe-path: /actuator/health/liveness
    appgw.ingress.kubernetes.io/health-probe-port: '8080'
spec:
  ingressClassName: azure-application-gateway
  rules:
    - http:
        paths:
          - path: /callme
            pathType: Prefix
            backend:
              service:
                name: callme-service
                port:
                  number: 8080

The Ingress for the caller-service is very similar. We just need to change the path and the name of the backend service.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: caller-ingress
  namespace: default
  annotations:
    appgw.ingress.kubernetes.io/health-probe-hostname: localhost
    appgw.ingress.kubernetes.io/health-probe-path: /actuator/health/liveness
    appgw.ingress.kubernetes.io/health-probe-port: '8080'
spec:
  ingressClassName: azure-application-gateway
  rules:
    - http:
        paths:
          - path: /caller
            pathType: Prefix
            backend:
              service:
                name: caller-service
                port:
                  number: 8080
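
Assuming the manifests are saved as callme-ingress.yaml and caller-ingress.yaml (the file names are my assumption), we can apply them and list the created objects:

$ kubectl apply -f callme-ingress.yaml
$ kubectl apply -f caller-ingress.yaml
$ kubectl get ingress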

Let’s take a look at the list of ingresses in the Azure Portal. They are available under the same address and port. There is just a difference in the target context path.

azure-kubernetes-ingress

We can test both services using the gateway IP address and the right context path. Each app exposes the GET /ping endpoint.

$ curl http://20.253.111.153/callme/ping
$ curl http://20.253.111.153/caller/ping

The Azure Application Gateway contains a list of backends. In the Kubernetes context, those backends are the IP addresses of running pods. As you can see, both health checks respond with the HTTP 200 OK code.

azure-kubernetes-gateway-backend

What’s next

We have already created a Kubernetes cluster, run the apps there, and exposed them to external clients. Now, the question is – how can Azure help with other activities? Let’s say we want to install some additional software on the cluster. In order to do that, we need to go to the “Extensions + applications” section on the AKS cluster page. Then, we have to click the “Install an extension” button.

The link redirects us to the app marketplace. There are several different apps we can install in a simplified, graphical form. It could be a database, a message broker, or e.g. one of the Kubernetes-native tools like Argo CD.

azure-kubernetes-extensions

We just need to create a new instance of Argo CD and fill in some basic information. The installer is based on the Argo CD Helm chart provided by Bitnami.

azure-kubernetes-argocd

After a while, the instance of Argo CD is running on our cluster. We can display a list of installed extensions.

I installed Argo CD in the gitops namespace. Let’s verify a list of pods running in that namespace after successful installation:

$ kubectl get pod -n gitops
NAME                                             READY   STATUS    RESTARTS   AGE
gitops-argo-cd-app-controller-6d6848f46c-8n44j   1/1     Running   0          4m46s
gitops-argo-cd-repo-server-5f7cccd9d5-bc6ts      1/1     Running   0          4m46s
gitops-argo-cd-server-5c656c9998-fsgb5           1/1     Running   0          4m46s
gitops-redis-master-0                            1/1     Running   0          4m46s

And one last thing. As you remember, we exposed our apps outside the AKS cluster under an IP address. What about exposing them under a DNS name? Firstly, we need to have a DNS zone created on Azure. In this zone, we have to add a new record set containing the IP address of our application gateway. The name of the record set indicates the hostname of the gateway. In my case, it is apps.cb57d.azure.redhatworkshops.io.

After that, we need to change the definition of the Ingress object. It should contain the host field inside the rules section, set to the public DNS address of our gateway.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: caller-ingress
  namespace: default
  annotations:
    appgw.ingress.kubernetes.io/health-probe-hostname: localhost
    appgw.ingress.kubernetes.io/health-probe-path: /actuator/health/liveness
    appgw.ingress.kubernetes.io/health-probe-port: '8080'
spec:
  ingressClassName: azure-application-gateway
  rules:
    - host: apps.cb57d.azure.redhatworkshops.io
      http:
        paths:
          - path: /caller
            pathType: Prefix
            backend:
              service:
                name: caller-service
                port:
                  number: 8080

Final Thoughts

In this article, I focused on the Azure features that simplify getting started with a Kubernetes cluster. We covered such topics as cluster creation, monitoring, and exposing apps to external clients. Of course, these are not all the interesting features provided by Azure Kubernetes Service.

The post Getting Started with Azure Kubernetes Service appeared first on Piotr's TechBlog.

Monitor Kubernetes Cost Across Teams with Kubecost https://piotrminkowski.com/2023/08/25/monitor-kubernetes-cost-across-teams-with-kubecost/ https://piotrminkowski.com/2023/08/25/monitor-kubernetes-cost-across-teams-with-kubecost/#respond Fri, 25 Aug 2023 12:04:42 +0000 https://piotrminkowski.com/?p=14429 In this article, you will learn how to monitor the real-time cost of the Kubernetes cluster shared across several teams with Kubecost. We won’t focus on the cloud aspects of this tool, like cost optimization. Our main goal is to count cost sharing between several different teams using the same Kubernetes cluster. Sometimes it can […]

The post Monitor Kubernetes Cost Across Teams with Kubecost appeared first on Piotr's TechBlog.

In this article, you will learn how to monitor the real-time cost of a Kubernetes cluster shared across several teams with Kubecost. We won’t focus on the cloud aspects of this tool, like cost optimization. Our main goal is to measure cost sharing between several different teams using the same Kubernetes cluster. Sometimes it can be a very important aspect of working with Kubernetes – even in an on-premise environment. If you want to run several apps in your organization within a project, you probably need to guarantee an internal budget for it. Therefore, tools that allow you to track resource usage as a cost may be very useful in such a scenario. Kubecost seems to be the most popular tool in this area. Let’s try it 🙂

If you are interested in more advanced exercises related to Kubernetes and Prometheus metrics you can read about autoscaling with HPA in my article here.

Prerequisites

In order to perform the exercise, you need to have a Kubernetes cluster. It can be a managed cluster on a cloud provider. However, you can also run Kubernetes locally, as I do, for example with Minikube. The only important thing is to reserve enough resources, since we will run several apps. My proposal is to guarantee 16GB of memory and 4 CPUs:

$ minikube start --memory='16g' --cpus='4'

Once we start the cluster, we may proceed to the next section.

Install Kubecost on Kubernetes

We will use the Helm chart to install Kubecost on Minikube. Here’s the installation command:

$ helm install kubecost cost-analyzer \
    --repo https://kubecost.github.io/cost-analyzer/ \
    --namespace kubecost --create-namespace \
    --set kubecostProductConfigs.productKey.key="123"

It will install our tool in the kubecost namespace. Kubecost provides a UI dashboard for visualizing basic aspects related to cost monitoring.

A nice feature here is that it also installs the whole required monitoring stack, including Prometheus and Grafana.

$ kubectl get po -n kubecost
NAME                                          READY   STATUS    RESTARTS   AGE
kubecost-cost-analyzer-66b8cf968c-4j7bb       2/2     Running   0          4m
kubecost-grafana-5fcd9f86c6-44njc             2/2     Running   0          4m
kubecost-kube-state-metrics-c566bb85f-qp5tw   1/1     Running   0          4m
kubecost-prometheus-node-exporter-k2g5g       1/1     Running   0          4m
kubecost-prometheus-server-b8bd4479d-ldvx2    2/2     Running   0          4m

Finally, let’s just enable port forwarding according to the message after the Helm chart installation:

$ kubectl port-forward -n kubecost deployment/kubecost-cost-analyzer 9090

After that, we can access the UI at http://localhost:9090:

kubernetes-cost-ui

Then, click the Settings tab and scroll down to the Labels section. Here we will find a list of labels to use in our apps. As you can see, we can assign each app to a different team, department, or environment. Of course, we can also override the name of each label. However, I don’t think it is necessary for the purpose of our exercise.

kubernetes-cost-labels

Deploy Test Apps on Kubernetes

Let’s deploy some apps on Kubernetes in different namespaces. First of all, I created five namespaces on my cluster (from demo-1 to demo-5):

$ kubectl create ns demo-1
$ kubectl create ns demo-2
$ kubectl create ns demo-3
$ kubectl create ns demo-4
$ kubectl create ns demo-5

There are two sample apps available. Both of them are written using the Spring Boot framework. The first of them connects to the Mongo database, while the second just uses an in-memory data store. Let’s begin with the Mongo Deployment. We will run it in three namespaces: demo-1, demo-2, and demo-3. There are three labels included in the Deployment object: team, env, and department, which are recognized by Kubecost. Here’s an illustration of the labeling strategy per namespace.

kubernetes-cost-labeling-strategy

Assuming I’m deploying Mongo in the demo-1 namespace, here’s the YAML manifest. Please pay attention to the values of the labels (1) (2) (3). We will change those values according to the illustration above, depending on the namespace.

apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb
data:
  database-name: admin
---
apiVersion: v1
kind: Secret
metadata:
  name: mongodb
type: Opaque
data:
  database-password: dGVzdDEyMw==
  database-user: dGVzdA==
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb
  labels:
    app: mongodb
    team: team-a # (1)
    env: dev # (2)
    department: dep-a # (3) 
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
        team: team-a # (1)
        env: dev # (2)
        department: dep-a # (3)
    spec:
      containers:
        - name: mongodb
          image: mongo:5.0
          ports:
            - containerPort: 27017
          env:
            - name: MONGO_INITDB_DATABASE
              valueFrom:
                configMapKeyRef:
                  name: mongodb
                  key: database-name
            - name: MONGO_INITDB_ROOT_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mongodb
                  key: database-user
            - name: MONGO_INITDB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongodb
                  key: database-password
          resources:
            requests:
              memory: 256Mi
              cpu: 100m
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb
  labels:
    app: mongodb
spec:
  ports:
    - port: 27017
      protocol: TCP
  selector:
    app: mongodb

In the next step, we will deploy our Spring Boot app that connects to the Mongo database. Since it connects to Mongo, it has to run in the same namespaces. We should also use the same labeling strategy as in the previous Deployment.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app
  labels:
    app: sample-app
    team: team-a
    env: dev
    department: dep-2
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
        team: team-a
        env: dev
        department: dep-2
    spec:
      containers:
      - name: sample-spring-boot-on-kubernetes
        image: piomin/sample-spring-boot-on-kubernetes:latest
        ports:
        - containerPort: 8080
        env:
          - name: MONGO_DATABASE
            valueFrom:
              configMapKeyRef:
                name: mongodb
                key: database-name
          - name: MONGO_USERNAME
            valueFrom:
              secretKeyRef:
                name: mongodb
                key: database-user
          - name: MONGO_PASSWORD
            valueFrom:
              secretKeyRef:
                name: mongodb
                key: database-password
          - name: MONGO_URL
            value: mongodb
        readinessProbe:
          httpGet:
            port: 8080
            path: /actuator/health/readiness
            scheme: HTTP
          timeoutSeconds: 1
          periodSeconds: 10
          successThreshold: 1
          failureThreshold: 3
        resources:
          requests:
            memory: 512Mi
            cpu: 200m
---
apiVersion: v1
kind: Service
metadata:
  name: sample-app
spec:
  type: ClusterIP
  selector:
    app: sample-app
  ports:
  - port: 8080

Now, let’s install both Mongo and our app on Kubernetes using the following commands (run the same actions for the demo-2 and demo-3 namespaces after changing the labels and replicas):

$ kubectl apply -f mongodb.yaml -n demo-1
$ kubectl apply -f spring-app.yaml -n demo-1

We will also deploy the Spring Boot app with an in-memory store in the next two namespaces, demo-4 and demo-5. Of course, please remember the labeling strategy described above. Here’s the YAML manifest for the demo-4 namespace. To distinguish them, we can set a higher number of replicas in the demo-5 namespace, e.g. 5.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app-im
  labels:
    app: sample-app-im
    team: team-c # (1)
    env: dev # (2)
    department: dep-3 # (3)
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sample-app-im
  template:
    metadata:
      labels:
        app: sample-app-im
        team: team-c # (1)
        env: dev # (2)
        department: dep-3 # (3)
    spec:
      containers:
      - name: sample-spring-kotlin-microservice
        image: piomin/sample-spring-kotlin-microservice:latest
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: 384Mi
            cpu: 150m
---
apiVersion: v1
kind: Service
metadata:
  name: sample-app-im
spec:
  type: ClusterIP
  selector:
    app: sample-app-im
  ports:
  - port: 8080

Let’s apply the objects from the YAML manifest above (run the same action for the demo-5 namespace after changing the labels and replicas):

$ kubectl apply -f spring-app-im.yaml -n demo-4

Here’s a full list of our deployments with labels and namespaces:

Once we have deployed all the required apps, we can switch to the Kubecost dashboard.

Using Kubecost Dashboard for Kubernetes Cost Monitoring

In the meantime, I changed the currency to the Polish “Złoty” 🙂 We can create diagrams using various criteria. Let’s focus on the criteria we set via the custom Kubecost labels in all the deployments. In order to create diagrams, go to the Monitor menu item.

We can choose between several available aggregation labels. It is also possible to choose a custom label not predefined by Kubecost. We can use a single aggregation or combine several labels together (multi-aggregation).

kubernetes-cost-ui-aggregation

Here’s the diagram that illustrates aggregation per namespace. We can filter the view to show only the important data. In this case, I defined the filter demo* to show only our namespaces with sample apps.

kubernetes-cost-diagram-namespace

Here’s a similar diagram, but aggregated by the team label. As you can see, we have all three teams we defined in that label for our sample apps.

After that, we can change the diagram type to e.g. Proportional cost:

Let’s prepare the same diagram type, but using two search criteria. We combine the team with the department in the proportional costs:

kubernetes-cost-diagram-multi

And here’s the last diagram in the article. It displays costs per environment (dev, test, and prod). Instead of a cumulative cost, we can show e.g. the hourly rate.

Final Thoughts

I had a lot of fun while visualizing the cost of my Kubernetes cluster with Kubecost. Of course, Kubecost has some additional features, but I focused mainly on cost visualization across teams and their apps. I had no major problems with running and understanding the core features of the tool.

The post Monitor Kubernetes Cost Across Teams with Kubecost appeared first on Piotr's TechBlog.

Spring Boot 3 Observability with Grafana https://piotrminkowski.com/2022/11/03/spring-boot-3-observability-with-grafana/ https://piotrminkowski.com/2022/11/03/spring-boot-3-observability-with-grafana/#comments Thu, 03 Nov 2022 09:34:50 +0000 https://piotrminkowski.com/?p=13682 This article will teach you how to configure observability for your Spring Boot applications. We assume that observability is understood as the interconnection between metrics, logging, and distributed tracing. In the end, it should allow you to monitor the state of your system to detect errors and latency. There are some significant changes in the […]

The post Spring Boot 3 Observability with Grafana appeared first on Piotr's TechBlog.

This article will teach you how to configure observability for your Spring Boot applications. We assume that observability is understood as the interconnection between metrics, logging, and distributed tracing. In the end, it should allow you to monitor the state of your system to detect errors and latency.

There are some significant changes in the approach to observability between Spring Boot 2 and 3. Tracing will no longer be part of Spring Cloud through the Spring Cloud Sleuth project. The core of that project has been moved to Micrometer Tracing. You can read more about the reasons and future plans in this post on the Spring blog.

The main goal of this article is to give you a simple recipe for enabling observability for your microservices written in Spring Boot using the new approach. In order to simplify our exercise, we will use a fully managed Grafana instance in their cloud. We will build a very basic architecture with two microservices running locally. Let’s take a moment to discuss our architecture in more detail.

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. It contains several tutorials. You need to go to the inter-communication directory. After that, you should just follow my instructions.

Spring Boot Observability Architecture

There are two applications: inter-callme-service and inter-caller-service. The inter-caller-service app calls the HTTP endpoint exposed by the inter-callme-service app. We run two instances of inter-callme-service. We will also configure static load balancing between those two instances using Spring Cloud Load Balancer. All the apps will expose Prometheus metrics using Spring Boot Actuator and the Micrometer project. For tracing, we are going to use OpenTelemetry with Micrometer Tracing and OpenZipkin. In order to send all the data, including logs, metrics, and traces, from our local Spring Boot instances to the cloud, we have to use the Grafana Agent.

On the other hand, there is a stack responsible for collecting and visualizing all the data. As I mentioned before, we will leverage Grafana Cloud for that. It is a very convenient option since we don’t have to install and configure all the required tools. First of all, Grafana Cloud offers a ready-to-use instance of Prometheus responsible for collecting metrics. We also need a log aggregation tool for storing and querying logs from our apps. Grafana Cloud offers a preconfigured instance of Loki for that. Finally, we have a distributed tracing backend in Grafana Tempo. Here’s a visualization of our whole architecture.

spring-boot-observability-arch

Enable Metrics and Tracing with Micrometer

In order to export metrics in Prometheus format, we need to include the micrometer-registry-prometheus dependency. For tracing context propagation we should add the micrometer-tracing-bridge-otel module. We should also export tracing spans using one of the formats supported by Grafana Tempo. It will be OpenZipkin through the opentelemetry-exporter-zipkin dependency.

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
  <groupId>io.micrometer</groupId>
  <artifactId>micrometer-registry-prometheus</artifactId>
</dependency>
<dependency>
  <groupId>io.micrometer</groupId>
  <artifactId>micrometer-tracing-bridge-otel</artifactId>
</dependency>
<dependency>
  <groupId>io.opentelemetry</groupId>
  <artifactId>opentelemetry-exporter-zipkin</artifactId>
</dependency>

We need to use the latest available version of Spring Boot 3. Currently, it is 3.0.0-RC1. As a release candidate that version is available in the Spring Milestone repository.

<parent>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-parent</artifactId>
  <version>3.0.0-RC1</version>
  <relativePath/>
</parent>

One of the more interesting new features in Spring Boot 3 is the support for Prometheus exemplars. Exemplars are references to data outside of the metrics published by an application. They allow linking metrics data to distributed traces. In our case, the published metrics contain a reference to the traceId. In order to enable exemplars for particular metrics, we need to expose percentile histograms. We will do that for the http.server.requests metric (1). We will also send all the traces to Grafana Cloud by setting the sampling probability to 1.0 (2). Finally, just to verify it works properly, we print the traceId and spanId in the log line (3).

spring:
  application:
    name: inter-callme-service
  output.ansi.enabled: ALWAYS

management:
  endpoints.web.exposure.include: '*'
  metrics.distribution.percentiles-histogram.http.server.requests: true # (1)
  tracing.sampling.probability: 1.0 # (2)

logging.pattern.console: "%clr(%d{HH:mm:ss.SSS}){blue} %clr(%5p [${spring.application.name:},%X{traceId:-},%X{spanId:-}]){yellow} %clr(:){red} %clr(%m){faint}%n" # (3)

The inter-callme-service exposes a POST endpoint that just returns the message in reversed order. We don’t need to add anything here, just standard Spring Web annotations.

@RestController
@RequestMapping("/callme")
class CallmeController(private val repository: ConversationRepository) {

   private val logger: Logger = LoggerFactory
       .getLogger(CallmeController::class.java)

   @PostMapping("/call")
   fun call(@RequestBody request: CallmeRequest) : CallmeResponse {
      logger.info("In: {}", request)
      val response = CallmeResponse(request.id, request.message.reversed())
      repository.add(Conversation(request = request, response = response))
      return response
   }

}

Load Balancing with Spring Cloud

In the endpoint exposed by the inter-caller-service, we just call the endpoint from inter-callme-service. We use Spring RestTemplate for that. You can also declare a Spring Cloud OpenFeign client, but it seems it does not currently support Micrometer Tracing out of the box.

@RestController
@RequestMapping("/caller")
class CallerController(private val template: RestTemplate) {

   private val logger: Logger = LoggerFactory
      .getLogger(CallerController::class.java)

   private var id: Int = 0

   @PostMapping("/send/{message}")
   fun send(@PathVariable message: String): CallmeResponse? {
      logger.info("In: {}", message)
      val request = CallmeRequest(++id, message)
      return template.postForObject("http://inter-callme-service/callme/call",
         request, CallmeResponse::class.java)
   }

}

In this exercise, we will use a static client-side load balancer that distributes traffic across two instances of inter-callme-service. Normally, you would integrate Spring Cloud Load Balancer with service discovery based e.g. on Eureka. However, I don’t want to complicate our demo with external components in the architecture. Assuming we are running inter-callme-service on ports 55800 and 55900, here’s the load balancer configuration in the application.yml file:

spring:
  application:
    name: inter-caller-service
  cloud:
    loadbalancer:
      cache:
        ttl: 1s
      instances:
        - name: inter-callme-service
          servers: localhost:55800, localhost:55900

Since there is no built-in static load balancer implementation, we need to add some code. Firstly, we have to inject the configuration properties into a Spring bean.

@Configuration
@ConfigurationProperties("spring.cloud.loadbalancer")
class LoadBalancerConfigurationProperties {

   val instances: MutableList<ServiceConfig> = mutableListOf()

   class ServiceConfig {
      var name: String = ""
      var servers: String = ""
   }

}

Then we need to create a bean that implements the ServiceInstanceListSupplier interface. It just returns a list of ServiceInstance objects that represent all the defined static addresses.

class StaticServiceInstanceListSupplier(private val properties: LoadBalancerConfigurationProperties,
                                        private val environment: Environment): 
   ServiceInstanceListSupplier {

   override fun getServiceId(): String = environment
      .getProperty(LoadBalancerClientFactory.PROPERTY_NAME)!!

   override fun get(): Flux<MutableList<ServiceInstance>> {
      val serviceConfig: LoadBalancerConfigurationProperties.ServiceConfig? =
                properties.instances.find { it.name == serviceId }
      val list: MutableList<ServiceInstance> =
                serviceConfig!!.servers.split(",", ignoreCase = false, limit = 0)
                        .map { StaticServiceInstance(serviceId, it) }.toMutableList()
      return Flux.just(list)
   }

}

Finally, we need to enable Spring Cloud Load Balancer for the app and annotate RestTemplate with @LoadBalanced.

@SpringBootApplication
@LoadBalancerClient(value = "inter-callme-service", configuration = [CustomCallmeClientLoadBalancerConfiguration::class])
class InterCallerServiceApplication {

    @Bean
    @LoadBalanced
    fun template(builder: RestTemplateBuilder): RestTemplate = 
       builder.build()

}

Here’s the client-side load balancer configuration. We are providing our custom StaticServiceInstanceListSupplier implementation as the default ServiceInstanceListSupplier. Then we set RandomLoadBalancer as the default load-balancing algorithm implementation.

class CustomCallmeClientLoadBalancerConfiguration {

   @Autowired
   lateinit var properties: LoadBalancerConfigurationProperties

   @Bean
   fun clientServiceInstanceListSupplier(
      environment: Environment, context: ApplicationContext):     
      ServiceInstanceListSupplier {
        val delegate = StaticServiceInstanceListSupplier(properties, environment)
        val cacheManagerProvider = context
           .getBeanProvider(LoadBalancerCacheManager::class.java)
        return if (cacheManagerProvider.ifAvailable != null) {
            CachingServiceInstanceListSupplier(delegate, cacheManagerProvider.ifAvailable)
        } else delegate
    }

   @Bean
   fun loadBalancer(environment: Environment, 
      loadBalancerClientFactory: LoadBalancerClientFactory):
            ReactorLoadBalancer<ServiceInstance> {
        val name: String? = environment.getProperty(LoadBalancerClientFactory.PROPERTY_NAME)
        return RandomLoadBalancer(
           loadBalancerClientFactory
              .getLazyProvider(name, ServiceInstanceListSupplier::class.java),
            name!!
        )
    }
}

Testing Observability with Spring Boot

Let’s see how it works. In the first step, we are going to run two instances of inter-callme-service. Since we configured static server addresses on the client side, we need to override the server.port property for each instance. We can do it with the SERVER_PORT env variable. Go to the inter-communication/inter-callme-service directory and run the following commands:

$ SERVER_PORT=55800 mvn spring-boot:run
$ SERVER_PORT=55900 mvn spring-boot:run

Then, go to the inter-communication/inter-caller-service directory and run a single instance on the default port 8080:

$ mvn spring-boot:run

Then, let’s call our endpoint POST /caller/send/{message} several times with parameters, for example:

$ curl -X POST http://localhost:8080/caller/send/hello1

Here are the logs from inter-caller-service with the highlighted value of the traceId parameter:

spring-boot-observability-traceid

Let’s take a look at the logs from inter-callme-service. As you see the traceId parameter is the same as the traceId for that request on the inter-caller-service side.

Here are the logs from the second instance of inter-callme-service:

You could also try the same exercise with Spring Cloud OpenFeign. It is configured and ready to use. However, for me, it didn’t propagate the traceId parameter properly. Maybe it is an issue with the current non-GA versions of Spring Boot and Spring Cloud.

Let’s verify another feature – Prometheus exemplars. In order to do that, we need to call the /actuator/prometheus endpoint with an Accept header asking for the OpenMetrics format. This is the same header Prometheus uses to scrape the metrics.

$ curl -H 'Accept: application/openmetrics-text; version=1.0.0' http://localhost:8080/actuator/prometheus

As you can see, several metrics in the result contain traceId and spanId parameters. These are our exemplars.

spring-boot-observability-openmetrics

Install and Configure Grafana Agent

Our sample apps are ready. Now, the main goal is to send all the collected observability data to our account on Grafana Cloud. There are various ways of sending metrics, logs, and traces to the Grafana Stack. In this article, I will show you how to use the Grafana Agent for that. Firstly, we need to install it. You can find detailed installation instructions depending on your OS here. Since I’m using macOS, I can do it with Homebrew:

$ brew install grafana-agent

Before we start the agent, we need to prepare a configuration file. The location of that file also depends on your OS. For me, it is $(brew --prefix)/etc/grafana-agent/config.yml. The configuration YAML manifest contains information on how we want to collect and send metrics, traces, and logs. Let’s begin with the metrics. Inside the scrape_configs section, we need to set a list of endpoints for scraping (1) and the default metrics path (2). Inside the remote_write section, we have to pass our Grafana Cloud instance auth credentials (3) and URL (4). By default, the Grafana Agent does not send exemplars. Therefore, we need to enable it with the send_exemplars property (5).

metrics:
  configs:
    - name: integrations
      scrape_configs:
        - job_name: agent
          static_configs:
            - targets: ['127.0.0.1:55800','127.0.0.1:55900','127.0.0.1:8080'] # (1)
          metrics_path: /actuator/prometheus # (2)
      remote_write:
        - basic_auth: # (3)
            password: <YOUR_GRAFANA_CLOUD_TOKEN>
            username: <YOUR_GRAFANA_CLOUD_USER>
          url: https://prometheus-prod-01-eu-west-0.grafana.net/api/prom/push # (4)
          send_exemplars: true # (5)
  global:
    scrape_interval: 60s

You can find all information about your instance of Prometheus in the Grafana Cloud dashboard.

In the next step, we prepare the configuration for collecting and sending logs to Grafana Loki. The same as before, we need to set the auth credentials (1) and the URL (2) of our Loki instance. The most important thing here is to pass the location of the log files (3).

logs:
  configs:
    - clients:
        - basic_auth: # (1)
            password: <YOUR_GRAFANA_CLOUD_TOKEN>
            username: <YOUR_GRAFANA_CLOUD_USER>
          # (2)
          url: https://logs-prod-eu-west-0.grafana.net/loki/api/v1/push
      name: springboot
      positions:
        filename: /tmp/positions.yaml
      scrape_configs:
        - job_name: springboot-json
          static_configs:
            - labels:
                __path__: ${HOME}/inter-*/spring.log # (3)
                job: springboot-json
              targets:
                - localhost
      target_config:
        sync_period: 10s

By default, Spring Boot logs only to the console and does not write log files. In our case, the Grafana Agent reads log lines from output files. In order to write log files, we need to set the logging.file.name or logging.file.path property. Since there are two instances of inter-callme-service, we need to somehow distinguish their log files. We will use the server.port property for that. The logs inside the files are stored in JSON format.

logging:
  pattern:
    console: "%clr(%d{HH:mm:ss.SSS}){blue} %clr(%5p [${spring.application.name:},%X{traceId:-},%X{spanId:-}]){yellow} %clr(:){red} %clr(%m){faint}%n"
    file: "{\"timestamp\":\"%d{HH:mm:ss.SSS}\",\"level\":\"%p\",\"traceId\":\"%X{traceId:-}\",\"spanId\":\"%X{spanId:-}\",\"appName\":\"${spring.application.name}\",\"message\":\"%m\"}%n"
  file:
    path: ${HOME}/inter-callme-service-${server.port}

Finally, we will configure trace collection. Besides the auth credentials and URL of the Grafana Tempo instance, we need to enable the OpenZipkin receiver (1).

traces:
  configs:
    - name: default
      remote_write:
        - endpoint: tempo-eu-west-0.grafana.net:443
          basic_auth:
            username: <YOUR_GRAFANA_CLOUD_USER>
            password: <YOUR_GRAFANA_CLOUD_TOKEN>
      # (1)
      receivers:
        zipkin:

Then, we can start the agent with the following command:

$ brew services restart grafana-agent

The Grafana Agent contains a Zipkin collector that listens on the default port 9411. There is also an HTTP API exposed by the agent on port 12345 for verifying the agent status.
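
Since Spring Boot 3 sends Zipkin spans to http://localhost:9411/api/v2/spans by default, no extra configuration should be needed on the app side. If your agent listened on a different address, you could point the exporter at it explicitly, for example:

management:
  zipkin:
    tracing:
      endpoint: http://localhost:9411/api/v2/spans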

For example, we can use the Grafana Agent HTTP API to verify how many log files it is monitoring. To do that, just call the endpoint GET /agent/api/v1/logs/targets. As you can see, for me, it is monitoring three files. That’s exactly what we wanted to achieve, since there are two running instances of inter-callme-service and a single instance of inter-caller-service.
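
For instance, assuming the agent runs locally on the default port 12345, the call may look like this:

$ curl http://localhost:12345/agent/api/v1/logs/targets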

spring-boot-observability-logs

Visualize Spring Boot Observability with Grafana Stack

One of the most important advantages of Grafana Cloud in our exercise is that everything we need is already configured. After you navigate to the Grafana dashboard, you can display a list of available data sources. As you can see, Loki, Prometheus, and Tempo are already configured.

spring-boot-observability-datasources

By default, Grafana Cloud enables exemplars in the Prometheus data source. If you are running Grafana by yourself, don’t forget to enable them on your Prometheus data source.

Let’s start with the logs. We will analyze exactly the same logs as in the section “Testing Observability with Spring Boot”. We will get all the logs sent by the Grafana Agent. As you probably remember, we formatted all the logs as JSON. Therefore, we will parse them using the json parser on the server side. Thanks to that, we are able to filter by all the log fields. For example, we can use the traceId label in a filter expression: {job="springboot-json"} | json | traceId = 1bb1d7d78a5ac47e8ebc2da961955f87.

Here’s a full list of logs without any filtering. The highlighted lines contain logs of two analyzed traces.

In the next step, we will configure the Prometheus metrics visualization. Since we enabled percentile histograms for the http.server.requests metric, we have multiple buckets represented by the http_server_requests_seconds_bucket values. Each _bucket series has a le label and contains a count of all samples whose values are less than or equal to the numeric value in that label. We need to calculate histogram quantiles for 90% and 60% of requests. Here are our Prometheus queries:
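
The queries themselves could look similar to the sketch below, based on the standard histogram_quantile function over the bucket series (the 5-minute rate window is my assumption):

histogram_quantile(0.90, sum(rate(http_server_requests_seconds_bucket[5m])) by (le))
histogram_quantile(0.60, sum(rate(http_server_requests_seconds_bucket[5m])) by (le))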

Here’s our histogram. Exemplars are shown as green diamonds.

spring-boot-observability-histogram

When you hover over the selected exemplar, you will see more details. It includes, for example, the traceId value.

Finally, the last part of our exercise. We would like to analyze particular traces with Grafana Tempo. The only thing you need to do is choose the grafanacloud-*-traces data source and set the value of the traceId you are searching for. Here’s a sample result.

Final Thoughts

The first GA release of Spring Boot 3 is just around the corner. Probably one of the most important things you will have to handle during migration from Spring Boot 2 is observability. In this article, you can find a detailed description of the current Spring Boot approach. If you are interested in Spring Boot, it’s worth reading about its best practices for building microservices in this article.

The post Spring Boot 3 Observability with Grafana appeared first on Piotr's TechBlog.

Spring Boot Autoscaling on Kubernetes https://piotrminkowski.com/2020/11/05/spring-boot-autoscaling-on-kubernetes/ https://piotrminkowski.com/2020/11/05/spring-boot-autoscaling-on-kubernetes/#comments Thu, 05 Nov 2020 09:39:05 +0000 https://piotrminkowski.com/?p=9051 Autoscaling based on the custom metrics is one of the features that may convince you to run your Spring Boot application on Kubernetes. By default, horizontal pod autoscaling can scale the deployment based on the CPU and memory. However, such an approach is not enough for more advanced scenarios. In that case, you can use […]

The post Spring Boot Autoscaling on Kubernetes appeared first on Piotr's TechBlog.

Autoscaling based on custom metrics is one of the features that may convince you to run your Spring Boot application on Kubernetes. By default, horizontal pod autoscaling can scale the deployment based on CPU and memory. However, such an approach is not enough for more advanced scenarios. In that case, you can use Prometheus to collect metrics from your applications. Then, you may integrate Prometheus with the Kubernetes custom metrics mechanism.

Preface

In this article, you will learn how to run Prometheus on Kubernetes using the Helm package manager. You will use the chart that causes Prometheus to scrape a variety of Kubernetes resource types. Thanks to that you won’t have to configure it. In the next step, you will install the Prometheus Adapter. You can also do that using the Helm package manager. The adapter acts as a bridge between the Prometheus instance and the custom metrics API. Our Spring Boot application exposes metrics through the HTTP endpoint. You will learn how to configure autoscaling on Kubernetes based on the number of incoming requests.

Source code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that, you need to clone my repository sample-spring-boot-on-kubernetes. Then you should go to the k8s directory. You can find there all the Kubernetes manifests and configuration files required for that exercise.

Our Spring Boot application is ready to be deployed on Kubernetes with Skaffold. You can find the skaffold.yaml file in the project root directory. Skaffold uses Jib Maven Plugin for building a Docker image. It deploys not only the Spring Boot application but also the Mongo database.

apiVersion: skaffold/v2beta5
kind: Config
metadata:
  name: sample-spring-boot-on-kubernetes
build:
  artifacts:
    - image: piomin/sample-spring-boot-on-kubernetes
      jib:
        args:
          - -Pjib
  tagPolicy:
    gitCommit: {}
deploy:
  kubectl:
    manifests:
      - k8s/mongodb-deployment.yaml
      - k8s/deployment.yaml

The only thing you need to do to build and deploy the application is to execute the following command. It also allows you to access the HTTP API through a local port.

$ skaffold dev --port-forward

For more information about Skaffold, Jib, and local development of Java applications, you may refer to the article Local Java Development on Kubernetes.

Kubernetes Autoscaling with Spring Boot – Architecture

The picture below shows the architecture of our sample system. The horizontal pod autoscaler (HPA) automatically scales the number of pods based on CPU, memory, or other custom metrics. It obtains the value of the metric by polling the Custom Metrics API. In the beginning, we are running a single instance of our Spring Boot application on Kubernetes. Prometheus gathers and stores metrics from the application by calling the HTTP endpoint /actuator/prometheus. Consequently, the HPA scales up the number of pods if the value of the metric exceeds the assumed threshold.

spring-boot-autoscaler-on-kubernetes-arch

Run Prometheus on Kubernetes

Let’s start by running the Prometheus instance on Kubernetes. In order to do that, you should use the official Prometheus Helm chart. Firstly, you need to add the required repository.

$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm repo add stable https://charts.helm.sh/stable
$ helm repo update

Then, you should execute the Helm install command and provide the name of your installation.

$ helm install prometheus prometheus-community/prometheus

In a moment, the Prometheus instance is ready to use. You can access it through the Kubernetes Service prometheus-server, which is available on port 80. By default, the type of the service is ClusterIP. Therefore, you should execute the kubectl port-forward command to access it on a local port.
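
For example, assuming the default service port, the command may look like this:

$ kubectl port-forward svc/prometheus-server 9090:80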

Deploy Spring Boot on Kubernetes

In order to enable Prometheus support in Spring Boot, you need to include Spring Boot Actuator and the Micrometer Prometheus library. The full list of required dependencies also contains the Spring Web and Spring Data MongoDB modules.

<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
   <groupId>io.micrometer</groupId>
   <artifactId>micrometer-registry-prometheus</artifactId>
</dependency>
<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-devtools</artifactId>
</dependency>
<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-data-mongodb</artifactId>
</dependency>

After enabling Prometheus support, the application exposes metrics through the /actuator/prometheus endpoint.
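
For example, while the app is running with skaffold dev --port-forward, a quick check may look like this (the local port 8080 is an assumption):

$ curl http://localhost:8080/actuator/prometheus | grep http_server_requests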

Our example application is very simple. It exposes the REST API for CRUD operations and connects to the Mongo database. Just to clarify, here’s the REST controller implementation.

@RestController
@RequestMapping("/persons")
public class PersonController {

   private PersonRepository repository;
   private PersonService service;

   PersonController(PersonRepository repository, PersonService service) {
      this.repository = repository;
      this.service = service;
   }

   @PostMapping
   public Person add(@RequestBody Person person) {
      return repository.save(person);
   }

   @PutMapping
   public Person update(@RequestBody Person person) {
      return repository.save(person);
   }

   @DeleteMapping("/{id}")
   public void delete(@PathVariable("id") String id) {
      repository.deleteById(id);
   }

   @GetMapping
   public Iterable<Person> findAll() {
      return repository.findAll();
   }

   @GetMapping("/{id}")
   public Optional<Person> findById(@PathVariable("id") String id) {
      return repository.findById(id);
   }

}

You may add a new person, and modify or delete it through the HTTP API. You can also find a person by id or just list all available persons. The Spring Boot Actuator generates HTTP traffic statistics for each endpoint and exposes them in a form readable by Prometheus. The number of incoming requests is available in the http_server_requests_seconds_count metric. Consequently, we will use this metric for Spring Boot autoscaling on Kubernetes.

Prometheus collects a pretty large set of metrics from the whole cluster. However, by default, it is not gathering the metrics from our applications. In order to force Prometheus to scrape the particular pods, you must add annotations to the Deployment as shown below. The annotation prometheus.io/path indicates the context path of the metrics endpoint. Of course, you have to enable scraping for the application using the annotation prometheus.io/scrape. Finally, you need to set the HTTP port number with prometheus.io/port.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-spring-boot-on-kubernetes-deployment
spec:
  selector:
    matchLabels:
      app: sample-spring-boot-on-kubernetes
  template:
    metadata:
      annotations:
        prometheus.io/path: /actuator/prometheus
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
      labels:
        app: sample-spring-boot-on-kubernetes
    spec:
      containers:
      - name: sample-spring-boot-on-kubernetes
        image: piomin/sample-spring-boot-on-kubernetes
        ports:
        - containerPort: 8080
        env:
          - name: MONGO_DATABASE
            valueFrom:
              configMapKeyRef:
                name: mongodb
                key: database-name
          - name: MONGO_USERNAME
            valueFrom:
              secretKeyRef:
                name: mongodb
                key: database-user
          - name: MONGO_PASSWORD
            valueFrom:
              secretKeyRef:
                name: mongodb
                key: database-password

Install Prometheus Adapter on Kubernetes

The Prometheus adapter pulls metrics from the Prometheus instance and exposes them through the Custom Metrics API. In this step, you will have to provide a configuration for pulling a custom metric exposed by Spring Boot Actuator. The http_server_requests_seconds_count metric contains the number of requests received by a particular HTTP endpoint. To clarify, let's take a look at the http_server_requests_seconds_count series produced for the various /persons endpoints.
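
They look roughly like the following (illustrative values; the exact label set depends on the Spring Boot version):

http_server_requests_seconds_count{exception="None",method="GET",status="200",uri="/persons",} 12.0
http_server_requests_seconds_count{exception="None",method="POST",status="200",uri="/persons",} 7.0
http_server_requests_seconds_count{exception="None",method="GET",status="200",uri="/persons/{id}",} 4.0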

You need to override some configuration settings for the Prometheus adapter. Firstly, you should change the default address of the Prometheus instance. Since the name of the Prometheus Service is prometheus-server, you should change it to prometheus-server.default.svc. The HTTP port number is 80. Then, you have to define a custom rule for pulling the required metric from Prometheus. It is important to override the names of the Kubernetes pod and namespace used as metric tags by Prometheus. There are multiple entries for http_server_requests_seconds_count, so you must calculate their sum. The name of the custom Kubernetes metric is http_server_requests_seconds_count_sum.

prometheus:
  url: http://prometheus-server.default.svc
  port: 80
  path: ""

rules:
  default: true
  custom:
  - seriesQuery: '{__name__=~"^http_server_requests_seconds_.*"}'
    resources:
      overrides:
        kubernetes_namespace:
          resource: namespace
        kubernetes_pod_name:
          resource: pod
    name:
      matches: "^http_server_requests_seconds_count(.*)"
      as: "http_server_requests_seconds_count_sum"
    metricsQuery: sum(<<.Series>>{<<.LabelMatchers>>,uri=~"/persons.*"}) by (<<.GroupBy>>)

Now, you just need to execute the Helm install command with the location of the YAML configuration file.

$ helm install -f k8s/helm-config.yaml prometheus-adapter prometheus-community/prometheus-adapter

Finally, you can verify if metrics have been successfully pulled by executing the following command.
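
A typical way to do that is to query the Custom Metrics API directly, for example:

$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1"
$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/http_server_requests_seconds_count_sum"

The second command should return the current value of the custom metric for every matching pod.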

Create Kubernetes Horizontal Pod Autoscaler

In the last step of this tutorial, you will create a Kubernetes HorizontalPodAutoscaler. The HorizontalPodAutoscaler automatically scales up the number of pods if the average value of the http_server_requests_seconds_count_sum metric exceeds 100. In other words, if your application instance receives more than 100 requests, the HPA automatically starts a new instance. Then, after another 100 requests, the average value of the metric exceeds 100 once again, so the HPA starts a third instance of the application. Consequently, after sending 1k requests you should have 10 pods. The definition of our HorizontalPodAutoscaler is visible below.

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: sample-hpa
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: sample-spring-boot-on-kubernetes-deployment
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Pods
      pods:
        metric:
          name: http_server_requests_seconds_count_sum
        target:
          type: AverageValue
          averageValue: 100

Testing Kubernetes autoscaling with Spring Boot

After deploying all the required components, you can verify the list of running pods. As you can see, there is a single instance of our Spring Boot application, sample-spring-boot-on-kubernetes.

In order to check the current value of the http_server_requests_seconds_count_sum metric, you can display the details of the HorizontalPodAutoscaler. As you can see, I have already sent 15 requests to the different HTTP endpoints.
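
For example (a sketch, assuming the resource names used in this article):

$ kubectl get pods -l app=sample-spring-boot-on-kubernetes
$ kubectl describe hpa sample-hpa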

spring-boot-autoscaling-on-kubernetes-hpa

Here’s the sequence of requests you may send to the application to test autoscaling behavior.

$ curl http://localhost:8080/persons -d "{\"firstName\":\"Test\",\"lastName\":\"Test\",\"age\":20,\"gender\":\"MALE\"}" -H "Content-Type: application/json"
{"id":"5fa334d149685f24841605a9","firstName":"Test","lastName":"Test","age":20,"gender":"MALE"}
$ curl http://localhost:8080/persons/5fa334d149685f24841605a9
{"id":"5fa334d149685f24841605a9","firstName":"Test","lastName":"Test","age":20,"gender":"MALE"}
$ curl http://localhost:8080/persons
[{"id":"5fa334d149685f24841605a9","firstName":"Test","lastName":"Test","age":20,"gender":"MALE"}]
$ curl -X DELETE http://localhost:8080/persons/5fa334d149685f24841605a9
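
To generate enough traffic to trigger scaling, you can run a simple loop like this (a sketch, assuming the application is still reachable on localhost:8080 as in the examples above):

$ for i in $(seq 1 500); do curl -s -o /dev/null http://localhost:8080/persons; done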

After sending many HTTP requests to our application, you may verify the number of running pods. In this case, we have 5 instances.

spring-boot-autoscaling-on-kubernetes-hpa-finish

You can also display a list of running pods.

Conclusion

In this article, I showed you a simple scenario of Kubernetes autoscaling with Spring Boot based on the number of incoming requests. You may easily create more advanced scenarios just by modifying the metric query in the Prometheus Adapter configuration file. I ran all my tests on Google Cloud with GKE. For more information about running JVM applications on GKE, please refer to Running Kotlin Microservice on Google Kubernetes Engine.

The post Spring Boot Autoscaling on Kubernetes appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2020/11/05/spring-boot-autoscaling-on-kubernetes/feed/ 4 9051
RabbitMQ Monitoring on Kubernetes https://piotrminkowski.com/2020/09/29/rabbitmq-monitoring-on-kubernetes/ https://piotrminkowski.com/2020/09/29/rabbitmq-monitoring-on-kubernetes/#comments Tue, 29 Sep 2020 10:55:02 +0000 https://piotrminkowski.com/?p=8883 RabbitMQ monitoring can be a key point of your system management. Therefore we should use the right tools for that. To enable them on RabbitMQ we need to install some plugins. In this article, I will show you how to use Prometheus and Grafana to monitor the key metrics of RabbitMQ. Of course, we will […]

The post RabbitMQ Monitoring on Kubernetes appeared first on Piotr's TechBlog.

]]>
RabbitMQ monitoring can be a key point of your system management. Therefore, we should use the right tools for that. To enable them on RabbitMQ, we need to install some plugins. In this article, I will show you how to use Prometheus and Grafana to monitor the key metrics of RabbitMQ. Of course, we will build the applications that send and receive messages. We will use Kubernetes as the target platform for our system. In the last step, we are going to enable the tracing plugin, which helps us collect the list of incoming messages.

Source code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that, you need to clone my repository sample-spring-amqp. Inside the k8s directory, you will find all the required deployment manifests. Both Spring Boot test applications are available inside the listener and producer directories. If you would like to test a RabbitMQ cluster, please refer to the article RabbitMQ in cluster.

Step 1 – Building a RabbitMQ image

In the first step, we override the Docker image of RabbitMQ. We extend the base image with the 3-management tag and add two plugins. The rabbitmq_prometheus plugin adds a Prometheus exporter of core RabbitMQ metrics. The second one, rabbitmq_tracing, allows us to log the payloads of incoming messages. That's all we need to define in our Dockerfile, which is visible below.

FROM rabbitmq:3-management
RUN rabbitmq-plugins enable --offline rabbitmq_prometheus rabbitmq_tracing

Then you just need to build the image defined above. Let's say its name is piomin/rabbitmq-monitoring. After building it, I push it to my remote Docker registry.

$ docker build -t piomin/rabbitmq-monitoring .
$ docker push piomin/rabbitmq-monitoring

Step 2 – Deploying RabbitMQ on Kubernetes

Now, we are going to deploy our custom image of RabbitMQ on Kubernetes. For the purpose of this article, we will run a standalone version of RabbitMQ. In that case, the only thing we need to do is override some configuration properties in the rabbitmq.conf and enabled_plugins files. First, I enable logging to the console at the debug level. Then I also enable all the required plugins.

apiVersion: v1
kind: ConfigMap
metadata:
  name: rabbitmq
  labels:
    name: rabbitmq
data:
  rabbitmq.conf: |-
    loopback_users.guest = false
    log.console = true
    log.console.level = debug
    log.exchange = true
    log.exchange.level = debug
  enabled_plugins: |-
    [rabbitmq_management,rabbitmq_prometheus,rabbitmq_tracing].

Both the rabbitmq.conf and enabled_plugins files should be placed inside the /etc/rabbitmq directory. Therefore, I mount them through a volume assigned to the RabbitMQ Deployment. Additionally, we expose three ports outside the container. Port 5672 is used for communication with applications over the AMQP protocol. The Prometheus plugin exposes metrics on the dedicated port 15692. In order to access the Management UI and HTTP endpoints, you should use port 15672.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: rabbitmq
  labels:
    app: rabbitmq
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      containers:
        - name: rabbitmq
          image: piomin/rabbitmq-monitoring:latest
          ports:
            - containerPort: 15672
              name: http
            - containerPort: 5672
              name: amqp
            - containerPort: 15692
              name: prometheus
          volumeMounts:
            - name: rabbitmq-config-map
              mountPath: /etc/rabbitmq/
      volumes:
        - name: rabbitmq-config-map
          configMap:
            name: rabbitmq

Step 3 – Building Spring Boot listener application

Our sample listener application uses the Spring Boot AMQP project for integration with RabbitMQ. Thanks to the Spring Boot Actuator module, it also exposes metrics, including RabbitMQ-specific values. It is important to expose them in a format readable by Prometheus.

<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-amqp</artifactId>
</dependency>
<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
   <groupId>io.micrometer</groupId>
   <artifactId>micrometer-registry-prometheus</artifactId>
</dependency>

The listener application defines and creates two exchanges. The first of them, trx-events-topic, is used for multicast communication. On the other hand, trx-events-direct takes part in point-to-point communication. Both our sample applications exchange messages in JSON format. Therefore, we have to override the default Spring Boot AMQP message converter with Jackson2JsonMessageConverter.

@SpringBootApplication
public class ListenerApplication {

   public static void main(String[] args) {
      SpringApplication.run(ListenerApplication.class, args);
   }

   @Bean
   public TopicExchange topic() {
      return new TopicExchange("trx-events-topic");
   }

   @Bean
   public DirectExchange queue() {
      return new DirectExchange("trx-events-direct");
   }

   @Bean
   public MessageConverter jsonMessageConverter() {
      return new Jackson2JsonMessageConverter();
   }
}

The listener application receives messages from both the topic and direct exchanges. Each running instance of this application creates a queue binding with a random name. With a direct exchange, only a single queue receives incoming messages. On the other hand, all the queues bound to a topic exchange receive incoming messages.

@Component
@Slf4j
public class ListenerComponent {

   @RabbitListener(bindings = {
      @QueueBinding(
         exchange = @Exchange(type = ExchangeTypes.TOPIC, name = "trx-events-topic"),
         value = @Queue("${topic.queue.name}")
      )
   })
   public void onTopicMessage(SampleMessage message) {
      log.info("Message received: {}", message);
   }

   @RabbitListener(bindings = {
      @QueueBinding(
         exchange = @Exchange(type = ExchangeTypes.DIRECT, name = "trx-events-direct"),
         value = @Queue("${direct.queue.name}")
      )
   })
   public void onDirectMessage(SampleMessage message) {
      log.info("Message received: {}", message);
   }

}

The names of the queues assigned to the topic and direct exchanges are configured inside the application.yml file.

topic.queue.name: t-${random.uuid}
direct.queue.name: d-${random.uuid}

Step 4 – Building Spring Boot producer application

The producer application also uses Spring Boot AMQP for integration with RabbitMQ. It sends messages to the exchanges with RabbitTemplate. Similarly to the listener application, it formats all the messages as JSON strings.

@SpringBootApplication
@EnableScheduling
public class ProducerApplication {

   public static void main(String[] args) {
      SpringApplication.run(ProducerApplication.class, args);
   }

   @Bean
   public RabbitTemplate rabbitTemplate(final ConnectionFactory connectionFactory) {
      final RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory);
      rabbitTemplate.setMessageConverter(producerJackson2MessageConverter());
      return rabbitTemplate;
   }

   @Bean
   public Jackson2JsonMessageConverter producerJackson2MessageConverter() {
      return new Jackson2JsonMessageConverter();
   }

}

The producer application starts sending messages just after startup. Each message contains the id, type, and message fields. Messages are sent to the topic exchange at a 5-second interval, and to the direct exchange at a 2-second interval.

@Component
@Slf4j
public class ProducerComponent {

   int index = 1;
   private RabbitTemplate rabbitTemplate;

   ProducerComponent(RabbitTemplate rabbitTemplate) {
      this.rabbitTemplate = rabbitTemplate;
   }

   @Scheduled(fixedRate = 5000)
   public void sendToTopic() {
      SampleMessage msg = new SampleMessage(index++, "abc", "topic");
      rabbitTemplate.convertAndSend("trx-events-topic", null, msg);
      log.info("Sending message: {}", msg);
   }

   @Scheduled(fixedRate = 2000)
   public void sendToDirect() {
      SampleMessage msg = new SampleMessage(index++, "abc", "direct");
      rabbitTemplate.convertAndSend("trx-events-direct", null, msg);
      log.info("Sending message: {}", msg);
   }
   
}

Step 5 – Deploying Prometheus and Grafana for RabbitMQ monitoring

We will use Prometheus for collecting metrics from RabbitMQ and from both of our Spring Boot applications. Prometheus discovers metric endpoints by the Kubernetes Service app label and an HTTP port name. Of course, you can define different search criteria for that. Because Spring Boot and RabbitMQ metrics are exposed under different endpoints, we need to define two scrape jobs. In the source code below you can see the ConfigMap that contains the Prometheus configuration file.

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus
  labels:
    name: prometheus
data:
  prometheus.yml: |-
    scrape_configs:
      - job_name: 'springboot'
        metrics_path: /actuator/prometheus
        scrape_interval: 5s
        kubernetes_sd_configs:
        - role: endpoints
          namespaces:
            names:
              - default

        relabel_configs:
          - source_labels: [__meta_kubernetes_service_label_app]
            separator: ;
            regex: (producer|listener)
            replacement: $1
            action: keep
          - source_labels: [__meta_kubernetes_endpoint_port_name]
            separator: ;
            regex: http
            replacement: $1
            action: keep
          # ...
      - job_name: 'rabbitmq'
        metrics_path: /metrics
        scrape_interval: 5s
        kubernetes_sd_configs:
        - role: endpoints
          namespaces:
            names:
              - default

        relabel_configs:
          - source_labels: [__meta_kubernetes_service_label_app]
            separator: ;
            regex: rabbitmq
            replacement: $1
            action: keep
          - source_labels: [__meta_kubernetes_endpoint_port_name]
            separator: ;
            regex: prometheus
            replacement: $1
            action: keep
          # ...

For the full version of the Prometheus deployment, please refer to the source code. Prometheus tries to discover metric endpoints by the Kubernetes Service label and port name. Let's take a look at the Service for the listener application.

apiVersion: v1
kind: Service
metadata:
  name: listener-service
  labels:
    app: listener
spec:
  type: ClusterIP
  selector:
    app: listener
  ports:
  - port: 8080
    name: http

Similarly, we should create a Service for RabbitMQ.

apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-service
  labels:
    app: rabbitmq
spec:
  type: NodePort
  selector:
    app: rabbitmq
  ports:
  - port: 15672
    name: http
  - port: 5672
    name: amqp
  - port: 15692
    name: prometheus

Step 6 – Deploying stack for RabbitMQ monitoring

We can finally proceed to the deployment on Kubernetes. In summary, we have five running applications. Three of them (RabbitMQ with the management console, Prometheus, and Grafana) are part of the RabbitMQ monitoring stack. We also have a single instance of the Spring Boot AMQP producer application, and two instances of the Spring Boot AMQP listener application. You can see the final list of pods in the picture below.

rabbitmq-monitoring-pods

If you deploy the Spring Boot applications with the skaffold dev --port-forward command, you can easily access them on local ports. The other applications can be accessed via a Kubernetes Service of type NodePort.

Step 7 – RabbitMQ monitoring with Prometheus metrics

We can easily verify the list of metrics generated by the Spring Boot applications by calling the /actuator/prometheus endpoint. First, let's take a look at the metrics returned by the listener application.
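
For example, assuming the listener application is port-forwarded to local port 8080:

$ curl -s http://localhost:8080/actuator/prometheus | grep rabbitmq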

rabbitmq_not_acknowledged_published_total{name="rabbit",} 0.0
rabbitmq_unrouted_published_total{name="rabbit",} 0.0
rabbitmq_channels{name="rabbit",} 2.0
rabbitmq_consumed_total{name="rabbit",} 2432.0
rabbitmq_connections{name="rabbit",} 1.0
rabbitmq_acknowledged_total{name="rabbit",} 2432.0
spring_rabbitmq_listener_seconds_max{exception="none",listener_id="org.springframework.amqp.rabbit.RabbitListenerEndpointContainer#1",queue="d-ea28bd07-929d-4928-8d2c-5dceeec9950a",result="success",} 0.0025406
spring_rabbitmq_listener_seconds_max{exception="none",listener_id="org.springframework.amqp.rabbit.RabbitListenerEndpointContainer#0",queue="t-e8990fe4-7d8c-4a2c-96d7-fff4fe503265",result="success",} 0.0024175
spring_rabbitmq_listener_seconds_count{exception="none",listener_id="org.springframework.amqp.rabbit.RabbitListenerEndpointContainer#1",queue="d-ea28bd07-929d-4928-8d2c-5dceeec9950a",result="success",} 1712.0
spring_rabbitmq_listener_seconds_sum{exception="none",listener_id="org.springframework.amqp.rabbit.RabbitListenerEndpointContainer#1",queue="d-ea28bd07-929d-4928-8d2c-5dceeec9950a",result="success",} 0.992886413
spring_rabbitmq_listener_seconds_count{exception="none",listener_id="org.springframework.amqp.rabbit.RabbitListenerEndpointContainer#0",queue="t-e8990fe4-7d8c-4a2c-96d7-fff4fe503265",result="success",} 720.0
spring_rabbitmq_listener_seconds_sum{exception="none",listener_id="org.springframework.amqp.rabbit.RabbitListenerEndpointContainer#0",queue="t-e8990fe4-7d8c-4a2c-96d7-fff4fe503265",result="success",} 0.598468801
rabbitmq_acknowledged_published_total{name="rabbit",} 0.0
rabbitmq_failed_to_publish_total{name="rabbit",} 0.0
rabbitmq_rejected_total{name="rabbit",} 0.0
rabbitmq_published_total{name="rabbit",} 0.0

Similarly, we can verify the list of metrics from the producer application. In contrast to the listener application, it is generating rabbitmq_published_total instead of rabbitmq_consumed_total.

rabbitmq_acknowledged_published_total{name="rabbit",} 0.0
rabbitmq_unrouted_published_total{name="rabbit",} 0.0
rabbitmq_acknowledged_total{name="rabbit",} 0.0
rabbitmq_rejected_total{name="rabbit",} 0.0
rabbitmq_connections{name="rabbit",} 1.0
rabbitmq_not_acknowledged_published_total{name="rabbit",} 0.0
rabbitmq_consumed_total{name="rabbit",} 0.0
rabbitmq_failed_to_publish_total{name="rabbit",} 0.0
rabbitmq_published_total{name="rabbit",} 2553.0
rabbitmq_channels{name="rabbit",} 1.0

The list of metrics generated by the RabbitMQ Prometheus plugin is pretty impressive. I decided to use only some of them.

rabbitmq_channel_consumers 6
rabbitmq_channel_messages_published_total 2926
rabbitmq_channel_messages_delivered_total 2022
rabbitmq_channel_messages_acked_total 5922
rabbitmq_connections_opened_total 9
rabbitmq_connection_incoming_bytes_total 700878
rabbitmq_connection_outgoing_bytes_total 1388158
rabbitmq_connection_channels 5
rabbitmq_queue_messages 5917
rabbitmq_queue_consumers 6
rabbitmq_queues 10

We can visualize all the metrics on a Grafana dashboard. Grafana uses Prometheus as a data source. To repeat, Prometheus collects data from the Spring Boot application endpoints and from RabbitMQ.
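
For example, a Grafana panel backed by the Prometheus data source could plot the publish and consume rates with queries like these (a sketch based on the metrics listed above):

rate(rabbitmq_channel_messages_published_total[5m])
sum(rate(spring_rabbitmq_listener_seconds_count[5m])) by (queue)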

rabbitmq-monitoring-grafana

Step 8 – RabbitMQ monitoring with Tracing plugin

The tracing plugin allows us to log all incoming and outgoing messages. We can configure it in the RabbitMQ management console. In order to do that, we need to switch to the "Admin" tab. Then, we can enable logging of all messages or just those incoming to a particular queue or exchange.

rabbitmq-monitoring-tracing

The logs are available inside a file that you can download. Each entry in the file contains detailed information about a message. You can verify the name of the node, exchange, or queue. Of course, it also contains the message payload, properties, and time of the event as shown below.

Conclusion

RabbitMQ monitoring tools allow you to verify general metrics of the node and detailed logs of every message. In addition, Spring Boot AMQP offers dedicated metrics for applications that interact with RabbitMQ. In this article, I described how to run the full monitoring stack on Kubernetes. Enjoy 🙂

The post RabbitMQ Monitoring on Kubernetes appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2020/09/29/rabbitmq-monitoring-on-kubernetes/feed/ 6 8883
Best Practices For Microservices on Kubernetes https://piotrminkowski.com/2020/03/10/best-practices-for-microservices-on-kubernetes/ https://piotrminkowski.com/2020/03/10/best-practices-for-microservices-on-kubernetes/#comments Tue, 10 Mar 2020 11:37:56 +0000 http://piotrminkowski.com/?p=7798 There are several best practices for building microservices architecture properly. You may find many articles about it online. One of them is my previous article Spring Boot Best Practices For Microservices. I focused there on the most important aspects that should be considered when running microservice applications built on top of Spring Boot on production. […]

The post Best Practices For Microservices on Kubernetes appeared first on Piotr's TechBlog.

]]>
There are several best practices for building a microservices architecture properly. You may find many articles about it online. One of them is my previous article Spring Boot Best Practices For Microservices. I focused there on the most important aspects that should be considered when running microservice applications built on top of Spring Boot in production. I didn't assume there was any platform used for orchestration or management, just a group of independent applications. In this article, I'm going to extend the list of already introduced best practices with some new rules dedicated especially to microservices deployed on the Kubernetes platform.
The first question is whether it makes any difference when you deploy your microservices on Kubernetes instead of running them independently without any platform. Well, actually yes and no… Yes, because now you have a platform that is responsible for running and monitoring your applications, and it enforces some of its own rules. No, because you still have a microservices architecture, a group of loosely coupled, independent applications, and you should not forget about it! In fact, many of the previously introduced best practices are still relevant, although some of them need to be redefined a little. There are also some new, platform-specific rules, which should be mentioned.
One thing needs to be explained before proceeding. This list of Kubernetes microservices best practices is based on my experience in running a microservices-based architecture on cloud platforms like Kubernetes. I didn't copy it from other articles or books. In my organization, we have already migrated our microservices from Spring Cloud (Eureka, Zuul, Spring Cloud Config) to OpenShift. We are continuously improving this architecture based on our experience in maintaining it.

Example

The sample Spring Boot application that implements currently described Kubernetes microservices best practices is written in Kotlin. It is available on GitHub in repository sample-spring-kotlin-microservice under branch kubernetes: https://github.com/piomin/sample-spring-kotlin-microservice/tree/kubernetes.

1. Allow platform to collect metrics

I put a similar section in my article about best practices for Spring Boot. However, metrics are also one of the important Kubernetes microservices best practices. We were using InfluxDB as a target metrics store. Since our approach to gathering metrics data changed after the migration to Kubernetes, I redefined the title of this point to Allow the platform to collect metrics. The main difference between the current and previous approaches is the way of collecting data. We use Prometheus, because that process may be managed by the platform. InfluxDB is a push-based system, where your application actively pushes data into the monitoring system. Prometheus is a pull-based system, where a server fetches the metric values from the running application periodically. So, our main responsibility at this point is to provide endpoints on the application side for Prometheus.
Fortunately, it is very easy to provide metrics for Prometheus with Spring Boot. You need to include Spring Boot Actuator and a dedicated Micrometer library for integration with Prometheus.

<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
   <groupId>io.micrometer</groupId>
   <artifactId>micrometer-registry-prometheus</artifactId>
</dependency>

We should also enable exposing Actuator HTTP endpoints outside the application. You can enable a single endpoint dedicated to Prometheus or just expose all Actuator endpoints as shown below.

management.endpoints.web.exposure.include: '*'

After running your application, the endpoint is available by default under the path /actuator/prometheus.

best-practices-microservices-kubernetes-actuator

Assuming you run your application on Kubernetes, you need to deploy and configure Prometheus to scrape metrics from your pods. The configuration may be delivered as a Kubernetes ConfigMap. The prometheus.yml file should contain a scrape_configs section with the path of the endpoint serving metrics and the Kubernetes discovery settings. Prometheus tries to locate all application pods through Kubernetes Endpoints. The application should be labeled with app=sample-spring-kotlin-microservice and have a port named http exposed outside the container.

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus
  labels:
    name: prometheus
data:
  prometheus.yml: |-
    scrape_configs:
      - job_name: 'springboot'
        metrics_path: /actuator/prometheus
        scrape_interval: 5s
        kubernetes_sd_configs:
        - role: endpoints
          namespaces:
            names:
              - default

        relabel_configs:
          - source_labels: [__meta_kubernetes_service_label_app]
            separator: ;
            regex: sample-spring-kotlin-microservice
            replacement: $1
            action: keep
          - source_labels: [__meta_kubernetes_endpoint_port_name]
            separator: ;
            regex: http
            replacement: $1
            action: keep
          - source_labels: [__meta_kubernetes_namespace]
            separator: ;
            regex: (.*)
            target_label: namespace
            replacement: $1
            action: replace
          - source_labels: [__meta_kubernetes_pod_name]
            separator: ;
            regex: (.*)
            target_label: pod
            replacement: $1
            action: replace
          - source_labels: [__meta_kubernetes_service_name]
            separator: ;
            regex: (.*)
            target_label: service
            replacement: $1
            action: replace
          - source_labels: [__meta_kubernetes_service_name]
            separator: ;
            regex: (.*)
            target_label: job
            replacement: ${1}
            action: replace
          - separator: ;
            regex: (.*)
            target_label: endpoint
            replacement: http
            action: replace

The last step is to deploy Prometheus on Kubernetes. You should attach the ConfigMap with the Prometheus configuration to the Deployment via a mounted volume. After that, you may set the location of the configuration file using the config.file parameter: --config.file=/prometheus2/prometheus.yml.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus
  labels:
    app: prometheus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus:latest
          args:
            - "--config.file=/prometheus2/prometheus.yml"
            - "--storage.tsdb.path=/prometheus/"
          ports:
            - containerPort: 9090
              name: http
          volumeMounts:
            - name: prometheus-storage-volume
              mountPath: /prometheus/
            - name: prometheus-config-map
              mountPath: /prometheus2/
      volumes:
        - name: prometheus-storage-volume
          emptyDir: {}
        - name: prometheus-config-map
          configMap:
            name: prometheus

Now you can verify if Prometheus has discovered your application running on Kubernetes by accessing the /targets endpoint.

best-practices-microservices-kubernetes-prometheus

2. Prepare logs in right format

The approach to collecting logs is pretty similar to collecting metrics. Our application should not handle the process of sending logs by itself. It should just take care of formatting the logs sent to the output stream properly. Since Docker has a built-in logging driver for Fluentd, it is very convenient to use it as a log collector for applications running on Kubernetes. This means no additional agent is required in the container to push logs to Fluentd. Logs are shipped directly from STDOUT to the Fluentd service, and no additional log file or persistent storage is required. Fluentd tries to structure data as JSON to unify logging across different sources and destinations.
In order to format our logs as JSON readable by Fluentd, we may add the Logstash Logback Encoder library to our dependencies.

<dependency>
   <groupId>net.logstash.logback</groupId>
   <artifactId>logstash-logback-encoder</artifactId>
   <version>6.3</version>
</dependency>

Then we just need to set a default console log appender for our Spring Boot application in the file logback-spring.xml.

<configuration>
    <appender name="consoleAppender" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
    </appender>
    <logger name="jsonLogger" additivity="false" level="DEBUG">
        <appender-ref ref="consoleAppender"/>
    </logger>
    <root level="INFO">
        <appender-ref ref="consoleAppender"/>
    </root>
</configuration>

The logs are printed into STDOUT in the format visible below.

kubernetes-log-format
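
A single LogstashEncoder log line looks roughly like this (illustrative values; the exact fields depend on the encoder configuration, and the logger name here is only an example):

{"@timestamp":"2020-03-10T12:00:00.000+01:00","@version":"1","message":"Person successfully created","logger_name":"pl.piomin.services.sample.SampleController","thread_name":"http-nio-8080-exec-1","level":"INFO","level_value":20000}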

It is very simple to install Fluentd, Elasticsearch, and Kibana on Minikube. The disadvantage of this approach is that we are installing older versions of these tools.

$ minikube addons enable efk
* efk was successfully enabled
$ minikube addons enable logviewer
* logviewer was successfully enabled

After enabling the efk and logviewer add-ons, Kubernetes pulls and starts all the required pods as shown below.

best-practices-microservices-kubernetes-pods-logging

Thanks to the logstash-logback-encoder library, we may automatically create logs compatible with Fluentd, including MDC fields. Here's a screenshot from Kibana that shows logs from our test application.

best-practices-microservices-kubernetes-kibana

Optionally, you can add my library for logging requests/responses for Spring Boot application.

<dependency>
   <groupId>com.github.piomin</groupId>
   <artifactId>logstash-logging-spring-boot-starter</artifactId>
   <version>1.2.2.RELEASE</version>
</dependency>

3. Implement both liveness and readiness health check

It is important to understand the difference between liveness and readiness probes in Kubernetes. If these probes are not implemented carefully, they can degrade the overall operation of a service, for example by causing unnecessary restarts. A liveness probe is used to decide whether to restart the container or not. If an application is unavailable for any reason, restarting the container can sometimes make sense. On the other hand, a readiness probe is used to decide if a container can handle incoming traffic. If a pod has been recognized as not ready, it is removed from load balancing. A failed readiness probe does not result in a pod restart. The most typical liveness or readiness probe for web applications is realized via an HTTP endpoint.
In a typical web application running outside a platform like Kubernetes, you won't distinguish between liveness and readiness health checks. That's why most web frameworks provide only a single built-in health check implementation. For a Spring Boot application, you may easily enable a health check by adding Spring Boot Actuator to your dependencies. The important thing about the Actuator health check is that it may behave differently depending on the integrations between your application and third-party systems. For example, if you define a Spring data source for connecting to a database or declare a connection to a message broker, the health check may automatically include such validation through auto-configuration. Therefore, if you set the default Spring Actuator health check implementation as a liveness probe endpoint, it may result in unnecessary restarts if the application is unable to connect to the database or message broker. Since such behavior is not desired, I suggest implementing a very simple liveness endpoint that just verifies the availability of the application without checking connections to other external systems.
Adding a custom implementation of a health check is not very hard with Spring Boot. There are several ways to do that. One of them is visible below. We are using the mechanism provided by Spring Boot Actuator. It is worth noting that we don't override the default health check, but add another, custom implementation. The following implementation just checks if the application is able to handle incoming requests.

@Component
@Endpoint(id = "liveness")
class LivenessHealthEndpoint {

    @ReadOperation
    fun health() : Health = Health.up().build()

    @ReadOperation
    fun name(@Selector name: String) : String = "liveness"

    @WriteOperation
    fun write(@Selector name: String) {

    }

    @DeleteOperation
    fun delete(@Selector name: String) {

    }

}

In turn, the default Spring Boot Actuator health check may be the right solution for a readiness probe. Assuming your application connects to a Postgres database and a RabbitMQ message broker, you should add the following dependencies to your Maven pom.xml.

<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-amqp</artifactId>
</dependency>
<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
   <groupId>org.postgresql</groupId>
   <artifactId>postgresql</artifactId>
   <scope>runtime</scope>
</dependency>

Now, just for information, add the following property to your application.yml. It enables displaying detailed information for the auto-configured Actuator /health endpoint.

management:
  endpoint:
    health:
      show-details: always

Finally, let's call /actuator/health to see the detailed result. As you can see in the picture below, the health check returns information about the Postgres and RabbitMQ connections.

best-practices-microservices-kubernetes-readiness
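
For reference, the corresponding probe definitions in the Deployment's container spec could look roughly like this (a sketch; the paths assume the custom liveness endpoint and the default health endpoint described above are exposed over HTTP on port 8080):

livenessProbe:
  httpGet:
    path: /actuator/liveness
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /actuator/health
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10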

There is another aspect of using liveness and readiness probes in your web application. It is related to thread pooling. In a standard web container like Tomcat, each request is handled by the HTTP thread pool. If you process each request on the request thread and you have some long-running tasks in your application, you may block all available HTTP threads. If your liveness probe fails several times in a row, the application pod will be restarted. Therefore, you should consider executing long-running tasks on another thread pool. Here's an example of an HTTP endpoint implementation with DeferredResult and Kotlin coroutines.

@PostMapping("/long-running")
fun addLongRunning(@RequestBody person: Person): DeferredResult<Person> {
   val result = DeferredResult<Person>()
   GlobalScope.launch {
      logger.info("Person long-running: {}", person)
      delay(10000L)
      result.setResult(repository.save(person))
   }
   return result
}

4. Consider your integrations

Our application is hardly ever able to exist without external solutions like databases, message brokers, or other applications. There are two aspects of integration with third-party applications that should be carefully considered: connection settings and auto-creation of resources.
Let's start with connection settings. As you probably remember, in the previous section we were using the default implementation of the Spring Boot Actuator /health endpoint as a readiness probe. However, if you leave the default connection settings for Postgres and RabbitMQ, each call of the readiness probe takes a long time if they are unavailable. That's why I suggest decreasing these timeouts to lower values as shown below.

spring:
  application:
    name: sample-spring-kotlin-microservice
  datasource:
    url: jdbc:postgresql://postgres:5432/postgres
    username: postgres
    password: postgres123
    hikari:
      connection-timeout: 2000
      initialization-fail-timeout: 0
  jpa:
    database-platform: org.hibernate.dialect.PostgreSQLDialect
  rabbitmq:
    host: rabbitmq
    port: 5672
    connection-timeout: 2000

Besides properly configured connection timeouts, you should also guarantee auto-creation of the resources required by the application. For example, if you use a RabbitMQ queue for asynchronous messaging between two applications, you should guarantee that the queue is created on startup if it does not exist. To do that, first declare a queue, usually on the listener side.

@Configuration
class RabbitMQConfig {

    @Bean
    fun myQueue(): Queue {
        return Queue("myQueue", false)
    }

}

Here’s a listener bean with receiving method implementation.

@Component
class PersonListener {

    val logger: Logger = LoggerFactory.getLogger(PersonListener::class.java)

    @RabbitListener(queues = ["myQueue"])
    fun listen(msg: String) {
        logger.info("Received: {}", msg)
    }

}

A similar case applies to database integration. First, you should ensure that your application starts even if the connection to the database fails. That's why I declared PostgreSQLDialect; it is required if the application is not able to connect to the database. Moreover, each change in the entity model should be applied to the tables before application startup.
Fortunately, Spring Boot has auto-configured support for popular tools for managing database schema changes: Liquibase and Flyway. To enable Liquibase we just need to include the following dependency in Maven pom.xml.

<dependency>
   <groupId>org.liquibase</groupId>
   <artifactId>liquibase-core</artifactId>
</dependency>

Then you just need to create a changelog and put it in the default location db/changelog/db.changelog-master.yaml. Here's a sample Liquibase changelog YAML file for creating the person table.

databaseChangeLog:
  - changeSet:
      id: 1
      author: piomin
      changes:
        - createTable:
            tableName: person
            columns:
              - column:
                  name: id
                  type: int
                  autoIncrement: true
                  constraints:
                    primaryKey: true
                    nullable: false
              - column:
                  name: name
                  type: varchar(50)
                  constraints:
                    nullable: false
              - column:
                  name: age
                  type: int
                  constraints:
                    nullable: false
              - column:
                  name: gender
                  type: smallint
                  constraints:
                    nullable: false

5. Use Service Mesh

If you are building a microservices architecture outside Kubernetes, mechanisms such as load balancing, circuit breaking, fallback, or retrying are implemented on the application side. Popular cloud-native frameworks like Spring Cloud simplify the implementation of these patterns in your application and just reduce it to adding a dedicated library to your project. However, if you migrate your microservices to Kubernetes, you should no longer use these libraries for traffic management. It is becoming something of an anti-pattern. Traffic management in communication between microservices should be delegated to the platform. This approach on Kubernetes is known as Service Mesh. One of the most important Kubernetes microservices best practices is to use dedicated software for building a service mesh.
Since Kubernetes was not originally dedicated to microservices, it does not provide any built-in mechanism for advanced management of traffic between many applications. However, there are some additional solutions dedicated to traffic management, which may be easily installed on Kubernetes. One of the most popular of them is Istio. Besides traffic management, it also solves problems related to security, monitoring, tracing, and metrics collection.
Istio can be easily installed on your cluster or on a standalone development instance like Minikube. After downloading it, just run the following command.

$ istioctl manifest apply

Istio components need to be injected into a deployment manifest. After that, we can define traffic rules using YAML manifests. Istio gives many interesting options for configuration. The following example shows how to inject faults into an existing route. Faults can be either delays or aborts. We can define the percentage of affected requests using the percent field for both types of fault. In the Istio resource below, I have defined a 2-second delay for every single request sent to the Service account-service.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: account-service
spec:
  hosts:
    - account-service
  http:
  - fault:
      delay:
        fixedDelay: 2s
        percent: 100
    route:
    - destination:
        host: account-service
        subset: v1

Besides the VirtualService, we also need to define a DestinationRule for account-service. It is really simple: we have just defined the version label of the target service.

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: account-service
spec:
  host: account-service
  subsets:
  - name: v1
    labels:
      version: v1

6. Be open for framework-specific solutions

There are many interesting tools and solutions around Kubernetes which may help you run and manage applications. However, you should also not forget about the tools and solutions offered by the framework you use. Let me give you some examples. One of them is Spring Boot Admin, a useful tool designed for monitoring Spring Boot applications registered in a single discovery service. Assuming you are running microservices on Kubernetes, you may also install Spring Boot Admin there.
There is another interesting project within Spring Cloud: Spring Cloud Kubernetes. It provides some useful features that simplify integration between a Spring Boot application and Kubernetes. One of them is discovery across all namespaces. If you use that feature together with Spring Boot Admin, you may easily create a powerful tool that is able to monitor all Spring Boot microservices running on your Kubernetes cluster. For more implementation details, you may refer to my article Spring Boot Admin on Kubernetes.
Sometimes you may use Spring Boot integrations with third-party tools to easily deploy such a solution on Kubernetes without building a separate Deployment. You can even build a cluster of multiple instances. This approach may be used for products that can be embedded in a Spring Boot application. It can be, for example, RabbitMQ or Hazelcast (a popular in-memory data grid). If you are interested in more details about running a Hazelcast cluster on Kubernetes using this approach, please refer to my article Hazelcast with Spring Boot on Kubernetes.

7. Be prepared for a rollback

Kubernetes provides a convenient way to roll back an application to an older version based on ReplicaSet and Deployment objects. By default, Kubernetes keeps 10 previous ReplicaSets and lets you roll back to any of them. However, one thing needs to be pointed out. A rollback does not include the configuration stored inside ConfigMaps and Secrets. Sometimes it is desirable to roll back not only the application binaries, but also the configuration.
Fortunately, Spring Boot gives us really great possibilities for managing externalized configuration. We may keep configuration files inside the application and also load them from an external location. On Kubernetes, we may use a ConfigMap or Secret for defining Spring configuration files. The following ConfigMap definition creates the application-rollbacktest.yml Spring configuration file containing only a single property. This configuration is loaded by the application only if the Spring profile rollbacktest is active.

apiVersion: v1
kind: ConfigMap
metadata:
  name: sample-spring-kotlin-microservice
data:
  application-rollbacktest.yml: |-
    property1: 123456

The ConfigMap is included in the application through a mounted volume.

spec:
  containers:
  - name: sample-spring-kotlin-microservice
    image: piomin/sample-spring-kotlin-microservice
    ports:
    - containerPort: 8080
      name: http
    volumeMounts:
    - name: config-map-volume
      mountPath: /config/
  volumes:
  - name: config-map-volume
    configMap:
      name: sample-spring-kotlin-microservice

We also have application.yml on the classpath. The first version contains only a single property.

property1: 123

In the second version, we are going to activate the rollbacktest profile. Since a profile-specific configuration file has a higher priority than application.yml, the value of property1 is overridden with the value taken from application-rollbacktest.yml.

property1: 123
spring.profiles.active: rollbacktest

Let’s test the mechanism using a simple HTTP endpoint that prints the value of the property.

@RestController
@RequestMapping("/properties")
class TestPropertyController(@Value("\${property1}") val property1: String) {

    @GetMapping
    fun printProperty1(): String  = property1
    
}

Let's take a look at how to roll back a deployment to a previous revision. First, let's see how many revisions we have.

$ kubectl rollout history deployment/sample-spring-kotlin-microservice
deployment.apps/sample-spring-kotlin-microservice
REVISION  CHANGE-CAUSE
1         
2         
3         

Now, we call the /properties endpoint of the current deployment, which returns the value of property1. Since the rollbacktest profile is active, it returns the value from application-rollbacktest.yml.

$ curl http://localhost:8080/properties
123456

Let’s roll back to the previous revision.


$ kubectl rollout undo deployment/sample-spring-kotlin-microservice --to-revision=2
deployment.apps/sample-spring-kotlin-microservice rolled back

As you can see below, revision 2 is no longer visible; it has been redeployed as the newest revision 4.

$ kubectl rollout history deployment/sample-spring-kotlin-microservice
deployment.apps/sample-spring-kotlin-microservice
REVISION  CHANGE-CAUSE
1         
3         
4         

In this version of the application, the rollbacktest profile wasn't active, so the value of property1 is taken from application.yml.

 $ curl http://localhost:8080/properties
123

The post Best Practices For Microservices on Kubernetes appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2020/03/10/best-practices-for-microservices-on-kubernetes/feed/ 1 7798
Exporting metrics to InfluxDB and Prometheus using Spring Boot Actuator https://piotrminkowski.com/2018/05/11/exporting-metrics-to-influxdb-and-prometheus-using-spring-boot-actuator/ https://piotrminkowski.com/2018/05/11/exporting-metrics-to-influxdb-and-prometheus-using-spring-boot-actuator/#respond Fri, 11 May 2018 09:44:10 +0000 https://piotrminkowski.wordpress.com/?p=6551 Spring Boot Actuator is one of the most modified projects after the release of Spring Boot 2. It has been through major improvements, which aimed to simplify customization and include some new features like support for other web technologies, for example, the new reactive module – Spring WebFlux. Spring Boot Actuator also adds out-of-the-box support […]

The post Exporting metrics to InfluxDB and Prometheus using Spring Boot Actuator appeared first on Piotr's TechBlog.

]]>
Spring Boot Actuator is one of the most modified projects after the release of Spring Boot 2. It has been through major improvements, which aimed to simplify customization and include some new features like support for other web technologies, for example, the new reactive module, Spring WebFlux. Spring Boot Actuator also adds out-of-the-box support for exporting metrics to InfluxDB, an open-source time-series database designed to handle high volumes of timestamped data. It is really a great simplification in comparison to the version used with Spring Boot 1.5. You can see for yourself how much by reading one of my previous articles, Custom metrics visualization with Grafana and InfluxDB. I described there how to export metrics generated by Spring Boot Actuator to InfluxDB using the @ExportMetricsWriter bean. The sample Spring Boot application for that article is available in the GitHub repository sample-spring-graphite (https://github.com/piomin/sample-spring-graphite.git) in the master branch. For the current article, I have created the spring2 branch (https://github.com/piomin/sample-spring-graphite/tree/spring2), which shows how to implement the same feature as before using version 2.0 of Spring Boot and Spring Boot Actuator.

Additionally, I'm going to show you how to use Spring Boot Actuator to export the same metrics to another popular monitoring system, Prometheus. There is one major difference between the metric export models of InfluxDB and Prometheus. The first of them is a push-based system, while the second is a pull-based system. So, our sample application needs to actively send data to the InfluxDB monitoring system, while with Prometheus it only has to expose endpoints that will be scraped for data periodically. Let's begin with InfluxDB.

1. Running InfluxDB

In the previous article I didn't write much about this database and its configuration. The first step is typical for my examples: we will run a Docker container with InfluxDB. Here's the simplest command that runs InfluxDB on your local machine and exposes the HTTP API on port 8086.

$ docker run -d --name influx -p 8086:8086 influxdb

Once we have started that container, you will probably want to log in and execute some commands. Nothing simpler: just run the following command. After login, you should see the version of InfluxDB running in the target Docker container.

$ docker exec -it influx influx
Connected to http://localhost:8086 version 1.5.2
InfluxDB shell version: 1.5.2

The first step is to create a database. As you can probably guess, it can be achieved using the command create database. Then switch to the newly created database.

$ create database springboot
$ use springboot

Does that syntax look familiar to you? InfluxDB provides a query language very similar to SQL. It is called InfluxQL, and it allows you to define SELECT statements, GROUP BY or INTO clauses, and much more. However, before executing such queries, we should have data stored inside the database, am I right? Now, let's proceed to the next steps in order to generate some test metrics.

2. Integrating Spring Boot Actuator with InfluxDB

If you include the micrometer-registry-influx artifact in the project's dependencies, export to InfluxDB will be enabled automatically. Of course, we also need to include the spring-boot-starter-actuator starter.

<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
   <groupId>io.micrometer</groupId>
   <artifactId>micrometer-registry-influx</artifactId>
</dependency>

The only thing you have to do is override the default address of InfluxDB, because we are running InfluxDB in a Docker container on a VM. By default, Spring Boot tries to connect to a database named mydb. However, I have already created the springboot database, so I should also override this default value. In version 2 of Spring Boot, all the configuration properties related to Spring Boot Actuator endpoints have been moved to the management.* section.


management:
  metrics:
    export:
      influx:
        db: springboot
        uri: http://192.168.99.100:8086

You may be a little surprised, after starting a Spring Boot application with the Actuator included on the classpath, that it exposes only two HTTP endpoints by default: /actuator/info and /actuator/health. In the newest version of Spring Boot, all actuators other than /health and /info are disabled by default for security purposes. To enable all the Actuator endpoints, you have to set the property management.endpoints.web.exposure.include to '*'.
In the newest version of Spring Boot, monitoring of HTTP metrics has been improved significantly. We can enable collecting all Spring MVC metrics by setting the property management.metrics.web.server.auto-time-requests to true. Alternatively, when it is set to false, you can enable metrics for a specific REST controller by annotating it with @Timed. You can also annotate a single method inside the controller to generate metrics only for specific endpoints.
After the application boots, you may check out the full list of generated metrics by calling the endpoint GET /actuator/metrics. By default, metrics for Spring MVC controllers are generated under the name http.server.requests. This name can be customized by setting the management.metrics.web.server.requests-metric-name property. If you run the sample application available inside my GitHub repository, it listens on port 2222 by default. Now, you can check out the list of statistics generated for a single metric by calling the endpoint GET /actuator/metrics/{requiredMetricName}, as shown in the following picture.

actuator-6
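
For reference, assuming the application listens on port 2222 as mentioned above, you can call this endpoint with:

$ curl http://localhost:2222/actuator/metrics/http.server.requests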

3. Building Spring Boot application

The sample Spring Boot application used for generating metrics consists of a single controller that implements basic CRUD operations for manipulating the Person entity, a repository bean, and an entity class. The application connects to a MySQL database using a Spring Data JPA repository providing the CRUD implementation. Here's the controller class.

@RestController
@Timed
public class PersonController {

   protected Logger logger = Logger.getLogger(PersonController.class.getName());

   @Autowired
   PersonRepository repository;

   @GetMapping("/persons/pesel/{pesel}")
   public List<Person> findByPesel(@PathVariable("pesel") String pesel) {
      logger.info(String.format("Person.findByPesel(%s)", pesel));
      return repository.findByPesel(pesel);
   }

   @GetMapping("/persons/{id}")
   public Person findById(@PathVariable("id") Integer id) {
      logger.info(String.format("Person.findById(%d)", id));
      return repository.findById(id).get();
   }

   @GetMapping("/persons")
   public List<Person> findAll() {
      logger.info("Person.findAll()");
      return (List<Person>) repository.findAll();
   }

   @PostMapping("/persons")
   public Person add(@RequestBody Person person) {
      logger.info(String.format("Person.add(%s)", person));
      return repository.save(person);
   }

   @PutMapping("/persons")
   public Person update(@RequestBody Person person) {
      logger.info(String.format("Person.update(%s)", person));
      return repository.save(person);
   }

   @DeleteMapping("/persons/{id}")
   public void remove(@PathVariable("id") Integer id) {
      logger.info(String.format("Person.remove(%d)", id));
      repository.deleteById(id);
   }

}

Before running the application we have to set up a MySQL database. The most convenient way to achieve this is with the MySQL Docker image. Here's the command that runs a container with the grafana database, defines a user and password, and exposes MySQL 5 on port 33306.

$ docker run -d --name mysql -e MYSQL_DATABASE=grafana -e MYSQL_USER=grafana -e MYSQL_PASSWORD=grafana -e MYSQL_ALLOW_EMPTY_PASSWORD=yes -p 33306:3306 mysql:5
 

Then we need to set some database configuration properties on the application side. All the required tables will be created on application’s boot thanks to setting property spring.jpa.properties.hibernate.hbm2ddl.auto to update.


spring:
  datasource:
    url: jdbc:mysql://192.168.99.100:33306/grafana?useSSL=false
    username: grafana
    password: grafana
    driverClassName: com.mysql.jdbc.Driver
  jpa:
    properties:
      hibernate:
        dialect: org.hibernate.dialect.MySQL5Dialect
        hbm2ddl.auto: update
 

4. Generating metrics with Spring Boot Actuator

After starting the application and the required Docker containers, the only thing that needs to be done is to generate some test statistics. I created a JUnit test class that generates some test data and calls the endpoints exposed by the application in a loop. Here's a fragment of that test method.

int ix = new Random().nextInt(100000);
Person p = new Person();
p.setFirstName("Jan" + ix);
p.setLastName("Testowy" + ix);
p.setPesel(new DecimalFormat("0000000").format(ix) + new DecimalFormat("000").format(ix%100));
p.setAge(ix%100);
p = template.postForObject("http://localhost:2222/persons", p, Person.class);
LOGGER.info("New person: {}", p);

p = template.getForObject("http://localhost:2222/persons/{id}", Person.class, p.getId());
p.setAge(ix%100);
template.put("http://localhost:2222/persons", p);
LOGGER.info("Person updated: {} with age={}", p, ix%100);

template.delete("http://localhost:2222/persons/{id}", p.getId());

Now, let's move back to step 1. As you probably remember, I have shown you how to run the influx client in the InfluxDB Docker container. After a few minutes of working, the test should have called the exposed endpoints many times. We can check out the values of the http_server_requests metric stored in InfluxDB. The following query returns a list of measurements collected during the last 3 minutes.
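
Here's a sketch of that query as executed in the influx CLI (run use springboot first, as shown in step 1):

> select * from "http_server_requests" where time > now() - 3m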

[Image: InfluxQL query results for the http_server_requests measurement]

As you can see, all the metrics generated by Spring Boot Actuator are tagged with the following information: method, uri, status and exception. Thanks to those tags we may easily group metrics per single endpoint, including failure and success percentages. Let's see how to configure and view it in Grafana.

5. Metrics visualization using Grafana

Once we have successfully exported the metrics to InfluxDB, it is time to visualize them using Grafana. First, let's run the Grafana Docker container.


$ docker run -d --name grafana -p 3000:3000 grafana/grafana
 

Grafana provides a user-friendly interface for creating Influx queries. We define a graph that visualizes request processing time per calling endpoint and the total number of requests received by the application. If we filter the statistics stored in the http_server_requests measurement by method type and uri, we collect all the metrics generated for a single endpoint.
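
Under the hood, such a Grafana panel issues an InfluxQL query roughly like the sketch below, which you can also try in the influx CLI. The /persons uri and the mean field name are assumptions based on the sample application and the Micrometer Influx registry, so adjust them to whatever show field keys reports in your database:

> select mean("mean") from "http_server_requests" where "method" = 'GET' and "uri" = '/persons' and time > now() - 15m group by time(1m)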

[Image: Grafana query editor filtering http_server_requests by method and uri]

A similar definition should be created for the other endpoints. We will illustrate them all on a single graph.

[Image: Grafana query definitions for the remaining endpoints]

Here’s the final result.

[Image: Grafana graph with request processing time per endpoint]

Here’s the graph that visualizes the total number of requests sent to the application.

[Image: Grafana graph with the total number of requests received by the application]

6. Running Prometheus

The most suitable way to run Prometheus locally is, obviously, through a Docker container. The API is exposed on port 9090. We should also pass the initial configuration file and the name of the Docker network. Why? You will find the answers later in this step.


$ docker run -d --name prometheus -p 9090:9090 -v /tmp/prometheus.yml:/etc/prometheus/prometheus.yml --network springboot prom/prometheus

In contrast to InfluxDB, Prometheus pulls metrics from the application. Therefore, we need to enable the Spring Boot Actuator endpoint that exposes metrics for Prometheus, which is disabled by default. To enable it, set the property management.endpoint.prometheus.enabled to true, as shown in the configuration fragment below.

management:
  endpoint:
    prometheus:
      enabled: true

Then we should set the address of the Spring Boot Actuator endpoint exposed by the application in the Prometheus configuration file. The scrape_configs section is responsible for specifying a set of targets and the parameters describing how to connect to them. By default, Prometheus tries to collect data from the target endpoint once a minute.

scrape_configs:
  - job_name: 'springboot'
    metrics_path: '/actuator/prometheus'
    static_configs:
    - targets: ['person-service:2222']

Similarly to the integration with InfluxDB, we need to include the following artifact in the project's dependencies.

<dependency>
   <groupId>io.micrometer</groupId>
   <artifactId>micrometer-registry-prometheus</artifactId>
</dependency>

In my case, Docker is running on a VM and is available under the IP 192.168.99.100. If I would like Prometheus, which is launched as a Docker container, to be able to connect to my application, I should launch the application as a Docker container as well. The most convenient way to link two independent containers is through a Docker network. If both containers are assigned to the same network, they are able to connect to each other using the container name as the target address. The Dockerfile is available in the root directory of the sample application's source code. The second command visible below (docker build) is not strictly required, because the image piomin/person-service is available on my Docker Hub repository.

$ docker network create springboot
$ docker build -t piomin/person-service .
$ docker run -d --name person-service -p 2222:2222 --network springboot piomin/person-service
 

7. Integrate Prometheus with Grafana

Prometheus exposes a web console under the address 192.168.99.100:9090, where you can specify queries and display graphs with metrics. However, we can integrate it with Grafana to take advantage of the nicer visualization offered by that tool. First, you should create a Prometheus data source.

[Image: Prometheus data source configuration in Grafana]

Then we should define queries for collecting metrics from the Prometheus API. Spring Boot Actuator exposes three different metrics related to HTTP traffic: http_server_requests_seconds_count, http_server_requests_seconds_sum and http_server_requests_seconds_max. For example, using the rate() function we may calculate a per-second average rate of increase of the http_server_requests_seconds_sum time series, which stores the total number of seconds spent on processing requests. The values can be filtered by method and uri using an expression inside {}. The following picture illustrates the configuration of the rate() function per endpoint.
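
Before adding such an expression to Grafana, you can verify it directly against the Prometheus HTTP query API, for example with curl. The method and uri label values below are just an example taken from the sample application:

$ curl -G http://192.168.99.100:9090/api/v1/query --data-urlencode 'query=rate(http_server_requests_seconds_sum{method="GET",uri="/persons"}[1m])'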

[Image: Grafana panel configuration with the rate() function per endpoint]

Here’s the graph.

[Image: Grafana graph built from the Prometheus rate() queries]

Summary

The improvement in metrics generation between versions 1.5 and 2.0 of Spring Boot is significant. Exporting data to such popular monitoring systems as InfluxDB or Prometheus is now much easier than before with Spring Boot Actuator, and does not require any additional development. The metrics related to HTTP traffic are more detailed and may be easily associated with specific endpoints, thanks to tags indicating the uri, type and status of the HTTP request. I think that the changes in Spring Boot Actuator compared to the previous version of Spring Boot could be one of the main motivations to migrate your applications to the newest version.

Microservices traffic management using Istio on Kubernetes https://piotrminkowski.com/2018/05/09/microservices-traffic-management-using-istio-on-kubernetes/ Wed, 09 May 2018 09:56:44 +0000

I have already described a simple example of route configuration between two microservices deployed on Kubernetes in one of my previous articles: Service Mesh with Istio on Kubernetes in 5 steps. You can refer to this article if you are interested in the basic information about Istio, and its deployment on Kubernetes via Minikube. Today we will create some more advanced traffic management rules based on the same sample applications as used in the previous article about Istio.

The source code of the sample applications is available on GitHub in the repository sample-istio-services (https://github.com/piomin/sample-istio-services.git). There are two sample applications, callme-service and caller-service, deployed in two different versions: 1.0 and 2.0. Version 1.0 is available in the branch v1 (https://github.com/piomin/sample-istio-services/tree/v1), while version 2.0 is in the branch v2 (https://github.com/piomin/sample-istio-services/tree/v2). Using these sample applications in different versions, I'm going to show you different strategies of traffic management depending on an HTTP header set in the incoming requests.

We may force caller-service to route all the requests to a specific version of callme-service by setting the header x-version to v1 or v2. We can also leave this header unset, which results in splitting traffic between all existing versions of the service. If the request comes to version v1 of caller-service, the traffic is split 50-50 between the two instances of callme-service. If the request is received by the v2 instance of caller-service, 75% of the traffic is forwarded to version v2 of callme-service, and only 25% to v1. The scenario described above is illustrated in the following diagram.

[Diagram: traffic routing scenarios between caller-service and callme-service]

Before we proceed to the example, I should say a few words about traffic management with Istio. If you have read my previous article about Istio, you probably know that each rule is assigned to a destination. Rules control the process of request routing within a service mesh. One very important fact about them, especially for the purposes of the example illustrated in the diagram above, is that multiple rules can be applied to the same destination. The priority of every rule is determined by its precedence field. There is one principle related to the value of this field: the higher the value of this integer field, the greater the priority of the rule. As you may guess, if there is more than one rule with the same precedence value, the order of rule evaluation is undefined. In addition to a destination, we may also define a source of the request in order to restrict a rule to a specific caller. If there are multiple deployments of a calling service, we can even filter them out by setting the source's labels field. Of course, we can also specify the attributes of an HTTP request, such as uri, scheme or headers, that are used for matching a request with a defined rule.

Ok, now let's take a look at the rule with the highest priority. Its name is callme-service-v1 (1). It applies to callme-service (2), and has the highest priority in comparison to the other rules (3). It is applied only to requests sent by caller-service (4) that contain the HTTP header x-version with value v1 (5). This route rule forwards traffic only to version v1 of callme-service (6).

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: callme-service-v1 # (1)
spec:
  destination:
    name: callme-service # (2)
  precedence: 4 # (3)
  match:
    source:
      name: caller-service # (4)
    request:
      headers:
        x-version:
          exact: "v1" # (5)
  route:
  - labels:
      version: v1 # (6)

Here’s the fragment of the first diagram, which is handled by this route rule.

[Diagram: requests with header x-version=v1 routed to callme-service v1]

The next rule callme-service-v2 (1) has a lower priority (2). However, it does not conflict with the first rule, because it applies only to the requests containing x-version header with value v2 (3). It forwards all requests to version v2 of callme-service (4).

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: callme-service-v2 # (1)
spec:
  destination:
    name: callme-service
  precedence: 3 # (2)
  match:
    source:
      name: caller-service
    request:
      headers:
        x-version:
          exact: "v2" # (3)
  route:
  - labels:
      version: v2 # (4)

As before, here’s the fragment of the first diagram, which is handled by this route rule.

[Diagram: requests with header x-version=v2 routed to callme-service v2]

The rule callme-service-v1-default (1) visible in the code fragment below has a lower priority (2) than the two previously described rules. In practice this means that it is evaluated only if the conditions defined in the two previous rules were not fulfilled. Such a situation occurs if you do not pass the x-version header inside the HTTP request, or if it has a different value than v1 or v2. The rule visible below applies only to requests coming from the instance of the calling service labeled with version v1 (3). Finally, the traffic to callme-service is load balanced in proportions 50-50 between the two versions of that service (4).

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: callme-service-v1-default # (1)
spec:
  destination:
    name: callme-service
  precedence: 2 # (2)
  match:
    source:
      name: caller-service
      labels:
        version: v1 # (3)
  route: # (4)
  - labels:
      version: v1
    weight: 50
  - labels:
      version: v2
    weight: 50

Here’s the fragment of the first diagram, which is handled by this route rule.

[Diagram: traffic from caller-service v1 split 50-50 between callme-service v1 and v2]

The last rule is pretty similar to the previously described callme-service-v1-default. Its name is callme-service-v2-default (1), and it applies only to version v2 of caller-service (3). It has the lowest priority (2), and splits traffic between the two versions of callme-service in proportions 75-25 in favor of version v2 (4).

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: callme-service-v2-default # (1)
spec:
  destination:
    name: callme-service
  precedence: 1 # (2)
  match:
    source:
      name: caller-service
      labels:
        version: v2 # (3)
  route: # (4)
  - labels:
      version: v1
    weight: 25
  - labels:
      version: v2
    weight: 75

The same as before, I have also included the diagram illustrating the behaviour of this rule.

[Diagram: traffic from caller-service v2 split 75-25 in favor of callme-service v2]

All the rules may be placed inside a single file. In that case they should be separated with the line ---. This file is available in the code repository inside the callme-service module as multi-rule.yaml. To deploy all the defined rules on Kubernetes, just execute the following command.

$ kubectl apply -f multi-rule.yaml

After a successful deployment, you may check out the list of available rules by running the command istioctl get routerule.

[Image: output of istioctl get routerule listing the deployed rules]

Before we start any tests, we obviously need to have the sample applications deployed on Kubernetes. These applications are really simple and pretty similar to the applications used for tests in my previous article about Istio. The controller visible below implements the method GET /callme/ping, which prints the version of the application taken from pom.xml and the value of the x-version HTTP header received in the request.

@RestController
@RequestMapping("/callme")
public class CallmeController {

   private static final Logger LOGGER = LoggerFactory.getLogger(CallmeController.class);

   @Autowired
   BuildProperties buildProperties;

   @GetMapping("/ping")
   public String ping(@RequestHeader(name = "x-version", required = false) String version) {
      LOGGER.info("Ping: name={}, version={}, header={}", buildProperties.getName(), buildProperties.getVersion(), version);
      return buildProperties.getName() + ":" + buildProperties.getVersion() + " with version " + version;
   }

}

Here's the controller class that implements the method GET /caller/ping. It prints the version of caller-service taken from pom.xml and calls the method GET /callme/ping exposed by callme-service. It needs to include the x-version header in the request sent to the downstream service.

@RestController
@RequestMapping("/caller")
public class CallerController {

   private static final Logger LOGGER = LoggerFactory.getLogger(CallerController.class);

   @Autowired
   BuildProperties buildProperties;
   @Autowired
   RestTemplate restTemplate;

   @GetMapping("/ping")
   public String ping(@RequestHeader(name = "x-version", required = false) String version) {
      LOGGER.info("Ping: name={}, version={}, header={}", buildProperties.getName(), buildProperties.getVersion(), version);
      HttpHeaders headers = new HttpHeaders();
      if (version != null)
         headers.set("x-version", version);
      HttpEntity<String> entity = new HttpEntity<>(headers);
      ResponseEntity<String> response = restTemplate.exchange("http://callme-service:8091/callme/ping", HttpMethod.GET, entity, String.class);
      return buildProperties.getName() + ":" + buildProperties.getVersion() + ". Calling… " + response.getBody() + " with header " + version;
   }

}

Now, we may proceed to building and deploying the applications on Kubernetes. Here are the further steps.

1. Building application

First, switch to branch v1 and build the whole project sample-istio-services by executing mvn clean install command.

2. Building Docker image

The Dockerfiles are placed in the root directory of every application. Build their Docker images by executing the following commands.

$ docker build -t piomin/callme-service:1.0 .
$ docker build -t piomin/caller-service:1.0 .

Alternatively, you may omit this step, because images piomin/callme-service and piomin/caller-service are available on my Docker Hub account.

3. Inject Istio components to Kubernetes deployment file

The Kubernetes YAML deployment file is available in the root directory of every application as deployment.yaml. The result of the following command should be saved as a separate file, for example deployment-with-istio.yaml.

$ istioctl kube-inject -f deployment.yaml > deployment-with-istio.yaml

4. Deployment on Kubernetes

Finally, you can execute a well-known kubectl command in order to deploy a Docker container with our sample application.


$ kubectl apply -f deployment-with-istio.yaml

Then switch to branch v2 and repeat the steps described above for version 2.0 of the sample applications, as sketched below. The final deployment result is visible in the picture that follows.
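
The commands for version 2.0 simply mirror the previous steps. Here's a sketch; tagging the images with 2.0 is an assumption that follows the convention used for version 1.0:

$ git checkout v2
$ mvn clean install
$ docker build -t piomin/callme-service:2.0 .
$ docker build -t piomin/caller-service:2.0 .
$ istioctl kube-inject -f deployment.yaml > deployment-with-istio.yaml
$ kubectl apply -f deployment-with-istio.yaml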

[Image: caller-service and callme-service deployed on Kubernetes in versions 1.0 and 2.0]

One very useful thing when running Istio on Kubernetes is the out-of-the-box integration with tools like Zipkin, Grafana or Prometheus. Istio automatically sends some metrics that are collected by Prometheus, for example the total number of requests in the metric istio_request_count. YAML deployment files for these plugins are available inside the ${ISTIO_HOME}/install/kubernetes/addons directory. Before installing Prometheus using the kubectl command, I suggest changing the service type from the default ClusterIP to NodePort by adding the line type: NodePort.

apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/scrape: 'true'
  labels:
    name: prometheus
  name: prometheus
  namespace: istio-system
spec:
  type: NodePort
  selector:
    app: prometheus
  ports:
  - name: prometheus
    protocol: TCP
    port: 9090

Then we should run the command kubectl apply -f prometheus.yaml in order to deploy Prometheus on Kubernetes. The deployment is created inside the istio-system namespace. To check the external port of the service, run the following command. For me, the console is available under the address http://192.168.99.100:32293.
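
Here's a sketch of that check; the NodePort assigned on your cluster will differ:

$ kubectl get svc prometheus -n istio-system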

[Image: kubectl output with the NodePort assigned to the prometheus service]

In the following diagram visualized using Prometheus, I filtered out only the requests sent to callme-service. The green color points to requests received by version v2 of the service, while the red color points to requests processed by version v1. As you can see in this diagram, at the beginning I sent the requests to caller-service with the HTTP header x-version set to v2, then I did not set this header and the traffic was split between the two deployed instances of the service, and finally I set it to v1. I defined the expression rate(istio_request_count{destination_service="callme-service.default.svc.cluster.local"}[1m]), which returns the per-second rate of requests received by callme-service.

[Image: Prometheus graph of the istio_request_count rate for callme-service v1 and v2]

Testing

Before sending some test requests to caller-service, we need to obtain its address on Kubernetes. After executing the following command, you can see that it is available under the address http://192.168.99.100:32237/caller/ping.
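
A sketch of that lookup with kubectl, assuming the caller-service Kubernetes Service is of type NodePort as in the sample deployment:

$ kubectl get svc caller-service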

[Image: kubectl output with the address and NodePort of caller-service]

We have four possible scenarios. First, when we set the header x-version to v1, the request is always routed to callme-service-v1.
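
For example, such a request may be sent with curl; the address comes from the previous step and your NodePort will differ:

$ curl -H "x-version: v1" http://192.168.99.100:32237/caller/ping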

[Image: response from caller-service with header x-version=v1]

If the x-version header is not included in the request, the traffic is split between callme-service-v1…

[Image: response from caller-service without the x-version header, routed to callme-service v1]

… and callme-service-v2.

[Image: response from caller-service without the x-version header, routed to callme-service v2]

Finally, if we set the header x-version to v2, the request is always routed to callme-service-v2.

[Image: response from caller-service with header x-version=v2]

Conclusion

Using Istio you can easily create and apply simple and more advanced traffic management rules to the applications deployed on Kubernetes. You can also monitor metrics and traces through the integration between Istio and Zipkin, Prometheus and Grafana.
