Minikube Archives - Piotr's TechBlog
https://piotrminkowski.com/tag/minikube/
Java, Spring, Kotlin, microservices, Kubernetes, containers

Multi-node Kubernetes Cluster with Minikube
Tue, 09 Jul 2024
https://piotrminkowski.com/2024/07/09/multi-node-kubernetes-cluster-with-minikube/

This article will teach you how to run and manage a multi-node Kubernetes cluster locally with Minikube. We will run this cluster on Docker. After that, we will enable some useful add-ons, install Kubernetes-native tools for monitoring and observability, and run a sample app that requires storage. You can compare this article with a similar post about the Azure Kubernetes Service.

Prerequisites

Before you begin, you need to install Docker on your local machine. Then you need to download and install Minikube. On macOS, we can do it using the Homebrew command as shown below:

$ brew install minikube
ShellSession

Once Minikube is successfully installed, we can use its CLI. Let’s verify the version used in this article:

$ minikube version
minikube version: v1.33.1
commit: 5883c09216182566a63dff4c326a6fc9ed2982ff
ShellSession

Source Code

If you would like to try it yourself, you can always take a look at my source code. To do that, you need to clone my GitHub repository. This time, we won’t work much with the source code itself. However, the repository contains the sample Spring Boot app that uses storage exposed on the Kubernetes cluster. Once you clone the repository, go to the volumes/files-app directory and follow the instructions below.

Create a Multi-node Kubernetes Cluster with Minikube

In order to create a multi-node Kubernetes cluster with Minikube, we need to use the --nodes or -n parameter in the minikube start command. Additionally, we can increase the default value of memory and CPUs reserved for the cluster with the --memory and --cpus parameters. Here’s the required command to execute:

$ minikube start --memory='12g' --cpus='4' -n 3
ShellSession
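Minikube also lets us scale the cluster after creation with the minikube node subcommands. A short session sketch (the node name in the last command is illustrative):

```shell
# Add a worker node to the running cluster
$ minikube node add

# List all nodes with their names and IPs
$ minikube node list

# Remove a node by name (names follow the m02, m03, ... pattern)
$ minikube node delete m04
```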

By the way, if you increase the resources assigned to the Minikube instance, you should also take care of resource reservations for Docker.

Once we run the minikube start command, the cluster creation begins. If everything goes fine, you should see output similar to this:

minikube-kubernetes-create

Now, we can use Minikube with the kubectl tool:

$ kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:52879
CoreDNS is running at https://127.0.0.1:52879/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
ShellSession

We can display a list of running nodes:

$ kubectl get nodes
NAME           STATUS   ROLES           AGE     VERSION
minikube       Ready    control-plane   4h10m   v1.30.0
minikube-m02   Ready    <none>          4h9m    v1.30.0
minikube-m03   Ready    <none>          4h9m    v1.30.0
ShellSession

Sample Spring Boot App

Our Spring Boot app is simple. It exposes some REST endpoints for file-based operations on the target directory attached as a mounted volume. In order to expose REST endpoints, we need to include the Spring Boot Web starter. We will build the image using the Jib Maven plugin.

<properties>
  <spring-boot.version>3.3.1</spring-boot.version>
</properties>

<dependencies>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
  </dependency>
</dependencies>

<build>
  <plugins>
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
    </plugin>
    <plugin>
      <groupId>com.google.cloud.tools</groupId>
      <artifactId>jib-maven-plugin</artifactId>
      <version>3.4.3</version>
    </plugin>
  </plugins>
</build>

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-dependencies</artifactId>
      <version>${spring-boot.version}</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
XML

Let’s take a look at the main @RestController in our app. It exposes endpoints for listing all the files inside the target directory (GET /files/all), another one for creating a new file (POST /files/{name}), and also for adding a new string line to the existing file (POST /files/{name}/line).

package pl.piomin.services.files.controller;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.web.bind.annotation.*;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.List;

import static java.nio.file.Files.list;
import static java.nio.file.Files.writeString;

@RestController
@RequestMapping("/files")
public class FilesController {

    private static final Logger LOG = LoggerFactory.getLogger(FilesController.class);

    @Value("${MOUNT_PATH:/mount/data}")
    String root;

    @GetMapping("/all")
    public List<String> files() throws IOException {
        // Files.list() opens a directory stream that must be closed
        try (var files = list(Path.of(root))) {
            return files.map(Path::toString).toList();
        }
    }

    @PostMapping("/{name}")
    public String createFile(@PathVariable("name") String name) throws IOException {
        return Files.createFile(Path.of(root + "/" + name)).toString();
    }

    @PostMapping("/{name}/line")
    public void addLine(@PathVariable("name") String name,
                        @RequestBody String line) {
        try {
            writeString(Path.of(root + "/" + name), line, StandardOpenOption.APPEND);
        } catch (IOException e) {
            LOG.error("Error while writing to file", e);
        }
    }
}
Java
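Note that Files.writeString with StandardOpenOption.APPEND adds no newline, so consecutive calls to the /files/{name}/line endpoint concatenate their payloads. A small standalone sketch of that behavior (not part of the app itself):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class AppendDemo {
    public static void main(String[] args) throws IOException {
        // Create a temporary file, mirroring what POST /files/{name} does
        Path file = Files.createTempFile("demo", ".txt");
        // Two appends, mirroring two calls to POST /files/{name}/line
        Files.writeString(file, "hello1", StandardOpenOption.APPEND);
        Files.writeString(file, "hello2", StandardOpenOption.APPEND);
        // APPEND adds no separator, so the lines are concatenated
        System.out.println(Files.readString(file)); // prints: hello1hello2
        Files.delete(file);
    }
}
```

If you want each appended line on its own row, the controller would have to append System.lineSeparator() itself.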

Usually, I deploy apps on Kubernetes with Skaffold. However, this time there are some issues with the integration between a multi-node Minikube cluster and Skaffold. You can find a detailed description of those issues here. Therefore, we build the image directly with the Jib Maven plugin and then run the app with the kubectl CLI.

Install Addons and Tools

Minikube comes with a set of predefined add-ons for Kubernetes. We can enable each of them with a single minikube addons enable <ADDON_NAME> command. Although several add-ons are available, we still need to install some useful Kubernetes-native tools, like Prometheus, separately, for example using a Helm chart. To list all the available add-ons, we execute the following command:

$ minikube addons list
ShellSession

Install Addon for Storage

The default storage provisioner in Minikube doesn’t support multi-node clusters. It also doesn’t implement the CSI interface and cannot handle volume snapshots. Fortunately, Minikube offers the csi-hostpath-driver addon, which deploys the “CSI Hostpath Driver”. Since this addon is disabled by default, we need to enable it:

$ minikube addons enable csi-hostpath-driver
ShellSession

Then, we can make the csi-hostpath-sc storage class the default for dynamic volume claims:

$ kubectl patch storageclass csi-hostpath-sc -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
$ kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
ShellSession

Install Monitoring Stack with Helm

The monitoring stack is not available as an add-on. However, we can easily install it using a Helm chart. We will use the official community chart for that: kube-prometheus-stack. First, let’s add the required repository:

$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
ShellSession

Then, we can install the Prometheus monitoring stack in the monitoring namespace by executing the following command:

$ helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  -n monitoring --create-namespace
ShellSession

Once you install Prometheus on your Minikube, you can take advantage of the several default metrics exposed by this tool. For example, the Lens IDE automatically integrates with Prometheus metrics and displays graphs with a cluster overview.

minikube-kubernetes-cluster-metrics

We can also see the visualization of resource usage for all running pods, deployments, or stateful sets.

minikube-kubernetes-pod-metrics

Install Postgres with Helm

We will also install the Postgres database for multi-node cluster testing purposes. Once again, there is a Helm chart that simplifies the Postgres installation on Kubernetes. It is published in the Bitnami repository. First, let’s add the required repository:

$ helm repo add bitnami https://charts.bitnami.com/bitnami
ShellSession

Then, we can install Postgres in the db namespace. We increase the default number of instances to 3.

$ helm install postgresql bitnami/postgresql \
  --set readReplicas.replicaCount=3 \
  -n db --create-namespace
ShellSession
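To verify the installation, we can connect with psql from a temporary client pod. The secret name and key below follow the Bitnami chart’s usual conventions; check the chart’s post-install notes if they differ in your chart version:

```shell
# Read the auto-generated superuser password from the chart's secret
$ export POSTGRES_PASSWORD=$(kubectl get secret postgresql -n db \
    -o jsonpath="{.data.postgres-password}" | base64 -d)

# Run a one-off client pod and open a psql session against the service
$ kubectl run postgresql-client --rm -it --restart=Never -n db \
    --image=bitnami/postgresql \
    --env="PGPASSWORD=$POSTGRES_PASSWORD" \
    -- psql -h postgresql -U postgres
```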

The chart creates the StatefulSet object with 3 replicas.

$ kubectl get statefulset -n db
NAME         READY   AGE
postgresql   3/3     55m
ShellSession

We can display the list of running pods. As you can see, Kubernetes scheduled two pods on the minikube-m02 node and a single pod on the minikube node:

$ kubectl get po -n db -o wide
NAME           READY   STATUS    RESTARTS   AGE   IP            NODE 
postgresql-0   1/1     Running   0          56m   10.244.1.9    minikube-m02
postgresql-1   1/1     Running   0          23m   10.244.1.10   minikube-m02
postgresql-2   1/1     Running   0          23m   10.244.0.4    minikube
ShellSession

Under the hood, there are three persistent volumes created. They use the default csi-hostpath-sc storage class and the RWO access mode:

$ kubectl get pvc -n db
NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      VOLUMEATTRIBUTESCLASS   AGE
data-postgresql-0   Bound    pvc-e9b55ce8-978a-44ae-8fab-d5d6f911f1f9   8Gi        RWO            csi-hostpath-sc   <unset>                 65m
data-postgresql-1   Bound    pvc-d93af9ad-a034-4fbb-8377-f39005cddc99   8Gi        RWO            csi-hostpath-sc   <unset>                 32m
data-postgresql-2   Bound    pvc-b683f1dc-4cd9-466c-9c99-eb0d356229c3   8Gi        RWO            csi-hostpath-sc   <unset>                 32m
ShellSession

Build and Deploy Sample Spring Boot App on Minikube

In the first step, we build the app image using the Jib Maven plugin. I’m pushing the image to my own Docker registry under the piomin name, so you should change it to your own registry account:

$ cd volumes/files-app
$ mvn clean compile jib:build -Dimage=piomin/files-app:latest
ShellSession

The image is successfully pushed to the remote registry and is available under the piomin/files-app:latest tag.

Let’s create a new namespace on Minikube. We will run our app in the demo namespace.

$ kubectl create ns demo
ShellSession

Then, let’s create the PersistentVolumeClaim. Since we will run multiple app pods distributed across all the Kubernetes nodes, and the same volume is shared between all the instances, we need the ReadWriteMany access mode.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
  namespace: demo
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
YAML

Let’s apply the manifest and verify the status of the claim:

$ kubectl get pvc -n demo
NAME   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      VOLUMEATTRIBUTESCLASS   AGE
data   Bound    pvc-08fe242a-6599-4282-b03c-ee38e092431e   1Gi        RWX            csi-hostpath-sc
ShellSession

After that, we can deploy our app. In order to spread the pods across all the cluster nodes, we need to define a PodAntiAffinity rule (1). It allows only a single app pod to run on each node. The deployment also mounts the data volume into all the app pods (2) (3).

apiVersion: apps/v1
kind: Deployment
metadata:
  name: files-app
  namespace: demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: files-app
  template:
    metadata:
      labels:
        app: files-app
    spec:
      # (1)
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - files-app
              topologyKey: "kubernetes.io/hostname"
      containers:
      - name: files-app
        image: piomin/files-app:latest
        imagePullPolicy: Always
        resources:
          requests:
            memory: 200Mi
            cpu: 100m
        ports:
        - containerPort: 8080
        env:
          - name: MOUNT_PATH
            value: /mount/data
        # (2)
        volumeMounts:
          - name: data
            mountPath: /mount/data
      # (3)
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: data
YAML
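Assuming the Deployment above is saved as deployment.yaml (the filename is just an example), we apply it with kubectl:

```shell
$ kubectl apply -f deployment.yaml -n demo
```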

Let’s verify the list of pods running the app:

$ kubectl get po -n demo
NAME                         READY   STATUS    RESTARTS   AGE
files-app-84897d9b57-5qqdr   0/1     Pending   0          36m
files-app-84897d9b57-7gwgp   1/1     Running   0          36m
files-app-84897d9b57-bjs84   0/1     Pending   0          36m
ShellSession

Although we created an RWX volume, only a single pod is running. As you can see, the CSI Hostpath Driver doesn’t fully support the read-write-many mode on Minikube.

In order to solve that problem, we can enable the Storage Provisioner Gluster addon in Minikube.

$ minikube addons enable storage-provisioner-gluster
ShellSession

After enabling it, several new pods are running in the storage-gluster namespace.

$ kubectl -n storage-gluster get pods
NAME                                       READY   STATUS    RESTARTS   AGE
glusterfile-provisioner-79cf7f87d5-87p57   1/1     Running   0          5m25s
glusterfs-d8pfp                            1/1     Running   0          5m25s
glusterfs-mp2qx                            1/1     Running   0          5m25s
glusterfs-rlnxz                            1/1     Running   0          5m25s
heketi-778d755cd-jcpqb                     1/1     Running   0          5m25s
ShellSession

Also, there is a new default StorageClass named glusterfile:

$ kubectl get sc
NAME                    PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
csi-hostpath-sc         hostpath.csi.k8s.io        Delete          Immediate           false                  20h
glusterfile (default)   gluster.org/glusterfile    Delete          Immediate           false                  19s
standard                k8s.io/minikube-hostpath   Delete          Immediate           false                  21h
ShellSession
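To move the app to the new default class, we have to recreate the PVC, since the storage class of a bound claim cannot be changed. A sketch, assuming the manifests are stored in pvc.yaml and deployment.yaml (the filenames are illustrative); note that deleting a PVC discards its data:

```shell
# Remove the app and the claim bound to csi-hostpath-sc
$ kubectl delete deployment files-app -n demo
$ kubectl delete pvc data -n demo

# Re-apply: the new claim now binds to the default "glusterfile" class
$ kubectl apply -f pvc.yaml -n demo
$ kubectl apply -f deployment.yaml -n demo
```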

Once we redeploy our app and recreate the PVC using a new default storage class, we can expose our sample Spring Boot app as a Kubernetes service:

apiVersion: v1
kind: Service
metadata:
  name: files-app
spec:
  selector:
    app: files-app
  ports:
  - port: 8080
    protocol: TCP
    name: http
  type: ClusterIP
YAML

Then, let’s enable port forwarding for that service to access it over the localhost:8080:

$ kubectl port-forward svc/files-app 8080 -n demo
ShellSession

Finally, we can run some tests to list and create some files on the target volume:

$ curl http://localhost:8080/files/all
[]

$ curl http://localhost:8080/files/test1.txt -X POST
/mount/data/test1.txt

$ curl http://localhost:8080/files/test2.txt -X POST
/mount/data/test2.txt

$ curl http://localhost:8080/files/all
["/mount/data/test1.txt","/mount/data/test2.txt"]

$ curl http://localhost:8080/files/test1.txt/line -X POST -d "hello1"
$ curl http://localhost:8080/files/test1.txt/line -X POST -d "hello2"
ShellSession

And verify the content of a particular file inside the volume.
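For example, we can read one of the created files directly through a pod with kubectl exec:

```shell
# Read test1.txt through any pod backing the Deployment
$ kubectl exec -n demo deploy/files-app -- cat /mount/data/test1.txt
hello1hello2
```

The two appended lines are concatenated because the controller’s writeString call with StandardOpenOption.APPEND adds no newline.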

Final Thoughts

In this article, I wanted to share my experience working with the multi-node Kubernetes cluster simulation on Minikube. It was a very quick introduction. I hope it helps 🙂

Monitor Kubernetes Cost Across Teams with Kubecost
Fri, 25 Aug 2023
https://piotrminkowski.com/2023/08/25/monitor-kubernetes-cost-across-teams-with-kubecost/

In this article, you will learn how to monitor the real-time cost of a Kubernetes cluster shared across several teams with Kubecost. We won’t focus on the cloud aspects of this tool, like cost optimization. Our main goal is to measure how costs are shared between several different teams using the same Kubernetes cluster. This can be a very important aspect of working with Kubernetes, even in an on-premise environment. If you want to run several apps in your organization within a project, you probably need to guarantee an internal budget for it. Therefore, tools that translate resource usage into cost may be very useful in such a scenario. Kubecost seems to be the most popular tool in this area. Let’s try it 🙂

If you are interested in more advanced exercises related to Kubernetes and Prometheus metrics, you can read about autoscaling with HPA in my article here.

Prerequisites

In order to perform the exercise, you need a Kubernetes cluster. It can be a managed cluster from a cloud provider. However, you can also run Kubernetes locally, as I do, for example with Minikube. The only important thing is to reserve enough resources, since we will run several apps. My proposal is to guarantee 16GB of memory and 4 CPUs:

$ minikube start --memory='16g' --cpus='4'

Once the cluster is started, we can proceed to the next section.

Install Kubecost on Kubernetes

We will use the Helm chart to install Kubecost on Minikube. Here’s the installation command:

$ helm install kubecost cost-analyzer \
    --repo https://kubecost.github.io/cost-analyzer/ \
    --namespace kubecost --create-namespace \
    --set kubecostProductConfigs.productKey.key="123"

It installs our tool in the kubecost namespace. Kubecost provides a UI dashboard for visualizing the basic aspects of cost monitoring.

The nice thing is that it also installs the whole required monitoring stack, including Prometheus and Grafana:

$ kubectl get po -n kubecost
NAME                                          READY   STATUS    RESTARTS   AGE
kubecost-cost-analyzer-66b8cf968c-4j7bb       2/2     Running   0          4m
kubecost-grafana-5fcd9f86c6-44njc             2/2     Running   0          4m
kubecost-kube-state-metrics-c566bb85f-qp5tw   1/1     Running   0          4m
kubecost-prometheus-node-exporter-k2g5g       1/1     Running   0          4m
kubecost-prometheus-server-b8bd4479d-ldvx2    2/2     Running   0          4m

Finally, let’s just enable port forwarding according to the message after the Helm chart installation:

$ kubectl port-forward -n kubecost deployment/kubecost-cost-analyzer 9090

After that, we can access the UI under the http://localhost:9090 address:

kubernetes-cost-ui

Then, click the Settings tab and scroll down to the Labels section. Here we will find a list of labels to use in our apps. As you can see, we can assign each app to a different team, department, or environment. Of course, we can also override the name of each label, but I don’t think it is necessary for the purpose of our exercise.

kubernetes-cost-labels

Deploy Test Apps on Kubernetes

Let’s deploy some apps on Kubernetes in different namespaces. First of all, I created five namespaces on my cluster (from demo-1 to demo-5):

$ kubectl create ns demo-1
$ kubectl create ns demo-2
$ kubectl create ns demo-3
$ kubectl create ns demo-4
$ kubectl create ns demo-5
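The same can be done with a single shell loop:

```shell
$ for i in 1 2 3 4 5; do kubectl create ns "demo-$i"; done
```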

There are two sample apps available. Both of them are written with the Spring Boot framework. The first connects to a Mongo database, while the second uses an in-memory data store. Let’s begin with the Mongo Deployment. We will run it in three namespaces: demo-1, demo-2, and demo-3. There are three labels included in the Deployment object: team, env, and department, which are recognized by Kubecost. Here’s an illustration of the labeling strategy per namespace.

kubernetes-cost-labeling-strategy

Assuming I’m deploying Mongo in demo-1, here’s the YAML manifest. Please pay attention to the values of the labels (1) (2) (3). We will change those values according to the illustration above, depending on the namespace.

apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb
data:
  database-name: admin
---
apiVersion: v1
kind: Secret
metadata:
  name: mongodb
type: Opaque
data:
  database-password: dGVzdDEyMw==
  database-user: dGVzdA==
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb
  labels:
    app: mongodb
    team: team-a # (1)
    env: dev # (2)
    department: dep-a # (3) 
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
        team: team-a # (1)
        env: dev # (2)
        department: dep-a # (3)
    spec:
      containers:
        - name: mongodb
          image: mongo:5.0
          ports:
            - containerPort: 27017
          env:
            - name: MONGO_INITDB_DATABASE
              valueFrom:
                configMapKeyRef:
                  name: mongodb
                  key: database-name
            - name: MONGO_INITDB_ROOT_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mongodb
                  key: database-user
            - name: MONGO_INITDB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongodb
                  key: database-password
          resources:
            requests:
              memory: 256Mi
              cpu: 100m
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb
  labels:
    app: mongodb
spec:
  ports:
    - port: 27017
      protocol: TCP
  selector:
    app: mongodb

In the next step, we will deploy our Spring Boot app that connects to the Mongo database. Since it connects to Mongo, it has to run in the same namespaces. We should also use the same labeling strategy as in the previous Deployment.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app
  labels:
    app: sample-app
    team: team-a
    env: dev
    department: dep-2
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
        team: team-a
        env: dev
        department: dep-2
    spec:
      containers:
      - name: sample-spring-boot-on-kubernetes
        image: piomin/sample-spring-boot-on-kubernetes:latest
        ports:
        - containerPort: 8080
        env:
          - name: MONGO_DATABASE
            valueFrom:
              configMapKeyRef:
                name: mongodb
                key: database-name
          - name: MONGO_USERNAME
            valueFrom:
              secretKeyRef:
                name: mongodb
                key: database-user
          - name: MONGO_PASSWORD
            valueFrom:
              secretKeyRef:
                name: mongodb
                key: database-password
          - name: MONGO_URL
            value: mongodb
        readinessProbe:
          httpGet:
            port: 8080
            path: /actuator/health/readiness
            scheme: HTTP
          timeoutSeconds: 1
          periodSeconds: 10
          successThreshold: 1
          failureThreshold: 3
        resources:
          requests:
            memory: 512Mi
            cpu: 200m
---
apiVersion: v1
kind: Service
metadata:
  name: sample-app
spec:
  type: ClusterIP
  selector:
    app: sample-app
  ports:
  - port: 8080

Now, let’s install both Mongo and our app on Kubernetes using the following commands (repeat the same actions for the demo-2 and demo-3 namespaces after changing the labels and replicas):

$ kubectl apply -f mongodb.yaml -n demo-1
$ kubectl apply -f spring-app.yaml -n demo-1

We will also deploy the Spring Boot app with an in-memory store in the next two namespaces: demo-4 and demo-5. Of course, please remember the labeling strategy described above. Here’s the YAML manifest for the demo-4 namespace. To distinguish them, we can set a higher number of replicas in the demo-5 namespace, e.g. 5.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app-im
  labels:
    app: sample-app-im
    team: team-c # (1)
    env: dev # (2)
    department: dep-3 # (3)
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sample-app-im
  template:
    metadata:
      labels:
        app: sample-app-im
        team: team-c # (1)
        env: dev # (2)
        department: dep-3 # (3)
    spec:
      containers:
      - name: sample-spring-kotlin-microservice
        image: piomin/sample-spring-kotlin-microservice:latest
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: 384Mi
            cpu: 150m
---
apiVersion: v1
kind: Service
metadata:
  name: sample-app-im
spec:
  type: ClusterIP
  selector:
    app: sample-app-im
  ports:
  - port: 8080

Let’s apply the objects from the YAML manifest above (repeat the same action for demo-5 after changing the labels and replicas):

$ kubectl apply -f spring-app-im.yaml -n demo-4

Here’s a full list of our deployments with labels and namespaces:

Once we have deployed all the required apps, we can switch to the Kubecost dashboard.

Using Kubecost Dashboard for Kubernetes Cost Monitoring

In the meantime, I changed the currency to Polish “Złoty” 🙂 We can create diagrams using various criteria. Let’s focus on the criteria we set with the custom Kubecost labels in all the deployments. To create diagrams, go to the Monitor menu item.

We can choose between several available aggregation labels. It is also possible to choose a custom label not predefined by Kubecost. We can use a single aggregation or combine several labels together (multi-aggregation).

kubernetes-cost-ui-aggregation

Here’s the diagram that illustrates aggregation per namespace. We can filter out only important data. In that case, I defined the filter demo* to show only our namespaces with sample apps.

kubernetes-cost-diagram-namespace

Here’s a similar diagram, but aggregated by the team label. As you can see, we have all three teams we defined in the labels for our sample apps.

After that, we can change the diagram type to, e.g., Proportional cost:

Let’s prepare the same diagram type, but using two search criteria. We combine the team with the department in the proportional costs:

kubernetes-cost-diagram-multi

And the last diagram in the article. It displays costs per environment (dev, test and prod). Instead of a cumulative cost, we can show e.g. hourly rate.

Final Thoughts

I had a lot of fun visualizing the cost of my Kubernetes cluster with Kubecost. Of course, Kubecost has some additional features, but I focused mainly on cost visualization across teams and their apps. I had no major problems with running and understanding the core features of the tool.

Development on Kubernetes: Choose a platform
Wed, 05 Aug 2020
https://piotrminkowski.com/2020/08/05/development-on-kubernetes-choose-a-platform/

An important step before you begin the implementation of microservices is to choose the Kubernetes cluster for development. In this article, I’m going to describe several available solutions.

You can find a video version of every part of this tutorial on my YouTube channel. The second part is available here: Microservices on Kubernetes: Part 2 – Cluster setup

The first important question is whether I should prefer a local single-node instance or deploy my applications directly on a remote cluster. Sometimes the installation of a local Kubernetes cluster for development may be troublesome, especially if you use Windows. You also need sufficient RAM and CPU resources on your machine. On the other hand, communication with a remote platform can take more time, and a managed Kubernetes cluster may not be free.
This article is the second part of my guide, where I’ll be showing you tools, frameworks, and platforms that speed up the development of JVM microservices on Kubernetes. We are going to implement sample microservices-based architectures using Kotlin and then deploy and run them on different Kubernetes clusters.

The previous part of my guide is available here: Development on Kubernetes: IDE & Tools

Minikube

Minikube runs a single-node Kubernetes cluster for development inside a VM on your local machine. It supports VM drivers like VirtualBox, Hyper-V, and KVM2. Since Minikube is a relatively mature solution in the Kubernetes world, the list of supported features is pretty impressive: LoadBalancer, multi-cluster, NodePorts, persistent volumes, Ingress, the dashboard, and various container runtimes.
All you need is a Docker (or similarly compatible) container runtime or a virtual machine environment, and Kubernetes can be started with a single command: minikube start. The minimal requirements are 2 CPUs, 2GB of free memory, and 20GB of free disk space.
Something especially useful during development is the ability to install addons. For example, we can easily enable the whole EFK stack with a predefined configuration using a single command:

$ minikube addons enable efk

Kubernetes on Docker Desktop

Kubernetes on Docker Desktop is an interesting alternative to Minikube for running a cluster on your local machine. Docker Desktop includes a standalone Kubernetes server and client, as well as Docker CLI integration. The Kubernetes server runs locally within your Docker instance, is not configurable, and is a single-node cluster.
Unfortunately, it is not available to all Windows users. You need Windows 10 64-bit: Pro, Enterprise, or Education. For Windows 10 Home, you first need to enable the WSL 2 feature. We also need 4GB of RAM and the Hyper-V and Containers Windows features enabled. In return, you get both Docker and Kubernetes in a single tool, plus a UI dashboard where you can change the configuration or perform some basic troubleshooting.

development-on-kubernetes-docker-desktop

Kubernetes in Docker (kind)

kind is a tool for running local Kubernetes clusters using Docker container “nodes”. It supports multi-node clusters, including HA setups, and can be installed on Linux, macOS, and Windows. Creating a cluster is very similar to Minikube's approach – we need to execute the command kind create cluster. Since it does not use a VM but runs the cluster inside Docker containers, startup is significantly faster than with Minikube or Kubernetes on Docker Desktop. That's why it may be an interesting option for local development.
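For a multi-node setup, kind reads the desired topology from a configuration file. A minimal sketch (node roles follow the kind.x-k8s.io/v1alpha4 API; the file name is arbitrary):

```yaml
# kind-config.yaml: one control-plane node and two workers
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
```

The cluster is then created with kind create cluster --config kind-config.yaml.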

Civo

Civo seems to be an interesting alternative to other hosted Kubernetes platforms. Since it is based on k3s, a lightweight Kubernetes distribution, a new cluster can be created much faster than on other platforms. For me, it took less than 2 minutes for a 3-node managed cluster. The other good news is that you may become a beta tester of the product and receive $70 of free credit monthly. Of course, Civo is a relatively new solution, not free from errors and shortcomings.
We can download the Civo CLI to interact with our cluster. We can easily install popular software there, like PostgreSQL, MongoDB, Redis, or the cloud-native edge router Traefik.

development-on-kubernetes-civo

To interact with Civo using their CLI, we first need to copy the API key, which is available in the “Security” section. Then you should execute the following CLI commands. After that, you can use the Civo cluster with kubectl. Since creating a new cluster takes only 2 minutes, you can remove it after your development is finished and create a new one on demand.

$ civo apikey add piomin-civo-newkey 
$ civo apikey current piomin-civo-newkey
$ civo k3s config piomin-civo --merge

Google Kubernetes Engine

Google Kubernetes Engine is probably your first choice for a remote cluster, and not only because Kubernetes is associated with Google. It offers the best free trial plan, including $300 of credit for 12 months. You can choose between many products that can be installed on your cluster in one click. If you run a Kubernetes cluster for development with default settings (3 nodes with a total capacity of 6 vCPU and 12GB of RAM) on demand, the free credit should be enough for the whole year.
With Google Cloud Console you can manage your cluster easily.

development-on-kubernetes-gke

There are also some disadvantages. It takes relatively long to create a new cluster. The good news is that we can scale the existing cluster's node pool down to 0 and scale it up again when needed. Such an operation is much faster.
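For example, with the gcloud CLI the node pool can be resized from the command line; the cluster and pool names below are assumptions for illustration:

```shell
# scale the node pool down to zero when the cluster is idle
$ gcloud container clusters resize dev-cluster --node-pool default-pool --num-nodes 0

# scale it back up before the next development session
$ gcloud container clusters resize dev-cluster --node-pool default-pool --num-nodes 3
```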

development-on-kubernetes-gkepool

Another disadvantage is the lack of the latest software versions. For example, the predefined templates currently offer Kubernetes 1.16 or Istio 1.4 (of course, you can install the latest Istio 1.6 manually). If you are looking for a guide to deploying a JVM-based application on GKE, you may refer to my article Running Kotlin Microservice on Google Kubernetes Engine.

Digital Ocean

Digital Ocean is advertised as being designed for developers. It allows you to spin up a managed Kubernetes cluster for development in just a few clicks. For me, it took around 7 minutes to create a 3-node cluster there. The estimated cost of such a plan is $60 per month. You get $100 of free credit for the first two months.
You can scale the cluster down to a single node, or destroy it and create a new one on demand. It is also possible to use predefined templates to install additional products like Linkerd, NGINX Ingress Controller, Jaeger, or even the Okteto platform in one click. By default, the total cluster capacity is 6 vCPU, 12GB of RAM, and 240GB of disk space.
The pricing plan on Digital Ocean is pretty clear: you pay just for running worker nodes. For a standard node (2 vCPU, 4GB RAM) it is $0.03/hour. So if you use such a cluster only for development and destroy it after every session, the total monthly cost shouldn't be large. It comes with a preinstalled Kubernetes Dashboard, as shown below.

development-on-kubernetes-digitalocean

Something that makes it stand out is the possibility to install Kubernetes 1.18. For example, on Google Cloud or Amazon Web Services we may currently install version 1.16 at most. However, compared with GKE it offers a much shorter trial period and a smaller free credit.

Okteto

I have already written about Okteto in one of my previous articles, Development on Kubernetes with Okteto and Spring Boot, where I described the process of local development and running a Spring Boot application on a remote cluster. The main idea behind Okteto is: “Code locally with the tools you know and love. Run and debug directly in Okteto Cloud.” With this development platform you do not get a whole Kubernetes cluster, but only a single namespace where you can deploy your applications.
Their current offer for developers is pretty attractive. In the free plan you get a single namespace, 4 vCPU, 8GB of memory, and 5GB of disk space. All applications are shut down after 24 hours of inactivity. You can also buy the Developer Pro Plan, which offers 2 namespaces and never sleeps, for $20/month.
With Okteto you can deploy popular databases and message brokers like MongoDB, PostgreSQL, Redis, or RabbitMQ in one click. You may also integrate your application with such software by defining an Okteto manifest in the root directory of your project.

okteto-webui

Conclusion

I’m using most of these solutions. Which of them is chosen depends on the use case. For example, if I need to set up a predefined EFK stack quickly I can do it easily on Minikube. Otherwise, if my application is connecting with some third-party solutions like RabbitMQ, or databases (MongoDB, Postgresql) I can easily deploy such an environment on Okteto or Civo. In a standard situation, I’m using Kubernetes on Docker Desktop, which automatically starts as a service on Windows.

The post Development on Kubernetes: Choose a platform appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2020/08/05/development-on-kubernetes-choose-a-platform/feed/ 5 8290
Best Practices For Microservices on Kubernetes https://piotrminkowski.com/2020/03/10/best-practices-for-microservices-on-kubernetes/ https://piotrminkowski.com/2020/03/10/best-practices-for-microservices-on-kubernetes/#comments Tue, 10 Mar 2020 11:37:56 +0000 http://piotrminkowski.com/?p=7798 There are several best practices for building microservices architecture properly. You may find many articles about it online. One of them is my previous article Spring Boot Best Practices For Microservices. I focused there on the most important aspects that should be considered when running microservice applications built on top of Spring Boot on production. […]

The post Best Practices For Microservices on Kubernetes appeared first on Piotr's TechBlog.

]]>
There are several best practices for building microservices architecture properly. You may find many articles about it online. One of them is my previous article Spring Boot Best Practices For Microservices. I focused there on the most important aspects that should be considered when running microservice applications built on top of Spring Boot on production. I didn’t assume there is any platform used for orchestration or management, but just a group of independent applications. In this article, I’m going to extend the list of already introduced best practices with some new rules dedicated especially to microservices deployed on the Kubernetes platform.
The first question is whether it makes any difference when you deploy your microservices on Kubernetes instead of running them independently without any platform. Well, actually yes and no… Yes, because now you have a platform that is responsible for running and monitoring your applications, and it introduces some rules of its own. No, because you still have a microservices architecture, a group of loosely coupled, independent applications, and you should not forget about it! In fact, many of the previously introduced best practices still apply, though some of them need to be redefined a little. There are also some new, platform-specific rules that should be mentioned.
One thing needs to be explained before proceeding: this list of Kubernetes microservices best practices is based on my own experience in running microservices-based architectures on cloud platforms like Kubernetes, not copied from other articles or books. In my organization, we have already migrated our microservices from Spring Cloud (Eureka, Zuul, Spring Cloud Config) to OpenShift, and we continuously improve this architecture based on experience in maintaining it.

Example

The sample Spring Boot application that implements currently described Kubernetes microservices best practices is written in Kotlin. It is available on GitHub in repository sample-spring-kotlin-microservice under branch kubernetes: https://github.com/piomin/sample-spring-kotlin-microservice/tree/kubernetes.

1. Allow platform to collect metrics

I have also included a similar section in my article about best practices for Spring Boot; metrics are one of the important Kubernetes microservices best practices as well. We were using InfluxDB as the target metrics store. Since our approach to gathering metrics data changed after the migration to Kubernetes, I renamed this point to Allow platform to collect metrics. The main difference between the current and the previous approach is the way data is collected. We now use Prometheus, because the collection process can be managed by the platform. InfluxDB is a push-based system, where your application actively pushes data into the monitoring system. Prometheus is a pull-based system, where the server periodically fetches metrics values from the running application. So, our main responsibility here is to expose endpoints on the application side for Prometheus.
Fortunately, it is very easy to expose metrics for Prometheus with Spring Boot. You need to include Spring Boot Actuator and the dedicated Micrometer registry for integration with Prometheus.

<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
   <groupId>io.micrometer</groupId>
   <artifactId>micrometer-registry-prometheus</artifactId>
</dependency>

We should also enable exposing Actuator HTTP endpoints outside the application. You can enable only the single endpoint dedicated to Prometheus, or just expose all Actuator endpoints as shown below.

management.endpoints.web.exposure.include: '*'
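
If exposing everything is too permissive for your environment, a more restrictive sketch limits the list to the endpoints actually needed:

```yaml
management:
  endpoints:
    web:
      exposure:
        include: health,prometheus
```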

After running your application, the endpoint is available by default under the path /actuator/prometheus.

best-practices-microservices-kubernetes-actuator

Assuming you run your application on Kubernetes, you need to deploy and configure Prometheus to scrape metrics from your pods. The configuration may be delivered as a Kubernetes ConfigMap. The prometheus.yml file should contain a scrape_configs section with the path of the endpoint serving metrics and the Kubernetes service discovery settings. Prometheus tries to locate all application pods via Kubernetes Endpoints. The application should be labeled with app=sample-spring-kotlin-microservice and have a port named http exposed outside the container.

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus
  labels:
    name: prometheus
data:
  prometheus.yml: |-
    scrape_configs:
      - job_name: 'springboot'
        metrics_path: /actuator/prometheus
        scrape_interval: 5s
        kubernetes_sd_configs:
        - role: endpoints
          namespaces:
            names:
              - default

        relabel_configs:
          - source_labels: [__meta_kubernetes_service_label_app]
            separator: ;
            regex: sample-spring-kotlin-microservice
            replacement: $1
            action: keep
          - source_labels: [__meta_kubernetes_endpoint_port_name]
            separator: ;
            regex: http
            replacement: $1
            action: keep
          - source_labels: [__meta_kubernetes_namespace]
            separator: ;
            regex: (.*)
            target_label: namespace
            replacement: $1
            action: replace
          - source_labels: [__meta_kubernetes_pod_name]
            separator: ;
            regex: (.*)
            target_label: pod
            replacement: $1
            action: replace
          - source_labels: [__meta_kubernetes_service_name]
            separator: ;
            regex: (.*)
            target_label: service
            replacement: $1
            action: replace
          - source_labels: [__meta_kubernetes_service_name]
            separator: ;
            regex: (.*)
            target_label: job
            replacement: ${1}
            action: replace
          - separator: ;
            regex: (.*)
            target_label: endpoint
            replacement: http
            action: replace
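
The scrape config above matches on the service label app and the port name http. For reference, a Service satisfying those rules might look like this sketch (the port number is an assumption based on the sample app):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sample-spring-kotlin-microservice
  labels:
    app: sample-spring-kotlin-microservice
spec:
  selector:
    app: sample-spring-kotlin-microservice
  ports:
  - name: http
    port: 8080
    targetPort: 8080
```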

The last step is to deploy Prometheus on Kubernetes. You should attach the ConfigMap with the Prometheus configuration to the Deployment via a mounted volume. After that, you may set the location of the configuration file using the --config.file parameter: --config.file=/prometheus2/prometheus.yml.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus
  labels:
    app: prometheus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus:latest
          args:
            - "--config.file=/prometheus2/prometheus.yml"
            - "--storage.tsdb.path=/prometheus/"
          ports:
            - containerPort: 9090
              name: http
          volumeMounts:
            - name: prometheus-storage-volume
              mountPath: /prometheus/
            - name: prometheus-config-map
              mountPath: /prometheus2/
      volumes:
        - name: prometheus-storage-volume
          emptyDir: {}
        - name: prometheus-config-map
          configMap:
            name: prometheus

Now you can verify whether Prometheus has discovered your application running on Kubernetes by accessing the /targets endpoint.

best-practices-microservices-kubernetes-prometheus

2. Prepare logs in the right format

The approach to collecting logs is pretty similar to collecting metrics. Our application should not handle the process of shipping logs by itself; it should just take care of properly formatting the logs sent to the output stream. Since Docker has a built-in logging driver for Fluentd, it is very convenient to use it as a log collector for applications running on Kubernetes. This means no additional agent is required in the container to push logs to Fluentd. Logs are shipped directly to the Fluentd service from STDOUT, and no additional log file or persistent storage is required. Fluentd tries to structure data as JSON to unify logging across different sources and destinations.
In order to format our logs as JSON readable by Fluentd, we may add the Logstash Logback Encoder library to our dependencies.

<dependency>
   <groupId>net.logstash.logback</groupId>
   <artifactId>logstash-logback-encoder</artifactId>
   <version>6.3</version>
</dependency>

Then we just need to set a default console log appender for our Spring Boot application in the file logback-spring.xml.

<configuration>
    <appender name="consoleAppender" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
    </appender>
    <logger name="jsonLogger" additivity="false" level="DEBUG">
        <appender-ref ref="consoleAppender"/>
    </logger>
    <root level="INFO">
        <appender-ref ref="consoleAppender"/>
    </root>
</configuration>

The logs are printed into STDOUT in the format visible below.

kubernetes-log-format

It is very simple to install Fluentd, Elasticsearch, and Kibana on Minikube. The disadvantage of this approach is that we install older versions of these tools.

$ minikube addons enable efk
* efk was successfully enabled
$ minikube addons enable logviewer
* logviewer was successfully enabled

After enabling the efk and logviewer addons, Kubernetes pulls and starts all the required pods, as shown below.

best-practices-microservices-kubernetes-pods-logging

Thanks to the logstash-logback-encoder library we may automatically create logs compatible with Fluentd, including MDC fields. Here's a screenshot from Kibana that shows logs from our test application.

best-practices-microservices-kubernetes-kibana

Optionally, you can add my library for logging requests/responses for Spring Boot application.

<dependency>
   <groupId>com.github.piomin</groupId>
   <artifactId>logstash-logging-spring-boot-starter</artifactId>
   <version>1.2.2.RELEASE</version>
</dependency>

3. Implement both liveness and readiness health checks

It is important to understand the difference between liveness and readiness probes in Kubernetes. If these probes are not implemented carefully, they can degrade the overall operation of a service, for example by causing unnecessary restarts. The liveness probe is used to decide whether to restart the container. If an application is unavailable for any reason, restarting the container can sometimes make sense. On the other hand, the readiness probe is used to decide whether a container can handle incoming traffic. If a pod is recognized as not ready, it is removed from load balancing. A failing readiness probe does not result in a pod restart. The most typical liveness or readiness probe for a web application is realized via an HTTP endpoint.
In a typical web application running outside a platform like Kubernetes, you won't distinguish between liveness and readiness health checks. That's why most web frameworks provide only a single built-in health check implementation. For a Spring Boot application you may easily enable a health check by including Spring Boot Actuator in your dependencies. The important thing about the Actuator health check is that it may behave differently depending on the integrations between your application and third-party systems. For example, if you define a Spring data source for connecting to a database or declare a connection to a message broker, the health check may automatically include such validation through auto-configuration. Therefore, if you set the default Spring Actuator health check implementation as the liveness probe endpoint, it may result in unnecessary restarts when the application is unable to connect to the database or message broker. Since such behavior is not desired, I suggest implementing a very simple liveness endpoint that just verifies the availability of the application without checking connections to external systems.
Adding a custom health check implementation is not very hard with Spring Boot. There are several ways to do it; one of them is shown below. We use the mechanism provided by Spring Boot Actuator. It is worth noting that we don't override the default health check, but add another, custom implementation. The following implementation just checks whether the application is able to handle incoming requests.

@Component
@Endpoint(id = "liveness")
class LivenessHealthEndpoint {

    @ReadOperation
    fun health() : Health = Health.up().build()

    @ReadOperation
    fun name(@Selector name: String) : String = "liveness"

    @WriteOperation
    fun write(@Selector name: String) {

    }

    @DeleteOperation
    fun delete(@Selector name: String) {

    }

}
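The endpoint above is exposed at /actuator/liveness and can back the pod's liveness probe, while the regular health endpoint can back readiness. A sketch of the container fragment (the port and timing values are assumptions):

```yaml
livenessProbe:
  httpGet:
    path: /actuator/liveness
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /actuator/health
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
```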

In turn, the default Spring Boot Actuator health check may be the right solution for a readiness probe. Assuming your application connects to a PostgreSQL database and a RabbitMQ message broker, you should add the following dependencies to your Maven pom.xml.

<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-amqp</artifactId>
</dependency>
<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
   <groupId>org.postgresql</groupId>
   <artifactId>postgresql</artifactId>
   <scope>runtime</scope>
</dependency>

Now, just for illustration, add the following property to your application.yml. It enables displaying detailed information in the auto-configured Actuator /health endpoint.

management:
  endpoint:
    health:
      show-details: always

Finally, let’s call /actuator/health to see the detailed result. As you see in the picture below, a health check returns information about Postgres and RabbitMQ connections.

best-practices-microservices-kubernetes-readiness

There is another aspect of using liveness and readiness probes in your web application, related to thread pooling. In a standard web container like Tomcat, each request is handled by an HTTP thread pool. If you process every request in the request thread and have some long-running tasks in your application, you may block all available HTTP threads. If your liveness probe then fails several times in a row, the application pod will be restarted. Therefore, you should consider running long-running tasks on another thread pool. Here's an example of an HTTP endpoint implementation with DeferredResult and Kotlin coroutines.

@PostMapping("/long-running")
fun addLongRunning(@RequestBody person: Person): DeferredResult<Person> {
   // complete the DeferredResult from a coroutine, freeing the HTTP thread
   val result: DeferredResult<Person> = DeferredResult()
   GlobalScope.launch {
      logger.info("Person long-running: {}", person)
      delay(10000L)
      result.setResult(repository.save(person))
   }
   return result
}

4. Consider your integrations

Our application can hardly ever exist without external solutions like databases, message brokers, or other applications. There are two aspects of integration with third-party applications that should be carefully considered: connection settings and auto-creation of resources.
Let's start with connection settings. As you probably remember, in the previous section we used the default implementation of the Spring Boot Actuator /health endpoint as a readiness probe. However, if you leave the default connection settings for PostgreSQL and RabbitMQ, each call of the readiness probe takes a long time when they are unavailable. That's why I suggest decreasing these timeouts to lower values, as shown below.

spring:
  application:
    name: sample-spring-kotlin-microservice
  datasource:
    url: jdbc:postgresql://postgres:5432/postgres
    username: postgres
    password: postgres123
    hikari:
      connection-timeout: 2000
      initialization-fail-timeout: 0
  jpa:
    database-platform: org.hibernate.dialect.PostgreSQLDialect
  rabbitmq:
    host: rabbitmq
    port: 5672
    connection-timeout: 2000

Apart from properly configured connection timeouts, you should also guarantee the auto-creation of resources required by the application. For example, if you use a RabbitMQ queue for asynchronous messaging between two applications, you should guarantee that the queue is created on startup if it does not exist. To do that, first declare the queue – usually on the listener side.

@Configuration
class RabbitMQConfig {

    @Bean
    fun myQueue(): Queue {
        return Queue("myQueue", false)
    }

}

Here’s a listener bean with receiving method implementation.

@Component
class PersonListener {

    val logger: Logger = LoggerFactory.getLogger(PersonListener::class.java)

    @RabbitListener(queues = ["myQueue"])
    fun listen(msg: String) {
        logger.info("Received: {}", msg)
    }

}
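On the producer side, a complementary sender could rely on RabbitTemplate. This is only a sketch; the class name is hypothetical and not part of the sample repository:

```kotlin
import org.springframework.amqp.rabbit.core.RabbitTemplate
import org.springframework.stereotype.Component

@Component
class PersonSender(private val rabbitTemplate: RabbitTemplate) {

    // publishes to the default exchange with "myQueue" as the routing key,
    // so the message lands on the queue declared in RabbitMQConfig
    fun send(msg: String) {
        rabbitTemplate.convertAndSend("myQueue", msg)
    }

}
```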

The case is similar with database integration. First, you should ensure that your application starts even if the connection to the database fails. That's why I declared PostgreSQLDialect explicitly: it is required when the application is not able to connect to the database. Moreover, each change in the entity model should be applied to the tables before application startup.
Fortunately, Spring Boot has auto-configured support for popular database schema migration tools: Liquibase and Flyway. To enable Liquibase, we just need to include the following dependency in the Maven pom.xml.

<dependency>
   <groupId>org.liquibase</groupId>
   <artifactId>liquibase-core</artifactId>
</dependency>

Then you just need to create a changelog and put it in the default location db/changelog/db.changelog-master.yaml. Here's a sample Liquibase YAML changelog that creates the table person.

databaseChangeLog:
  - changeSet:
      id: 1
      author: piomin
      changes:
        - createTable:
            tableName: person
            columns:
              - column:
                  name: id
                  type: int
                  autoIncrement: true
                  constraints:
                    primaryKey: true
                    nullable: false
              - column:
                  name: name
                  type: varchar(50)
                  constraints:
                    nullable: false
              - column:
                  name: age
                  type: int
                  constraints:
                    nullable: false
              - column:
                  name: gender
                  type: smallint
                  constraints:
                    nullable: false

5. Use Service Mesh

If you are building a microservices architecture outside Kubernetes, mechanisms like load balancing, circuit breaking, fallbacks, or retries are realized on the application side. Popular cloud-native frameworks like Spring Cloud simplify the implementation of these patterns and reduce it to adding a dedicated library to your project. However, if you migrate your microservices to Kubernetes, you should no longer use these libraries for traffic management; it becomes a kind of anti-pattern. Traffic management in communication between microservices should be delegated to the platform. On Kubernetes this approach is known as a service mesh, and one of the most important Kubernetes microservices best practices is to use dedicated software for building one.
Since Kubernetes was not originally dedicated to microservices, it does not provide any built-in mechanism for advanced management of traffic between many applications. However, there are additional solutions dedicated to traffic management that may easily be installed on Kubernetes. One of the most popular of them is Istio. Besides traffic management, it also addresses security, monitoring, tracing, and metrics collection.
Istio can easily be installed on your cluster or on a standalone development instance like Minikube. After downloading it, just run the following command.

$ istioctl manifest apply
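
With the control plane installed, automatic sidecar injection can be enabled per namespace, so that new pods get the Istio proxy without editing each manifest (the namespace name is an assumption):

```shell
# label the namespace so the sidecar injector mutates newly created pods
$ kubectl label namespace default istio-injection=enabled
```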

Istio components need to be injected into the deployment manifest. After that, we can define traffic rules using YAML manifests. Istio offers many interesting configuration options. The following example shows how to inject faults into an existing route. Faults can be either delays or aborts, and we can define the error rate for both types using the percent field. In the Istio resource below I have defined a 2-second delay for every single request sent to the Service account-service.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: account-service
spec:
  hosts:
    - account-service
  http:
  - fault:
      delay:
        fixedDelay: 2s
        percent: 100
    route:
    - destination:
        host: account-service
        subset: v1

Besides the VirtualService, we also need to define a DestinationRule for account-service. It is really simple – we have just defined the version label of the target service.

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: account-service
spec:
  host: account-service
  subsets:
  - name: v1
    labels:
      version: v1

6. Be open to framework-specific solutions

There are many interesting tools and solutions around Kubernetes that may help you run and manage applications. However, you should also not forget about tools offered by the framework you use. Let me give you some examples. One of them is Spring Boot Admin, a useful tool designed for monitoring Spring Boot applications registered in a single discovery. Assuming you run microservices on Kubernetes, you may install Spring Boot Admin there as well.
There is another interesting project within Spring Cloud: Spring Cloud Kubernetes. It provides useful features that simplify integration between a Spring Boot application and Kubernetes. One of them is discovery across all namespaces. If you use that feature together with Spring Boot Admin, you can easily build a powerful tool that monitors all Spring Boot microservices running on your Kubernetes cluster. For implementation details, you may refer to my article Spring Boot Admin on Kubernetes.
Sometimes you may use Spring Boot integrations with third-party tools to easily deploy such a solution on Kubernetes without building a separate Deployment. You can even build a cluster of multiple instances. This approach may be used for products that can be embedded in a Spring Boot application, for example RabbitMQ or Hazelcast (a popular in-memory data grid). If you are interested in more details about running a Hazelcast cluster on Kubernetes using this approach, please refer to my article Hazelcast with Spring Boot on Kubernetes.

7. Be prepared for a rollback

Kubernetes provides a convenient way to roll back an application to an older version based on ReplicaSet and Deployment objects. By default, Kubernetes keeps 10 previous ReplicaSets and lets you roll back to any of them. However, one thing needs to be pointed out: a rollback does not include configuration stored in ConfigMaps and Secrets, and sometimes it is desirable to roll back not only the application binaries but also the configuration.
Fortunately, Spring Boot gives us really great possibilities for managing externalized configuration. We may keep configuration files inside the application and also load them from an external location. On Kubernetes we may use ConfigMaps and Secrets for defining Spring configuration files. The following ConfigMap creates an application-rollbacktest.yml Spring configuration file containing a single property. This configuration is loaded by the application only if the Spring profile rollbacktest is active.

apiVersion: v1
kind: ConfigMap
metadata:
  name: sample-spring-kotlin-microservice
data:
  application-rollbacktest.yml: |-
    property1: 123456

The ConfigMap is included in the application through a mounted volume.

spec:
  containers:
  - name: sample-spring-kotlin-microservice
    image: piomin/sample-spring-kotlin-microservice
    ports:
    - containerPort: 8080
      name: http
    volumeMounts:
    - name: config-map-volume
      mountPath: /config/
  volumes:
  - name: config-map-volume
    configMap:
      name: sample-spring-kotlin-microservice

We also have application.yml on the classpath. The first version contains only a single property.

property1: 123

In the second version we activate the rollbacktest profile. Since a profile-specific configuration file has a higher priority than application.yml, the value of property1 is overridden with the value taken from application-rollbacktest.yml.

property1: 123
spring.profiles.active: rollbacktest
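The override behaviour described above can be illustrated with a tiny plain-Java sketch (a simplification; Spring's real property resolution involves many more sources and ordering rules):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of profile-specific override: properties from
// application-<profile>.yml are applied on top of application.yml.
public class PropertyResolution {

    static Map<String, String> resolve(Map<String, String> base,
                                       Map<String, String> profileSpecific) {
        Map<String, String> merged = new HashMap<>(base);
        merged.putAll(profileSpecific); // profile-specific values win
        return merged;
    }

    public static void main(String[] args) {
        Map<String, String> applicationYml = Map.of("property1", "123");
        Map<String, String> rollbacktestYml = Map.of("property1", "123456");
        // prints 123456 when the rollbacktest profile is active
        System.out.println(resolve(applicationYml, rollbacktestYml).get("property1"));
    }
}
```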

Let’s test the mechanism using a simple HTTP endpoint that prints the value of the property.

@RestController
@RequestMapping("/properties")
class TestPropertyController(@Value("\${property1}") val property1: String) {

    @GetMapping
    fun printProperty1(): String  = property1
    
}

Let’s take a look at how to roll back a deployment to an older version. First, let’s see how many revisions we have.

$ kubectl rollout history deployment/sample-spring-kotlin-microservice
deployment.apps/sample-spring-kotlin-microservice
REVISION  CHANGE-CAUSE
1         
2         
3         

Now we call the /properties endpoint of the current deployment, which returns the value of property1. Since the rollbacktest profile is active, it returns the value from application-rollbacktest.yml.

$ curl http://localhost:8080/properties
123456

Let’s roll back to the previous revision.


$ kubectl rollout undo deployment/sample-spring-kotlin-microservice --to-revision=2
deployment.apps/sample-spring-kotlin-microservice rolled back

As you can see below, revision 2 is no longer visible on the list; it has been redeployed as the newest revision, 4.

$ kubectl rollout history deployment/sample-spring-kotlin-microservice
deployment.apps/sample-spring-kotlin-microservice
REVISION  CHANGE-CAUSE
1         
3         
4         

In this version of the application the rollbacktest profile wasn’t active, so the value of property1 is taken from application.yml.

$ curl http://localhost:8080/properties
123

The post Best Practices For Microservices on Kubernetes appeared first on Piotr's TechBlog.

]]>
Spring Boot Admin on Kubernetes https://piotrminkowski.com/2020/02/18/spring-boot-admin-on-kubernetes/ https://piotrminkowski.com/2020/02/18/spring-boot-admin-on-kubernetes/#comments Tue, 18 Feb 2020 07:43:26 +0000 http://piotrminkowski.com/?p=7723 The main goal of this article is to show how to monitor Spring Boot applications running on Kubernetes with Spring Boot Admin. I have already written about Spring Boot Admin more than two years ago in the article Monitoring Microservices With Spring Boot Admin. You can find there a detailed description of its main features. […]

The post Spring Boot Admin on Kubernetes appeared first on Piotr's TechBlog.

]]>
The main goal of this article is to show how to monitor Spring Boot applications running on Kubernetes with Spring Boot Admin. I have already written about Spring Boot Admin more than two years ago in the article Monitoring Microservices With Spring Boot Admin. There you can find a detailed description of its main features. Since then, some new features have been added and the look of the application has been modernized. However, the working principles have not changed, so you can still refer to my previous article to understand the main concepts behind Spring Boot Admin.
I was pretty surprised that there is no comprehensive article about running Spring Boot Admin on Kubernetes online. That’s why I decided to write this tutorial. Today I’m going to show you how to use Spring Cloud Kubernetes with Spring Boot Admin to enable monitoring for all Spring Boot applications running across the whole cluster. It sounds like a challenging task, but fortunately it is not very hard with Spring Cloud Kubernetes.

Example

As usual the source code with sample applications is available on GitHub in the repository https://github.com/piomin/sample-spring-microservices-kubernetes.git. It was used as the example for some other articles on my blog, which may be helpful to better understand the idea behind Spring Cloud Kubernetes. One of them is Microservices with Spring Cloud Kubernetes.
Anyway, I’m using three sample Spring Boot applications from this repository for demo purposes. Each application is run in a different namespace inside Minikube. Spring Boot Admin should monitor only Spring Boot applications, with the assumption that there are also other types of applications and solutions running on Kubernetes. All three applications are optimized to be built with Skaffold and Jib, so the only thing you have to do is run the command skaffold dev in the directory of each application.

The picture below illustrates the architecture of the applications described in this article. The application employee-service is deployed inside namespace a, department-service is deployed inside namespace b, while organization-service is deployed inside namespace c. Spring Boot Admin is also built with Spring Boot and is deployed in the default namespace.

spring-boot-admin-on-kubernetes.png

Dependencies

Let’s begin with dependencies. Spring Boot Admin Server is available as a starter library. The current stable version is 2.2.2. We also need to include the Spring Boot Web starter. Because our Spring Boot Admin application integrates with Kubernetes discovery, we should include Spring Cloud Kubernetes Discovery. If you also use Spring Cloud Kubernetes for direct integration with ConfigMap and Secret, you may just include the spring-cloud-starter-kubernetes-all starter instead of the starter dedicated only to discovery. Here’s the list of dependencies inside our sample module admin-server with embedded Spring Boot Admin.

<parent>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-parent</artifactId>
   <version>2.2.4.RELEASE</version>
</parent>
<dependencies>
   <dependency>
      <groupId>de.codecentric</groupId>
      <artifactId>spring-boot-admin-starter-server</artifactId>
      <version>2.2.2</version>
   </dependency>
   <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-web</artifactId>
   </dependency>
   <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-security</artifactId>
   </dependency>
   <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-starter-kubernetes-all</artifactId>
   </dependency>
</dependencies>
<dependencyManagement>
   <dependencies>
      <dependency>
         <groupId>org.springframework.cloud</groupId>
         <artifactId>spring-cloud-dependencies</artifactId>
         <version>Hoxton.RELEASE</version>
         <type>pom</type>
         <scope>import</scope>
      </dependency>
   </dependencies>
</dependencyManagement>

Spring Boot Admin Server and Kubernetes discovery

We need to add some annotations to the Spring Boot main class to enable Spring Boot Admin with Kubernetes discovery. The first of them is, of course, @EnableAdminServer (1). With Spring Cloud Kubernetes we still need to add the @EnableDiscoveryClient annotation to enable a DiscoveryClient based on the Kubernetes API (2). The last annotation, @EnableScheduling, is not so obvious, but also required (3). Without it, the discovery client won’t periodically call the Kubernetes API to refresh the list of running services; it would do so only once, on startup. Since we always want the current list of pods (for example, after scaling up the number of application instances), we need to enable a scheduler responsible for watching the service catalog for changes and updating the DiscoveryClient implementation accordingly.

@SpringBootApplication
@EnableAdminServer // (1)
@EnableDiscoveryClient // (2)
@EnableScheduling // (3)
public class AdminApplication {

   public static void main(String[] args) {
      SpringApplication.run(AdminApplication.class, args);
   }

}

Now for the crucial part of the whole game: configuration. The two properties visible below do all the “magic”. The first of them, spring.cloud.kubernetes.discovery.all-namespaces, enables discovery across all namespaces (1). After enabling it, Spring Boot Admin is able to monitor all the applications across the whole cluster! Great, but since we would like it to monitor just the Spring Boot applications, we need to be able to filter them. Here Spring Cloud Kubernetes comes with another smart solution. The property spring.cloud.kubernetes.discovery.service-labels allows us to define the set of labels used to filter the list of services fetched from the Kubernetes API (2). In simple words, we fetch only those Kubernetes Services that carry the labels, with matching values, defined on the Spring Boot Admin server side. Here’s the application.yml for admin-service defined inside a ConfigMap.

kind: ConfigMap
apiVersion: v1
metadata:
  name: admin
data:
  application.yml: |-
    spring:
      cloud:
        kubernetes:
          discovery:
            all-namespaces: true # (1)
            service-labels:
              spring-boot: true # (2)
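Conceptually, the service-labels filter keeps only those services whose label set contains every configured entry. A rough plain-Java sketch of that matching logic (illustrative only, not the actual Spring Cloud Kubernetes implementation):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Sketch: keep only services whose labels contain all required entries.
public class ServiceLabelFilter {

    static List<String> filter(Map<String, Map<String, String>> servicesWithLabels,
                               Map<String, String> requiredLabels) {
        return servicesWithLabels.entrySet().stream()
                .filter(e -> e.getValue().entrySet()
                        .containsAll(requiredLabels.entrySet()))
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Map<String, Map<String, String>> services = Map.of(
                "department", Map.of("app", "department", "spring-boot", "true"),
                "mongodb", Map.of("app", "mongodb"));
        // only "department" passes the spring-boot=true filter
        System.out.println(filter(services, Map.of("spring-boot", "true")));
    }
}
```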

Implementation on the client side

Thanks to using a DiscoveryClient on the Spring Boot Admin Server, we don’t have to include any additional Spring Boot Admin Client library in our sample applications. They also don’t have to include Spring Cloud Kubernetes, since Kubernetes manages pod registration by itself. In fact, the only thing we have to do is add the label that has been set as the filtering condition on the server side to the Kubernetes Service. Here’s an example Service definition for department-service.

apiVersion: v1
kind: Service
metadata:
  name: department
  labels:
    app: department
    spring-boot: "true"
spec:
  ports:
  - port: 8080
    protocol: TCP
  selector:
    app: department

Spring Boot Admin is based on Spring Boot Actuator endpoints, so each application should at least include that project in its Maven dependencies.

<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

By default, not all the Actuator endpoints are exposed outside the application. Therefore we need to expose them by setting the property management.endpoints.web.exposure.include to *. It is also worth showing application details in the health endpoint, although that step is optional. Here’s our application.yml for department-service.

spring:
  application:
    name: department
management:
  endpoints:
    web:
      exposure:
        include: "*"
  endpoint:
    health:
      show-details: ALWAYS

It’s also worth generating a build-info.properties file during the build to provide more details for the Actuator /info endpoint.

<plugin>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-maven-plugin</artifactId>
   <executions>
      <execution>
         <goals>
            <goal>build-info</goal>
         </goals>
      </execution>
   </executions>
</plugin>

Running Spring Boot Admin on Kubernetes

First, we should start Minikube. Because we are running several applications and MongoDB, I suggest increasing the default memory limit to 4 GB.

$ minikube start --vm-driver=virtualbox --memory='4000mb'

Let’s start by creating all the required namespaces on Minikube. Besides the default namespace, we also need the namespaces a, b, and c, as described in the Example section.

$ kubectl create namespace a
namespace/a created
$ kubectl create namespace b
namespace/b created
$ kubectl create namespace c
namespace/c created

Spring Boot Admin uses Spring Cloud Kubernetes, which requires extra privileges in order to access the Kubernetes API. Let’s bind the cluster-admin role to the default ServiceAccount, just for development purposes.

$ kubectl create clusterrolebinding admin-default --clusterrole=cluster-admin --serviceaccount=default:default

Assuming we have already successfully deployed all the sample microservices and the Spring Boot Admin Server application on Minikube, you will get the following list of Services across all the namespaces. The option -L=spring-boot prints the value of the spring-boot label for all services.

kubernetes-svc

We may also take a look at the list of pods. I have set two instances for the employee deployment.

kubernetes-pods

As you can see in the picture below, Spring Boot Admin manages only the Kubernetes Services labeled with spring-boot=true. For example, mongodb, kubernetes, and admin inside the default namespace were omitted.

spring-boot-admin-on-kubernetes-main-page

Spring Boot Admin provides some useful features for managing Spring Boot applications. It is worth considering using it on Kubernetes to monitor the whole set of your microservices distributed across many namespaces. The following picture illustrates the details page for a single application (pod) running on Minikube.

spring-boot-admin-on-kubernetes-details


]]>
Local Java Development on Kubernetes https://piotrminkowski.com/2020/02/14/local-java-development-on-kubernetes/ https://piotrminkowski.com/2020/02/14/local-java-development-on-kubernetes/#comments Fri, 14 Feb 2020 10:06:59 +0000 http://piotrminkowski.com/?p=7706 There are many tools, which may simplify your local Java development on Kubernetes. For Java applications you may also take an advantage of integration between popular runtime frameworks and Kubernetes. In this article I’m going to present some of the available solutions. Skaffold Skaffold is a simple command-line tool that is able to handle the […]

The post Local Java Development on Kubernetes appeared first on Piotr's TechBlog.

]]>
There are many tools that may simplify your local Java development on Kubernetes. For Java applications you may also take advantage of the integration between popular runtime frameworks and Kubernetes. In this article I’m going to present some of the available solutions.

Skaffold

Skaffold is a simple command-line tool that handles the workflow of building, pushing, and deploying your Java application on Kubernetes. It saves a lot of developer time by automating most of the work from source code to deployment. It natively supports the most common image-building and application deployment strategies. Skaffold is an open-source project from Google, and it is not the only interesting tool from Google that can help with local development on Kubernetes. Another one, Jib, is dedicated to Java applications. It allows you to build optimized Docker and OCI images for your Java applications without a Docker daemon. It is available as a Maven or Gradle plugin, or just as a Java library. With Jib you do not need to maintain a Dockerfile or even run a Docker daemon. It is also able to take advantage of image layering and registry caching to achieve fast, incremental builds. To use Jib during the application build we just need to include it in the Maven pom.xml. We may easily customize the behaviour of the Jib Maven Plugin using properties inside its configuration section, but for a standard Java application the default settings shown below are most likely sufficient.

<plugin>
   <groupId>com.google.cloud.tools</groupId>
   <artifactId>jib-maven-plugin</artifactId>
   <version>1.8.0</version>
</plugin>

By default, Skaffold uses a Dockerfile when building an image with our application. We may change this behaviour to use the Jib Maven Plugin instead by editing the Skaffold configuration file available in the project root directory, skaffold.yaml. We should also define there the name of the generated Docker image and its tagging policy.

apiVersion: skaffold/v2alpha1
kind: Config
build:
  artifacts:
    - image: piomin/department
      jib: {}
  tagPolicy:
    gitCommit: {}

If your Kubernetes deployment manifest is located inside the k8s directory and is named deployment.yaml, you don’t have to provide any additional configuration. Here’s the structure of our sample project that fulfills the Skaffold requirements.

local-java-development-kubernetes-skaffold

Assuming you have successfully run a Minikube instance on your local machine, you just need to run the command skaffold dev in your root project directory. This command builds a Docker image with your application and then deploys it on Minikube. After that, it watches your source code and triggers a new build after every change in the filesystem. Several parameters may be used for customization. The option --port-forward runs the command kubectl port-forward for all the ports exposed outside the containers. We may also disable the auto-build triggered after a file change and enable manual mode, which triggers builds on demand. This may be especially useful if you are using an autosave mode in your IDE, like IntelliJ. The last option used in the command below, --no-prune, disables the removal of images, containers, and deployments created by Skaffold.

$ skaffold dev --port-forward --trigger=manual --no-prune

Another useful Skaffold command during development is skaffold debug. It is very similar to skaffold dev, but it configures the pipeline for debugging. For Java applications it runs a JDWP agent exposed outside the container on port 5005. Then you may easily connect to the agent, for example from your IDE.

$ skaffold debug --port-forward --no-prune

I think the most suitable way to show Skaffold in action is on video. Here’s a 9-minute video that shows how to use Skaffold for local Java development, running and debugging a Spring Boot application on Kubernetes.

[wpvideo 3Op96XNi]

Cloud Code

Not every developer likes command-line tools. Here Google comes to the rescue with GUI tools, which may be easily installed as plugins in your IDE: IntelliJ or Visual Studio Code. This set of tools, called Cloud Code, helps you write, run, and debug cloud-native applications quickly and easily. Cloud Code uses Skaffold in the background, but hides it behind two buttons available in your Run Configurations (IntelliJ): Develop on Kubernetes and Run on Kubernetes.
Develop on Kubernetes runs Skaffold in the default notify mode, which triggers a build after every file change inside your project.

prez-3

Run on Kubernetes runs Skaffold in manual mode, starting a build on demand when you click that button.

prez-2

Cloud Code offers some other useful features. It provides auto-completion for syntax inside Kubernetes YAML manifests.

local-java-development-kubernetes-cloud-code

You may also display a graphical representation of your Kubernetes cluster as shown below.

prez-1

Dekorate

We have already discussed some interesting tools for automating the deployment process, from a change in the source code to a running application on a Kubernetes cluster (Minikube). Beginning with this section we will discuss interesting libraries and extensions to popular JVM frameworks that help you speed up your Java development on Kubernetes. The first of them is Dekorate. Dekorate is a collection of compile-time generators and decorators of Kubernetes manifests. It makes generating and decorating Kubernetes manifests as simple as adding a dependency to your project. It allows you to use the well-known Java annotation style to define Kubernetes resources used by your application. It provides integration with the Spring Boot and Quarkus frameworks.
To enable the integration for your Spring Boot application you just need to include the following dependency in your Maven pom.xml.


<dependency>
  <groupId>io.dekorate</groupId>
  <artifactId>kubernetes-spring-starter</artifactId>
  <version>0.10.10</version>
</dependency>

Now, if you build your application using the Maven command visible below, Dekorate analyzes your source code and generates Kubernetes manifests based on it.


$ mvn clean install -Ddekorate.build=true -Ddekorate.deploy=true

Besides analyzing the source code, Dekorate allows you to define Kubernetes resources using configuration files or annotations. The following code snippet shows how to set 2 replicas of your application, expose it outside the cluster as a route, and refer to an existing ConfigMap on your OpenShift instance. You may also use @JvmOptions to set JVM parameters like the maximum heap size.

@SpringBootApplication
@OpenshiftApplication(replicas = 2, expose = true, envVars = {
        @Env(name="sample-app-config", configmap = "sample-app-config")
})
@JvmOptions(xms = 128, xmx = 256, heapDumpOnOutOfMemoryError = true)
@EnableSwagger2
public class SampleApp {

    public static void main(String[] args) {
        SpringApplication.run(SampleApp.class, args);
    }
   
}

Of course, I have presented only a small set of the options offered by Dekorate. You can also define Kubernetes labels, annotations, secrets, volumes, and many more. For more details about using Dekorate with OpenShift you may refer to one of my previous articles, Deploying Spring Boot Application on OpenShift with Dekorate.

Spring Cloud Kubernetes

If you are building your web applications on top of Spring Boot, you should consider using Spring Cloud Kubernetes for integration with Kubernetes. Spring Cloud Kubernetes provides implementations of the Spring Cloud common interfaces that consume Kubernetes native services via the master API. The main features of the project are:

  • Kubernetes PropertySource implementation including auto-reload of configuration after ConfigMap or Secret change
  • Kubernetes native discovery with DiscoveryClient implementation including multi-namespace discovery
  • Client side load balancing with Spring Cloud Netflix Ribbon
  • Pod health indicator

If you would like to use both the Spring Cloud Kubernetes Discovery and Config modules, you should include the following dependency in your Maven pom.xml.

<dependency>
   <groupId>org.springframework.cloud</groupId>
   <artifactId>spring-cloud-starter-kubernetes-all</artifactId>
</dependency>   
<dependencyManagement>
   <dependencies>
      <dependency>
         <groupId>org.springframework.cloud</groupId>
         <artifactId>spring-cloud-dependencies</artifactId>
         <version>Hoxton.RELEASE</version>
         <type>pom</type>
         <scope>import</scope>
      </dependency>
   </dependencies>
</dependencyManagement>

After that, discovery and ConfigMap-based configuration are enabled. If you would also like to use Secret as a property source for the application, you need to enable it in bootstrap.yml.

spring:
  application:
    name: department
  cloud:
    kubernetes:
      secrets:
        enableApi: true

The name of the ConfigMap or Secret (metadata.name) should be the same as the application name in order to use them without any configuration customization. Here’s a sample ConfigMap for the department application. It is consumed by Spring Cloud Kubernetes directly, without the need to mount it in the Deployment manifest.

apiVersion: v1
kind: ConfigMap
metadata:
  name: department
data:
  application.yml: |-
    spring:
      cloud:
        kubernetes:
          discovery:
            all-namespaces: true
      data:
        mongodb:
          database: admin
          host: mongodb

The Spring Cloud Kubernetes Discovery and Ribbon integration allows you to use any Spring REST client to communicate with other services by name. Here’s an example of Spring Cloud OpenFeign usage.

@FeignClient(name = "employee")
public interface EmployeeClient {

   @GetMapping("/department/{departmentId}")
   List<Employee> findByDepartment(@PathVariable("departmentId") String departmentId);
   
}

Another useful Spring Cloud Kubernetes feature is the ability to reload configuration after a change in a ConfigMap or Secret. That’s pretty amazing for a developer, because it is possible to refresh some beans without restarting the whole application pod. However, keep in mind that only beans annotated with @ConfigurationProperties or @RefreshScope are reloaded. By default, this feature is disabled. To enable it, you should use the following property.

spring:
  cloud:
    kubernetes:
      reload:
        enabled: true
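The idea behind the reload mechanism can be sketched in plain Java: a watcher observes the ConfigMap and atomically swaps the configuration snapshot that beans read from. This is a toy model only; the real implementation refreshes the Spring Environment and rebinds the annotated beans.

```java
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;

// Toy sketch of configuration reload: a hypothetical watcher calls
// onConfigMapChanged(), and readers always see the latest snapshot.
public class ReloadableConfig {
    private final AtomicReference<Map<String, String>> snapshot =
            new AtomicReference<>(Map.of());

    // Invoked by a hypothetical watcher when the ConfigMap changes.
    public void onConfigMapChanged(Map<String, String> newData) {
        snapshot.set(Map.copyOf(newData));
    }

    public String get(String key) {
        return snapshot.get().get(key);
    }

    public static void main(String[] args) {
        ReloadableConfig config = new ReloadableConfig();
        config.onConfigMapChanged(Map.of("mongodb.host", "mongodb"));
        System.out.println(config.get("mongodb.host"));
        config.onConfigMapChanged(Map.of("mongodb.host", "mongodb-replica"));
        System.out.println(config.get("mongodb.host")); // refreshed without a restart
    }
}
```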

For more details about Spring Cloud Kubernetes including source code examples you may refer to my previous article Microservices with Spring Cloud Kubernetes.

Micronaut

Similar to Spring Boot, Micronaut provides a library for integration with Kubernetes. In comparison to Spring Cloud Kubernetes, it additionally allows reading ConfigMaps and Secrets from mounted volumes and enables filtering services by name during discovery. To enable Kubernetes discovery for Micronaut applications we first need to include the following library in our Maven pom.xml.

<dependency>
    <groupId>io.micronaut.kubernetes</groupId>
    <artifactId>micronaut-kubernetes-discovery-client</artifactId>
</dependency>

This module also allows us to use Micronaut HTTP Client with discovery by service name.

@Client(id = "employee", path = "/employees")
public interface EmployeeClient {
 
    @Get("/department/{departmentId}")
    List<Employee> findByDepartment(Long departmentId);
 
}

You don’t have to include any additional library to enable the integration with the Kubernetes PropertySource, since it is provided in the Micronaut Config Client core library. You just need to enable it in the application’s bootstrap.yml. Unlike Spring Boot, Micronaut uses labels instead of metadata.name to match a ConfigMap or Secret with the application. After enabling the Kubernetes config client, the configuration auto-reload feature is also enabled. Here’s our bootstrap.yml file.

micronaut:
  application:
    name: department
  config-client:
    enabled: true
kubernetes:
  client:
    config-maps:
      labels:
        - app: department
    secrets:
      enabled: true
      labels:
        - app: department

Now, our ConfigMap also needs to be labeled with app=department.

apiVersion: v1
kind: ConfigMap
metadata:
  name: department
  labels:
    app: department
data:
  application.yaml: |-
    mongodb:
      collection: department
      database: admin
    kubernetes:
      client:
        discovery:
          includes:
            - employee

For more details about integration between Micronaut and Kubernetes you may refer to my article Guide to Micronaut Kubernetes.


]]>
Hazelcast with Spring Boot on Kubernetes https://piotrminkowski.com/2020/01/31/hazelcast-with-spring-boot-on-kubernetes/ https://piotrminkowski.com/2020/01/31/hazelcast-with-spring-boot-on-kubernetes/#respond Fri, 31 Jan 2020 08:26:38 +0000 http://piotrminkowski.com/?p=7675 Hazelcast is the leading in-memory data grid (IMDG) solution. The main idea behind IMDG is to distribute data across many nodes inside a cluster. Therefore, it seems to be an ideal solution for running on a cloud platform like Kubernetes, where you can easily scale up or scale down a number of running instances. Since […]

The post Hazelcast with Spring Boot on Kubernetes appeared first on Piotr's TechBlog.

]]>
Hazelcast is the leading in-memory data grid (IMDG) solution. The main idea behind an IMDG is to distribute data across many nodes inside a cluster. Therefore, it seems to be an ideal solution for running on a cloud platform like Kubernetes, where you can easily scale the number of running instances up or down. Since Hazelcast is written in Java, you can easily integrate it with your Java application using standard libraries. Something that can also simplify getting started with Hazelcast is Spring Boot. You may also use an unofficial library implementing the Spring Repositories pattern for Hazelcast: Spring Data Hazelcast.
The main goal of this article is to demonstrate how to embed Hazelcast into a Spring Boot application and run it on Kubernetes as a multi-instance cluster. Thanks to Spring Data Hazelcast we won’t have to get into the details of Hazelcast data types. Although Spring Data Hazelcast does not provide many advanced features, it is very good for a start.

Architecture

We are running multiple instances of a single Spring Boot application on Kubernetes. Each application exposes port 8080 for HTTP API access and port 5701 for Hazelcast cluster member discovery. The Hazelcast instances are embedded into the Spring Boot applications. We are creating two services on Kubernetes. The first of them is dedicated to HTTP API access, while the second is responsible for enabling discovery between the Hazelcast instances. The HTTP API will be used to make some test requests that add data to the cluster and find data there. Let’s proceed to the implementation.

hazelcast-spring-boot-kubernetes.png

Example

The source code with the sample application is, as usual, available on GitHub: https://github.com/piomin/sample-hazelcast-spring-datagrid.git. You should access the employee-kubernetes-service module.

Dependencies

Integration between Spring and Hazelcast is provided by the hazelcast-spring library. The version of the Hazelcast libraries is managed by Spring Boot’s dependency management, so we just need to set the Spring Boot version to the newest stable 2.2.4.RELEASE. The Hazelcast version associated with this version of Spring Boot is 3.12.5. In order to enable Hazelcast member discovery on Kubernetes we also need to include the hazelcast-kubernetes dependency. It is versioned independently of the core libraries: the newest version, 2.0, is dedicated to Hazelcast 4. Since we are still using Hazelcast 3, we declare version 1.5.2 of hazelcast-kubernetes. We also include Spring Data Hazelcast and, optionally, Lombok for simplification.

<parent>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-parent</artifactId>
   <version>2.2.4.RELEASE</version>
</parent>
<dependencies>
   <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-web</artifactId>
   </dependency>
   <dependency>
      <groupId>com.hazelcast</groupId>
      <artifactId>spring-data-hazelcast</artifactId>
      <version>2.2.2</version>
   </dependency>
   <dependency>
      <groupId>com.hazelcast</groupId>
      <artifactId>hazelcast-spring</artifactId>
   </dependency>
   <dependency>
      <groupId>com.hazelcast</groupId>
      <artifactId>hazelcast-client</artifactId>
   </dependency>
   <dependency>
      <groupId>com.hazelcast</groupId>
      <artifactId>hazelcast-kubernetes</artifactId>
      <version>1.5.2</version>
   </dependency>
   <dependency>
      <groupId>org.projectlombok</groupId>
      <artifactId>lombok</artifactId>
   </dependency>
</dependencies>

Enabling Kubernetes Discovery for Hazelcast

After including the required dependencies, Hazelcast is enabled for our application. The only thing left to do is to enable discovery through Kubernetes. The HazelcastInstance bean is already available in the context, so we may change its configuration by defining a com.hazelcast.config.Config bean. We need to disable multicast discovery, which is enabled by default, and enable Kubernetes discovery in the network config as shown below. The Kubernetes config requires setting the target namespace of the Hazelcast deployment and its service name.

@Bean
Config config() {
   Config config = new Config();
   config.getNetworkConfig().getJoin().getTcpIpConfig().setEnabled(false);
   config.getNetworkConfig().getJoin().getMulticastConfig().setEnabled(false);
   config.getNetworkConfig().getJoin().getKubernetesConfig().setEnabled(true)
         .setProperty("namespace", "default")
         .setProperty("service-name", "hazelcast-service");
   return config;
}

We also have to define the Kubernetes Service hazelcast-service on port 5701. It references the employee-service deployment via its selector.

apiVersion: v1
kind: Service
metadata:
  name: hazelcast-service
spec:
  selector:
    app: employee-service
  ports:
    - name: hazelcast
      port: 5701
  type: LoadBalancer

Here’s the Kubernetes Deployment and Service definition for our sample application. We set three replicas for the deployment. We also expose two ports outside the containers.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: employee-service
  labels:
    app: employee-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: employee-service
  template:
    metadata:
      labels:
        app: employee-service
    spec:
      containers:
        - name: employee-service
          image: piomin/employee-service
          ports:
            - name: http
              containerPort: 8080
            - name: multicast
              containerPort: 5701
---
apiVersion: v1
kind: Service
metadata:
  name: employee-service
  labels:
    app: employee-service
spec:
  ports:
    - port: 8080
      protocol: TCP
  selector:
    app: employee-service
  type: NodePort

In fact, that’s all that needs to be done to successfully run a Hazelcast cluster on Kubernetes. Before proceeding to the deployment, let’s take a look at the application implementation details.

Implementation

Our application is very simple. It defines a single model object, which is stored in the Hazelcast cluster. Such a class needs an id field annotated with the Spring Data @Id annotation, and should implement the Serializable interface.

@Getter
@Setter
@EqualsAndHashCode
@ToString
public class Employee implements Serializable {

   @Id
   private Long id;
   @EqualsAndHashCode.Exclude
   private Integer personId;
   @EqualsAndHashCode.Exclude
   private String company;
   @EqualsAndHashCode.Exclude
   private String position;
   @EqualsAndHashCode.Exclude
   private int salary;

}

With Spring Data Hazelcast we may define repositories without writing any queries or using the Hazelcast-specific query API. We use the well-known method naming pattern defined by Spring Data to build find methods, as shown below. Our repository interface should extend HazelcastRepository.

public interface EmployeeRepository extends HazelcastRepository<Employee, Long> {

   Employee findByPersonId(Integer personId);
   List<Employee> findByCompany(String company);
   List<Employee> findByCompanyAndPosition(String company, String position);
   List<Employee> findBySalaryGreaterThan(int salary);

}
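Note that the HazelcastRepository base interface comes from the separate Spring Data Hazelcast project, whose dependency is not included in the pom.xml fragment shown earlier. It would be declared along these lines (the version number is only an assumption matching the time of writing):

```xml
<dependency>
   <groupId>org.springframework.data</groupId>
   <artifactId>spring-data-hazelcast</artifactId>
   <version>2.2.0.RELEASE</version>
</dependency>
```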

To enable Spring Data Hazelcast repositories, we should annotate the main class or a configuration class with @EnableHazelcastRepositories.

@SpringBootApplication
@EnableHazelcastRepositories
public class EmployeeApplication {

   public static void main(String[] args) {
      SpringApplication.run(EmployeeApplication.class, args);
   }
   
}

Finally, here’s the Spring controller implementation. It allows us to invoke all the find methods defined in the repository, add a new Employee object to Hazelcast, and remove an existing one.

@RestController
@RequestMapping("/employees")
public class EmployeeController {

   private static final Logger logger = LoggerFactory.getLogger(EmployeeController.class);

   private EmployeeRepository repository;

   EmployeeController(EmployeeRepository repository) {
      this.repository = repository;
   }

   @GetMapping("/person/{id}")
   public Employee findByPersonId(@PathVariable("id") Integer personId) {
      logger.info("findByPersonId({})", personId);
      return repository.findByPersonId(personId);
   }
   
   @GetMapping("/company/{company}")
   public List<Employee> findByCompany(@PathVariable("company") String company) {
      logger.info("findByCompany({})", company);
      return repository.findByCompany(company);
   }

   @GetMapping("/company/{company}/position/{position}")
   public List<Employee> findByCompanyAndPosition(@PathVariable("company") String company, @PathVariable("position") String position) {
      logger.info("findByCompanyAndPosition({}, {})", company, position);
      return repository.findByCompanyAndPosition(company, position);
   }
   
   @GetMapping("/{id}")
   public Employee findById(@PathVariable("id") Long id) {
      logger.info("findById({})", id);
      return repository.findById(id).orElse(null);
   }

   @GetMapping("/salary/{salary}")
   public List<Employee> findBySalaryGreaterThan(@PathVariable("salary") int salary) {
      logger.info("findBySalaryGreaterThan({})", salary);
      return repository.findBySalaryGreaterThan(salary);
   }
   
   @PostMapping
   public Employee add(@RequestBody Employee emp) {
      logger.info("add({})", emp);
      return repository.save(emp);
   }

   @DeleteMapping("/{id}")
   public void delete(@PathVariable("id") Long id) {
      logger.info("delete({})", id);
      repository.deleteById(id);
   }

}

Running Hazelcast on Kubernetes via Minikube

We will test our sample application on Minikube.

$ minikube start --vm-driver=virtualbox

The application is configured to run with Skaffold and the Jib Maven Plugin. I have already described both of these tools in one of my previous articles. They simplify the build and deployment process on Minikube. Assuming we are in the root directory of our application, we just need to run the following command. Skaffold automatically builds our application using Maven, creates a Docker image based on the Maven settings, applies the deployment file from the k8s directory, and finally runs the application on Kubernetes.

$ skaffold dev

Since we have declared three instances of our application in deployment.yaml, three pods are started. If Hazelcast discovery finishes successfully, you should see the following fragment of the pod logs printed out by Skaffold.

[Image: Hazelcast cluster members visible in the pod logs]

Let’s take a look at the running pods.

[Image: kubectl get pods output]

And the list of services. The HTTP API is available outside Minikube under port 32090.

[Image: kubectl get svc output]

Now, we may send some test requests. We start by calling the POST /employees method to add some Employee objects to the Hazelcast cluster. Then we perform some find operations using GET /employees/{id}. Since all the calls finished successfully, we can take a look at the logs, which clearly show the Hazelcast cluster at work.

$ curl -X POST http://192.168.99.100:32090/employees -d '{"id":1,"personId":1,"company":"Test1","position":"Developer","salary":2000}' -H "Content-Type: application/json"
{"id":1,"personId":1,"company":"Test1","position":"Developer","salary":2000}
$ curl -X POST http://192.168.99.100:32090/employees -d '{"id":2,"personId":2,"company":"Test2","position":"Developer","salary":5000}' -H "Content-Type: application/json"
{"id":2,"personId":2,"company":"Test2","position":"Developer","salary":5000}
$ curl -X POST http://192.168.99.100:32090/employees -d '{"id":3,"personId":3,"company":"Test2","position":"Developer","salary":5000}' -H "Content-Type: application/json"
{"id":3,"personId":3,"company":"Test2","position":"Developer","salary":5000}
$ curl -X POST http://192.168.99.100:32090/employees -d '{"id":4,"personId":4,"company":"Test3","position":"Developer","salary":9000}' -H "Content-Type: application/json"
{"id":4,"personId":4,"company":"Test3","position":"Developer","salary":9000}
$ curl http://192.168.99.100:32090/employees/1
{"id":1,"personId":1,"company":"Test1","position":"Developer","salary":2000}
$ curl http://192.168.99.100:32090/employees/2
{"id":2,"personId":2,"company":"Test2","position":"Developer","salary":5000}
$ curl http://192.168.99.100:32090/employees/3
{"id":3,"personId":3,"company":"Test2","position":"Developer","salary":5000}

Here’s the screen with logs from the pods printed out by Skaffold. Skaffold prints the pod id for every single log line. Let’s take a closer look at the logs. The request for adding the Employee with id=1 is processed by the application running on pod 5b758cc977-s6ptd. When we call the find method using id=1, it is processed by the application on pod 5b758cc977-2fj2h. This proves that the Hazelcast cluster works properly. The same behaviour may be observed for the other test requests.

[Image: Skaffold logs from the application pods]

We may also call some other find methods.

$ curl http://192.168.99.100:32090/employees/company/Test2/position/Developer
[{"id":2,"personId":2,"company":"Test2","position":"Developer","salary":5000},{"id":3,"personId":3,"company":"Test2","position":"Developer","salary":5000}]
$ curl http://192.168.99.100:32090/employees/salary/3000
[{"id":2,"personId":2,"company":"Test2","position":"Developer","salary":5000},{"id":4,"personId":4,"company":"Test3","position":"Developer","salary":9000},{"id":3,"personId":3,"company":"Test2","position":"Developer","salary":5000}]

Let’s test another scenario. We will remove one pod from the cluster as shown below.

[Image: deleting one of the application pods]

Then we send some test requests to GET /employees/{id}. No matter which instance of the application processes the request, the object is returned.

[Image: Skaffold logs after a new pod joins the cluster]

The post Hazelcast with Spring Boot on Kubernetes appeared first on Piotr's TechBlog.

Kubernetes Messaging with Java and KubeMQ
https://piotrminkowski.com/2020/01/17/kubernetes-messaging-with-java-and-kubemq/
Fri, 17 Jan 2020 08:53:31 +0000
Have you ever tried to run any message broker on Kubernetes? KubeMQ is a relatively new solution and is not as popular as competitive tools like RabbitMQ, Kafka, or ActiveMQ. However, it has one big advantage over them – it is a Kubernetes-native message broker, which may be deployed there using a single command, without preparing any additional templates or manifests. This convinced me to take a closer look at KubeMQ.
KubeMQ is an enterprise-grade, scalable, highly available, and secure message broker and message queue, designed as a Kubernetes-native solution in a lightweight container. It is written in Go, and is therefore advertised as a very fast solution running inside a small Docker image of about 30 MB. It may be easily integrated with popular third-party observability tools like Zipkin, Prometheus, or Datadog.
When I read the comparison with competitive tools like RabbitMQ or Redis available on the KubeMQ site (https://kubemq.io/product-overview/), it looks pretty amazing (for KubeMQ, of course). It seems the authors wanted to merge some useful features of RabbitMQ and Kafka in a single product. In fact, KubeMQ provides many interesting mechanisms, like delayed delivery, message peeking, and batch sending and receiving for queues, and consumer groups, load balancing, and offset support for pub/sub.
Ok, when I look at its Java SDK, I can see that it’s a new product, and there are still some things to do. However, all the features listed above seem very useful. Of course, I won’t be able to demonstrate all of them in this article, but I’m going to show you a simple Java application that uses a message queue with transactions and a pub/sub event store. Let’s begin our KubeMQ Java tutorial.

Example

The example application is written in Java 11, and uses Spring Boot. The source code is available as usual on GitHub. The repository address is https://github.com/piomin/sample-java-kubemq.git.

Before start

Before starting with KubeMQ, you need a running instance of Minikube. I have tested it with version 1.6.1.

$ minikube start --vm-driver=virtualbox

Running KubeMQ on Kubernetes

First, you need to install KubeMQ. On Windows, you just need to download the latest version of the CLI, available at https://github.com/kubemq-io/kubemqctl/releases/download/latest/kubemqctl.exe, and copy it to a directory on your PATH. Before installing KubeMQ on your Minikube instance, you need to register on the website https://account.kubemq.io/login/register. You will receive a token required for the installation. Installation is very easy with the CLI: you just need to execute the command kubemqctl cluster create with the registration token, as shown below.

[Image: kubemqctl cluster create output]

By default, KubeMQ creates a cluster consisting of three instances (pods). It is deployed as a Kubernetes StatefulSet. The deployment is available inside the newly created namespace – kubemq. We can easily check the list of running pods with the kubectl get pod command.

[Image: KubeMQ pods in the kubemq namespace]

The list of pods is not very important for us. We can easily scale the number of instances in the cluster up and down using the command kubemqctl cluster scale. KubeMQ is exposed in the cluster through several interfaces. The KubeMQ Java SDK uses the gRPC protocol for communication, so we use the service kubemq-cluster-grpc available under port 50000.

[Image: KubeMQ services exposed in the cluster]

Since KubeMQ is a native Kubernetes message broker, getting started with it on Minikube is very simple. After executing a single command, we may focus on development.

Example Architecture

We have an example application deployed on Kubernetes, which integrates with a KubeMQ queue and event store. The diagram below illustrates the architecture of the application. It exposes the REST endpoint POST /orders for creating new orders. Each order represents a transfer between two in-memory accounts. The incoming order is sent to the queue orders (1). Then it is received by the listener (2), which is responsible for updating account balances using the AccountRepository bean (3). If the transaction finishes successfully, an event is sent to the pub/sub topic transactions. Incoming events may be listened to by many subscribers (4). In the example application we have two listeners: TransactionAmountListener and TransactionCountListener (5). They are responsible for adding extra money to the target order’s account based on different criteria. The first criterion is the amount of a given transaction, while the second is the number of processed transactions per account.

[Image: architecture of the example system]

Using the described example application, I’m going to show you the following features of KubeMQ and its Java SDK:

  • Sending messages to a queue
  • Listening for incoming queue messages and handling transactions
  • Sending messages to pub/sub via Channel
  • Subscribing to pub/sub events and reading older events from a store
  • Using Spring Boot to integrate a standalone Java application with KubeMQ

Let’s proceed to the implementation.

Implementation with Spring Boot and KubeMQ SDK

We begin with the configuration. The URL of the KubeMQ gRPC endpoint has been externalized in application.yml.

spring:
  application:
    name: sampleapp-kubemq
kubemq:
  address: kubemq-cluster-grpc:50000

In the @Configuration class we define all the required KubeMQ resources as Spring beans. Each of them requires the KubeMQ cluster address. We need to declare a queue, a channel for sending events, and a subscriber for receiving events from the pub/sub and the events store.

@Configuration
@ConfigurationProperties("kubemq")
public class KubeMQConfiguration {

    private String address;

    @Bean
    public Queue queue() throws ServerAddressNotSuppliedException, SSLException {
        return new Queue("transactions", "orders", address);
    }

    @Bean
    public Subscriber subscriber() {
        return new Subscriber(address);
    }

    @Bean
    public Channel channel() {
        return new Channel("transactions", "orders", true, address);
    }

    String getAddress() {
        return address;
    }

    void setAddress(String address) {
        this.address = address;
    }

}

The first component in our architecture is the controller. It exposes an HTTP endpoint for placing orders. OrderController injects the Queue bean and uses it for sending messages to the KubeMQ queue. After receiving confirmation that the message has been delivered, it returns the order with an id and status=ACCEPTED.

@RestController
@RequestMapping("/orders")
public class OrderController {

    private static final Logger LOGGER = LoggerFactory.getLogger(OrderController.class);

    private Queue queue;

    public OrderController(Queue queue) {
        this.queue = queue;
    }

    @PostMapping
    public Order sendOrder(@RequestBody Order order) {
        try {
            LOGGER.info("Sending: {}", order);
            final SendMessageResult result = queue.SendQueueMessage(new Message()
                    .setBody(Converter.ToByteArray(order)));
            order.setId(result.getMessageID());
            order.setStatus(OrderStatus.ACCEPTED);
            LOGGER.info("Sent: {}", order);
        } catch (ServerAddressNotSuppliedException | IOException e) {
            LOGGER.error("Error sending", e);
            order.setStatus(OrderStatus.ERROR);
        }
        return order;
    }

}
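The Order model itself is not listed in the article. Judging by the JSON payloads in the test requests shown later, it might look roughly like the sketch below; the field types and enum values are inferred (note the String id, which is set from SendMessageResult.getMessageID()), and the class has to implement Serializable because the SDK’s Converter turns it into a byte array.

```java
import java.io.Serializable;
import java.util.Date;

// Sketch of the Order model (not shown in the article). Field names and types
// are inferred from the JSON requests/responses in the test calls below.
enum OrderStatus { NEW, ACCEPTED, CONFIRMED, REJECTED, ERROR }

public class Order implements Serializable {

    private String id;              // assigned from the KubeMQ message id
    private String type;            // e.g. "TRANSFER"
    private Integer accountIdFrom;
    private Integer accountIdTo;
    private Date date;
    private int amount;
    private OrderStatus status = OrderStatus.NEW;

    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
    public String getType() { return type; }
    public void setType(String type) { this.type = type; }
    public Integer getAccountIdFrom() { return accountIdFrom; }
    public void setAccountIdFrom(Integer accountIdFrom) { this.accountIdFrom = accountIdFrom; }
    public Integer getAccountIdTo() { return accountIdTo; }
    public void setAccountIdTo(Integer accountIdTo) { this.accountIdTo = accountIdTo; }
    public Date getDate() { return date; }
    public void setDate(Date date) { this.date = date; }
    public int getAmount() { return amount; }
    public void setAmount(int amount) { this.amount = amount; }
    public OrderStatus getStatus() { return status; }
    public void setStatus(OrderStatus status) { this.status = status; }

    public static void main(String[] args) {
        Order order = new Order();
        order.setType("TRANSFER");
        order.setAccountIdFrom(1);
        order.setAccountIdTo(2);
        order.setAmount(300);
        order.setId("10");
        order.setStatus(OrderStatus.ACCEPTED);
        System.out.println(order.getId() + " " + order.getStatus()); // prints: 10 ACCEPTED
    }
}
```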

The message is processed asynchronously. Since the current KubeMQ Java SDK does not provide any message listener for asynchronous processing, we use synchronous methods inside an infinite loop. The loop is started inside a new thread handled by a Spring TaskExecutor. When a new message is received, we start a KubeMQ transaction, which may be acknowledged or rejected. The transaction is confirmed if the source account has sufficient funds to perform a transfer to the target account. If the transaction is confirmed, the listener sends an event with information about it to the KubeMQ transactions pub/sub using the Channel bean.

@Component
public class OrderListener {

	private static final Logger LOGGER = LoggerFactory.getLogger(OrderListener.class);

	private Queue queue;
	private Channel channel;
	private OrderProcessor orderProcessor;
	private TaskExecutor taskExecutor;

	public OrderListener(Queue queue, Channel channel, OrderProcessor orderProcessor, TaskExecutor taskExecutor) {
		this.queue = queue;
		this.channel = channel;
		this.orderProcessor = orderProcessor;
		this.taskExecutor = taskExecutor;
	}

	@PostConstruct
	public void listen() {
		taskExecutor.execute(() -> {
			while (true) {
			    try {
                    Transaction transaction = queue.CreateTransaction();
                    TransactionMessagesResponse response = transaction.Receive(10, 10);
                    if (response.getMessage().getBody().length > 0) {
                        Order order = orderProcessor
                                .process((Order) Converter.FromByteArray(response.getMessage().getBody()));
                        LOGGER.info("Processed: {}", order);
                        if (order.getStatus().equals(OrderStatus.CONFIRMED)) {
                            transaction.AckMessage();
                            Event event = new Event();
                            event.setEventId(response.getMessage().getMessageID());
                            event.setBody(Converter.ToByteArray(order));
							LOGGER.info("Sending event: id={}", event.getEventId());
                            channel.SendEvent(event);
                        } else {
                            transaction.RejectMessage();
                        }
                    } else {
                        LOGGER.info("No messages");
                    }
                    Thread.sleep(10000);
                } catch (Exception e) {
					LOGGER.error("Error", e);
                }
			}
		});

	}

}
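The OrderProcessor bean is injected above but never listed in the article. Under the assumption that it simply debits the source account, credits the target one, and marks the order CONFIRMED only when the funds are sufficient, its core logic could be sketched (framework-free, with the surrounding types stubbed out; all names here are illustrative, not the project’s actual code) like this:

```java
import java.util.HashMap;
import java.util.Map;

// Stub of the exception referenced in the article's listings.
class InsufficientFundsException extends Exception {}

// Minimal stand-in for AccountRepository: account id -> balance.
class Accounts {
    private final Map<Integer, Integer> balances = new HashMap<>();

    Accounts(int initialBalance, int... ids) {
        for (int id : ids) balances.put(id, initialBalance);
    }

    // Validates before mutating, so a rejected transfer leaves the balance untouched.
    void updateBalance(int id, int amount) throws InsufficientFundsException {
        int newBalance = balances.get(id) + amount; // ids are assumed to exist
        if (newBalance < 0) throw new InsufficientFundsException();
        balances.put(id, newBalance);
    }
}

public class OrderProcessorSketch {

    // Debit the source, credit the target; returns "CONFIRMED" or "REJECTED".
    // Crediting a positive amount cannot fail, so no partial state is left behind.
    static String process(Accounts accounts, int from, int to, int amount) {
        try {
            accounts.updateBalance(from, -amount); // debit source account
            accounts.updateBalance(to, amount);    // credit target account
            return "CONFIRMED";
        } catch (InsufficientFundsException e) {
            return "REJECTED";
        }
    }

    public static void main(String[] args) {
        Accounts accounts = new Accounts(2000, 1, 2);
        System.out.println(process(accounts, 1, 2, 300));  // prints: CONFIRMED
        System.out.println(process(accounts, 1, 2, 5000)); // prints: REJECTED
    }
}
```

A rejected result maps to the transaction.RejectMessage() branch of OrderListener, which causes KubeMQ to redeliver the message.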

The OrderProcessor used by OrderListener relies on the AccountRepository bean for account balance management. It is a simple in-memory store, just for demo purposes.

@Repository
public class AccountRepository {

    private List<Account> accounts = new ArrayList<>();

    public Account updateBalance(Integer id, int amount) throws InsufficientFundsException {
        Optional<Account> accOptional = accounts.stream().filter(a -> a.getId().equals(id)).findFirst();
        if (accOptional.isPresent()) {
            Account account = accOptional.get();
            int newBalance = account.getBalance() + amount;
            // validate before mutating, so a rejected transfer does not change the balance
            if (newBalance < 0)
                throw new InsufficientFundsException();
            account.setBalance(newBalance);
            return account;
        }
        return null;
    }

    public Account add(Account account) {
        account.setId(accounts.size() + 1);
        accounts.add(account);
        return account;
    }

    public List<Account> getAccounts() {
        return accounts;
    }

    @PostConstruct
    public void init() {
        add(new Account(null, "123456", 2000));
        add(new Account(null, "123457", 2000));
        add(new Account(null, "123458", 2000));
    }
}
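The Account class is not listed in the article either. From the constructor calls in init() above, it appears to carry an id, an account number, and a balance; a minimal sketch (field names inferred) might look like this:

```java
// Sketch of the Account model (not shown in the article); fields inferred
// from the new Account(null, "123456", 2000) calls in AccountRepository.init().
public class Account {

    private Integer id;
    private String number;
    private int balance;

    public Account(Integer id, String number, int balance) {
        this.id = id;
        this.number = number;
        this.balance = balance;
    }

    public Integer getId() { return id; }
    public void setId(Integer id) { this.id = id; }
    public String getNumber() { return number; }
    public void setNumber(String number) { this.number = number; }
    public int getBalance() { return balance; }
    public void setBalance(int balance) { this.balance = balance; }

    public static void main(String[] args) {
        Account account = new Account(null, "123456", 2000);
        account.setId(1);                                  // id assigned by the repository
        account.setBalance(account.getBalance() + 300);    // credit 300
        System.out.println(account.getId() + " " + account.getBalance()); // prints: 1 2300
    }
}
```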

And the last components in our architecture – the event listeners. Both of them subscribe to the same events store, transactions. TransactionAmountListener is the simpler one. It processes only newly published events, transferring a percentage bonus calculated from the transaction amount to the target account. That’s why we have defined it as a listener for new events only (EventsStoreType.StartNewOnly).

@Component
public class TransactionAmountListener implements StreamObserver<EventReceive> {

    private static final Logger LOGGER = LoggerFactory.getLogger(TransactionAmountListener.class);

    private Subscriber subscriber;
    private AccountRepository accountRepository;

    public TransactionAmountListener(Subscriber subscriber, AccountRepository accountRepository) {
        this.subscriber = subscriber;
        this.accountRepository = accountRepository;
    }

    @Override
    public void onNext(EventReceive eventReceive) {
        try {
            Order order = (Order) Converter.FromByteArray(eventReceive.getBody());
            LOGGER.info("Amount event: {}", order);
            accountRepository.updateBalance(order.getAccountIdTo(), (int) (order.getAmount() * 0.1));
        } catch (IOException | ClassNotFoundException | InsufficientFundsException e) {
            LOGGER.error("Error", e);
        }
    }

    @Override
    public void onError(Throwable throwable) {

    }

    @Override
    public void onCompleted() {

    }

    @PostConstruct
    public void init() {
        SubscribeRequest subscribeRequest = new SubscribeRequest();
        subscribeRequest.setChannel("transactions");
        subscribeRequest.setClientID("amount-listener");
        subscribeRequest.setSubscribeType(SubscribeType.EventsStore);
        subscribeRequest.setEventsStoreType(EventsStoreType.StartNewOnly);
        try {
            subscriber.SubscribeToEvents(subscribeRequest, this);
        } catch (ServerAddressNotSuppliedException | SSLException e) {
            e.printStackTrace();
        }
    }
}

The situation is different with TransactionCountListener. It should be able to retrieve the list of all events published on pub/sub after every startup of our application. That’s why we define StartFromFirst as the EventsStoreType for the Subscriber. Also, a clientId needs to be dynamically generated on application startup in order to retrieve all stored events. The listener sends a bonus to a target account after the fifth transaction addressed to that account has been successfully processed by the application.

@Component
public class TransactionCountListener implements StreamObserver<EventReceive> {

    private static final Logger LOGGER = LoggerFactory.getLogger(TransactionCountListener.class);
    private Map<Integer, Integer> transactionsCount = new HashMap<>();

    private Subscriber subscriber;
    private AccountRepository accountRepository;

    public TransactionCountListener(Subscriber subscriber, AccountRepository accountRepository) {
        this.subscriber = subscriber;
        this.accountRepository = accountRepository;
    }

    @Override
    public void onNext(EventReceive eventReceive) {
        try {
            Order order = (Order) Converter.FromByteArray(eventReceive.getBody());
            LOGGER.info("Count event: {}", order);
            Integer accountIdTo = order.getAccountIdTo();
            Integer noOfTransactions = transactionsCount.get(accountIdTo);
            if (noOfTransactions == null)
                transactionsCount.put(accountIdTo, 1);
            else {
                transactionsCount.put(accountIdTo, ++noOfTransactions);
                if (noOfTransactions > 5) {
                    accountRepository.updateBalance(order.getAccountIdTo(), (int) (order.getAmount() * 0.1));
                    LOGGER.info("Adding extra to: id={}", order.getAccountIdTo());
                }
            }
        } catch (IOException | ClassNotFoundException | InsufficientFundsException e) {
            LOGGER.error("Error", e);
        }
    }

    @Override
    public void onError(Throwable throwable) {

    }

    @Override
    public void onCompleted() {

    }

    @PostConstruct
    public void init() {
        final SubscribeRequest subscribeRequest = new SubscribeRequest();
        subscribeRequest.setChannel("transactions");
        subscribeRequest.setClientID("count-listener-" + System.currentTimeMillis());
        subscribeRequest.setSubscribeType(SubscribeType.EventsStore);
        subscribeRequest.setEventsStoreType(EventsStoreType.StartFromFirst);
        try {
            subscriber.SubscribeToEvents(subscribeRequest, this);
        } catch (ServerAddressNotSuppliedException | SSLException e) {
            e.printStackTrace();
        }
    }

}

Running on Minikube

The easiest way to run our sample application on Minikube is with Skaffold and Jib. We don’t have to prepare any Dockerfile, only a single deployment manifest in the k8s directory. Here’s our deployment.yaml file.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sampleapp-kubemq
  namespace: kubemq
  labels:
    app: sampleapp-kubemq
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sampleapp-kubemq
  template:
    metadata:
      labels:
        app: sampleapp-kubemq
    spec:
      containers:
        - name: sampleapp-kubemq
          image: piomin/sampleapp-kubemq
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: sampleapp-kubemq
  namespace: kubemq
  labels:
    app: sampleapp-kubemq
spec:
  ports:
    - port: 8080
      protocol: TCP
  selector:
    app: sampleapp-kubemq
  type: NodePort

The source code is prepared to use Skaffold and Jib. It contains the skaffold.yaml file in the project root directory.

apiVersion: skaffold/v2alpha1
kind: Config
build:
  artifacts:
    - image: piomin/sampleapp-kubemq
      jib: {}
  tagPolicy:
    gitCommit: {}

We also need to have a jib-maven-plugin Maven plugin in our pom.xml.

<plugin>
	<groupId>com.google.cloud.tools</groupId>
	<artifactId>jib-maven-plugin</artifactId>
	<version>1.8.0</version>
</plugin>

Now, we only have to execute the following command.


$ skaffold dev

Since our application is deployed on Minikube, we may perform some test calls. Assuming that the Minikube node is available under the address 192.168.99.100, here’s an example test request and the response from the application.

$ curl -s http://192.168.99.100:30833/orders -d '{"type":"TRANSFER","accountIdFrom":1,"accountIdTo":2,"amount":300,"status":"NEW"}' -H 'Content-Type: application/json'
{"type":"TRANSFER","accountIdFrom":1,"accountIdTo":2,"date":null,"amount":300,"id":"10","status":"ACCEPTED"}

We may check the list of queues created on KubeMQ using the command kubemqctl queues list, as shown below.

[Image: kubemqctl queues list output]

After sending some more test requests and restarting the application pod a few times, we may take a look at the events store using the command kubemqctl events_store list, as shown below. We can see that there are multiple clients with ids matching count-listener* registered, but only the current one is active.

[Image: kubemqctl events_store list output]

Let’s take a look at the application logs. They are automatically displayed on the screen after running the skaffold dev command. As you can see, each message sent to the queue is received by the listener, which performs a transfer between accounts and then sends an event to pub/sub. Finally, both events store listeners receive the event.

[Image: application logs – queue message processed and event published]

If you restart the application pod, TransactionCountListener receives all the events available inside the events store and counts them for each target account id. If the total number of transactions for a single account exceeds 5, it sends extra funds to that account.

[Image: application logs – events replayed after pod restart]

If a transaction is rejected by OrderListener due to a lack of funds on the source account, the message is redelivered to the queue.

[Image: application logs – rejected transaction and message redelivery]

Conclusion

In this article I showed you a sample application that integrates with KubeMQ to realize standard use cases based on queues and topics (pub/sub). Getting started with KubeMQ on Kubernetes and managing it is extremely easy with the KubeMQ CLI. It has many interesting features described in quite well-prepared documentation available at https://docs.kubemq.io/. As a modern, cloud-native message broker, KubeMQ is able to transfer billions of messages daily. However, we should bear in mind that it is a relatively new product, and its features are not as refined as in the competition. For example, you can compare the KubeMQ dashboard (available after executing the command kubemqctl cluster dashboard) with the RabbitMQ web admin. Of course, everything takes a little time, and I will follow the progress of KubeMQ development.

The post Kubernetes Messaging with Java and KubeMQ appeared first on Piotr's TechBlog.

Guide To Micronaut Kubernetes
https://piotrminkowski.com/2020/01/07/guide-to-micronaut-kubernetes/
Tue, 07 Jan 2020 10:24:11 +0000
Micronaut provides a library that eases the development of applications deployed on Kubernetes or on a local single-node cluster like Minikube. The project Micronaut Kubernetes is relatively new in the Micronaut family; its current release version is 1.0.3. It allows you to integrate a Micronaut application with Kubernetes discovery, and to use the Micronaut Configuration Client to read Kubernetes ConfigMaps and Secrets as property sources. Additionally, it provides a health check indicator based on communication with the Kubernetes API.
Thanks to that module, you can simplify and speed up your Micronaut application deployment on Kubernetes during development. In this article I’m going to show how to use Micronaut Kubernetes together with some other interesting tools to simplify local development with Minikube. The topics covered in this article are:

  • Using Skaffold together with the Jib Maven Plugin to automatically publish the application to Minikube after a source code change
  • Providing communication between applications using the Micronaut HTTP Client, based on Kubernetes Endpoints names
  • Enabling Kubernetes ConfigMap and Secret as Micronaut Property Sources
  • Using application health check
  • Integrating application with MongoDB running on Minikube

Micronaut Kubernetes example on GitHub

The source code with the Micronaut Kubernetes example is, as usual, available on GitHub: https://github.com/piomin/sample-micronaut-kubernetes.git. Here’s the architecture of our example system, consisting of three microservices built on top of the Micronaut framework.

[Image: architecture of the example system]

Using Skaffold and Jib

Development with Minikube may be a little more complicated than the standard approach of testing an application locally without running it on the platform. First you need to build your application from source code, then build its Docker image, and finally redeploy the application on Kubernetes using the newest image. Skaffold performs all these steps automatically for you. The only thing you need to do is install it on your machine and enable it for your Maven project using the command skaffold init, which creates a skaffold.yaml file in the root of the project. Of course, you can create such a manifest by yourself, especially if you would like to use Skaffold together with Jib. Here’s my skaffold.yaml manifest. We set the name of the Docker image, the tagging policy to the Git commit id, and enable Jib.

apiVersion: skaffold/v2alpha1
kind: Config
build:
  artifacts:
    - image: piomin/employee
      jib: {}
  tagPolicy:
    gitCommit: {}

Why do we need Jib? By default, Skaffold is based on a Dockerfile, so a change would be published to Kubernetes only after the JAR file changes. With Jib, Skaffold watches for changes in the source code and automatically rebuilds your Maven project first.

<plugin>
   <groupId>com.google.cloud.tools</groupId>
   <artifactId>jib-maven-plugin</artifactId>
   <version>1.8.0</version>
</plugin>

Now you just need to run the command skaffold dev on a selected Maven project, and your application will be automatically redeployed to Kubernetes on every change in the source code. Additionally, Skaffold may apply Kubernetes manifest files if they are located in the k8s directory.
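If you prefer to make that explicit, the kubectl deployer section in skaffold.yaml points Skaffold at those manifests. Here's a minimal sketch; the k8s/*.yaml glob is an assumption matching the directory layout described above, not a fragment from the repository:

```yaml
# Appended to the skaffold.yaml shown above (sketch; paths are assumptions)
deploy:
  kubectl:
    manifests:
      - k8s/*.yaml
```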


Implementation of Micronaut Kubernetes example

Let's begin with the implementation. Each of our applications uses MongoDB as a backend store. We are using the synchronous Java client for integration with MongoDB. Micronaut comes with the micronaut-mongo-reactive project that provides auto-configuration for both reactive and non-reactive drivers.

<dependency>
   <groupId>io.micronaut.configuration</groupId>
   <artifactId>micronaut-mongo-reactive</artifactId>
</dependency>
<dependency>
   <groupId>org.mongodb</groupId>
   <artifactId>mongo-java-driver</artifactId>
</dependency>

It is based on the mongodb.uri property and allows you to inject a preconfigured MongoClient bean. Then, we use MongoClient for save and find operations. When using it, we first need to set the current database and collection name. All the required parameters (uri, database and collection) are taken from the external configuration.
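Putting those three parameters together, the configuration consumed by the repository looks roughly like this. This is a sketch only: the host is a local placeholder, and the database and collection values mirror the ones used later in the ConfigMaps, not a file from the repository:

```yaml
# Sketch of the mongodb.* properties the repository reads (values are placeholders)
mongodb:
  uri: mongodb://localhost:27017
  database: admin
  collection: employee
```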

@Singleton
public class EmployeeRepository {

   private MongoClient mongoClient;

   @Property(name = "mongodb.database")
   private String mongodbDatabase;
   @Property(name = "mongodb.collection")
   private String mongodbCollection;

   EmployeeRepository(MongoClient mongoClient) {
      this.mongoClient = mongoClient;
   }

   public Employee add(Employee employee) {
      employee.setId(repository().countDocuments() + 1);
      repository().insertOne(employee);
      return employee;
   }

   public Employee findById(Long id) {
      // filter by id instead of returning the first document in the collection
      return repository().find(Filters.eq("_id", id)).first();
   }

   public List<Employee> findAll() {
      final List<Employee> employees = new ArrayList<>();
      repository()
            .find()
            .iterator()
            .forEachRemaining(employees::add);
      return employees;
   }

   public List<Employee> findByDepartment(Long departmentId) {
      final List<Employee> employees = new ArrayList<>();
      repository()
            .find(Filters.eq("departmentId", departmentId))
            .iterator()
            .forEachRemaining(employees::add);
      return employees;
   }

   public List<Employee> findByOrganization(Long organizationId) {
      final List<Employee> employees = new ArrayList<>();
      repository()
            .find(Filters.eq("organizationId", organizationId))
            .iterator()
            .forEachRemaining(employees::add);
      return employees;
   }

   private MongoCollection<Employee> repository() {
      return mongoClient.getDatabase(mongodbDatabase).getCollection(mongodbCollection, Employee.class);
   }

}

Each application exposes REST endpoints for CRUD operations. Here’s controller implementation for employee-service.

@Controller("/employees")
public class EmployeeController {

   private static final Logger LOGGER = LoggerFactory.getLogger(EmployeeController.class);

   @Inject
   EmployeeRepository repository;

   @Post
   public Employee add(@Body Employee employee) {
      LOGGER.info("Employee add: {}", employee);
      return repository.add(employee);
   }

   @Get("/{id}")
   public Employee findById(Long id) {
      LOGGER.info("Employee find: id={}", id);
      return repository.findById(id);
   }

   @Get
   public List<Employee> findAll() {
      LOGGER.info("Employees find");
      return repository.findAll();
   }

   @Get("/department/{departmentId}")
   public List<Employee> findByDepartment(Long departmentId) {
      LOGGER.info("Employees find: departmentId={}", departmentId);
      return repository.findByDepartment(departmentId);
   }

   @Get("/organization/{organizationId}")
   public List<Employee> findByOrganization(Long organizationId) {
      LOGGER.info("Employees find: organizationId={}", organizationId);
      return repository.findByOrganization(organizationId);
   }

}

We may use the Micronaut declarative HTTP client for communication with REST endpoints. We just need to create an interface annotated with @Client that declares the calling methods.

@Client(id = "employee", path = "/employees")
public interface EmployeeClient {

   @Get("/department/{departmentId}")
   List<Employee> findByDepartment(Long departmentId);

}

Micronaut Kubernetes allows you to integrate Micronaut HTTP clients with Kubernetes discovery in order to use the name of a Kubernetes Endpoints object as a service id. The client is then injected into a controller. In the following code you may see the implementation of a controller in department-service that uses EmployeeClient.

@Controller("/departments")
public class DepartmentController {

   private static final Logger LOGGER = LoggerFactory.getLogger(DepartmentController.class);

   private DepartmentRepository repository;
   private EmployeeClient employeeClient;

   DepartmentController(DepartmentRepository repository, EmployeeClient employeeClient) {
      this.repository = repository;
      this.employeeClient = employeeClient;
   }

   @Post
   public Department add(@Body Department department) {
      LOGGER.info("Department add: {}", department);
      return repository.add(department);
   }

   @Get("/{id}")
   public Department findById(Long id) {
      LOGGER.info("Department find: id={}", id);
      return repository.findById(id);
   }

   @Get
   public List<Department> findAll() {
      LOGGER.info("Department find");
      return repository.findAll();
   }

   @Get("/organization/{organizationId}")
   public List<Department> findByOrganization(Long organizationId) {
      LOGGER.info("Department find: organizationId={}", organizationId);
      return repository.findByOrganization(organizationId);
   }

   @Get("/organization/{organizationId}/with-employees")
   public List<Department> findByOrganizationWithEmployees(Long organizationId) {
      LOGGER.info("Department find: organizationId={}", organizationId);
      List<Department> departments = repository.findByOrganization(organizationId);
      departments.forEach(d -> d.setEmployees(employeeClient.findByDepartment(d.getId())));
      return departments;
   }

}

Discovery with Micronaut Kubernetes

Using a serviceId for communication with the Micronaut HTTP client requires integration with service discovery. Since we are running our applications on Kubernetes, we are going to use its service registry. Here comes Micronaut Kubernetes. It integrates a Micronaut application with Kubernetes discovery via the Endpoints object. First, let's add the required dependency.

<dependency>
   <groupId>io.micronaut.kubernetes</groupId>
   <artifactId>micronaut-kubernetes-discovery-client</artifactId>
</dependency>

In fact, we don't have to do anything else, because integration with Kubernetes discovery is enabled just by adding that dependency. We may proceed to the deployment. In the Kubernetes Service definition the field metadata.name should be the same as the id field inside the @Client annotation.


apiVersion: v1
kind: Service
metadata:
  name: employee
  labels:
    app: employee
spec:
  ports:
    - port: 8080
      protocol: TCP
  selector:
    app: employee
  type: NodePort

Here's the Deployment manifest for the employee application. The container is exposed on port 8080 and uses the latest tag of the piomin/employee image, which is set in the Skaffold manifest.


apiVersion: apps/v1
kind: Deployment
metadata:
  name: employee
  labels:
    app: employee
spec:
  replicas: 1
  selector:
    matchLabels:
      app: employee
  template:
    metadata:
      labels:
        app: employee
    spec:
      containers:
        - name: employee
          image: piomin/employee
          ports:
            - containerPort: 8080

We can also increase log level for Kubernetes API client calls and for the whole Micronaut Kubernetes project to DEBUG. Here’s the fragment of our logback.xml.

<logger name="io.micronaut.http.client" level="DEBUG"/>
<logger name="io.micronaut.kubernetes" level="DEBUG"/>

Micronaut Kubernetes discovery additionally allows us to filter the list of registered services. We may define the list of included or excluded services using the kubernetes.client.discovery.includes or kubernetes.client.discovery.excludes property. This feature is useful when many services are registered in the same namespace. Here's the list of services registered in the default namespace after deploying all our sample microservices and MongoDB.

guide-to-micronaut-kubernetes-services

Since department-service communicates only with employee-service, we may reduce the list of discovered services to employee only.


kubernetes:
  client:
    discovery:
      includes:
        - employee

Configuration Client

The configuration client reads Kubernetes ConfigMaps and Secrets and makes them available as property sources for your application. Since configuration parsing happens in the bootstrap phase, we need to define the following property in bootstrap.yml in order to enable the distributed configuration client.


micronaut:
  application:
    name: employee
  config-client:
    enabled: true

By default, the configuration client reads all the ConfigMaps and Secrets in the configured namespace. You can filter the list of ConfigMap names by defining kubernetes.client.config-maps.includes or kubernetes.client.config-maps.excludes. Alternatively, we may use Kubernetes labels, which give us more flexibility. This configuration also needs to be provided in the bootstrap phase. Reading Secrets is disabled by default, so we also need to enable it. Here's the configuration for department-service, which is similar for all the other apps.


kubernetes:
  client:
    config-maps:
      labels:
        - app: department
    secrets:
      enabled: true
      labels:
        - app: department

Kubernetes ConfigMap and Secret also need to be labeled with app=department.


apiVersion: v1
kind: ConfigMap
metadata:
  name: department
  labels:
    app: department
data:
  application.yaml: |-
    mongodb:
      collection: department
      database: admin
    kubernetes:
      client:
        discovery:
          includes:
            - employee

Here's the Secret definition for department-service. There we configure the mongodb.uri property, which contains sensitive data like the username and password. It is used by MongoClient for establishing a connection with the server.


apiVersion: v1
kind: Secret
metadata:
  name: department
  labels:
    app: department
type: Opaque
data:
  mongodb.uri: bW9uZ29kYjovL21pY3JvbmF1dDptaWNyb25hdXRfMTIzQG1vbmdvZGI6MjcwMTcvYWRtaW4=
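Kubernetes Secrets store values base64-encoded, so the mongodb.uri entry above is just an encoded connection string. You can reproduce and verify it from the command line (printf is used instead of echo to avoid a trailing newline sneaking into the encoded value):

```shell
# Encode the MongoDB connection string for the Secret's data section
printf 'mongodb://micronaut:micronaut_123@mongodb:27017/admin' | base64

# Decode it back to confirm what the application will receive
printf 'bW9uZ29kYjovL21pY3JvbmF1dDptaWNyb25hdXRfMTIzQG1vbmdvZGI6MjcwMTcvYWRtaW4=' | base64 -d
```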

Running sample applications

Before running any application in the default namespace we need to set the appropriate permissions. Micronaut Kubernetes requires read access to pods, endpoints, secrets, services and config maps. For development needs we may grant the highest level of permissions by creating a ClusterRoleBinding pointing to the cluster-admin role.

$ kubectl create clusterrolebinding admin --clusterrole=cluster-admin --serviceaccount=default:default
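For anything beyond a throwaway development cluster, a narrower grant is preferable. A minimal sketch of a Role covering only the read access listed above (the names are illustrative; this manifest is not part of the sample repository):

```yaml
# Sketch: read-only access for Micronaut Kubernetes (names are illustrative)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: micronaut-reader
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["pods", "services", "endpoints", "configmaps", "secrets"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: micronaut-reader
  namespace: default
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
roleRef:
  kind: Role
  name: micronaut-reader
  apiGroup: rbac.authorization.k8s.io
```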

One of the useful Skaffold features is the ability to print the standard output of a started container to the console. Thanks to that, you don't have to execute the kubectl logs command on a pod. Let's take a closer look at the logs during application startup. After increasing the level of logging we may find some interesting information there, for example the client calls to the Kubernetes API. As you see on the screen below, our application tries to find a ConfigMap and Secret with the label department, following the configuration provided in bootstrap.yml.

guide-to-micronaut-kubernetes-config.PNG

Let's add some test data to our database by calling the endpoints exposed by our applications running on Kubernetes. Each of them is exposed outside the node thanks to the NodePort service type.

$ curl http://192.168.99.100:32356/employees -d '{"name":"John Smith","age":30,"position":"director","departmentId":2,"organizationId":2}' -H "Content-Type: application/json"
{"id":1,"organizationId":2,"departmentId":2,"name":"John Smith","age":30,"position":"director"}
$ curl http://192.168.99.100:32356/employees -d '{"name":"Paul Walker","age":50,"position":"director","departmentId":2,"organizationId":2}' -H "Content-Type: application/json"
{"id":2,"organizationId":2,"departmentId":2,"name":"Paul Walker","age":50,"position":"director"}
$ curl http://192.168.99.100:31144/departments -d '{"name":"Test2","organizationId":2}' -H "Content-Type: application/json"
{"id":2,"organizationId":2,"name":"Test2"}

Now, we can test HTTP communication between department-service and employee-service by calling the GET /organization/{organizationId}/with-employees endpoint, which finds all departments with employees belonging to a given organization.

$ curl http://192.168.99.100:31144/departments/organization/2/with-employees

Here’s the current list of endpoints registered in the namespace default.

guide-to-micronaut-kubernetes-endpoints

Let's take a look at the Micronaut HTTP client logs from department-service. As you see below, when it tries to call the GET /employees/department/{departmentId} endpoint, it finds the container under the IP 172.17.0.11.

guide-to-micronaut-kubernetes-client

Health checks

To enable health checks for Micronaut applications we first need to add the following dependency to Maven pom.xml.

<dependency>
    <groupId>io.micronaut</groupId>
    <artifactId>micronaut-management</artifactId>
</dependency>

The Micronaut Kubernetes module provides a health check that probes communication with the Kubernetes API and shows some information about the pod and the application. To enable the detailed view for unauthenticated users we need to set the following property.


endpoints:
  health:
    details-visible: ANONYMOUS

After that, we can take advantage of quite detailed information about the application, including the MongoDB connection status or the HTTP client status, as shown below. By default, the health check is available under the /health path.

guide-to-micronaut-kubernetes-health

Conclusion

Our Micronaut Kubernetes example integrates with the Kubernetes API in order to let applications read the components responsible for discovery and configuration. The integration between Micronaut HTTP clients and Kubernetes Endpoints, or between the Micronaut Configuration Client and Kubernetes ConfigMaps and Secrets, is a useful feature. I'm looking forward to other interesting features that may be included in Micronaut Kubernetes, since it is a relatively new project within Micronaut. Before starting with this Micronaut Kubernetes example you should learn about Micronaut basics: Micronaut Tutorial – Beans and Scopes.

The post Guide To Micronaut Kubernetes appeared first on Piotr's TechBlog.

Spring Cloud Kubernetes For Hybrid Microservices Architecture https://piotrminkowski.com/2020/01/03/spring-cloud-kubernetes-for-hybrid-microservices-architecture/ https://piotrminkowski.com/2020/01/03/spring-cloud-kubernetes-for-hybrid-microservices-architecture/#respond Fri, 03 Jan 2020 11:06:53 +0000 http://piotrminkowski.com/?p=7580 You might use Spring Cloud Kubernetes to build applications running both inside and outside a Kubernetes cluster. The only problem with starting an example application outside Kubernetes is that there is no auto-configured registration mechanism. Spring Cloud Kubernetes delegates registration to the platform, what is an obvious behavior if you are deploying your application internally […]

The post Spring Cloud Kubernetes For Hybrid Microservices Architecture appeared first on Piotr's TechBlog.

You might use Spring Cloud Kubernetes to build applications running both inside and outside a Kubernetes cluster. The only problem with starting an example application outside Kubernetes is that there is no auto-configured registration mechanism. Spring Cloud Kubernetes delegates registration to the platform, which is obvious behavior if you are deploying your application internally using Kubernetes objects. With an external application the situation is different: in fact, you have to guarantee registration by yourself on the application side.
This article explains the motivation for adding an auto-registration mechanism to the Spring Cloud Kubernetes project for external applications only. Let's consider an architecture where some microservices are running outside the Kubernetes cluster and some others are running inside it. There can be many reasons for such a situation. The most obvious one seems to be a migration of your microservices from older infrastructure to Kubernetes. Assuming it is still in progress, you have some microservices already moved to the cluster, while some others are still running on the older infrastructure. Moreover, you may decide to start some kind of experimental cluster with only a few of your applications until you have more experience with running Kubernetes in production. I think it is not a very rare case.
Of course, there are different approaches to that issue. For example, you may maintain two independent microservices-based architectures, with different discovery registries and configuration sources. But you can also connect external microservices to the cluster through the Kubernetes API to load configuration from a ConfigMap or Secret, and register them there to allow inter-service communication with Spring Cloud Kubernetes Ribbon.
The sample application source code is available on GitHub under branch hybrid in sample-spring-microservices-kubernetes repository: https://github.com/piomin/sample-spring-microservices-kubernetes/tree/hybrid.

Architecture

For the current article we slightly change the architecture presented in my previous article about Spring Cloud and Kubernetes – Microservices With Spring Cloud Kubernetes. We move one of the sample microservices, employee-service, described in the mentioned article, outside the Kubernetes cluster. Now, the applications which communicate with employee-service need to use addresses outside the cluster. They should also be able to handle a port number dynamically generated by the application during startup (server.port=0). Our applications are still distributed across different namespaces, so it is important to enable the multi-namespace discovery feature, also described in my previous article. The employee-service application connects to MongoDB, which is still deployed on Kubernetes. In that case the integration is performed via a Kubernetes Service. The following picture illustrates our current architecture.

spring-cloud-kubernetes-microservices-hybrid-architecture.png

Spring Cloud Kubernetes PropertySource

The situation with distributed configuration is clear. We don't have to implement any additional code to be able to use it externally. Just before starting a client application we have to set the KUBERNETES_NAMESPACE environment variable. Since we set it to external, we first need to create such a namespace.

spring-cloud-kubernetes-hybrib-architecture-namespace

Then we may apply some property sources to that namespace. The configuration consists of a Kubernetes ConfigMap and Secret. We store the MongoDB location, credentials, and some other properties there. Here's our ConfigMap declaration.

apiVersion: v1
kind: ConfigMap
metadata:
  name: employee
data:
  application.yaml: |-
    logging.pattern.console: "%d{HH:mm:ss} ${LOG_LEVEL_PATTERN:-%5p} %m%n"
    spring:
      cloud:
        kubernetes:
          discovery:
            all-namespaces: true
            register: true
      data:
        mongodb:
          database: admin
          host: 192.168.99.100
          port: 32612

The port number is taken from the mongodb Service, which is deployed with the NodePort type.

spring-cloud-kubernetes-hybrib-architecture-mongo

And here’s our Secret.

apiVersion: v1
kind: Secret
metadata:
  name: employee
type: Opaque
data:
  spring.data.mongodb.username: UGlvdF8xMjM=
  spring.data.mongodb.password: cGlvdHI=
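As before, the values in the data section are just base64-encoded strings, so you can confirm what the application will read from each key:

```shell
# Decode the Secret values to confirm the credentials the app receives
printf 'UGlvdF8xMjM=' | base64 -d   # value of spring.data.mongodb.username
printf 'cGlvdHI=' | base64 -d       # value of spring.data.mongodb.password
```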

Then, we create the resources inside the external namespace.

spring-cloud-kubernetes-hybrib-architecture-propertysources

In the bootstrap.yml file we need to set the address of the Kubernetes API server and the property responsible for trusting the server's certificate. We should also enable using Secrets as a property source, which is disabled by default in Spring Cloud Kubernetes Config.

spring:
  application:
    name: employee
  cloud:
    kubernetes:
      secrets:
        enableApi: true
      client:
        masterUrl: 192.168.99.100:8443
        trustCerts: true

External registration with Spring Cloud Kubernetes

The situation with service discovery is much more complicated. Spring Cloud Kubernetes delegates discovery to the platform, which is perfectly fine for internal applications, but the lack of auto-configured registration is a problem for an external application. That's why I decided to implement a module providing Spring Cloud Kubernetes auto-configured registration for external applications. Currently it is available inside our sample repository as the spring-cloud-kubernetes-discovery-ext module. It is implemented according to the Spring Cloud discovery registration pattern. Let's begin with the dependencies. We just need to include spring-cloud-starter-kubernetes, which contains the core and discovery modules.

<dependency>
   <groupId>org.springframework.cloud</groupId>
   <artifactId>spring-cloud-starter-kubernetes</artifactId>
</dependency>

Here's our registration object. It implements the Registration interface from Spring Cloud Commons, which defines some basic getters. We should provide the hostname, port, serviceId, etc.

public class KubernetesRegistration implements Registration {

    private KubernetesDiscoveryProperties properties;

    private String serviceId;
    private String instanceId;
    private String host;
    private int port;
    private Map<String, String> metadata = new HashMap<>();

    public KubernetesRegistration(KubernetesDiscoveryProperties properties) {
        this.properties = properties;
    }

    @Override
    public String getInstanceId() {
        return instanceId;
    }

    @Override
    public String getServiceId() {
        return serviceId;
    }

    @Override
    public String getHost() {
        return host;
    }

    @Override
    public int getPort() {
        return port;
    }

    @Override
    public boolean isSecure() {
        return false;
    }

    @Override
    public URI getUri() {
        // build the URI from the scheme, host and port instead of returning null,
        // since clients may call this method on a resolved service instance
        return URI.create(getScheme() + "://" + host + ":" + port);
    }

    @Override
    public Map<String, String> getMetadata() {
        return metadata;
    }

    @Override
    public String getScheme() {
        return "http";
    }

    public void setServiceId(String serviceId) {
        this.serviceId = serviceId;
    }

    public void setInstanceId(String instanceId) {
        this.instanceId = instanceId;
    }

    public void setHost(String host) {
        this.host = host;
    }

    public void setPort(int port) {
        this.port = port;
    }

    public void setMetadata(Map<String, String> metadata) {
        this.metadata = metadata;
    }

}

We have some additional configuration properties in comparison to Spring Cloud Kubernetes discovery. They are available under the same spring.cloud.kubernetes.discovery prefix.

@ConfigurationProperties("spring.cloud.kubernetes.discovery")
public class KubernetesRegistrationProperties {

    private String ipAddress;
    private String hostname;
    private boolean preferIpAddress;
    private Integer port;
    private boolean register;
   
    // GETTERS AND SETTERS
   
}

There is also a class that extends the abstract AbstractAutoServiceRegistration. It is responsible for managing the registration process. First, it enables the registration mechanism only if the application is running outside Kubernetes. It uses the PodUtils bean defined in Spring Cloud Kubernetes Core for that. It also implements a method for building the registration object. The port may be generated dynamically on startup. The rest of the process is performed inside the abstract parent class.

public class KubernetesAutoServiceRegistration extends AbstractAutoServiceRegistration<KubernetesRegistration> {

    private KubernetesDiscoveryProperties properties;
    private KubernetesRegistrationProperties registrationProperties;
    private KubernetesRegistration registration;
    private PodUtils podUtils;

    KubernetesAutoServiceRegistration(ServiceRegistry<KubernetesRegistration> serviceRegistry,
                                      AutoServiceRegistrationProperties autoServiceRegistrationProperties,
                                      KubernetesRegistration registration, KubernetesDiscoveryProperties properties,
                                      KubernetesRegistrationProperties registrationProperties, PodUtils podUtils) {
        super(serviceRegistry, autoServiceRegistrationProperties);
        this.properties = properties;
        this.registrationProperties = registrationProperties;
        this.registration = registration;
        this.podUtils = podUtils;
    }

    public void setRegistration(int port) throws UnknownHostException {
        String ip = registrationProperties.getIpAddress() != null ? registrationProperties.getIpAddress() : InetAddress.getLocalHost().getHostAddress();
        registration.setHost(ip);
        registration.setPort(port);
        registration.setServiceId(getAppName(properties, getContext().getEnvironment()) + "." + getNamespace(getContext().getEnvironment()));
        registration.getMetadata().put("namespace", getNamespace(getContext().getEnvironment()));
        registration.getMetadata().put("name", getAppName(properties, getContext().getEnvironment()));
        this.registration = registration;
    }

    @Override
    protected Object getConfiguration() {
        return properties;
    }

    @Override
    protected boolean isEnabled() {
        return !podUtils.isInsideKubernetes();
    }

    @Override
    protected KubernetesRegistration getRegistration() {
        return registration;
    }

    @Override
    protected KubernetesRegistration getManagementRegistration() {
        return registration;
    }

    public String getAppName(KubernetesDiscoveryProperties properties, Environment env) {
        final String appName = properties.getServiceName();
        if (StringUtils.hasText(appName)) {
            return appName;
        }
        return env.getProperty("spring.application.name", "application");
    }

    public String getNamespace(Environment env) {
        return env.getProperty("KUBERNETES_NAMESPACE", "external");
    }

}

The process should be initialized just after application startup. In order to catch the startup event we prepare a bean that implements the SmartApplicationListener interface. The listener method calls the KubernetesAutoServiceRegistration bean to prepare the registration object and start the process.

public class KubernetesAutoServiceRegistrationListener implements SmartApplicationListener {

    private static final Logger LOGGER = LoggerFactory.getLogger(KubernetesAutoServiceRegistrationListener.class);

    private final KubernetesAutoServiceRegistration autoServiceRegistration;

    KubernetesAutoServiceRegistrationListener(KubernetesAutoServiceRegistration autoServiceRegistration) {
        this.autoServiceRegistration = autoServiceRegistration;
    }

    @Override
    public boolean supportsEventType(Class<? extends ApplicationEvent> eventType) {
        return WebServerInitializedEvent.class.isAssignableFrom(eventType);
    }

    @Override
    public boolean supportsSourceType(Class<?> sourceType) {
        return true;
    }

    @Override
    public int getOrder() {
        return 0;
    }

    @Override
    public void onApplicationEvent(ApplicationEvent applicationEvent) {
        if (applicationEvent instanceof WebServerInitializedEvent) {
            WebServerInitializedEvent event = (WebServerInitializedEvent) applicationEvent;
            try {
                autoServiceRegistration.setRegistration(event.getWebServer().getPort());
                autoServiceRegistration.start();
            } catch (UnknownHostException e) {
                LOGGER.error("Error registering to kubernetes", e);
            }
        }
    }

}

Here’s the auto-configuration for all previously described beans.

@Configuration
@ConditionalOnProperty(name = "spring.cloud.kubernetes.discovery.register", havingValue = "true")
@AutoConfigureAfter({AutoServiceRegistrationConfiguration.class, KubernetesServiceRegistryAutoConfiguration.class})
public class KubernetesAutoServiceRegistrationAutoConfiguration {

    @Autowired
    AutoServiceRegistrationProperties autoServiceRegistrationProperties;

    @Bean
    @ConditionalOnMissingBean
    public KubernetesAutoServiceRegistration autoServiceRegistration(
            @Qualifier("serviceRegistry") KubernetesServiceRegistry registry,
            AutoServiceRegistrationProperties autoServiceRegistrationProperties,
            KubernetesDiscoveryProperties properties,
            KubernetesRegistrationProperties registrationProperties,
            KubernetesRegistration registration, PodUtils podUtils) {
        return new KubernetesAutoServiceRegistration(registry,
                autoServiceRegistrationProperties, registration, properties, registrationProperties, podUtils);
    }

    @Bean
    public KubernetesAutoServiceRegistrationListener listener(KubernetesAutoServiceRegistration registration) {
        return new KubernetesAutoServiceRegistrationListener(registration);
    }

    @Bean
    public KubernetesRegistration registration(KubernetesDiscoveryProperties properties) throws UnknownHostException {
        return new KubernetesRegistration(properties);
    }

    @Bean
    public KubernetesRegistrationProperties kubernetesRegistrationProperties() {
        return new KubernetesRegistrationProperties();
    }

}

Finally, we may proceed to the most important step: integration with the Kubernetes API. Spring Cloud Kubernetes uses the Fabric8 Kubernetes Client for communication with the master API. The KubernetesClient bean is already auto-configured, so we may just inject it. The register and deregister methods, shown below, are implemented in the KubernetesServiceRegistry class, which implements the ServiceRegistry interface. Discovery in Kubernetes is built around the Endpoints API: each Endpoints object contains a list of EndpointSubsets, which store the registered IPs inside EndpointAddress entries and the listening ports inside EndpointPort entries.
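As a concrete picture of that structure, here's a sketch of the Endpoints object that a successful registration of one external employee instance would produce. The IP and port are illustrative values, not output captured from the cluster:

```yaml
# Sketch of an Endpoints object after external registration (values illustrative)
apiVersion: v1
kind: Endpoints
metadata:
  name: employee
  namespace: external
subsets:
  - addresses:
      - ip: 192.168.99.1    # host running the external application
    ports:
      - port: 54321          # dynamically assigned server.port
```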

public class KubernetesServiceRegistry implements ServiceRegistry<KubernetesRegistration> {

    private static final Logger LOG = LoggerFactory.getLogger(KubernetesServiceRegistry.class);

    private final KubernetesClient client;
    private KubernetesDiscoveryProperties properties;

    public KubernetesServiceRegistry(KubernetesClient client, KubernetesDiscoveryProperties properties) {
        this.client = client;
        this.properties = properties;
    }

    @Override
    public void register(KubernetesRegistration registration) {
        LOG.info("Registering service with kubernetes: " + registration.getServiceId());
        Resource<Endpoints, DoneableEndpoints> resource = client.endpoints()
                .inNamespace(registration.getMetadata().get("namespace"))
                .withName(registration.getMetadata().get("name"));
        Endpoints endpoints = resource.get();
        if (endpoints == null) {
            Endpoints e = client.endpoints().create(create(registration));
            LOG.info("New endpoint: {}",e);
        } else {
            try {
                // try to add the new address to an existing subset with a matching port
                Endpoints updatedEndpoints = resource.edit()
                        .editMatchingSubset(builder -> builder.hasMatchingPort(v -> v.getPort().equals(registration.getPort())))
                        .addToAddresses(new EndpointAddressBuilder().withIp(registration.getHost()).build())
                        .endSubset()
                        .done();
                LOG.info("Endpoint updated: {}", updatedEndpoints);
            } catch (RuntimeException e) {
                // no subset with a matching port exists yet, so create a new one
                Endpoints updatedEndpoints = resource.edit()
                        .addNewSubset()
                        .withPorts(new EndpointPortBuilder().withPort(registration.getPort()).build())
                        .withAddresses(new EndpointAddressBuilder().withIp(registration.getHost()).build())
                        .endSubset()
                        .done();
                LOG.info("Endpoint updated: {}", updatedEndpoints);
            }
        }

    }

    @Override
    public void deregister(KubernetesRegistration registration) {
        LOG.info("De-registering service with kubernetes: " + registration.getInstanceId());
        Resource<Endpoints, DoneableEndpoints> resource = client.endpoints()
                .inNamespace(registration.getMetadata().get("namespace"))
                .withName(registration.getMetadata().get("name"));

        EndpointAddress address = new EndpointAddressBuilder().withIp(registration.getHost()).build();
        Endpoints updatedEndpoints = resource.edit()
                .editMatchingSubset(builder -> builder.hasMatchingPort(v -> v.getPort().equals(registration.getPort())))
                .removeFromAddresses(address)
                .endSubset()
                .done();
        LOG.info("Endpoint updated: {}", updatedEndpoints);

        // finally, prune subsets that no longer contain any addresses
        resource.get().getSubsets().stream()
                .filter(subset -> subset.getAddresses().size() == 0)
                .forEach(subset -> resource.edit()
                        .removeFromSubsets(subset)
                        .done());
    }

    private Endpoints create(KubernetesRegistration registration) {
        EndpointAddress address = new EndpointAddressBuilder().withIp(registration.getHost()).build();
        EndpointPort port = new EndpointPortBuilder().withPort(registration.getPort()).build();
        EndpointSubset subset = new EndpointSubsetBuilder().withAddresses(address).withPorts(port).build();
        ObjectMeta metadata = new ObjectMetaBuilder()
                .withName(registration.getMetadata().get("name"))
                .withNamespace(registration.getMetadata().get("namespace"))
                .build();
        Endpoints endpoints = new EndpointsBuilder().withSubsets(subset).withMetadata(metadata).build();
        return endpoints;
    }

}
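Stripped of the Fabric8 builders, the bookkeeping that register and deregister perform on the Endpoints resource is straightforward: add an address to the subset with a matching port (creating the subset if none exists), and on removal prune any subset that ends up empty. Here is a minimal sketch of that logic with plain collections (the class and method names are mine, not part of any library):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Models an Endpoints resource as port -> registered IPs, mirroring
// the EndpointSubset/EndpointAddress structure used above.
class EndpointsBookkeeping {

    private final Map<Integer, List<String>> subsets = new LinkedHashMap<>();

    // register: add the IP to the subset listening on the given port,
    // creating a new subset if no matching port exists yet
    void register(String ip, int port) {
        subsets.computeIfAbsent(port, p -> new ArrayList<>()).add(ip);
    }

    // deregister: remove the IP and prune the subset if it became empty
    void deregister(String ip, int port) {
        List<String> addresses = subsets.get(port);
        if (addresses == null) {
            return;
        }
        addresses.remove(ip);
        if (addresses.isEmpty()) {
            subsets.remove(port);
        }
    }

    Map<Integer, List<String>> view() {
        return subsets;
    }
}
```

The real implementation has to express the same steps through the fluent edit() API, which is why the missing-subset case surfaces as a RuntimeException rather than a simple null check.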

The auto-configuration beans are registered in the spring.factories file.


org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
org.springframework.cloud.kubernetes.discovery.ext.KubernetesServiceRegistryAutoConfiguration,\
org.springframework.cloud.kubernetes.discovery.ext.KubernetesAutoServiceRegistrationAutoConfiguration

Enabling Registration

Now, we may add the newly created library to any Spring Cloud application running outside Kubernetes, for example to the employee-service. We use it together with the standard Spring Cloud Kubernetes starter.

<dependency>
   <groupId>org.springframework.cloud</groupId>
   <artifactId>spring-cloud-starter-kubernetes-all</artifactId>
</dependency>
<dependency>
   <groupId>pl.piomin.services</groupId>
   <artifactId>spring-cloud-kubernetes-discovery-ext</artifactId>
   <version>1.0-SNAPSHOT</version>
</dependency>

The registration is disabled by default, so we need to set the property spring.cloud.kubernetes.discovery.register to true.


spring:
  cloud:
    kubernetes:
      discovery:
        register: true

Sometimes it might be useful to set a static IP address in the configuration, in case you have multiple network interfaces.

spring:
  cloud:
    kubernetes:
      discovery:
        ipAddress: 192.168.99.1

By setting 192.168.99.1 as a static IP address, I can easily run tests against the Minikube node, which runs on a VM available at 192.168.99.100.

Manual Testing

Let’s start the employee-service locally. As you can see in the screenshot below, it has successfully loaded its configuration from Kubernetes and connected to MongoDB running on the cluster.

[Screenshot: employee-service startup logs]

After startup the application has registered itself in Kubernetes.

[Screenshot: registration log entries after application startup]

We can view the details of the employee endpoint using the kubectl describe endpoints command, as shown below.

[Screenshot: kubectl describe endpoints output]
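Under the hood, kubectl describe simply renders the Endpoints resource that our registry maintains. For a single registered instance, that resource could look roughly like this (the name, namespace, IP, and port are illustrative):

```yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: employee
  namespace: default
subsets:
  - addresses:
      - ip: 192.168.99.1   # IP of the instance running outside the cluster
    ports:
      - port: 8080         # port the instance is listening on
```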

Finally, we can perform some test calls, for example via the gateway-service running on Minikube.


$ curl http://192.168.99.100:31854/employee/actuator/info

Since our Spring Cloud Kubernetes example does not allow discovery across all namespaces for a Ribbon client, we need to override the Ribbon configuration using the DiscoveryClient, as shown below. For more details, you may refer to my article Microservices With Spring Cloud Kubernetes.

public class RibbonConfiguration {

    @Autowired
    private DiscoveryClient discoveryClient;

    private String serviceId = "client";
    protected static final String VALUE_NOT_SET = "__not__set__";
    protected static final String DEFAULT_NAMESPACE = "ribbon";

    public RibbonConfiguration () {
    }

    public RibbonConfiguration (String serviceId) {
        this.serviceId = serviceId;
    }

    @Bean
    @ConditionalOnMissingBean
    public ServerList<?> ribbonServerList(IClientConfig config) {

        Server[] servers = discoveryClient.getInstances(config.getClientName()).stream()
                .map(i -> new Server(i.getHost(), i.getPort()))
                .toArray(Server[]::new);

        return new StaticServerList<>(servers);
    }

}
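With this in place, Ribbon load-balances round-robin over the fixed list of host and port pairs returned by the DiscoveryClient. The selection behavior can be sketched without any Ribbon classes (the names below are illustrative, not Ribbon’s API):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Round-robin selection over a fixed server list, which is what Ribbon's
// default rule effectively does with the StaticServerList above.
class StaticRoundRobin {

    private final List<String> servers;
    private final AtomicInteger position = new AtomicInteger();

    StaticRoundRobin(List<String> servers) {
        this.servers = servers;
    }

    // cycle through the list: first call returns the first server,
    // the next call returns the second, and so on, wrapping around
    String choose() {
        int index = Math.floorMod(position.getAndIncrement(), servers.size());
        return servers.get(index);
    }
}
```

The important difference from the default Kubernetes-aware server list is that the instances come from a single DiscoveryClient call rather than a per-namespace lookup.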

Summary

There are some limitations to discovery with Kubernetes. For example, there is no built-in heartbeat mechanism, so we have to take care of removing application endpoints on shutdown ourselves. I am also not considering the security aspects of allowing discovery across all namespaces and exposing the API to external applications. I assume you have guaranteed the required level of security when building your Kubernetes cluster, especially if you decide to allow external access to the API. In the end, the Kubernetes API is just an API, and we may use it. This article shows an example use case that may be useful for you. If you compare it with my previous article with a Spring Cloud Kubernetes example, you will see that with small configuration changes you can move an application outside the cluster without adding any new components for discovery or distributed configuration.

The post Spring Cloud Kubernetes For Hybrid Microservices Architecture appeared first on Piotr's TechBlog.

]]>