Skaffold Archives - Piotr's TechBlog https://piotrminkowski.com/tag/skaffold/ Java, Spring, Kotlin, microservices, Kubernetes, containers

Getting Started with Azure Kubernetes Service https://piotrminkowski.com/2024/02/05/getting-started-with-azure-kubernetes-service/ Mon, 05 Feb 2024 11:12:47 +0000

The post Getting Started with Azure Kubernetes Service appeared first on Piotr's TechBlog.

In this article, you will learn how to create and manage a Kubernetes cluster on Azure and run your apps on it. We will focus on the Azure features that simplify Kubernetes adoption. We will discuss such topics as enabling monitoring based on Prometheus or exposing an app outside of the cluster using the Ingress object and Azure mechanisms. To proceed with that article, you don’t need to have a deep knowledge of Kubernetes. However, you may find a lot of articles about Kubernetes and cloud-native development on my blog. For example, if you are developing Java apps and running them on Kubernetes you may read the following article about best practices.

On the other hand, if you are interested in Azure and looking for some other approaches for running Java apps there, you can also refer to some previous posts on my blog. I have already described how to use such services as Azure Spring Apps or Azure Function for Java. For example, in that article, you can read how to integrate Spring Boot with Azure services using the Spring Cloud Azure project. For more information about Azure Function and Spring Cloud refer to that article.

Source Code

This time we won’t work much with the source code. However, if you would like to try it by yourself, you may always take a look at my source code. In order to do that, you need to clone my GitHub repository. After that, you should follow my further instructions.

Create Cluster with Azure Kubernetes Service

After signing in to the Azure Portal, we can create a resource group for our cluster. The name of my resource group is aks. Then we need to find Azure Kubernetes Service (AKS) in the marketplace. We are creating the instance of AKS in the aks resource group.

We will be redirected to the first page of the creation wizard. I will just enter the name of my cluster and leave all the other fields at their recommended values. The name of my cluster is piomin. The default cluster preset configuration is “Dev/Test”, which is enough for our exercise. However, if you choose e.g. the “Production Standard” preset, it will enable 3 availability zones and change the pricing tier for your cluster. Let’s click the “Next” button to proceed to the next page.

azure-kubernetes-install-general

We won’t change anything in the “Node pools” section either. On the “Networking” page, we choose “Azure CNI” instead of “Kubenet” as the network configuration, and “Azure” instead of “Calico” as the network policy. In comparison to Kubenet, Azure CNI simplifies integration between Kubernetes and Azure Application Gateway.

We will also make some changes in the “Monitoring” section. The main goal here is to enable the managed Prometheus service for our cluster. In order to do it, we need to create a new workspace in Azure Monitor. The name of my workspace is prometheus.

That’s all we need. Finally, we can create our first AKS cluster.

After a few minutes our Kubernetes cluster is ready. We can display a list of resources created inside the aks group. As you can see, there are some resources related to Prometheus and Azure Monitor, and a single Kubernetes service “piomin”. It is our Kubernetes cluster. We can click it to see the details.

azure-kubernetes-resources-aks

Of course, we can manage the cluster using Azure Portal. However, we can also easily switch to the kubectl CLI. Here’s the Kubernetes API server address for our cluster: piomin-xq30re6n.hcp.eastus.azmk8s.io.

Manage AKS with CLI

We can easily import the AKS cluster credentials into our local Kubeconfig file with the following az command (piomin is the name of the cluster, while aks is the name of the resource group):

$ az aks get-credentials -n piomin -g aks

Once you execute the command above, it will add a new Kube context to your local Kubeconfig or override the existing one.

After that, we can switch to the kubectl CLI. For example, we can display a list of Deployments across all the namespaces:

$ kubectl get deploy -A
NAMESPACE     NAME                 READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   ama-logs-rs          1/1     1            1           56m
kube-system   ama-metrics          1/1     1            1           52m
kube-system   ama-metrics-ksm      1/1     1            1           52m
kube-system   coredns              2/2     2            2           58m
kube-system   coredns-autoscaler   1/1     1            1           58m
kube-system   konnectivity-agent   2/2     2            2           58m
kube-system   metrics-server       2/2     2            2           58m

Deploy Sample Apps on the AKS Cluster

Once we can interact with the Kubernetes cluster on Azure through the kubectl CLI, we can run our first app there. In order to do it, firstly, go to the callme-service directory. It contains a simple Spring Boot app that exposes REST endpoints. The Kubernetes manifests are located inside the k8s directory. Let’s take a look at the deployment YAML manifest. It contains the Kubernetes Deployment and Service objects.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: callme-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: callme-service
  template:
    metadata:
      labels:
        app: callme-service
    spec:
      containers:
        - name: callme-service
          image: piomin/callme-service
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          env:
            - name: VERSION
              value: "v1"
---
apiVersion: v1
kind: Service
metadata:
  name: callme-service
  labels:
    app: callme-service
spec:
  type: ClusterIP
  ports:
  - port: 8080
    name: http
  selector:
    app: callme-service

In order to simplify deployment on Kubernetes we can use Skaffold. It integrates with the kubectl CLI. We just need to execute the following command to build the app from the source code and run it on AKS:

$ cd callme-service
$ skaffold run
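The skaffold run command is driven by a skaffold.yaml file in the app directory. A minimal configuration for callme-service could look like the following sketch (the Jib builder, API version, and manifest paths are assumptions based on the conventions used elsewhere in this repository; the actual file may differ):

```yaml
apiVersion: skaffold/v4beta5
kind: Config
metadata:
  name: callme-service
build:
  artifacts:
    # build the piomin/callme-service image from the local sources
    - image: piomin/callme-service
manifests:
  rawYaml:
    # apply the Deployment and Service manifests from the k8s directory
    - k8s/*.yaml
deploy:
  # deploy with the kubectl CLI against the current Kube context
  kubectl: {}
```

With such a file in place, skaffold run builds the image, tags it, and applies the listed manifests in one step.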

After that, we will deploy a second app on the cluster. Go to the caller-service directory. Here’s the YAML manifest with Kubernetes Service and Deployment.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: caller-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: caller-service
  template:
    metadata:
      name: caller-service
      labels:
        app: caller-service
    spec:
      containers:
      - name: caller-service
        image: piomin/caller-service
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
        env:
          - name: VERSION
            value: "v1"
---
apiVersion: v1
kind: Service
metadata:
  name: caller-service
  labels:
    app: caller-service
spec:
  type: ClusterIP
  ports:
    - port: 8080
      name: http
  selector:
    app: caller-service

The caller-service app invokes an endpoint exposed by the callme-service app. Here’s the implementation of Spring @RestController responsible for that:

@RestController
@RequestMapping("/caller")
public class CallerController {

   private static final Logger LOGGER = LoggerFactory
      .getLogger(CallerController.class);

   @Autowired
   Optional<BuildProperties> buildProperties;
   @Autowired
   RestTemplate restTemplate;
   @Value("${VERSION}")
   private String version;

   @GetMapping("/ping")
   public String ping() {
      LOGGER.info("Ping: name={}, version={}",
          buildProperties.map(BuildProperties::getName).orElse("caller-service"), version);
      String response = restTemplate.getForObject(
         "http://callme-service:8080/callme/ping", String.class);
      LOGGER.info("Calling: response={}", response);
      return "I'm caller-service " + version + ". Calling... " + response;
   }

}

Once again, let’s build the app on Kubernetes with the Skaffold CLI:

$ cd caller-service
$ skaffold run

Let’s switch to the Azure Portal. On the Azure Kubernetes Service page, go to the “Workloads” section. As you can see, there are two Deployments: callme-service and caller-service.

We can switch to the pods view.

Monitoring with Managed Prometheus

In order to access Prometheus metrics for our AKS cluster, we need to go to the prometheus Azure Monitor workspace. In the first step, let’s take a look at the list of clusters assigned to that workspace.

Then, we can switch to the “Prometheus explorer” section. It allows us to run a PromQL query and see a diagram illustrating the selected metric. You will find a full list of metrics collected for the AKS cluster in the following article. For example, we can visualize the RAM usage for both our apps running in the default namespace. In order to do that, we should use the node_namespace_pod_container:container_memory_working_set_bytes metric as shown below.

azure-kubernetes-prometheus

Exposing App Outside Azure Kubernetes

Install Azure Application Gateway on Kubernetes

In order to expose the service outside of the AKS, we need to create the Ingress object. However, we must have an ingress controller installed on the cluster to satisfy an Ingress. Since we are running the cluster on Azure, our natural choice is the AKS Application Gateway Ingress Controller that configures the Azure Application Gateway. We can install it through the Azure Portal. Go to your AKS cluster page and then switch to the “Networking” section. After that just select the “Enable ingress controller” checkbox. The new ingress-appgateway will be created and assigned to the AKS cluster.

azure-kubernetes-gateway

Once it is ready, you can display its details. The ingress-appgateway object exists in the same virtual network as Azure Kubernetes Service. There is a dedicated resource group – in my case MC_aks_piomin_eastus. The gateway has a public IP address assigned. For me, it is 20.253.111.153, as shown below.

After installing the Azure Application Gateway add-on on AKS, there is a new Deployment, ingress-appgw-deployment, responsible for the integration between the cluster and the Azure Application Gateway service. It is our ingress controller.

Create Kubernetes Ingress

There is also a default IngressClass object installed on the cluster. We can display a list of available ingress classes by executing the command visible below. Our IngressClass object is available under the azure-application-gateway name.

$ kubectl get ingressclass
NAME                        CONTROLLER                  PARAMETERS   AGE
azure-application-gateway   azure/application-gateway   <none>       18m

Let’s take a look at the Ingress manifest. It contains several standard fields inside the spec.rules.* section. It exposes the callme-service Kubernetes Service under the 8080 port. Our Ingress object needs to refer to the azure-application-gateway IngressClass. The Azure Application Gateway Ingress Controller (AGIC) will watch such an object. Once we apply the manifest, AGIC will automatically configure the Application Gateway instance. The Application Gateway contains some health checks to verify the status of the backend. Since the Spring Boot app exposes a liveness endpoint under the /actuator/health/liveness path and 8080 port, we need to override the default settings. In order to do it, we need to use the appgw.ingress.kubernetes.io/health-probe-path and appgw.ingress.kubernetes.io/health-probe-port annotations.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: callme-ingress
  namespace: default
  annotations:
    appgw.ingress.kubernetes.io/health-probe-hostname: localhost
    appgw.ingress.kubernetes.io/health-probe-path: /actuator/health/liveness
    appgw.ingress.kubernetes.io/health-probe-port: '8080'
spec:
  ingressClassName: azure-application-gateway
  rules:
    - http:
        paths:
          - path: /callme
            pathType: Prefix
            backend:
              service:
                name: callme-service
                port:
                  number: 8080

The Ingress for the caller-service is very similar. We just need to change the path and the name of the backend service.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: caller-ingress
  namespace: default
  annotations:
    appgw.ingress.kubernetes.io/health-probe-hostname: localhost
    appgw.ingress.kubernetes.io/health-probe-path: /actuator/health/liveness
    appgw.ingress.kubernetes.io/health-probe-port: '8080'
spec:
  ingressClassName: azure-application-gateway
  rules:
    - http:
        paths:
          - path: /caller
            pathType: Prefix
            backend:
              service:
                name: caller-service
                port:
                  number: 8080

Let’s take a look at the list of ingresses in the Azure Portal. They are available under the same address and port. There is just a difference in the target context path.

azure-kubernetes-ingress

We can test both services using the gateway IP address and the right context path. Each app exposes the GET /ping endpoint.

$ curl http://20.253.111.153/callme/ping
$ curl http://20.253.111.153/caller/ping

The Azure Application Gateway contains a list of backends. In the Kubernetes context, those backends are the IP addresses of the running pods. As you can see, both health checks respond with the HTTP 200 OK code.

azure-kubernetes-gateway-backend

What’s next

We have already created a Kubernetes cluster, run the apps there, and exposed them to external clients. Now, the question is – how can Azure help with other activities? Let’s say we want to install some additional software on the cluster. In order to do that, we need to go to the “Extensions + applications” section on the AKS cluster page. Then, we have to click the “Install an extension” button.

The link redirects us to the app marketplace. There are several different apps we can install in a simplified, graphical form. It could be a database, a message broker, or e.g. one of the Kubernetes-native tools like Argo CD.

azure-kubernetes-extensions

We just need to create a new instance of Argo CD and fill in some basic information. The installer is based on the Argo CD Helm chart provided by Bitnami.

azure-kubernetes-argocd

After a while, the instance of Argo CD is running on our cluster. We can display a list of installed extensions.

I installed Argo CD in the gitops namespace. Let’s verify a list of pods running in that namespace after successful installation:

$ kubectl get pod -n gitops
NAME                                             READY   STATUS    RESTARTS   AGE
gitops-argo-cd-app-controller-6d6848f46c-8n44j   1/1     Running   0          4m46s
gitops-argo-cd-repo-server-5f7cccd9d5-bc6ts      1/1     Running   0          4m46s
gitops-argo-cd-server-5c656c9998-fsgb5           1/1     Running   0          4m46s
gitops-redis-master-0                            1/1     Running   0          4m46s

And the last thing. As you remember, we exposed our apps outside the AKS cluster under a public IP address. What about exposing them under a DNS name? Firstly, we need to have a DNS zone created on Azure. In this zone, we have to add a new record set containing the IP address of our application gateway. The name of the record set indicates the hostname of the gateway. In my case, it is apps.cb57d.azure.redhatworkshops.io.

After that, we need to change the definition of the Ingress object. It should contain the hostname field inside the rules section, with the public DNS address of our gateway.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: caller-ingress
  namespace: default
  annotations:
    appgw.ingress.kubernetes.io/health-probe-hostname: localhost
    appgw.ingress.kubernetes.io/health-probe-path: /actuator/health/liveness
    appgw.ingress.kubernetes.io/health-probe-port: '8080'
spec:
  ingressClassName: azure-application-gateway
  rules:
    - host: apps.cb57d.azure.redhatworkshops.io
      http:
        paths:
          - path: /caller
            pathType: Prefix
            backend:
              service:
                name: caller-service
                port:
                  number: 8080

Final Thoughts

In this article, I focused on the Azure features that simplify getting started with a Kubernetes cluster. We covered topics such as cluster creation, monitoring, and exposing apps to external clients. Of course, these are not all the interesting features provided by Azure Kubernetes Service.

Kubernetes Testing with CircleCI, Kind, and Skaffold https://piotrminkowski.com/2023/11/28/kubernetes-testing-with-circleci-kind-and-skaffold/ Tue, 28 Nov 2023 13:04:18 +0000

The post Kubernetes Testing with CircleCI, Kind, and Skaffold appeared first on Piotr's TechBlog.

In this article, you will learn how to use tools like Kind or Skaffold to build integration tests on CircleCI for apps running on Kubernetes. Our main goal in this exercise is to build the app image and verify the Deployment on Kubernetes in the CircleCI pipeline. Skaffold and Jib Maven plugin build the image from the source and deploy it on Kind using YAML manifests. Finally, we will run some load tests on the deployed app using the Grafana k6 tool and its integration with CircleCI.

If you want to build and run tests against Kubernetes, you can read my article about integration tests with JUnit. On the other hand, if you are looking for other tools for testing in a Kubernetes-native environment, you can refer to that article about Testkube.

Introduction

Before we start, let’s do a brief introduction. There are three simple Spring Boot apps that communicate with each other. The first-service app calls the endpoint exposed by the caller-service app, and then the caller-service app calls the endpoint exposed by the callme-service app. The diagram visible below illustrates that architecture.

kubernetes-circleci-arch

So in short, our goal is to deploy all the sample apps on Kind during the CircleCI build and then test the communication by calling the endpoint exposed by the first-service through the Kubernetes Service.

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. It contains three apps: first-service, caller-service, and callme-service. The main Skaffold config manifest is available in the project root directory. Required Kubernetes YAML manifests are always placed inside the k8s directory. Once you take a look at the source code, you should just follow my instructions. Let’s begin.

Our sample Spring Boot apps are very simple. They are exposing a single “ping” endpoint over HTTP and call “ping” endpoints exposed by other apps. Here’s the @RestController in the first-service app:

@RestController
@RequestMapping("/first")
public class FirstController {

   private static final Logger LOGGER = LoggerFactory
      .getLogger(FirstController.class);

   @Autowired
   Optional<BuildProperties> buildProperties;
   @Autowired
   RestTemplate restTemplate;
   @Value("${VERSION}")
   private String version;

   @GetMapping("/ping")
   public String ping() {
      LOGGER.info("Ping: name={}, version={}", buildProperties.isPresent() 
         ? buildProperties.get().getName() : "first-service", version);
      String response = restTemplate.getForObject(
         "http://caller-service:8080/caller/ping", String.class);
      LOGGER.info("Calling: response={}", response);
      return "I'm first-service " + version + ". Calling... " + response;
   }

}

Here’s the @RestController inside the caller-service app. The endpoint is called by the first-service app through the RestTemplate bean.

@RestController
@RequestMapping("/caller")
public class CallerController {

   private static final Logger LOGGER = LoggerFactory
      .getLogger(CallerController.class);

   @Autowired
   Optional<BuildProperties> buildProperties;
   @Autowired
   RestTemplate restTemplate;
   @Value("${VERSION}")
   private String version;

   @GetMapping("/ping")
   public String ping() {
      LOGGER.info("Ping: name={}, version={}",
         buildProperties.map(BuildProperties::getName).orElse("caller-service"), version);
      String response = restTemplate.getForObject(
         "http://callme-service:8080/callme/ping", String.class);
      LOGGER.info("Calling: response={}", response);
      return "I'm caller-service " + version + ". Calling... " + response;
   }

}

Finally, here’s the @RestController inside the callme-service app. It also exposes a single GET /callme/ping endpoint called by the caller-service app:

@RestController
@RequestMapping("/callme")
public class CallmeController {

   private static final Logger LOGGER = LoggerFactory
      .getLogger(CallmeController.class);
   private static final String INSTANCE_ID = UUID.randomUUID().toString();
   private Random random = new Random();

   @Autowired
   Optional<BuildProperties> buildProperties;
   @Value("${VERSION}")
   private String version;

   @GetMapping("/ping")
   public String ping() {
      LOGGER.info("Ping: name={}, version={}", buildProperties.isPresent() 
         ? buildProperties.get().getName() : "callme-service", version);
      return "I'm callme-service " + version;
   }

}

Build and Deploy Images with Skaffold and Jib

Firstly, let’s take a look at the main Maven pom.xml in the project root directory. We use the latest version of Spring Boot and the latest LTS version of Java for compilation. All three app modules inherit settings from the parent pom.xml. In order to build the image with Maven we are including jib-maven-plugin. Since it is still using Java 17 in the default base image, we need to override this behavior with the <from>.<image> tag. We will declare eclipse-temurin:21-jdk-ubi9-minimal as the base image. Note that jib-maven-plugin is activated only if we enable the jib Maven profile during the build.

<modelVersion>4.0.0</modelVersion>

<parent>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-parent</artifactId>
  <version>3.2.0</version>
  <relativePath />
</parent>

<groupId>pl.piomin.services</groupId>
<artifactId>sample-istio-services</artifactId>
<version>1.1.0</version>
<packaging>pom</packaging>

<properties>
  <java.version>21</java.version>
</properties>

<modules>
  <module>caller-service</module>
  <module>callme-service</module>
  <module>first-service</module>
</modules>

<profiles>
  <profile>
    <id>jib</id>
    <build>
      <plugins>
        <plugin>
          <groupId>com.google.cloud.tools</groupId>
          <artifactId>jib-maven-plugin</artifactId>
          <version>3.4.0</version>
          <configuration>
            <from>
              <image>eclipse-temurin:21-jdk-ubi9-minimal</image>
            </from>
          </configuration>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>

Now, let’s take a look at the main skaffold.yaml file. Skaffold builds the image using Jib support and deploys all three apps on Kubernetes using manifests available in the k8s/deployment.yaml file inside each app module. Skaffold disables JUnit tests for Maven and activates the jib profile. It is also able to deploy Istio objects after activating the istio Skaffold profile. However, we won’t use it today.

apiVersion: skaffold/v4beta5
kind: Config
metadata:
  name: simple-istio-services
build:
  artifacts:
    - image: piomin/first-service
      jib:
        project: first-service
        args:
          - -Pjib
          - -DskipTests
    - image: piomin/caller-service
      jib:
        project: caller-service
        args:
          - -Pjib
          - -DskipTests
    - image: piomin/callme-service
      jib:
        project: callme-service
        args:
          - -Pjib
          - -DskipTests
  tagPolicy:
    gitCommit: {}
manifests:
  rawYaml:
    - '*/k8s/deployment.yaml'
deploy:
  kubectl: {}
profiles:
  - name: istio
    manifests:
      rawYaml:
        - k8s/istio-*.yaml
        - '*/k8s/deployment-versions.yaml'
        - '*/k8s/istio-*.yaml'

Here’s the typical Deployment for our apps. The app is running on the 8080 port.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: first-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: first-service
  template:
    metadata:
      labels:
        app: first-service
    spec:
      containers:
        - name: first-service
          image: piomin/first-service
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          env:
            - name: VERSION
              value: "v1"

For testing purposes, we need to expose the first-service outside of the Kind cluster. In order to do that, we will use the Kubernetes NodePort Service. Our app will be available under the 30000 port.

apiVersion: v1
kind: Service
metadata:
  name: first-service
  labels:
    app: first-service
spec:
  type: NodePort
  ports:
  - port: 8080
    name: http
    nodePort: 30000
  selector:
    app: first-service

Note that all other Kubernetes services (“caller-service” and “callme-service”) are exposed only internally using a default ClusterIP type.
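Such an internal Service omits the nodePort and keeps the default type. For callme-service it could look like the following sketch (assuming the same app label and 8080 port used throughout this repository; the actual manifest lives in the app's k8s directory):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: callme-service
  labels:
    app: callme-service
spec:
  # ClusterIP makes the service reachable only from inside the cluster
  type: ClusterIP
  ports:
    - port: 8080
      name: http
  selector:
    app: callme-service
```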

How It Works

In this section, we will discuss how we would run the whole process locally. Of course, our goal is to configure it as the CircleCI pipeline. In order to expose the Kubernetes Service outside Kind, we need to define the extraPortMappings section in the configuration manifest. As you probably remember, we are exposing our app under the 30000 port. The following file is available in the repository under the k8s/kind-cluster-test.yaml path:

apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 30000
        hostPort: 30000
        listenAddress: "0.0.0.0"
        protocol: tcp

Assuming we already installed kind CLI on our machine, we need to execute the following command to create a new cluster:

$ kind create cluster --name c1 --config k8s/kind-cluster-test.yaml

You should have the same result as visible on my screen:

We have a single-node Kind cluster ready. There is a single c1-control-plane container running on Docker. As you can see, it exposes the 30000 port outside of the cluster:

The Kubernetes context is automatically switched to kind-c1. So now, we just need to run the following command from the repository root directory to build and deploy the apps:

$ skaffold run

If you see a similar output in the skaffold run logs, it means that everything works fine.

kubernetes-circleci-skaffold

We can verify a list of Kubernetes services. The first-service is exposed under the 30000 port as expected.

$ kubectl get svc
NAME             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
caller-service   ClusterIP   10.96.47.193   <none>        8080/TCP         2m24s
callme-service   ClusterIP   10.96.98.53    <none>        8080/TCP         2m24s
first-service    NodePort    10.96.241.11   <none>        8080:30000/TCP   2m24s

Assuming you have already installed the Grafana k6 tool locally, you may run load tests using the following command:

$ k6 run first-service/src/test/resources/k6/load-test.js

That’s all. Now, let’s define the same actions with the CircleCI workflow.

Test Kubernetes Deployment with the CircleCI Workflow

The CircleCI config.yml file should be placed in the .circleci directory. We are doing two things in our pipeline. In the first step, we are executing Maven unit tests without the Kubernetes cluster. That’s why we need a standard executor with OpenJDK 21 and the maven orb. In order to run Kind during the CircleCI build, we need to have access to the Docker daemon. Therefore, we use the latest version of the ubuntu-2204 machine.

version: 2.1

orbs:
  maven: circleci/maven@1.4.1

executors:
  jdk:
    docker:
      - image: 'cimg/openjdk:21.0'
  machine_executor_amd64:
    machine:
      image: ubuntu-2204:2023.10.1
    environment:
      architecture: "amd64"
      platform: "linux/amd64"

After that, we can proceed to the job declaration. The name of our job is deploy-k8s. It uses the already-defined machine executor. Let’s discuss the required steps after running a standard checkout command:

  1. We need to install the kubectl CLI and copy it to the /usr/local/bin directory. Skaffold uses kubectl to interact with the Kubernetes cluster.
  2. After that, we have to install the skaffold CLI.
  3. Our job also requires the kind CLI to be able to create or delete Kind clusters on Docker…
  4. … and the Grafana k6 CLI to run load tests against the app deployed on the cluster.
  5. There is a good chance that this step won’t be required once CircleCI releases a new version of the ubuntu-2204 machine (probably 2024.1.1 according to the release strategy). For now, ubuntu-2204 provides OpenJDK 17, so we need to install OpenJDK 21 to successfully build the app from the source code.
  6. After installing all the required tools, we can create a new Kubernetes cluster with the kind create cluster command.
  7. Once the cluster is ready, we can deploy our apps using the skaffold run command.
  8. Once the apps are running on the cluster, we can proceed to the test phase. We are running the test defined inside the first-service/src/test/resources/k6/load-test.js file.
  9. After doing all the required steps, it is important to remove the Kind cluster.

jobs:
  deploy-k8s:
    executor: machine_executor_amd64
    steps:
      - checkout
      - run: # (1)
          name: Install Kubectl
          command: |
            curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
            chmod +x kubectl
            sudo mv ./kubectl /usr/local/bin/kubectl
      - run: # (2)
          name: Install Skaffold
          command: |
            curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64
            chmod +x skaffold
            sudo mv skaffold /usr/local/bin
      - run: # (3)
          name: Install Kind
          command: |
            [ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64
            chmod +x ./kind
            sudo mv ./kind /usr/local/bin/kind
      - run: # (4)
          name: Install Grafana K6
          command: |
            sudo gpg -k
            sudo gpg --no-default-keyring --keyring /usr/share/keyrings/k6-archive-keyring.gpg --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys C5AD17C747E3415A3642D57D77C6C491D6AC1D69
            echo "deb [signed-by=/usr/share/keyrings/k6-archive-keyring.gpg] https://dl.k6.io/deb stable main" | sudo tee /etc/apt/sources.list.d/k6.list
            sudo apt-get update
            sudo apt-get install k6
      - run: # (5)
          name: Install OpenJDK 21
          command: |
            java -version
            sudo apt-get update && sudo apt-get install openjdk-21-jdk
            sudo update-alternatives --set java /usr/lib/jvm/java-21-openjdk-amd64/bin/java
            sudo update-alternatives --set javac /usr/lib/jvm/java-21-openjdk-amd64/bin/javac
            java -version
            export JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64
      - run: # (6)
          name: Create Kind Cluster
          command: |
            kind create cluster --name c1 --config k8s/kind-cluster-test.yaml
      - run: # (7)
          name: Deploy to K8s
          command: |
            export JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64
            skaffold run
      - run: # (8)
          name: Run K6 Test
          command: |
            kubectl get svc
            k6 run first-service/src/test/resources/k6/load-test.js
      - run: # (9)
          name: Delete Kind Cluster
          command: |
            kind delete cluster --name c1

Here’s the definition of our load test. It has to be written in JavaScript. It defines thresholds such as the maximum rate of failed requests and the maximum response time for 95% of requests. As you can see, we are testing the http://localhost:30000/first/ping endpoint:

import { sleep } from 'k6';
import http from 'k6/http';

export const options = {
  duration: '60s',
  vus: 10,
  thresholds: {
    http_req_failed: ['rate<0.25'],
    http_req_duration: ['p(95)<1000'],
  },
};

export default function () {
  http.get('http://localhost:30000/first/ping');
  sleep(2);
}
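To make those two thresholds concrete, here is a minimal Java sketch of the checks k6 performs at the end of a run, using made-up sample numbers (k6’s actual percentile calculation may differ slightly; this is only an illustration of the rules `rate<0.25` and `p(95)<1000`):

```java
import java.util.Arrays;

// Illustrative check of the two k6 thresholds, with made-up numbers.
public class ThresholdCheck {

    // p(95): the value below which 95% of the sorted samples fall
    static double p95(double[] durationsMs) {
        double[] sorted = durationsMs.clone();
        Arrays.sort(sorted);
        int idx = (int) Math.ceil(0.95 * sorted.length) - 1;
        return sorted[idx];
    }

    public static void main(String[] args) {
        double[] durations = {120, 150, 180, 200, 220, 250, 300, 350, 400, 950};
        double failedRate = 3.0 / 100; // 3 failed requests out of 100
        System.out.println("http_req_failed ok: " + (failedRate < 0.25));
        System.out.println("http_req_duration ok: " + (p95(durations) < 1000));
    }
}
```

With these sample values both thresholds pass, so the pipeline step would succeed.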

Finally, the last part of the CircleCI config file defines the pipeline workflow. In the first step, we run tests with Maven. After that, we proceed to the deploy-k8s job.

workflows:
  build-and-deploy:
    jobs:
      - maven/test:
          name: test
          executor: jdk
      - deploy-k8s:
          requires:
            - test

Once we push a change to the sample Git repository, we trigger a new CircleCI build. You can verify it yourself here, on my CircleCI project page.

As you can see, all the pipeline steps finished successfully.

kubernetes-circleci-build

We can display logs for every single step. Here are the logs from the k6 load test step.

There were some errors during the warm-up. However, the test shows that our scenario works on the Kubernetes cluster.

Final Thoughts

CircleCI is one of the most popular CI/CD platforms. Personally, I’m using it for running builds and tests for all my demo repositories on GitHub. For the sample projects dedicated to the Kubernetes cluster, I want to verify such steps as building images with Jib, Kubernetes deployment scripts, or Skaffold configuration. This article shows how to easily perform such tests with CircleCI and Kubernetes cluster running on Kind. Hope it helps 🙂

The post Kubernetes Testing with CircleCI, Kind, and Skaffold appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2023/11/28/kubernetes-testing-with-circleci-kind-and-skaffold/feed/ 0 14706
Local Application Development on Kubernetes with Gefyra https://piotrminkowski.com/2023/09/01/local-application-development-on-kubernetes-with-gefyra/ https://piotrminkowski.com/2023/09/01/local-application-development-on-kubernetes-with-gefyra/#comments Fri, 01 Sep 2023 12:13:48 +0000 https://piotrminkowski.com/?p=14479 In this article, you will learn how to simplify and speed up your local application development on Kubernetes with Gefyra. Gefyra provides several useful features for developers. First of all, it allows to run containers and interact with internal services on an external Kubernetes cluster. Moreover, we can overlay Kubernetes cluster-internal services with the container […]

The post Local Application Development on Kubernetes with Gefyra appeared first on Piotr's TechBlog.

]]>
In this article, you will learn how to simplify and speed up your local application development on Kubernetes with Gefyra. Gefyra provides several useful features for developers. First of all, it allows us to run containers and interact with internal services on an external Kubernetes cluster. Moreover, we can overlay Kubernetes cluster-internal services with a container running on the local Docker. Thanks to that, multiple developers may leverage a single development cluster at the same time.

If you are looking for similar articles in the area of Kubernetes app development you can read my post about Telepresence and Skaffold. Gefyra is an alternative to Telepresence. However, there are some significant differences between those two tools. Gefyra comes with Docker as a required dependency, while with Telepresence, Docker is optional. On the other hand, Telepresence uses a sidecar pattern to inject a proxy container that intercepts the traffic, whereas Gefyra just replaces the image with the “carrier” image. You can find more details in the docs. Enough with the theory, let’s get to practice.

Prerequisites

In order to start the exercise, we need to have a running Kubernetes cluster. It can be a local instance or a remote cluster managed by the cloud provider. In this exercise, I’m using Kubernetes on the Docker Desktop.

$ kubectx -c
docker-desktop

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. The code used in this article is available in the dev branch. Then you should just follow my instructions 🙂

Install Gefyra

In the first step, we need to install the gefyra CLI. There are the installation instructions for the different environments in the docs. Once you install the CLI you can verify it with the following command:

$ gefyra version
[INFO] Gefyra client version: 1.1.2

After that, we can install Gefyra on our Kubernetes cluster. Here’s the command for installing on Docker Desktop Kubernetes:

$ gefyra up --host=kubernetes.docker.internal

It will install Gefyra using the operator. Let’s verify a list of running pods in the gefyra namespace:

$  kubectl get po -n gefyra
NAME                               READY   STATUS    RESTARTS   AGE
gefyra-operator-7ff447866b-7gzkd   1/1     Running   0          1h
gefyra-stowaway-bb96bccfd-xg7ds    1/1     Running   0          1h

If you see the running pods, it means that the tool has been successfully installed. Now, we can use Gefyra in our app development on Kubernetes.

Use Case on Kubernetes for Gefyra

We will use exactly the same set of apps and the use case as in the article about Telepresence and Skaffold. Firstly, let’s analyze that case. There are three microservices: first-service, caller-service and callme-service. All of them expose a single REST endpoint GET /ping, which returns basic information about each microservice. In order to create the applications, I’m using the Spring Boot framework. Our architecture is visible in the picture below. The first-service is calling the endpoint exposed by the caller-service. Then the caller-service is calling the endpoint exposed by the callme-service. Of course, we are going to deploy all the microservices on the Kubernetes cluster.

Now, let’s assume we are implementing a new version of the caller-service. We want to easily test with two other apps running on the cluster. Therefore, our goal is to forward the traffic that is sent to the caller-service on the Kubernetes cluster to our local instance running on our Docker. On the other hand, the local instance of the caller-service should call the endpoint exposed by the instance of the callme-service running on the Kubernetes cluster.

kubernetes-gefyra-arch

Build and Deploy Apps with Skaffold and Jib

Before we start development of the new version of the caller-service we will deploy all three sample apps. To simplify the process we will use Skaffold and the Jib Maven Plugin. Thanks to that, you can build and deploy all the apps using a single command. Here’s the configuration of Skaffold in the repository root directory:

apiVersion: skaffold/v4beta5
kind: Config
metadata:
  name: simple-istio-services
build:
  artifacts:
    - image: piomin/first-service
      jib:
        project: first-service
        args:
          - -Pjib
          - -DskipTests
    - image: piomin/caller-service
      jib:
        project: caller-service
        args:
          - -Pjib
          - -DskipTests
    - image: piomin/callme-service
      jib:
        project: callme-service
        args:
          - -Pjib
          - -DskipTests
  tagPolicy:
    gitCommit: {}
manifests:
  rawYaml:
    - '*/k8s/deployment.yaml'
deploy:
  kubectl: {}

For more details about the deployment process, you may refer once again to my previous article. We will deploy apps in the demo-1 namespace. Here’s the skaffold command used for that:

$ skaffold run --tail -n demo-1

Once you run the command you will deploy all apps and see their logs in the console. These are very simple Spring Boot apps, which just expose a single REST endpoint and print a log message after receiving the request. Here’s the @RestController of callme-service:

@RestController
@RequestMapping("/callme")
public class CallmeController {

   private static final Logger LOGGER = 
       LoggerFactory.getLogger(CallmeController.class);

   @Autowired
   BuildProperties buildProperties;
   @Value("${VERSION}")
   private String version;

   @GetMapping("/ping")
   public String ping() {
      LOGGER.info("Ping: name={}, version={}", 
         buildProperties.getName(), version);
      return "I'm callme-service " + version;
   }
}

And here’s the controller of caller-service. We will modify it during our development. It calls the endpoint exposed by the callme-service using its internal Kubernetes address http://callme-service:8080.

@RestController
@RequestMapping("/caller")
public class CallerController {

   private static final Logger LOGGER = 
      LoggerFactory.getLogger(CallerController.class);

   @Autowired
   BuildProperties buildProperties;
   @Autowired
   RestTemplate restTemplate;
   @Value("${VERSION}")
   private String version;

   @GetMapping("/ping")
   public String ping() {
      LOGGER.info("Ping: name={}, version={}", 
         buildProperties.getName(), version);
      String response = restTemplate
         .getForObject("http://callme-service:8080/callme/ping", String.class);
      LOGGER.info("Calling: response={}", response);
      return "I'm caller-service " + version + ". Calling... " + response;
   }
}

Here’s a list of deployed apps in the demo-1 namespace:

$ kubectl get deploy -n demo-1              
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
caller-service   1/1     1            1           68m
callme-service   1/1     1            1           68m
first-service    1/1     1            1           68m

Development on Kubernetes with Gefyra

Connect to services running on Kubernetes

Now, I will change the code in the CallerController class. Here’s the latest development version:

kubernetes-gefyra-dev-code
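The change itself lives in the screenshot above, so here is a minimal, self-contained Java sketch of the modified ping() logic. The class name and the stubbed remote call are assumptions; the message format is inferred from the curl output shown later in this article, and the real controller of course keeps its Spring annotations and RestTemplate call:

```java
// Hypothetical sketch of the modified ping() logic in CallerController.
// The remote call to callme-service is stubbed so the snippet runs standalone.
public class CallerPingSketch {

    static final String VERSION = "v2"; // in Gefyra, injected via --env VERSION=v2

    // Stub for the call to callme-service running on the cluster
    static String callCallmeOnCluster() {
        return "I'm callme-service v1";
    }

    static String ping() {
        String response = callCallmeOnCluster();
        return "I'm a local caller-service " + VERSION
                + ". Calling on k8s... " + response;
    }

    public static void main(String[] args) {
        System.out.println(ping());
    }
}
```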

Let’s build the app on the local Docker daemon. We will leverage the Jib Maven plugin once again. We need to go to the caller-service directory and build the image using the jib goal.

$ cd caller-service
$ mvn clean package -DskipTests -Pjib jib:dockerBuild

Here’s the result. The image is available on the local Docker daemon as caller-service:1.1.0.

After that, we may run the container with the app locally using the gefyra command. We use several parameters in the command visible below. Firstly, we need to set the Docker image name using the -i parameter. We simulate running the app in the demo-1 Kubernetes namespace with the -n option. Then, we set the VERSION environment variable used by the app to v2 and expose the app container port outside as 8090.

$ gefyra run --rm -i caller-service:1.1.0 \
    -n demo-1 \
    -N caller-service \
    --env VERSION=v2 \
    --expose 8090:8080

Gefyra starts our dev container on the local Docker:

$ docker ps -l
CONTAINER ID   IMAGE                  COMMAND                  CREATED              STATUS              PORTS                    NAMES
7fec52bed474   caller-service:1.1.0   "java -cp @/app/jib-…"   About a minute ago   Up About a minute   0.0.0.0:8090->8080/tcp   caller-service

Now, let’s try to call the endpoint exposed under the local port 8090:

$ curl http://localhost:8090/caller/ping
I'm a local caller-service v2. Calling on k8s... I'm callme-service v1

Here are the logs from our local container. As you see, it successfully connected to the callme-service app running on the Kubernetes cluster:

kubernetes-gefyra-docker-logs

Let’s switch to the window with the skaffold run --tail command. It displays the logs for our three apps running on Kubernetes. As expected, there are no logs for the caller-service pod since traffic was forwarded to the local container.

kubernetes-gefyra-skaffold-logs

Intercept the traffic sent to Kubernetes

Now, let’s do another try. This time, we will call the first-service running on Kubernetes. In order to do that, we will enable port-forward for the default port.

$ kubectl port-forward svc/first-service -n demo-1 8091:8080

We can call the first-service running on Kubernetes using the local port 8091. As you can see, all the calls are propagated inside the Kubernetes cluster since the caller-service version is v1.

$ curl http://localhost:8091/first/ping 
I'm first-service v1. Calling... I'm caller-service v1. Calling... I'm callme-service v1

Just to make sure, let’s switch to the logs printed by Skaffold:

In order to intercept the traffic to a container running on Kubernetes and send it to the development container, we need to run the gefyra bridge command. In that command, we have to set the name of the container running in Gefyra using the -N parameter (it was previously set in the gefyra run command). The command will intercept the traffic sent to the caller-service pod (--target parameter) from the demo-1 namespace (-n parameter).

$ gefyra bridge -N caller-service \
    -n demo-1 \
    --port 8080:8080 \
    --target deploy/caller-service/caller-service

You should have a similar output if the bridge has been successfully established:

Let’s call the first-service via the forwarded port once again. Pay attention to the number of the caller-service version.

$ curl http://localhost:8091/first/ping
I'm first-service v1. Calling... I'm a local caller-service v2. Calling on k8s... I'm callme-service v1

Let’s double-check the logs. Here are the logs from Kubernetes. As you can see, there are no caller-service logs, just entries for the first-service and callme-service.

Of course, the request is forwarded to the caller-service running on the local Docker, and then the caller-service invokes the endpoint exposed by the callme-service running on the cluster.

Once we finish the development we may remove all the bridges:

$ gefyra unbridge -A

The post Local Application Development on Kubernetes with Gefyra appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2023/09/01/local-application-development-on-kubernetes-with-gefyra/feed/ 2 14479
Kubernetes Multicluster Load Balancing with Skupper https://piotrminkowski.com/2023/08/04/kubernetes-multicluster-load-balancing-with-skupper/ https://piotrminkowski.com/2023/08/04/kubernetes-multicluster-load-balancing-with-skupper/#respond Fri, 04 Aug 2023 00:03:25 +0000 https://piotrminkowski.com/?p=14372 In this article, you will learn how to leverage Skupper for load balancing between app instances running on several Kubernetes clusters. We will create some Kubernetes clusters locally with Kind. Then we will connect them using Skupper. Skupper cluster interconnection works in Layer 7 (application layer). It means there is no need to create any […]

The post Kubernetes Multicluster Load Balancing with Skupper appeared first on Piotr's TechBlog.

]]>
In this article, you will learn how to leverage Skupper for load balancing between app instances running on several Kubernetes clusters. We will create some Kubernetes clusters locally with Kind. Then we will connect them using Skupper.

Skupper cluster interconnection works in Layer 7 (the application layer). It means there is no need to create any VPNs or special firewall rules. Skupper works according to the Virtual Application Network (VAN) approach. Thanks to that, it can connect different Kubernetes clusters and guarantee communication between services without exposing them to the Internet. You can read more about the concept behind it in the Skupper docs.

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that, you need to clone my GitHub repository. This time we will do almost everything using a command-line tool (the skupper CLI). The repository contains just a sample Spring Boot app with Kubernetes Deployment manifests and a Skaffold config. You will find here instructions on how to deploy the app with Skaffold, but you can as well use another tool. As always, follow my instructions for the details 🙂

Create Kubernetes clusters with Kind

In the first step, we will create three Kubernetes clusters with Kind. We need to give them different names: c1, c2 and c3. Accordingly, they are available under the context names: kind-c1, kind-c2 and kind-c3.

$ kind create cluster --name c1
$ kind create cluster --name c2
$ kind create cluster --name c3

In this exercise, we will switch between the clusters a few times. Personally, I’m using the kubectx to switch between different Kubernetes contexts and kubens to switch between the namespaces.

By default, Skupper exposes itself as a Kubernetes LoadBalancer Service. Therefore, we need to enable the load balancer on Kind. In order to do that, we can install MetalLB. You can find the full installation instructions in the Kind docs here. Firstly, let’s switch to the c1 cluster:

$ kubectx kind-c1

Then, we have to apply the following YAML manifest:

$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml

You should repeat the same procedure for the other two clusters: c2 and c3. However, that is not all. We also need to set up the address pool used by the load balancers. To do that, let’s first check the range of IP addresses on the Docker network used by Kind. For me it is 172.19.0.0/16 (gateway 172.19.0.1).

$ docker network inspect -f '{{.IPAM.Config}}' kind

According to the results, we need to choose the right IP address for all three Kind clusters. Then we have to create the IPAddressPool object, which contains the IPs range. Here’s the YAML manifest for the c1 cluster:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: example
  namespace: metallb-system
spec:
  addresses:
  - 172.19.255.200-172.19.255.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: empty
  namespace: metallb-system

Here’s the pool configuration for e.g. the c2 cluster. It is important that the address range should not conflict with the ranges in two other Kind clusters.

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: example
  namespace: metallb-system
spec:
  addresses:
  - 172.19.255.150-172.19.255.199
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: empty
  namespace: metallb-system

Finally, the configuration for the c3 cluster:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: example
  namespace: metallb-system
spec:
  addresses:
  - 172.19.255.100-172.19.255.149
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: empty
  namespace: metallb-system

After applying the YAML manifests with the kubectl apply -f command we can proceed to the next section.

Install Skupper on Kubernetes

We can install and manage Skupper on Kubernetes in two different ways: with CLI or through YAML manifests. Most of the examples in Skupper documentation use CLI for that, so I guess it is a preferable approach. Consequently, before we start with Kubernetes, we need to install CLI. You can find installation instructions in the Skupper docs here. Once you install it, just verify if it works with the following command:

$ skupper version

After that, we can proceed with Kubernetes clusters. We will create the same namespace interconnect inside all three clusters. To simplify our upcoming exercise we can also set a default namespace for each context (alternatively you can do it with the kubectl config set-context --current --namespace interconnect command).

$ kubectl create ns interconnect
$ kubens interconnect

Then, let’s switch to the kind-c1 cluster. We will stay in this context until the end of our exercise 🙂

$ kubectx kind-c1

Finally, we will install Skupper on our Kubernetes clusters. In order to do that, we have to execute the skupper init command. Fortunately, it allows us to set the target Kubernetes context with the -c parameter. Inside the kind-c1 cluster, we will also enable the Skupper UI dashboard (--enable-console parameter). With the Skupper console, we may e.g. visualize a traffic volume for all targets in the Skupper network.

$ skupper init --enable-console --enable-flow-collector
$ skupper init -c kind-c2
$ skupper init -c kind-c3

Let’s verify the status of the Skupper installation:

$ skupper status
$ skupper status -c kind-c2
$ skupper status -c kind-c3

Here’s the status for Skupper running in the kind-c1 cluster:

kubernetes-skupper-status

We can also display a list of running Skupper pods in the interconnect namespace:

$ kubectl get po
NAME                                          READY   STATUS    RESTARTS   AGE
skupper-prometheus-867f57b89-dc4lq            1/1     Running   0          3m36s
skupper-router-55bbb99b87-k4qn5               2/2     Running   0          3m40s
skupper-service-controller-6bf57595dd-45hvw   2/2     Running   0          3m37s

Now, our goal is to connect both the c2 and c3 Kind clusters with the c1 cluster. In the Skupper nomenclature, we have to create a link between the namespace in the source and target cluster. Before we create a link we need to generate a secret token that signifies permission to create a link. The token also carries the link details. We are generating two tokens on the target cluster. Each token is stored as a YAML file. The first of them is for the kind-c2 cluster (skupper-c2-token.yaml), and the second for the kind-c3 cluster (skupper-c3-token.yaml).

$ skupper token create skupper-c2-token.yaml
$ skupper token create skupper-c3-token.yaml

We will consider several scenarios where we create a link using different parameters. Before that, let’s deploy our sample app on the kind-c2 and kind-c3 clusters.

Running the sample app on Kubernetes with Skaffold

After cloning the sample app repository go to the main directory. You can easily build and deploy the app to both kind-c2 and kind-c3 with the following commands:

$ skaffold dev --kube-context=kind-c2
$ skaffold dev --kube-context=kind-c3

After deploying the app skaffold automatically prints all the logs as shown below. It will be helpful for the next steps in our exercise.

Our app is deployed under the sample-spring-kotlin-microservice name.

Load balancing with Skupper – scenarios

Scenario 1: the same number of pods and link cost

Let’s start with the simplest scenario. We have a single pod of our app running on each of the kind-c2 and kind-c3 clusters. In Skupper, we can also assign a cost to each link to influence the traffic flow. By default, the cost is set to 1 for a new link. In a service network, the routing algorithm attempts to use the path with the lowest total cost from the client to the target server. For now, we will leave the default value. Here’s a visualization of the first scenario:

Let’s create links to the c1 Kind cluster using the previously generated tokens.

$ skupper link create skupper-c2-token.yaml -c kind-c2
$ skupper link create skupper-c3-token.yaml -c kind-c3

If everything goes fine you should see a similar message:

We can also verify the status of links by executing the following commands:

$ skupper link status -c kind-c2
$ skupper link status -c kind-c3

It means that now the c2 and c3 Kind clusters are “working” in the same Skupper network as the c1 cluster. The next step is to expose our app running in both the c2 and c3 clusters to the c1 cluster. Skupper works at Layer 7 and, by default, it doesn’t connect apps unless we enable that feature for the particular app. In order to expose our apps to the c1 cluster we need to run the following command on both the c2 and c3 clusters.

$ skupper expose deployment/sample-spring-kotlin-microservice \
  --port 8080 -c kind-c2
$ skupper expose deployment/sample-spring-kotlin-microservice \
  --port 8080 -c kind-c3

Let’s take a look at what happened on the target (kind-c1) cluster. As you can see, Skupper created the sample-spring-kotlin-microservice Kubernetes Service that forwards traffic to the skupper-router pod. The Skupper Router is responsible for load-balancing requests across pods that are part of the Skupper network.

To simplify our exercise, we will enable port-forwarding for the Service visible above.

$ kubectl port-forward svc/sample-spring-kotlin-microservice 8080:8080

Thanks to that we don’t have to configure Kubernetes Ingress to call the service. Now, we can send some test requests over localhost, e.g. with siege.

$ siege -r 200 -c 5 http://localhost:8080/persons/1

We can easily verify that the traffic is coming to pods running on the kind-c2 and kind-c3 by looking at the logs. Alternatively, we can go to the Skupper console and see the traffic visualization:

kubernetes-skupper-diagram-first

Scenario 2: different number of pods and same link cost

In the next scenario, we won’t change anything in the Skupper network configuration. We will just run the second pod of the app in the kind-c3 cluster. So now, there is a single pod running in the kind-c2 cluster, and two pods running in the kind-c3 cluster. Here’s our architecture.

Once again, we can send some requests to the previously tested Kubernetes Service with the siege command:

$ siege -r 200 -c 5 http://localhost:8080/persons/2

Let’s take a look at traffic visualization in the Skupper dashboard. We can switch between all available pods. Here’s the diagram for the pod running in the kind-c2 cluster.

kubernetes-skupper-diagram

Here’s the same diagram for the pod running in the kind-c3 cluster. As you can see, it receives only ~50% (or even less, depending on which pod we visualize) of the traffic received by the pod in the kind-c2 cluster. That’s because there are two pods running in the kind-c3 cluster, while Skupper still balances requests across clusters equally.

Scenario 3: only one pod and different link costs

In the current scenario, there is a single pod of the app running on the c2 Kind cluster. At the same time, there are no pods on the c3 cluster (the Deployment exists but it has been scaled down to zero instances). Here’s the visualization of our scenario.

kubernetes-skupper-arch2

The important thing here is that the c3 cluster is preferred by Skupper since the link to it has a lower cost (2) than the link to the c2 cluster (4). So now, we need to remove the previous link, and then create a new one with the following commands:

$ skupper link create skupper-c2-token.yaml --cost 4 -c kind-c2
$ skupper link create skupper-c3-token.yaml --cost 2 -c kind-c3

In order to create a Skupper link once again you first need to delete the previous one with the skupper link delete link1 command. Then you have to generate new tokens with the skupper token create command as we did before.
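The preference these costs create can be illustrated with a toy Java sketch. This only demonstrates the “lowest total cost wins” rule described earlier, not Skupper’s actual routing implementation (which also takes the availability of target pods into account, as this scenario shows):

```java
import java.util.Comparator;
import java.util.List;

public class CostRoutingSketch {

    record Link(String cluster, int cost) {}

    // The routing algorithm attempts to use the path with the lowest total cost
    static String preferred(List<Link> links) {
        return links.stream()
                .min(Comparator.comparingInt(Link::cost))
                .map(Link::cluster)
                .orElseThrow();
    }

    public static void main(String[] args) {
        List<Link> links = List.of(
                new Link("kind-c2", 4),
                new Link("kind-c3", 2));
        System.out.println(preferred(links)); // kind-c3
    }
}
```

With costs 4 and 2, the link to kind-c3 is preferred, even though in this scenario the traffic still ends up on kind-c2, since kind-c3 has no running pods.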

Let’s take a look at the Skupper network status:

kubernetes-skupper-network-status

Let’s send some test requests to the exposed service. It works without any errors. Since there is only a single running pod, the whole traffic goes there:

Scenario 4: more pods in one cluster and different link costs

Finally, the last scenario in our exercise. We will use the same Skupper configuration as in Scenario 3. However, this time we will run two pods in the kind-c3 cluster.

kubernetes-skupper-arch-1

We can switch once again to the Skupper dashboard. Now, as you see, all the pods receive a very similar amount of traffic. Here’s the diagram for the pod running on the kind-c2 cluster.

kubernetes-skupper-equal-traffic

Here’s a similar diagram for the pod running on the kind-c3 cluster. After setting the link cost according to the number of pods running on each cluster, I was able to split traffic equally between all the pods across both clusters. It works. However, it is not a perfect way of load-balancing. I would expect at least an option for enabling round-robin between all the pods working in the same Skupper network. The solution presented in this scenario will work as expected unless we enable auto-scaling for the app.

Final Thoughts

Skupper introduces an interesting approach to the Kubernetes multicluster connectivity based fully on Layer 7. You can compare it to another solution based on the different layers like Submariner or Cilium cluster mesh. I described both of them in my previous articles. If you want to read more about Submariner visit the following post. If you are interested in Cilium read that article.

The post Kubernetes Multicluster Load Balancing with Skupper appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2023/08/04/kubernetes-multicluster-load-balancing-with-skupper/feed/ 0 14372
Renew Certificates on Kubernetes with Cert Manager and Reloader https://piotrminkowski.com/2022/12/02/renew-certificates-on-kubernetes-with-cert-manager-and-reloader/ https://piotrminkowski.com/2022/12/02/renew-certificates-on-kubernetes-with-cert-manager-and-reloader/#respond Fri, 02 Dec 2022 11:55:26 +0000 https://piotrminkowski.com/?p=13757 In this article, you will learn how to renew certificates in your Spring Boot apps on Kubernetes with cert-manager and Stakater Reloader. We are going to run two simple Spring Boot apps that communicate with each other over SSL. The TLS cert used in that communication will be automatically generated by Cert Manager. With Cert […]

The post Renew Certificates on Kubernetes with Cert Manager and Reloader appeared first on Piotr's TechBlog.

]]>
In this article, you will learn how to renew certificates in your Spring Boot apps on Kubernetes with cert-manager and Stakater Reloader. We are going to run two simple Spring Boot apps that communicate with each other over SSL. The TLS cert used in that communication will be automatically generated by Cert Manager. With Cert Manager we can easily rotate certs after a certain time. In order to automatically use the latest TLS certs we need to restart our apps. We can achieve it with Stakater Reloader.

Before we start, it is worth reading the following article. It shows how to use cert-manager together with Istio to create secure gateways on Kubernetes.

Source Code

If you would like to try this exercise yourself, you may always take a look at my source code. In order to do that, you need to clone my GitHub repository. Then switch to the ssl directory. You will find two Spring Boot apps: secure-callme and secure-caller. After that, you should just follow my instructions. Let’s begin.

How it works

Before we go into the technical details, let me write a little bit more about the architecture of our solution. Our challenge is pretty common. We need secure SSL/TLS communication between the services running on Kubernetes. Instead of manually generating and replacing certs inside the apps, we need an automatic approach.

Here come cert-manager and the Stakater Reloader. Cert Manager is able to generate certificates automatically, based on the provided CRD object. It also ensures the certificates are valid and up-to-date and will attempt to renew certificates before expiration. It puts all the required data inside a Kubernetes Secret. On the other hand, Stakater Reloader is able to watch for any changes in a ConfigMap or Secret. Then it performs a rolling upgrade on the pods which use that particular ConfigMap or Secret. Here is the visualization of the described architecture.

[Image: renew-certificates-kubernetes-arch]

Prerequisites

Of course, you need to have a Kubernetes cluster. In this exercise, I’m using Kubernetes on Docker Desktop. But you can just as well use any other local distribution, like Minikube or Kind, or a cloud-hosted instance. No matter which distribution you choose, you also need to have:

  1. Skaffold (optionally) – a CLI tool to simplify deploying the Spring Boot app on Kubernetes and applying all the manifests in that exercise using a single command. You can find installation instructions here
  2. Helm – used to install additional tools on Kubernetes like Stakater Reloader or cert-manager

Install Cert Manager and Stakater Reloader

In order to install both cert-manager and Reloader on Kubernetes, we will use Helm charts. We don’t need any specific settings, just the defaults. Let’s begin with cert-manager. Before installing the chart, we have to add the CRD resources for the latest version, 1.10.1:

$ kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.10.1/cert-manager.crds.yaml

Then, we need to add the jetstack chart repository:

$ helm repo add jetstack https://charts.jetstack.io

After that, we can install the chart using the following command:

$ helm install my-cert-manager jetstack/cert-manager

The same with the Stakater Reloader – first we need to add the stakater charts repository:

$ helm repo add stakater https://stakater.github.io/stakater-charts

Then, we can install the latest version of the chart:

$ helm install my-reloader stakater/reloader

In order to verify that the installation finished successfully we can display a list of running pods:

$ kubectl get po
NAME                                          READY   STATUS    RESTARTS   AGE
my-cert-manager-578884c6cf-f9ppt              1/1     Running   0          1m
my-cert-manager-cainjector-55d4cd4bb6-6mgjd   1/1     Running   0          1m
my-cert-manager-webhook-5c68bf9c8d-nz7sd      1/1     Running   0          1m
my-reloader-reloader-7566fdc68c-qj9l4         1/1     Running   0          1m

That’s all. Now we can proceed to the implementation.

HTTPS with Spring Boot

Our first app secure-callme exposes a single endpoint GET /callme over HTTP. That endpoint will be called by the secure-caller app. Here’s the @RestController implementation:

@RestController
public class SecureCallmeController {

    @GetMapping("/callme")
    public String call() {
        return "I'm `secure-callme`!";
    }

}

Now our goal is to enable HTTPS for that app and, of course, make it work properly on Kubernetes. First, we should change the default server port for the Spring Boot app to 8443. Then we have to enable SSL and provide the locations of the key stores. Additionally, we will force verification of the client’s certificate with the server.ssl.client-auth property. Here’s the configuration for our Spring Boot app inside the application.yml file.

server.port: 8443
server.ssl:
  enabled: true
  key-store: ${CERT_PATH}/keystore.jks
  key-store-password: ${PASSWORD}
  trust-store: ${CERT_PATH}/truststore.jks
  trust-store-password: ${PASSWORD}
  client-auth: NEED
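Before wiring this into Kubernetes, you can sanity-check such an SSL configuration locally with a throwaway key store generated by openssl (a sketch with assumed file names; Spring Boot also accepts the PKCS12 format via the server.ssl.key-store-type property):

```shell
# generate a throwaway self-signed key and certificate for localhost
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout tls.key -out tls.crt -subj "/CN=localhost"

# bundle them into a PKCS12 key store that Spring Boot can read
openssl pkcs12 -export -in tls.crt -inkey tls.key \
  -out keystore.p12 -passout pass:123456

# verify the store can be opened with the password
openssl pkcs12 -in keystore.p12 -passin pass:123456 -nokeys | grep "BEGIN CERTIFICATE"
```

In the exercise itself, we don’t generate the stores by hand – cert-manager produces them for us, as shown later.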

We will set the values of CERT_PATH and PASSWORD at the level of the Kubernetes Deployment. Now, let’s switch to the secure-caller implementation. We have to configure SSL on the REST client side. Since we use Spring RestTemplate for calling services, we need to customize its default behavior. Firstly, let’s include the Apache HttpClient dependency.

<dependency>
  <groupId>org.apache.httpcomponents.client5</groupId>
  <artifactId>httpclient5</artifactId>
</dependency>

Now, we will use Apache HttpClient as a low-level client for the Spring RestTemplate. We need to define a key store and trust store for the client, since the server side requires and verifies the client certificate. In order to create the RestTemplate @Bean, we use RestTemplateBuilder.

@SpringBootApplication
public class SecureCaller {

   public static void main(String[] args) {
      SpringApplication.run(SecureCaller.class, args);
   }

   @Autowired
   ClientSSLProperties clientSSLProperties;

   @Bean
   RestTemplate builder(RestTemplateBuilder builder) throws 
      GeneralSecurityException, IOException {

      final SSLContext sslContext = new SSLContextBuilder()
         .loadTrustMaterial(
            new File(clientSSLProperties.getTrustStore()),
            clientSSLProperties.getTrustStorePassword().toCharArray())
         .loadKeyMaterial(
            new File(clientSSLProperties.getKeyStore()),
            clientSSLProperties.getKeyStorePassword().toCharArray(),
            clientSSLProperties.getKeyStorePassword().toCharArray()
         )
         .build();

      final SSLConnectionSocketFactory sslSocketFactory = 
         SSLConnectionSocketFactoryBuilder.create()
                .setSslContext(sslContext)
                .build();

      final HttpClientConnectionManager cm = 
         PoolingHttpClientConnectionManagerBuilder.create()
                .setSSLSocketFactory(sslSocketFactory)
                .build();

      final HttpClient httpClient = HttpClients.custom()
         .setConnectionManager(cm)
         .evictExpiredConnections()
         .build();

      return builder
         .requestFactory(() -> 
            new HttpComponentsClientHttpRequestFactory(httpClient))
         .build();
   }
}

The client credentials are taken from configuration settings under the client.ssl key. Here is the @ConfigurationProperties class used by the RestTemplateBuilder in the previous step.

@Configuration
@ConfigurationProperties("client.ssl")
public class ClientSSLProperties {

   private String keyStore;
   private String keyStorePassword;
   private String trustStore;
   private String trustStorePassword;

   // GETTERS AND SETTERS ...

}

Here’s the configuration for the secure-caller app inside the application.yml file. The same as for secure-callme, we expose the REST endpoint over HTTPS.

server.port: 8443
server.ssl:
  enabled: true
  key-store: ${CERT_PATH}/keystore.jks
  key-store-password: ${PASSWORD}
  trust-store: ${CERT_PATH}/truststore.jks
  trust-store-password: ${PASSWORD}
  client-auth: NEED

client.url: https://${HOST}:8443/callme
client.ssl:
  key-store: ${CLIENT_CERT_PATH}/keystore.jks
  key-store-password: ${PASSWORD}
  trust-store: ${CLIENT_CERT_PATH}/truststore.jks
  trust-store-password: ${PASSWORD}

The secure-caller app calls GET /callme exposed by the secure-callme app using customized RestTemplate.

@RestController
public class SecureCallerController {

   RestTemplate restTemplate;

   @Value("${client.url}")
   String clientUrl;

   public SecureCallerController(RestTemplate restTemplate) {
      this.restTemplate = restTemplate;
   }

   @GetMapping("/caller")
   public String call() {
      return "I'm `secure-caller`! calling... " +
             restTemplate.getForObject(clientUrl, String.class);
   }

}

Generate and Renew Certificates on Kubernetes with Cert Manager

With cert-manager, we can automatically generate and renew certificates on Kubernetes. Of course, we could generate TLS/SSL certs using e.g. openssl as well and then apply them on Kubernetes. However, Cert Manager simplifies that process. It allows us to declaratively define the rules for the cert generation process. Let’s see how it works. Firstly, we need the issuer object. We can create a global issuer for the whole cluster, as shown below. It uses the simplest option – a self-signed issuer.

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: ss-clusterissuer
spec:
  selfSigned: {}

After that, we can generate certificates. Here’s the cert-manager Certificate object for the secure-callme app. There are some important things here. First of all, we can generate key stores together with a certificate and private key (1). The object refers to the ClusterIssuer created in the previous step (2). The name of the Kubernetes Service used during communication is secure-callme, so the cert needs to have that name as its CN (3). In order to enable certificate rotation, we need to set the validity time. The lowest possible value is 1 hour (4). Consequently, cert-manager will automatically renew the certificate 5 minutes before each expiration (5). It won’t rotate the private key. In order to enable that, we should set the parameter rotationPolicy to Always.

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: secure-callme-cert
spec:
  keystores: # (1)
    jks:
      passwordSecretRef:
        name: jks-password-secret
        key: password
      create: true
  issuerRef: # (2)
    name: ss-clusterissuer
    group: cert-manager.io
    kind: ClusterIssuer
  privateKey:
    algorithm: ECDSA
    size: 256
  dnsNames:
    - secure-callme
  secretName: secure-callme-cert
  commonName: secure-callme # (3)
  duration: 1h # (4)
  renewBefore: 5m  # (5)
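As mentioned above, renewal does not rotate the private key by default. To rotate the key on every renewal as well, the privateKey section of the Certificate can be extended with the rotationPolicy parameter:

```yaml
  privateKey:
    algorithm: ECDSA
    size: 256
    rotationPolicy: Always
```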

The Certificate object for secure-caller is very similar. The only difference is in the CN and dnsNames fields. We will use the port-forward option during the test, so I’ll set the domain name to localhost (1).

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: secure-caller-cert
spec:
  keystores:
    jks:
      passwordSecretRef:
        name: jks-password-secret
        key: password
      create: true
  issuerRef:
    name: ss-clusterissuer
    group: cert-manager.io
    kind: ClusterIssuer
  privateKey:
    algorithm: ECDSA
    size: 256
  dnsNames:
    - localhost
    - secure-caller
  secretName: secure-caller-cert
  commonName: localhost # (1)
  duration: 1h
  renewBefore: 5m

After applying both manifests we can display a list of Certificates. Each of them is related to the Secret with the same name:

$ kubectl get certificate
NAME                 READY   SECRET               AGE
secure-caller-cert   True    secure-caller-cert   1m
secure-callme-cert   True    secure-callme-cert   1m

Here are the details of the secure-callme-cert Secret. It contains the key store and trust store in the JKS format. We will use both of them in the Spring Boot SSL configuration (server.ssl.trust-store and server.ssl.key-store properties). There is also a certificate (tls.crt), a private key (tls.key), and CA (ca.crt).

$ kubectl describe secret secure-callme-cert
Name:         secure-callme-cert
Namespace:    default
Labels:       <none>
Annotations:  cert-manager.io/alt-names: secure-callme
              cert-manager.io/certificate-name: secure-callme-cert
              cert-manager.io/common-name: secure-callme
              cert-manager.io/ip-sans: 
              cert-manager.io/issuer-group: cert-manager.io
              cert-manager.io/issuer-kind: ClusterIssuer
              cert-manager.io/issuer-name: ss-clusterissuer
              cert-manager.io/uri-sans: 

Type:  kubernetes.io/tls

Data
====
ca.crt:          550 bytes
keystore.jks:    1029 bytes
tls.crt:         550 bytes
tls.key:         227 bytes
truststore.jks:  422 bytes

Deploy and Reload Apps on Kubernetes

Since we have already prepared all the required components and objects, we may proceed with the deployment of our apps. As I mentioned in the “Prerequisites” section we will use Skaffold for building and deploying apps on the local cluster. Let’s begin with the secure-callme app.

First of all, we need to reload the app each time the secure-callme-cert changes. It occurs once per hour, when cert-manager renews the TLS certificate. In order to enable the automatic restart of the pod with Stakater Reloader, we need to annotate the Deployment with secret.reloader.stakater.com/reload (1). The annotation should contain the name of the Secret that triggers the app reload. Of course, we also need to mount the key store and trust store files (3) and set the mount path for the Spring Boot app, available under the CERT_PATH env variable (2). We are mounting the whole secure-callme-cert Secret (4).

apiVersion: apps/v1
kind: Deployment
metadata:
  name: secure-callme
  annotations:
    # (1)
    secret.reloader.stakater.com/reload: "secure-callme-cert" 
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: secure-callme
  template:
    metadata:
      labels:
        app.kubernetes.io/name: secure-callme
    spec:
      containers:
        - image: piomin/secure-callme
          name: secure-callme
          ports:
            - containerPort: 8443
              name: https
          env:
            - name: PASSWORD
              valueFrom:
                secretKeyRef:
                  key: password
                  name: jks-password-secret
            - name: CERT_PATH # (2)
              value: /opt/secret
          volumeMounts:
            - mountPath: /opt/secret # (3)
              name: cert
      volumes:
        - name: cert
          secret:
            secretName: secure-callme-cert # (4)

The password to the key store files is available inside the jks-password-secret:

kind: Secret
apiVersion: v1
metadata:
  name: jks-password-secret
data:
  password: MTIzNDU2
type: Opaque
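The password value in the Secret is Base64-encoded, as Kubernetes requires for the data section. You can decode and encode it with the base64 tool – MTIzNDU2 is simply 123456:

```shell
# decode the password stored in the jks-password-secret
echo -n MTIzNDU2 | base64 --decode

# encode your own password for use in a Secret manifest
echo -n 123456 | base64
```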

There is also the Kubernetes Service related to the app:

apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: secure-callme
  name: secure-callme
spec:
  ports:
    - name: https
      port: 8443
      targetPort: 8443
  selector:
    app.kubernetes.io/name: secure-callme
  type: ClusterIP

Now, go to the secure-callme directory and just run the following command:

$ skaffold run
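The skaffold run command relies on a skaffold.yaml file in the app directory. A minimal sketch could look like the one below (the Jib build and the manifest path are assumptions here – check the repository for the exact configuration):

```yaml
apiVersion: skaffold/v2beta29
kind: Config
build:
  artifacts:
    - image: piomin/secure-callme
      jib: {}
deploy:
  kubectl:
    manifests:
      - k8s/*.yaml
```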

The Deployment manifest of the secure-caller app is a little bit more complicated. The same as before, we need to reload the app on a Secret change (1). However, this app uses two secrets. The first of them contains the server certs (secure-caller-cert), while the second contains the certs for communication with secure-callme. Consequently, we are mounting two secrets (5) and setting the paths with the server key stores (2) and the client key stores (3).

apiVersion: apps/v1
kind: Deployment
metadata:
  name: secure-caller
  # (1)
  annotations:
    secret.reloader.stakater.com/reload: "secure-caller-cert,secure-callme-cert"
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: secure-caller
  template:
    metadata:
      labels:
        app.kubernetes.io/name: secure-caller
    spec:
      containers:
        - image: piomin/secure-caller
          name: secure-caller
          ports:
            - containerPort: 8443
              name: https
          env:
            - name: PASSWORD
              valueFrom:
                secretKeyRef:
                  key: password
                  name: jks-password-secret
            # (2)
            - name: CERT_PATH
              value: /opt/secret
            # (3)
            - name: CLIENT_CERT_PATH
              value: /opt/client-secret
            - name: HOST
              value: secure-callme
          volumeMounts:
            - mountPath: /opt/secret
              name: cert
            - mountPath: /opt/client-secret
              name: client-cert
      volumes:
        # (5)
        - name: cert
          secret:
            secretName: secure-caller-cert
        - name: client-cert
          secret:
            secretName: secure-callme-cert

Then, go to the secure-caller directory and deploy the app. This time we enable port-forward to easily test the app locally.

$ skaffold run --port-forward

Let’s display a final list of all running apps. We have cert-manager components, the Stakater reloader, and our two sample Spring Boot apps.

$ kubectl get deploy 
NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
my-cert-manager              1/1     1            1           1h
my-cert-manager-cainjector   1/1     1            1           1h
my-cert-manager-webhook      1/1     1            1           1h
my-reloader-reloader         1/1     1            1           1h
secure-caller                1/1     1            1           1h
secure-callme                1/1     1            1           1h

Testing Certificate Renewal on Kubernetes

The secure-caller app is available on the 8443 port locally, while the secure-callme app is available inside the cluster under the service name secure-callme. Let’s make a test call. Firstly, we need to download certificates and private keys stored on Kubernetes:

$ kubectl get secret secure-caller-cert \
  -o jsonpath \
  --template '{.data.tls\.key}' | base64 --decode > tls.key

$ kubectl get secret secure-caller-cert \
  -o jsonpath \
  --template '{.data.tls\.crt}' | base64 --decode > tls.crt

$ kubectl get secret secure-caller-cert \
  -o jsonpath \
  --template '{.data.ca\.crt}' | base64 --decode > ca.crt

Now, we can call the GET /caller endpoint using the following curl command. Under the hood, secure-caller calls the GET /callme endpoint exposed by secure-callme, also over HTTPS. If you did everything according to the instructions, you should get the same result as below.

$ curl https://localhost:8443/caller \
  --key tls.key \
  --cert tls.crt \
  --cacert ca.crt 
I'm `secure-caller`! calling... I'm `secure-callme`!

Our certificate is valid for one hour.
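You can verify the validity window by inspecting the downloaded tls.crt with openssl. The snippet below illustrates it on a locally generated short-lived ECDSA certificate (so it runs without a cluster); against the real file, you only need the last command:

```shell
# (illustration) create a short-lived ECDSA cert, similar in shape to the one issued by cert-manager
openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:prime256v1 -nodes \
  -days 1 -keyout tls.key -out tls.crt -subj "/CN=localhost"

# print the validity window and subject of the certificate
openssl x509 -in tls.crt -noout -dates -subject
```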

Let’s see what happens after one hour. A new certificate has already been generated and both our apps have been reloaded. Now, if you try to call the same endpoint as before using the old certificates, you will see an SSL error.

Now, if you repeat the first step in this section, it should work properly again. We just need to download the new certs to make a test call. The internal communication over SSL works automatically after a reload of the apps.

Final Thoughts

Of course, there are some other ways of achieving the same result as in our exercise. For example, you can use a service mesh tool like Istio and enable mutual TLS for internal communication. You will still need to handle the automatic renewal of certificates somehow in that scenario. Cert Manager may be replaced with some other tools, like HashiCorp Vault, which provides features for generating SSL/TLS certificates. You can as well use Spring Cloud Kubernetes with Spring Boot for watching for changes in secrets and reloading them without restarting the app. However, the solution used to renew certificates on Kubernetes presented in this article is simple and will work for any type of app.


]]>
Continuous Development on Kubernetes with GitOps Approach https://piotrminkowski.com/2022/06/06/continuous-development-on-kubernetes-with-gitops-approach/ https://piotrminkowski.com/2022/06/06/continuous-development-on-kubernetes-with-gitops-approach/#respond Mon, 06 Jun 2022 08:53:30 +0000 https://piotrminkowski.com/?p=11609 In this article, you will learn how to design your apps continuous development process on Kubernetes with the GitOps approach. In order to deliver the application to stage or production, we should use a standard CI/CD process and tools. It requires separation between the source code and the configuration code. It may result in using […]

The post Continuous Development on Kubernetes with GitOps Approach appeared first on Piotr's TechBlog.

]]>
In this article, you will learn how to design your app’s continuous development process on Kubernetes with the GitOps approach. In order to deliver the application to stage or production, we should use a standard CI/CD process and tools. It requires separation between the source code and the configuration code. It may result in using dedicated tools for the building phase and for the deployment phase. We are talking about an approach similar to the one described in the following article, where we use Tekton as a CI tool and Argo CD as a delivery tool. With that approach, each time you want to release a new version of the image, you should commit it to the repository with the configuration. Then, the tool responsible for the CD process applies the changes to the cluster. Consequently, it performs a deployment of the new version.

I’ll describe here four possible approaches.

If you would like to try this exercise yourself, you may always take a look at my source code. In order to do that, you need to clone my GitHub repository. There is a sample Spring Boot application there. You can also access the following repository with the configuration for that app. Go to the apps/simple for a plain Deployment object example, and to the apps/helm for the Helm chart. After that, you should just follow my instructions. Let’s begin.

Approach 1: Use the same tag and make a rollout

The first approach is probably the simplest way to achieve our goal. However, not the best one 🙂 Let’s start with the Kubernetes Deployment. That fragment of YAML is a part of the configuration, so Argo CD manages it. There are two important things here. We use dev-latest as the image tag (1). It won’t be changed when we are deploying a new version of the image. We also need to pull the latest version of the image each time we do a Deployment rollout. Therefore, we set imagePullPolicy to Always (2).

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-spring-kotlin-microservice
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: sample-spring-kotlin-microservice
  template:
    metadata:
      labels:
        app.kubernetes.io/name: sample-spring-kotlin-microservice
    spec:
      containers:
      - image: piomin/sample-spring-kotlin:dev-latest # (1)
        name: sample-spring-kotlin-microservice
        ports:
        - containerPort: 8080
          name: http
        imagePullPolicy: Always # (2)

Let’s assume we have a pipeline, e.g. in GitLab CI, triggered by every push to the dev branch. We use Maven to build an image from the source code (using jib-maven-plugin) and then push it to the container registry. In the last step (reload-app) we restart the application in order to run the latest version of the image tagged with dev-latest. In order to do that, we should execute the kubectl rollout restart deploy sample-spring-kotlin-microservice command.

image: maven:latest

stages:
  - compile
  - image-build
  - reload-app

build:
  stage: compile
  script:
    - mvn compile

image-build:
  stage: image-build
  script:
    - mvn -s .m2/settings.xml compile jib:build

reload-app:
  image: bitnami/kubectl:latest
  stage: reload-app
  only:
    - dev
  script:
    - kubectl rollout restart deploy sample-spring-kotlin-microservice -n dev

We still have all the versions available in the registry. But just a single one is tagged as dev-latest. Of course, we can use any other convention of image tagging, e.g. based on timestamp or git commit id.
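For example, a pipeline could derive an immutable tag from the current Git commit, falling back to a timestamp (a generic sketch, not tied to any particular CI server):

```shell
# use the short commit hash as the image tag; fall back to a timestamp
TAG=$(git rev-parse --short HEAD 2>/dev/null || date +%Y%m%d-%H%M%S)
echo "piomin/sample-spring-kotlin:${TAG}"
```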

In this approach, we still use GitOps to manage the app configuration on Kubernetes. The CI pipeline pushes the latest version to the registry and triggers reload on Kubernetes.

Approach 2: Commit the latest tag to the repository managed by the CD tool

Configuration

Let’s consider a slightly different approach than the previous one. Argo CD automatically synchronizes changes pushed to the config repository with the Kubernetes cluster. Once the pipeline pushes a changed version of the image tag, Argo CD performs a rollout. Here’s the Argo CD Application manifest.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sample-spring-kotlin-simple
  namespace: argocd
spec:
  destination:
    namespace: apps
    server: https://kubernetes.default.svc
  project: default
  source:
    path: apps/simple
    repoURL: https://github.com/piomin/openshift-cluster-config
    targetRevision: HEAD
  syncPolicy:
    automated: {}

Here’s the first version of our Deployment. As you see we are deploying the image with the 1.0.0 tag.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-spring-kotlin-microservice
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: sample-spring-kotlin-microservice
  template:
    metadata:
      labels:
        app.kubernetes.io/name: sample-spring-kotlin-microservice
    spec:
      containers:
      - image: piomin/sample-spring-kotlin:1.0.0
        name: sample-spring-kotlin-microservice
        ports:
        - containerPort: 8080
          name: http

Now, our pipeline should build a new image, override the image tag in YAML and push the latest version to the Git repository. I won’t create a pipeline, but just show you step-by-step what should be done. Let’s begin with the tool. Our pipeline may use Skaffold to build, push and override image tags in YAML. Skaffold is a CLI tool very useful for simplifying development on Kubernetes. However, we can also use it for building CI/CD blocks or templating Kubernetes manifests for the GitOps approach. Here’s the Skaffold configuration file. It is very simple. The same as for the previous example, we use Jib for building and pushing an image. Skaffold supports multiple tag policies for tagging images. We may define e.g. a tagger that uses the current date and time.

apiVersion: skaffold/v2beta22
kind: Config
build:
  artifacts:
  - image: piomin/sample-spring-kotlin
    jib: {}
  tagPolicy:
    dateTime: {}

Skaffold in CI/CD

In the first step, we are going to build and push the image. Thanks to the --file-output parameter, Skaffold will export the info about the build to a file.

$ skaffold build --file-output='/Users/pminkows/result.json' --push

The file with the result is located under the /Users/pminkows/result.json path. It contains the basic information about a build including the image name and tag.

{"builds":[{"imageName":"piomin/sample-spring-kotlin","tag":"piomin/sample-spring-kotlin:2022-06-03_13-53-43.988_CEST@sha256:287572a319ee7a0caa69264936063d003584e026eefeabb828e8ecebca8678a7"}]}

This image has also been pushed to the registry.
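In a pipeline, the next stage usually needs that tag. Since the file is plain JSON, you can extract it with a one-liner (shown here against a recreated result.json; python3 is used for parsing, but jq would work just as well):

```shell
# recreate the file produced by `skaffold build --file-output`
cat > result.json <<'EOF'
{"builds":[{"imageName":"piomin/sample-spring-kotlin","tag":"piomin/sample-spring-kotlin:2022-06-03_13-53-43.988_CEST@sha256:287572a319ee7a0caa69264936063d003584e026eefeabb828e8ecebca8678a7"}]}
EOF

# extract the fully qualified image tag
python3 -c 'import json; print(json.load(open("result.json"))["builds"][0]["tag"])'
```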

Now, let’s run the skaffold render command to override the image tag in the YAML manifest. Assuming we are running it in the next pipeline stage, we can just set the /Users/pminkows/result.json file as an input. The output path apps/simple/deployment.yaml is a location inside the Git repository managed by Argo CD. We don’t want to include the namespace name, therefore we set the parameter --offline=true.

$ skaffold render -a /Users/pminkows/result.json \
    -o apps/simple/deployment.yaml \
    --offline=true

Here’s the final version of our YAML manifest.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-spring-kotlin-microservice
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: sample-spring-kotlin-microservice
  template:
    metadata:
      labels:
        app.kubernetes.io/name: sample-spring-kotlin-microservice
    spec:
      containers:
      - image: piomin/sample-spring-kotlin:2022-06-03_13-53-43.988_CEST@sha256:287572a319ee7a0caa69264936063d003584e026eefeabb828e8ecebca8678a7
        name: sample-spring-kotlin-microservice
        ports:
        - containerPort: 8080
          name: http

Finally, our pipeline needs to commit the latest version of our manifest to the Git repository. Argo CD will automatically deploy it to the Kubernetes cluster.

Approach 3: Detect the latest version of the image and update it automatically with Renovate

Concept

Firstly, let’s visualize the current approach. Our pipeline pushes the latest image to the container registry. Renovate continuously monitors the tags of the image in the container registry to detect a change. Once it detects that the image has been updated, it creates a pull request to the Git repository with the configuration. Then Argo CD detects the new image tag committed to the configuration repository and synchronizes it with the Kubernetes cluster.

[Image: kubernetes-gitops-renovate-arch]

Configuration

Renovate is a very interesting tool. It continuously runs and detects the latest available versions of dependencies. These can be e.g. Maven dependencies. But in our case, we can use it for monitoring container images in a registry.

In the first step, we will prepare a configuration for Renovate. It accepts JSON format. There are several things we need to set:

(1) platform – our repository is located on GitHub

(2) repository – the location of the configuration repository. Renovate monitors the whole repository and detects files matching filtering criteria

(3) enabledManagers – we are going to monitor Kubernetes YAML manifests and Helm value files. For every single manager, we should set file filtering rules.

(4) packageRules – our goal is to automatically update the configuration repository with the latest tag. Since Renovate creates a pull request after detecting a change, we would like to enable PR auto-merge on GitHub. Auto-merge should be performed only for patch updates (e.g. from 1.0.0 to 1.0.1) or minor updates (e.g. from 1.0.1 to 1.1.0). For other types of updates, the PR needs to be approved manually.

(5) ignoreTests – we need to enable it to perform PR auto-merge. Otherwise Renovate will require at least one test in the repository to perform PR auto-approve.

{
  "platform": "github",
  "repositories": [
    {
      "repository": "piomin/openshift-cluster-config",
      "enabledManagers": ["kubernetes", "helm-values"],
      "kubernetes" : {
        "fileMatch": ["\\.yaml$"]
      },
      "helm-values": {
        "fileMatch": ["(.*)values.yaml$"]
      },
      "packageRules": [
        {
          "matchUpdateTypes": ["minor", "patch"],
          "automerge": true
        }
      ],
      "ignoreTests": true
    }
  ]
}

In order to create a pull request, Renovate needs to have write access to the GitHub repository. Let’s create a Kubernetes Secret containing the GitHub access token.

apiVersion: v1
kind: Secret
metadata:
  name: renovate-secrets
  namespace: renovate
data:
  RENOVATE_TOKEN: <BASE64_TOKEN>
type: Opaque

Installation

Now we can install Renovate on Kubernetes. The best way for that is through the Helm chart. Let’s add the Helm repository:

$ helm repo add renovate https://docs.renovatebot.com/helm-charts
$ helm repo update

Then we may install it using the previously prepared config.json file. We also need to pass the name of the Secret containing the GitHub token and set the cron job scheduling interval. We will run the job responsible for detecting changes and creating PR once per minute.

$ helm install --generate-name \
    --set-file renovate.config=config.json \
    --set cronjob.schedule='*/1 * * * *' \
    --set existingSecret=renovate-secrets \
    renovate/renovate -n renovate

After installation, you should see a CronJob in the renovate namespace:

$ kubectl get cj -n renovate
NAME                  SCHEDULE       SUSPEND   ACTIVE   LAST SCHEDULE   AGE
renovate-1653648026   */1 * * * *    False     0        1m19s           2m19s

Use Case

Let’s consider the following Helm values.yaml. It needs to have a proper structure, i.e. image.repository, image.tag and image.registry fields.

app:
  name: sample-kotlin-spring
  replicas: 1

image:
  repository: 'pminkows/sample-kotlin-spring'
  tag: 1.4.20
  registry: quay.io

Let’s push the image pminkows/sample-kotlin-spring with the tag 1.4.21 to the registry.

Once Renovate detects the new image tag in the container registry, it creates a PR with auto-merge enabled:

[Image: kubernetes-gitops-renovate-pr]
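After the PR is auto-merged, the values.yaml in the configuration repository ends up with the bumped tag:

```yaml
app:
  name: sample-kotlin-spring
  replicas: 1

image:
  repository: 'pminkows/sample-kotlin-spring'
  tag: 1.4.21
  registry: quay.io
```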

Finally, the following Argo CD Application will apply changes automatically to the Kubernetes cluster:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sample-spring-kotlin-helm
  namespace: argocd
spec:
  destination:
    namespace: apps
    server: https://kubernetes.default.svc
  project: default
  source:
    path: apps/helm
    repoURL: https://github.com/piomin/openshift-cluster-config
    targetRevision: HEAD
    helm:
      valueFiles:
        - values.yaml 
  syncPolicy:
    automated: {}

Approach 4: Use Argo CD Image Updater

Finally, we may proceed to the last proposition in this article for implementing the development process on Kubernetes with GitOps. That option is available only for the container images managed by Argo CD. Let me show you a tool called Argo CD Image Updater. The concept behind this tool is pretty similar to Renovate’s. It can check for new versions of the container images deployed on Kubernetes and automatically update them. You can read more about it here.

Argo CD Image Updater can work in two modes. Once it detects a new version of the image in the registry it can update the image version in the Git repository (git) or directly inside Argo CD Application (argocd). We will use the argocd mode, which is a default option. Firstly, let’s install Argo CD Image Updater on Kubernetes in the same namespace as Argo CD. We can use the Helm chart for that:

$ helm repo add argo https://argoproj.github.io/argo-helm
$ helm install argocd-image-updater argo/argocd-image-updater -n argocd

After that, the only thing we need to do is to annotate the Argo CD Application with argocd-image-updater.argoproj.io/image-list. The value of the annotation is the list of images to monitor. Assuming there is the same Argo CD Application as in the previous section it looks as shown below:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sample-spring-kotlin-helm
  namespace: argocd
  annotations: 
    argocd-image-updater.argoproj.io/image-list: quay.io/pminkows/sample-spring-kotlin
spec:
  destination:
    namespace: apps
    server: https://kubernetes.default.svc
  project: default
  source:
    path: apps/helm
    repoURL: https://github.com/piomin/openshift-cluster-config
    targetRevision: HEAD
    helm:
      valueFiles:
        - values.yaml 
  syncPolicy:
    automated: {}

Once Argo CD Image Updater detects a new version of the image quay.io/pminkows/sample-spring-kotlin, it adds two parameters (or just updates the value of the image.tag parameter) to the Argo CD Application. In fact, it leverages the Argo CD feature that allows overriding the parameters of an Application. You can read more about that feature in the documentation. After that, Argo CD automatically deploys the image with the tag taken from the image.tag parameter.
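In argocd mode the override ends up as Helm parameters on the Application itself rather than a Git commit. The fragment below sketches what the patched spec.source section could look like after an update to tag 1.4.21 (the exact parameter names depend on your image-list annotation settings):

```yaml
source:
  path: apps/helm
  repoURL: https://github.com/piomin/openshift-cluster-config
  targetRevision: HEAD
  helm:
    valueFiles:
      - values.yaml
    # parameter override written by Argo CD Image Updater
    parameters:
      - name: image.tag
        value: 1.4.21
```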

Final Thoughts

The main goal of this article is to show you how to design your app development process on Kubernetes in the era of GitOps. I assumed you use Argo CD for GitOps on Kubernetes, but in fact, only the last described approach requires it. Our goal was to build a development pipeline that builds an image after a source code change and pushes it to the registry. Then, following the GitOps model, we run such an image on Kubernetes in the development environment. I showed how you can use tools like Skaffold, Renovate, or Argo CD Image Updater to implement the required behavior.

The post Continuous Development on Kubernetes with GitOps Approach appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2022/06/06/continuous-development-on-kubernetes-with-gitops-approach/feed/ 0 11609
Vault on Kubernetes with Spring Cloud https://piotrminkowski.com/2021/12/30/vault-on-kubernetes-with-spring-cloud/ https://piotrminkowski.com/2021/12/30/vault-on-kubernetes-with-spring-cloud/#comments Thu, 30 Dec 2021 13:47:51 +0000 https://piotrminkowski.com/?p=10399 In this article, you will learn how to run Vault on Kubernetes and integrate it with your Spring Boot application. We will use the Spring Cloud Vault project in order to generate database credentials dynamically and inject them into the application. Also, we are going to use a mechanism that allows authenticating against Vault using […]

The post Vault on Kubernetes with Spring Cloud appeared first on Piotr's TechBlog.

]]>
In this article, you will learn how to run Vault on Kubernetes and integrate it with your Spring Boot application. We will use the Spring Cloud Vault project in order to generate database credentials dynamically and inject them into the application. Also, we are going to use a mechanism that allows authenticating against Vault using a Kubernetes service account token. If this topic sounds interesting to you, it is also worth reading one of my previous articles about running Vault on a platform quite similar to Kubernetes – Nomad. You may find it here.

Why Spring Cloud Vault on Kubernetes?

First of all, let me explain why I decided to use Spring Cloud instead of HashiCorp’s Vault Agent. It is important to know that Vault Agent is always injected as a sidecar container into the application pod. So even if we have a single secret in Vault and we inject it once on startup, there is always one additional container running. I’m not saying it’s wrong, since it is a standard approach on Kubernetes. However, I’m not very happy with it. I also had some problems troubleshooting Vault Agent. To be honest, it wasn’t easy to find my configuration mistake based just on its logs. Anyway, Spring Cloud is an interesting alternative to the solution provided by HashiCorp. It allows you to easily integrate Spring Boot configuration properties with the Vault database engine. In fact, you just need to include a single dependency to use it.

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. To see the sample application go to the kubernetes/sample-db-vault directory. Then you should just follow my instructions 🙂

Prerequisites

Before we start, there are some required tools. Of course, we need to have a Kubernetes cluster locally or remotely. Personally, I use Docker Desktop, but you may use any other option you prefer. In order to run Vault on Kubernetes, we need to install Helm.

If you would like to build the application from the source code you need to have Skaffold, Java 17, and Maven. Alternatively, you may use a ready image from my Docker Hub account piomin/sample-app.

Install Vault on Kubernetes with Helm

The recommended way to run Vault on Kubernetes is via the Helm chart. Helm installs and configures all the necessary components to run Vault in several different modes. Firstly, let’s add the HashiCorp Helm repository.

$ helm repo add hashicorp https://helm.releases.hashicorp.com

Before proceeding, it is worth updating all the repositories to ensure Helm uses the latest versions of the components.

$ helm repo update

Since I will run Vault in a dedicated namespace, we first need to create it.

$ kubectl create ns vault

Finally, we can install the latest version of the Vault server and run it in development mode.

$ helm install vault hashicorp/vault \
    --set "server.dev.enabled=true" \
    -n vault

We can verify the installation by displaying a list of running pods in the vault namespace. As you can see, the Vault Agent injector is installed by the Helm chart, so you can try using it as well. If you wish, just go to this tutorial prepared by HashiCorp.

$ kubectl get pod -n vault
NAME                                    READY   STATUS     RESTARTS   AGE
vault-0                                 1/1     Running    0          1h
vault-agent-injector-678dc584ff-wc2r7   1/1     Running    0          1h

Access Vault on Kubernetes

Before we run our application on Kubernetes, we need to configure several things on Vault. I’ll show you how to do it using the vault CLI. The simplest way to use CLI on Kubernetes is just by getting a shell of a running Vault container:

$ kubectl exec -it vault-0 -n vault -- /bin/sh

Alternatively, we can use Vault Web Console available at the 8200 port. To access it locally we should first enable port forwarding:

$ kubectl port-forward service/vault 8200:8200 -n vault

Now, you can access it locally in your web browser at http://localhost:8200. In order to log in, use the Token method (the default token value is root). Then you can do the same as with the vault CLI, but with a nice UI.

vault-kubernetes-ui-login

Configure Kubernetes authentication

Vault provides a Kubernetes authentication method that enables clients to authenticate with a Kubernetes service account token. This token is available to every single pod. Assuming you have already started an interactive shell session on the vault-0 pod just execute the following command:

$ vault auth enable kubernetes

In the next step, we are going to configure the Kubernetes authentication method. We need to set the location of the Kubernetes API, the service account token, its certificate, and the name of the Kubernetes service account issuer (required for Kubernetes 1.21+).

$ vault write auth/kubernetes/config \
    kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443" \
    token_reviewer_jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
    kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
    issuer="https://kubernetes.default.svc.cluster.local"

Ok, now an important step. We need to create a Vault policy that allows us to generate database credentials dynamically. We will enable the Vault database engine in the next section; for now, we are just creating a policy that will be assigned to the authentication role. The name of our Vault policy is internal-app:

$ vault policy write internal-app - <<EOF
path "database/creds/default" {
  capabilities = ["read"]
}
EOF

The next important thing is related to the Kubernetes RBAC. Although the Vault server is running in the vault namespace our sample application will be running in the default namespace. Therefore, the service account used by the application is also in the default namespace. Let’s create ServiceAccount for the application:

$ kubectl create sa internal-app

Now, we have everything to do the last step in this section. We need to create a Vault role for the Kubernetes authentication method. In this role, we set the name and location of the Kubernetes ServiceAccount and the Vault policy created in the previous step.

$ vault write auth/kubernetes/role/internal-app \
    bound_service_account_names=internal-app \
    bound_service_account_namespaces=default \
    policies=internal-app \
    ttl=24h

After that, we may proceed with the next steps. Let’s enable the Vault database engine.

Enable Vault Database Engine

Just to clarify, we are still inside the vault-0 pod. Let’s enable the Vault database engine.

$ vault secrets enable database

Of course, we need to run a database on Kubernetes. We will use PostgreSQL, since it is supported by Vault. The full deployment manifest is available in my GitHub repository in /kubernetes/k8s/postgresql-deployment.yaml. Here’s just the Deployment object:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:latest
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: POSTGRES_PASSWORD
                  name: postgres-secret
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb
      volumes:
        - name: postgredb
          persistentVolumeClaim:
            claimName: postgres-claim

Let’s apply the whole manifest to deploy Postgres in the default namespace:

$ kubectl apply -f postgresql-deployment.yaml

Following Vault documentation, we first need to configure a plugin for the PostgreSQL database and then provide connection settings and credentials:

$ vault write database/config/postgres \
    plugin_name=postgresql-database-plugin \
    allowed_roles="default" \
    connection_url="postgresql://{{username}}:{{password}}@postgres.default:5432?sslmode=disable" \
    username="postgres" \
    password="admin123"

I have disabled SSL for the connection with Postgres by setting the property sslmode=disable. There is only one role allowed to use the Vault PostgreSQL plugin: default. The name of the role has to match the name passed in the allowed_roles field in the previous step. We also have to set a target database name and a SQL statement that creates users with privileges. We set the max TTL of the lease to 10 minutes just to present the revocation and renewal features of Spring Cloud Vault. It means that 10 minutes after your application has started, it can no longer authenticate with the database.

$ vault write database/roles/default db_name=postgres \
    creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';GRANT SELECT, UPDATE, INSERT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";GRANT USAGE,  SELECT ON ALL SEQUENCES IN SCHEMA public TO \"{{name}}\";" \
    default_ttl="1m" \
    max_ttl="10m"

And that’s all on the Vault server side. Now, we can test our configuration using the vault CLI, as shown below. You can log in to the database using the returned credentials. By default, they are valid for one minute (the default_ttl parameter in the previous command).

$ vault read database/creds/default

We can also verify a connection to the instance of PostgreSQL in Vault UI:

Now, we can generate new credentials just by renewing the Vault lease (vault lease renew LEASE_ID). Fortunately, Spring Cloud Vault does it automatically for our app. Let’s see how it works.
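To picture what this automatic renewal amounts to, here is a small Python sketch (not Spring Cloud Vault code) of the scheduling rule expressed by settings like expiry-threshold and min-renewal: renew a lease a threshold number of seconds before it expires, but never more often than the minimum renewal interval.

```python
def next_renewal_delay(lease_ttl: float, expiry_threshold: float, min_renewal: float) -> float:
    """Return the number of seconds to wait before renewing a lease.

    Renew `expiry_threshold` seconds before the lease expires,
    but never sooner than `min_renewal` seconds from now.
    """
    delay = lease_ttl - expiry_threshold
    return max(delay, min_renewal)

# With the bootstrap.yml values used later in this article
# (min-renewal: 10s, expiry-threshold: 30s) and the 1-minute
# default_ttl, the lease is renewed 30 seconds after issuance.
print(next_renewal_delay(60, 30, 10))  # 30
print(next_renewal_delay(15, 30, 10))  # 10 (clamped to min-renewal)
```

This is why the application keeps working even though individual database credentials are short-lived: the renewal always happens well before expiry.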

Use Spring Cloud Vault on Kubernetes

For the purpose of this demo, I created a simple Spring Boot application. It exposes REST API and connects to the PostgreSQL database. It uses Spring Data JPA to interact with the database. However, the most important thing here are the following two dependencies:

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-bootstrap</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-vault-config-databases</artifactId>
</dependency>

The first of them enables bootstrap.yml processing on application startup. The second includes Spring Cloud Vault database engine support.

The only thing we need to do is to provide the right configuration settings. Here’s the minimal set of properties required to make it work without any errors. The following configuration is provided in the bootstrap.yml file:

spring:
  application:
    name: sample-db-vault
  datasource:
    url: jdbc:postgresql://postgres:5432/postgres #(1)
  jpa:
    hibernate:
      ddl-auto: update
  cloud:
    vault:
      config.lifecycle: #(2)
        enabled: true
        min-renewal: 10s
        expiry-threshold: 30s
      kv.enabled: false #(3)
      uri: http://vault.vault:8200 #(4)
      authentication: KUBERNETES #(5)
      postgresql: #(6)
        enabled: true
        role: default
        backend: database
      kubernetes: #(7)
        role: internal-app

Let’s analyze the configuration visible above in the details:

(1) We need to set the database connection URL, but WITHOUT any credentials. Assuming our application uses the standard properties for authentication against the database (spring.datasource.username and spring.datasource.password), we don’t need to do anything else

(2) As you probably remember, the max TTL for the database lease is 10 minutes. We enable lease renewal every 30 seconds, just for demo purposes. You will see that Spring Cloud Vault creates new credentials in PostgreSQL every 30 seconds, and the application still works without any errors

(3) Vault KV is not needed here, since I’m using only the database engine

(4) The application is going to be deployed in the default namespace, while Vault is running in the vault namespace. So, the address of Vault should include the namespace name

(5) (7) Our application uses the Kubernetes authentication method to access Vault. We just need to set the role name, which is internal-app. All other settings should be left with the default values

(6) We also need to enable PostgreSQL database backend support. The name of the backend in Vault is database and the name of the Vault role used for that engine is default.

Run Spring Boot application on Kubernetes

The Deployment manifest is rather simple. What is important here is that we need to use the internal-app ServiceAccount configured for the Vault Kubernetes authentication method.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app-deployment
spec:
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
      - name: sample-app
        image: piomin/sample-app
        ports:
        - containerPort: 8080
      serviceAccountName: internal-app

Our application requires Java 17. Since I’m using Jib Maven Plugin for building images I also have to override the default base image. Let’s use openjdk:17.0.1-slim-buster.

<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <version>3.1.4</version>
  <configuration>
    <from>
      <image>openjdk:17.0.1-slim-buster</image>
    </from>
  </configuration>
</plugin>

The repository is configured to easily deploy the application with Skaffold. Just go to the /kubernetes/sample-db-vault directory and run the following command in order to build and deploy our sample application on Kubernetes:

$ skaffold dev --port-forward

After that, you can call one of the REST endpoints to test if the application works properly:

$ curl http://localhost:8080/persons

Everything works fine? In the background, Spring Cloud Vault creates new credentials every 30 seconds. You can easily verify it inside the PostgreSQL container. Just connect to the postgres pod and run the psql process:

$ kubectl exec svc/postgres -i -t -- psql -U postgres

Now you can list users with the \du command. Repeat the command several times to see that the credentials have been regenerated. Of course, the application is able to renew the lease until the max TTL (10 minutes) is exceeded.

The post Vault on Kubernetes with Spring Cloud appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2021/12/30/vault-on-kubernetes-with-spring-cloud/feed/ 8 10399
Development on Kubernetes with Telepresence and Skaffold https://piotrminkowski.com/2021/12/21/development-on-kubernetes-with-telepresence-and-skaffold/ https://piotrminkowski.com/2021/12/21/development-on-kubernetes-with-telepresence-and-skaffold/#respond Tue, 21 Dec 2021 10:41:29 +0000 https://piotrminkowski.com/?p=10351 In this article, you will learn how to use Telepresence and Skaffold to improve development workflow on Kubernetes. In order to simplify the build of our Java applications, we will also use the Jib Maven plugin. All those tools give you great power to speed up your development process. That’s not my first article about […]

The post Development on Kubernetes with Telepresence and Skaffold appeared first on Piotr's TechBlog.

]]>
In this article, you will learn how to use Telepresence and Skaffold to improve the development workflow on Kubernetes. In order to simplify the build of our Java applications, we will also use the Jib Maven plugin. All those tools give you great power to speed up your development process. That’s not my first article about Skaffold. If you are not familiar with this tool, it is worth reading my article about it. Today I’ll focus on Telepresence, which is in fact one of my favorite tools. I hope that after reading this article, you will say the same 🙂

Introduction

What’s Telepresence? It’s a very simple and powerful CLI tool for fast, local development for Kubernetes. Why is it simple? Because you can do almost everything using a single command. Telepresence is a CNCF sandbox project originally created by Ambassador Labs. It lets you run and test microservices locally against a remote Kubernetes cluster. It intercepts remote traffic and sends it to your locally running instance. I won’t focus on the technical aspects. If you want to read more about it, you can refer to the following link.

Firstly, let’s analyze our case. There are three microservices: first-service, caller-service and callme-service. All of them expose a single REST endpoint GET /ping, which returns basic information about each microservice. In order to create applications, I’m using the Spring Boot framework. Our architecture is visible in the picture below. The first-service is calling the endpoint exposed by the caller-service. Then the caller-service is calling the endpoint exposed by the callme-service. Of course, we are going to deploy all the microservices on the remote Kubernetes cluster.

Assuming we have something to do with the caller-service, we are also running it locally. So finally, our goal is to forward the traffic that is sent to the caller-service on the Kubernetes cluster to our local instance. On the other hand, the local instance of the caller-service should call the instance of the callme-service running on the remote cluster. Looks hard? Let’s check it out!

telepresence-kubernetes-arch

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. The code used in this article is available in the dev branch. Then you should just follow my instructions 🙂

Prerequisites

Before we start, we need to install several tools. Of course, we also need to have a running Kubernetes cluster (or e.g. OpenShift). We will use the following CLI tools:

  1. kubectl – to interact with the Kubernetes cluster. It is also used by Skaffold
  2. skaffold – here are the installation instructions. It works perfectly fine on Linux, macOS, and Windows
  3. telepresence – here are the installation instructions. I’m not sure about Windows, since it is in developer preview there. However, I’m using it on macOS without any problems
  4. Maven + JDK11 – of course, we need to build the applications locally before deploying them to Kubernetes.

Build and deploy applications on Kubernetes with Skaffold and Jib

Our applications are as simple as possible. Let’s take a look at the callme-service REST endpoint implementation. It just returns the name of the microservice and its version (v1 everywhere in this article):

@RestController
@RequestMapping("/callme")
public class CallmeController {

   private static final Logger LOGGER = 
       LoggerFactory.getLogger(CallmeController.class);

   @Autowired
   BuildProperties buildProperties;
   @Value("${VERSION}")
   private String version;

   @GetMapping("/ping")
   public String ping() {
      LOGGER.info("Ping: name={}, version={}", 
         buildProperties.getName(), version);
      return "I'm callme-service " + version;
   }
}

The endpoint visible above is called by the caller-service. The same as before it also prints the name of the microservice and its version. But also, it appends the result received from the callme-service. It calls the callme-service endpoint using the Spring RestTemplate and the name of the Kubernetes Service.

@RestController
@RequestMapping("/caller")
public class CallerController {

   private static final Logger LOGGER = 
      LoggerFactory.getLogger(CallerController.class);

   @Autowired
   BuildProperties buildProperties;
   @Autowired
   RestTemplate restTemplate;
   @Value("${VERSION}")
   private String version;

   @GetMapping("/ping")
   public String ping() {
      LOGGER.info("Ping: name={}, version={}", 
         buildProperties.getName(), version);
      String response = restTemplate
         .getForObject("http://callme-service:8080/callme/ping", String.class);
      LOGGER.info("Calling: response={}", response);
      return "I'm caller-service " + version + ". Calling... " + response;
   }
}

Finally, let’s take a look at the implementation of the first-service @RestController. It calls the caller-service endpoint visible above.

@RestController
@RequestMapping("/first")
public class FirstController {

   private static final Logger LOGGER = 
      LoggerFactory.getLogger(FirstController.class);

   @Autowired
   BuildProperties buildProperties;
   @Autowired
   RestTemplate restTemplate;
   @Value("${VERSION}")
   private String version;

   @GetMapping("/ping")
   public String ping() {
      LOGGER.info("Ping: name={}, version={}", 
         buildProperties.getName(), version);
      String response = restTemplate
         .getForObject("http://caller-service:8080/caller/ping", String.class);
      LOGGER.info("Calling: response={}", response);
      return "I'm first-service " + version + ". Calling... " + response;
   }

}

Here’s a definition of the Kubernetes Service for e.g. callme-service:

apiVersion: v1
kind: Service
metadata:
  name: callme-service
  labels:
    app: callme-service
spec:
  type: ClusterIP
  ports:
  - port: 8080
    name: http
  selector:
    app: callme-service

You can see Kubernetes YAML manifests inside the k8s directory for every single microservice. Let’s take a look at the example Deployment manifest for the callme-service.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: callme-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: callme-service
      version: v1
  template:
    metadata:
      labels:
        app: callme-service
        version: v1
    spec:
      containers:
        - name: callme-service
          image: piomin/callme-service
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          env:
            - name: VERSION
              value: "v1"

We can deploy each microservice independently, or all of them at once. Here’s a global Skaffold configuration for the whole project. You can find it in the root directory. As you see it uses Jib as a build tool and tries to find manifests inside k8s directory of every single module.

apiVersion: skaffold/v2beta22
kind: Config
metadata:
  name: simple-istio-services
build:
  artifacts:
    - image: piomin/first-service
      jib:
        project: first-service
    - image: piomin/caller-service
      jib:
        project: caller-service
    - image: piomin/callme-service
      jib:
        project: callme-service
  tagPolicy:
    gitCommit: {}
deploy:
  kubectl:
    manifests:
      - '*/k8s/*.yaml'

Each Maven module has to include Jib Maven plugin.

<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <version>3.1.1</version>
</plugin>

Finally, we can deploy all our microservices on Kubernetes with Skaffold. Jib works in Dockerless mode, so you don’t have to run Docker on your machine. By default, it uses adoptopenjdk:11-jre as the base image, following the Java version defined in the Maven pom.xml. If you want to observe logs after running the applications on Kubernetes, just activate the --tail option.

$ skaffold run --tail

Let’s just display a list of running pods to verify if the deployment was successful:

$ kubectl get pod
NAME                              READY   STATUS    RESTARTS   AGE
caller-service-688bd76c98-2m4gp   1/1     Running   0          3m1s
callme-service-75c7cf5bf-rfx69    1/1     Running   0          3m
first-service-7698465bcb-rvf77    1/1     Running   0          3m

Using Telepresence with Kubernetes

Let the party begin! After running all the microservices on Kubernetes, we will connect Telepresence to our cluster. The following command runs the Telepresence daemon on your machine and connects it to the Kubernetes cluster (from the current kubecontext).

$ telepresence connect

If you see a similar result, it means everything went well.

telepresence-kubernetes-connect

Telepresence has already connected to your Kubernetes cluster, but it is still not intercepting any traffic from the pods. You can verify it with the following command:

$ telepresence list
caller-service: ready to intercept (traffic-agent not yet installed)
callme-service: ready to intercept (traffic-agent not yet installed)
first-service : ready to intercept (traffic-agent not yet installed)

Ok, so now let’s intercept the traffic from the caller-service.

$ telepresence intercept caller-service --port 8080:8080

Here’s my result after running the command visible above.

telepresence-kubernetes-intercept

Now, the only thing we need to do is to run the caller-service on the local machine. By default, it listens on the port 8080:

$ mvn clean spring-boot:run

We can do it even smarter with a single Telepresence command, instead of running them separately:

$ telepresence intercept caller-service --port 8080:8080 -- mvn clean spring-boot:run

Before we send a test request, let’s analyze what happened. After running the telepresence intercept command, Telepresence injects a sidecar container into the application pod. The name of this container is traffic-agent. It is responsible for intercepting the traffic that comes to the caller-service.

$ kubectl get pod caller-service-7577b9f6fd-ww7nv \
  -o jsonpath='{.spec.containers[*].name}'
caller-service traffic-agent

Ok, now let’s just call the first-service running on the remote Kubernetes cluster. I deployed it on OpenShift, so I can easily call it externally using the Route object. If you run it on plain Kubernetes, you can create an Ingress or just run the kubectl port-forward command. Alternatively, you may also enable port forwarding with the skaffold run command (the --port-forward option). Anyway, let’s call the first-service /ping endpoint. Here’s my result.
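Based on the controller implementations shown earlier, with all services in version v1 the response should be the nested chain of messages built by each /ping endpoint, along these lines:

```
I'm first-service v1. Calling... I'm caller-service v1. Calling... I'm callme-service v1
```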

Here are the application logs from the Kubernetes cluster printed by the skaffold run command. As you see it just prints the logs from callme-service and first-service:

Now, let’s take a look at the logs from the local instance of the caller-service. Telepresence intercepts the traffic and sends it to the local instance of the application. Then this instance calls the callme-service on the remote cluster 🙂

Cleanup Kubernetes environment after using Telepresence

To clean up the environment, just run the following command. It removes the sidecar container from your application pod. However, if you started your Spring Boot application within the telepresence intercept command, you just need to kill the local process with CTRL+C (the traffic-agent container will still remain inside the pod).

$ telepresence uninstall --agent caller-service

After that, you can call the first-service once again. Now, none of the requests leave the cluster.

In order to shutdown the Telepresence daemon and disconnect from the Kubernetes cluster just run the following command:

$ telepresence quit

Final Thoughts

You can also easily debug your microservices locally, just by running the same telepresence intercept command and your application in debug mode. What’s important in this scenario is that Telepresence does not force you to use any particular tools or IDE. You can do everything the same way as if you were running or debugging the application locally. I hope you will like this tool as much as I do 🙂

The post Development on Kubernetes with Telepresence and Skaffold appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2021/12/21/development-on-kubernetes-with-telepresence-and-skaffold/feed/ 0 10351
Spring Boot on Knative https://piotrminkowski.com/2021/03/01/spring-boot-on-knative/ https://piotrminkowski.com/2021/03/01/spring-boot-on-knative/#respond Mon, 01 Mar 2021 11:28:54 +0000 https://piotrminkowski.com/?p=9506 In this article, I’ll explain what is Knative and how to use it with Spring Boot. Although Knative is a serverless platform, we can run there any type of application (not just function). Therefore, we are going to run there a standard Spring Boot application that exposes REST API and connects to a database. Knative […]

The post Spring Boot on Knative appeared first on Piotr's TechBlog.

]]>
In this article, I’ll explain what is Knative and how to use it with Spring Boot. Although Knative is a serverless platform, we can run there any type of application (not just function). Therefore, we are going to run there a standard Spring Boot application that exposes REST API and connects to a database.

Knative introduces a new way of managing your applications on Kubernetes. It extends Kubernetes to add some new key features. One of the most significant of them is “scale to zero”. If Knative detects that a service is not used, it scales the number of running instances down to zero. Consequently, it provides a built-in autoscaling feature based on concurrency or the number of requests per second. We may also take advantage of revision tracking, which is responsible for switching from one version of your application to another. With Knative, you just have to focus on your core logic.
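As a taste of how these knobs are exposed, the manifest below sketches a Knative Service with concurrency-based autoscaling and scale-to-zero. The annotation values here are illustrative; check the Knative Serving autoscaling documentation for the exact annotation names supported by your Knative version:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: sample-spring-boot-on-kubernetes
spec:
  template:
    metadata:
      annotations:
        # target 50 concurrent requests per pod before scaling out
        autoscaling.knative.dev/target: "50"
        # allow scaling down to zero when idle, cap at 5 replicas
        autoscaling.knative.dev/minScale: "0"
        autoscaling.knative.dev/maxScale: "5"
    spec:
      containers:
        - image: piomin/sample-spring-boot-on-kubernetes:latest
```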

All the features I described above are provided by the component called "Knative Serving". There are also two other components: "Eventing" and "Build". The Build component is deprecated and has been replaced by Tekton. The Eventing component deserves a closer look, so I'll discuss it in more detail in a separate article.

Source Code

If you would like to try it out yourself, you can always take a look at my source code. To do that, you need to clone my GitHub repository. Then just follow my instructions 🙂

I used the same application as in some of my previous articles about Spring Boot and Kubernetes. I just wanted to emphasize that you don't have to change anything in the source code to run it on Knative. The only required change is in the YAML manifest.

Since Knative provides built-in autoscaling you may want to compare it with the horizontal pod autoscaler (HPA) on Kubernetes. To do that you may read the article Spring Boot Autoscaling on Kubernetes. If you are interested in how to easily deploy applications on Kubernetes read the following article about the Okteto platform.

Install Knative on Kubernetes

Of course, before we start Spring Boot development, we need to install Knative on Kubernetes. We can do it using the kubectl CLI or an operator. You can find detailed installation instructions here. I decided to try it on OpenShift, which is obviously the fastest way: I could do it with one click using the OpenShift Serverless Operator. No matter which type of installation you choose, the further steps apply everywhere.

Using Knative CLI

This step is optional. You can deploy and manage applications on Knative with the kn CLI. To download it, go to https://knative.dev/docs/install/install-kn/. Then you can deploy the application using a Docker image.

$ kn service create sample-spring-boot-on-kubernetes \
   --image piomin/sample-spring-boot-on-kubernetes:latest

We can also verify a list of running services with the following command.

$ kn service list

For more advanced deployments it is more suitable to use a YAML manifest. Instead of a prebuilt image, we will build from the source code with Skaffold and Jib. But first, let's take a brief look at our Spring Boot application.

Spring Boot application for Knative

As I mentioned before, we are going to create a typical Spring Boot REST-based application that connects to a Mongo database. The database is deployed on Kubernetes. Our model class uses the person collection in MongoDB. Let’s take a look at it.

@Document(collection = "person")
@Getter
@Setter
@AllArgsConstructor
@NoArgsConstructor
public class Person {

   @Id
   private String id;
   private String firstName;
   private String lastName;
   private int age;
   private Gender gender;
}

We use Spring Data MongoDB to integrate our application with the database. In order to simplify this integration we take advantage of its “repositories” feature.

public interface PersonRepository extends CrudRepository<Person, String> {
   Set<Person> findByFirstNameAndLastName(String firstName, String lastName);
   Set<Person> findByAge(int age);
   Set<Person> findByAgeGreaterThan(int age);
}
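Spring Data derives each query from the method name alone. As a rough, in-memory illustration of the contract (this is not how Spring Data executes it, since it translates the method name into a real MongoDB query), findByAgeGreaterThan behaves like a filter over the collection:

```java
import java.util.*;
import java.util.stream.*;

public class DerivedQueryDemo {

    static class Person {
        final String firstName;
        final String lastName;
        final int age;

        Person(String firstName, String lastName, int age) {
            this.firstName = firstName;
            this.lastName = lastName;
            this.age = age;
        }
    }

    // In-memory equivalent of PersonRepository.findByAgeGreaterThan(int age)
    static Set<Person> findByAgeGreaterThan(Collection<Person> people, int age) {
        return people.stream()
                .filter(p -> p.age > age)
                .collect(Collectors.toSet());
    }

    public static void main(String[] args) {
        List<Person> people = List.of(
                new Person("John", "Smith", 25),
                new Person("Anna", "Nowak", 41));
        System.out.println(findByAgeGreaterThan(people, 30).size()); // prints 1
    }
}
```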

Our application exposes several REST endpoints for adding, searching and updating data. Here’s the controller class implementation.

@RestController
@RequestMapping("/persons")
public class PersonController {

   private PersonRepository repository;
   private PersonService service;

   PersonController(PersonRepository repository, PersonService service) {
      this.repository = repository;
      this.service = service;
   }

   @PostMapping
   public Person add(@RequestBody Person person) {
      return repository.save(person);
   }

   @PutMapping
   public Person update(@RequestBody Person person) {
      return repository.save(person);
   }

   @DeleteMapping("/{id}")
   public void delete(@PathVariable("id") String id) {
      repository.deleteById(id);
   }

   @GetMapping
   public Iterable<Person> findAll() {
      return repository.findAll();
   }

   @GetMapping("/{id}")
   public Optional<Person> findById(@PathVariable("id") String id) {
      return repository.findById(id);
   }

   @GetMapping("/first-name/{firstName}/last-name/{lastName}")
   public Set<Person> findByFirstNameAndLastName(@PathVariable("firstName") String firstName,
			@PathVariable("lastName") String lastName) {
      return repository.findByFirstNameAndLastName(firstName, lastName);
   }

   @GetMapping("/age-greater-than/{age}")
   public Set<Person> findByAgeGreaterThan(@PathVariable("age") int age) {
      return repository.findByAgeGreaterThan(age);
   }

   @GetMapping("/age/{age}")
   public Set<Person> findByAge(@PathVariable("age") int age) {
      return repository.findByAge(age);
   }

}

We inject database connection settings and credentials using environment variables. Our application exposes endpoints for liveness and readiness health checks. The readiness endpoint verifies a connection with the Mongo database. Of course, we use the built-in feature from Spring Boot Actuator for that.

spring:
  application:
    name: sample-spring-boot-on-kubernetes
  data:
    mongodb:
      uri: mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@mongodb/${MONGO_DATABASE}

management:
  endpoints:
    web:
      exposure:
        include: "*"
  endpoint:
    health:
      show-details: always
      group:
        readiness:
          include: mongo
      probes:
        enabled: true

Defining Knative Service in YAML

Firstly, we need to define a YAML manifest with a Knative Service definition. It sets an autoscaling strategy using the Knative Pod Autoscaler (KPA). In order to do that, we have to add the annotation autoscaling.knative.dev/target with the number of simultaneous requests that can be processed by each instance of the application. By default it is 100; we decrease that limit to 20 requests. Of course, we also need to set liveness and readiness probes for the container, and we refer to the mongodb Secret to inject the MongoDB settings.

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: sample-spring-boot-on-kubernetes
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/target: "20"
    spec:
      containers:
        - image: piomin/sample-spring-boot-on-kubernetes
          livenessProbe:
            httpGet:
              path: /actuator/health/liveness
          readinessProbe:
            httpGet:
              path: /actuator/health/readiness
          env:
            - name: MONGO_DATABASE
              valueFrom:
                secretKeyRef:
                  name: mongodb
                  key: database-name
            - name: MONGO_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mongodb
                  key: database-user
            - name: MONGO_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongodb
                  key: database-password

Configure Skaffold and Jib for Knative deployment

We will use Skaffold to automate the deployment of our application on Knative. Skaffold is a command-line tool that allows you to run an application on Kubernetes with a single command. You may read more about it in the article Local Java Development on Kubernetes. It can easily be integrated with the Jib Maven plugin; we just need to set jib as the build type in the artifact definition of the Skaffold configuration. We can also define a list of YAML manifests applied during the deploy phase. The skaffold.yaml file should be placed in the project root directory. Here's the current Skaffold configuration. As you can see, it applies the manifest with the Knative Service definition.

apiVersion: skaffold/v2beta5
kind: Config
metadata:
  name: sample-spring-boot-on-kubernetes
build:
  artifacts:
    - image: piomin/sample-spring-boot-on-kubernetes
      jib:
        args:
          - -Pjib
  tagPolicy:
    gitCommit: {}
deploy:
  kubectl:
    manifests:
      - k8s/mongodb-deployment.yaml
      - k8s/knative-service.yaml

Skaffold activates the jib Maven profile during the build (the -Pjib argument above). Within this profile, we declare the jib-maven-plugin. Jib is useful because it builds images without a Docker daemon.

<profile>
   <id>jib</id>
   <activation>
      <activeByDefault>false</activeByDefault>
   </activation>
   <build>
      <plugins>
         <plugin>
            <groupId>com.google.cloud.tools</groupId>
            <artifactId>jib-maven-plugin</artifactId>
            <version>2.8.0</version>
         </plugin>
      </plugins>
   </build>
</profile>

Finally, all we need to do is run the following command. It builds our application, creates and pushes a Docker image, and runs it on Knative using knative-service.yaml.

$ skaffold run

Verify Spring Boot deployment on Knative

Now, we can verify our deployment on Knative. To do that, let's execute the command kn service list, as shown below. We have a single Knative Service named sample-spring-boot-on-kubernetes.

spring-boot-knative-services

Then, let's imagine we deploy three versions (revisions) of our application. To do that, let's make some changes in the source code and redeploy the service using skaffold run. Each deployment creates a new revision of our Knative Service. However, the whole traffic is forwarded to the latest revision (with the -vlskg suffix).

spring-boot-knative-revisions

With Knative we can easily split traffic between multiple revisions of a single service. To do that, we add a traffic section to the Knative Service YAML configuration and define the percentage of traffic directed to each revision, as shown below.

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: sample-spring-boot-on-kubernetes
spec:
  template:
    ...
  traffic:
    - latestRevision: true
      percent: 60
    - latestRevision: false
      percent: 20
      revisionName: sample-spring-boot-on-kubernetes-t9zrd
    - latestRevision: false
      percent: 20
      revisionName: sample-spring-boot-on-kubernetes-9xhbw

Let's take a look at a graphical representation of our current architecture. 60% of the traffic is forwarded to the latest revision, while each of the two previous revisions receives 20%.

spring-boot-knative-openshift
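The percent-based routing can be sketched in plain Java as a cumulative-weight lookup (the revision names are shortened to their suffixes; Knative itself does this at the routing layer, of course):

```java
public class TrafficSplitDemo {

    // Given percent weights summing to 100, map a roll in [0, 100) to a revision,
    // the same way a weighted router distributes requests across revisions.
    static String pick(String[] revisions, int[] percents, int roll) {
        int cumulative = 0;
        for (int i = 0; i < revisions.length; i++) {
            cumulative += percents[i];
            if (roll < cumulative) {
                return revisions[i];
            }
        }
        throw new IllegalArgumentException("roll out of range: " + roll);
    }

    public static void main(String[] args) {
        String[] revisions = {"vlskg", "t9zrd", "9xhbw"};
        int[] percents = {60, 20, 20};
        System.out.println(pick(revisions, percents, 10)); // prints vlskg
        System.out.println(pick(revisions, percents, 75)); // prints t9zrd
        System.out.println(pick(revisions, percents, 95)); // prints 9xhbw
    }
}
```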

Autoscaling and scale to zero

By default, Knative supports autoscaling. We may choose between two types of targets: concurrency and requests per second (RPS). The default target is concurrency. As you probably remember, I have overridden the default value with the autoscaling.knative.dev/target annotation set to 20. So, our goal now is to verify autoscaling. To do that, we need to send many simultaneous requests to the application. Of course, the incoming traffic is distributed across the three revisions of our Knative Service.
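For comparison, switching the same Service to an RPS target takes two annotations instead of one. This fragment is a sketch based on the Knative autoscaling annotations, not a file from the example repository:

```yaml
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/metric: "rps"
        autoscaling.knative.dev/target: "150"
```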

Fortunately, we can easily simulate heavy traffic with the siege tool. We will call the GET /persons endpoint, which returns all available persons. The command below sends 150 concurrent requests.

$ siege http://sample-spring-boot-on-kubernetes-pminkows-serverless.apps.cluster-7260.7260.sandbox1734.opentlc.com/persons \
   -i -v -r 1000  -c 150 --no-parser

Under the hood, Knative still creates a Deployment and scales the number of running pods up or down. So, if you have three revisions of a single Service, there are three different Deployments. Finally, I had 10 running pods for the latest revision, which receives 60% of the traffic, and 3 and 2 running pods for the two previous revisions.

What happens if no traffic is coming to the service? Knative scales the number of running pods for all the deployments down to zero.
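Ignoring KPA details such as the panic window and target utilization, the scaling decision, including scale to zero, can be sketched as:

```java
public class KpaSketch {

    // Desired replicas ~= ceil(observed concurrency / per-pod target).
    // The real KPA also applies a target-utilization factor and a panic window,
    // so observed pod counts can be higher than this estimate.
    static int desiredReplicas(double observedConcurrency, double targetPerPod) {
        if (observedConcurrency <= 0) {
            return 0; // scale to zero when the service is idle
        }
        return (int) Math.ceil(observedConcurrency / targetPerPod);
    }

    public static void main(String[] args) {
        // 150 concurrent requests, 60% routed to the latest revision, target 20:
        System.out.println(desiredReplicas(150 * 0.6, 20)); // prints 5
        System.out.println(desiredReplicas(0, 20));         // prints 0
    }
}
```

With 150 concurrent requests and 60% routed to the latest revision, this estimate gives 5 pods; the 10 pods observed above show how the real algorithm's utilization factor and panic mode add headroom.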

Conclusion

In this article, you learned how to deploy a Spring Boot application as a Knative Service using Skaffold and Jib. I explained with examples how to create a new revision of a Service and distribute traffic across revisions. We also tested autoscaling based on concurrent requests, and scale to zero when there is no incoming traffic. You may expect more articles about Knative soon, and not only with Spring Boot 🙂

The post Spring Boot on Knative appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2021/03/01/spring-boot-on-knative/feed/ 0 9506
Spring Boot on Kubernetes with Buildpacks and Skaffold https://piotrminkowski.com/2020/12/18/spring-boot-on-kubernetes-with-buildpacks-and-skaffold/ https://piotrminkowski.com/2020/12/18/spring-boot-on-kubernetes-with-buildpacks-and-skaffold/#comments Fri, 18 Dec 2020 14:40:11 +0000 https://piotrminkowski.com/?p=9268 In this article, you will learn how to run the Spring Boot application on Kubernetes using Buildpacks and Skaffold. Since version 2.3 Spring Boot supports Buildpacks. The main goal of Buildpack is to detect how to transform a source code into a runnable image. Once we created an image we may run it on Kubernetes. […]

The post Spring Boot on Kubernetes with Buildpacks and Skaffold appeared first on Piotr's TechBlog.

]]>
In this article, you will learn how to run a Spring Boot application on Kubernetes using Buildpacks and Skaffold. Since version 2.3, Spring Boot supports Buildpacks. The main goal of a buildpack is to detect how to transform source code into a runnable image. Once we have created an image, we can run it on Kubernetes. Fortunately, we can use Skaffold to automate the whole process, starting from the source code and ending with a Kubernetes deployment. Skaffold supports builds with Cloud Native Buildpacks, which require only a local Docker daemon.

In order to deploy Spring Boot on Kubernetes, we may also use Skaffold with the Jib Maven plugin. Jib is a tool from Google that builds optimized images without a Docker daemon. To see how to integrate it with Skaffold in a Kubernetes deployment process, read the article Local Java Development on Kubernetes. Furthermore, we may include a tool called Dekorate, which generates Kubernetes manifests based on the source code. You will find interesting details about it here.

Step 1. Build a Docker image with Spring Boot and Buildpacks

We don’t have to change anything in the build process to create a Docker image of a Spring Boot application. Of course, we need to declare spring-boot-maven-plugin in Maven pom.xml as always.

<build>
   <plugins>
      <plugin>
         <groupId>org.springframework.boot</groupId>
         <artifactId>spring-boot-maven-plugin</artifactId>
         <executions>
            <execution>
               <goals>
                  <goal>build-info</goal>
               </goals>
            </execution>
         </executions>
      </plugin>
   </plugins>
</build>

Since Spring Boot 2.3 includes buildpack support directly in the Maven plugin, you may just run a single command to build a Docker image. So, let's run the following command.

$ mvn package spring-boot:build-image

By default, Spring Boot uses the Paketo Java buildpack to create Docker images. The name of the builder image is paketobuildpacks/builder:base. We can verify that in the build logs visible below.

spring-boot-buildpacks-build-image

To clarify, this step is not required to deploy our application on Kubernetes; I just wanted to demonstrate how to use Cloud Native Buildpacks with Spring Boot. However, we should remember the name of the builder image, because we will use the same builder in Step 3.

Step 2. Create Spring Boot application

I use the latest stable version of Spring Boot. Step 1 requires at least version 2.3; the next steps work with older versions as well. Moreover, we will use JDK 11 for compilation.

<parent>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-parent</artifactId>
   <version>2.4.1</version>
</parent>

<groupId>pl.piomin.samples</groupId>
<artifactId>sample-spring-boot-on-kubernetes</artifactId>
<version>1.0-SNAPSHOT</version>

<properties>
   <java.version>11</java.version>
</properties>

We will create a simple web application that exposes a REST API and connects to the Mongo database. Therefore, we need to include the following dependencies.

<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-data-mongodb</artifactId>
</dependency>

Here’s a controller class. It implements methods for adding, updating, deleting, and searching data. There are multiple find methods available.

@RestController
@RequestMapping("/persons")
public class PersonController {

   private PersonRepository repository;

   PersonController(PersonRepository repository) {
      this.repository = repository;
   }

   @PostMapping
   public Person add(@RequestBody Person person) {
      return repository.save(person);
   }

   @PutMapping
   public Person update(@RequestBody Person person) {
      return repository.save(person);
   }

   @DeleteMapping("/{id}")
   public void delete(@PathVariable("id") String id) {
      repository.deleteById(id);
   }

   @GetMapping
   public Iterable<Person> findAll() {
      return repository.findAll();
   }

   @GetMapping("/{id}")
   public Optional<Person> findById(@PathVariable("id") String id) {
      return repository.findById(id);
   }

   @GetMapping("/first-name/{firstName}/last-name/{lastName}")
   public Set<Person> findByFirstNameAndLastName(@PathVariable("firstName") String firstName,
         @PathVariable("lastName") String lastName) {
      return repository.findByFirstNameAndLastName(firstName, lastName);
   }

   @GetMapping("/age-greater-than/{age}")
   public Set<Person> findByAgeGreaterThan(@PathVariable("age") int age) {
      return repository.findByAgeGreaterThan(age);
   }

   @GetMapping("/age/{age}")
   public Set<Person> findByAge(@PathVariable("age") int age) {
      return repository.findByAge(age);
   }

}

Step 3. Configure Skaffold

Firstly, we need to create a Skaffold configuration file in the project root directory. To do that we may execute the following command.

$ skaffold init --XXenableBuildpacksInit

At a minimum, we need to set the name of the builder in the buildpacks section. Let's configure the same builder as used by Spring Boot: paketobuildpacks/builder:base. We will also override the default location of Kubernetes manifests. Skaffold will use the two YAML manifests k8s/mongodb-deployment.yaml and k8s/deployment.yaml instead of the whole k8s directory.

apiVersion: skaffold/v2beta5
kind: Config
metadata:
  name: sample-spring-boot-on-kubernetes
build:
  artifacts:
    - image: piomin/sample-spring-boot-on-kubernetes
      buildpacks:
        builder: paketobuildpacks/builder:base
  tagPolicy:
    gitCommit: {}
deploy:
  kubectl:
    manifests:
      - k8s/mongodb-deployment.yaml
      - k8s/deployment.yaml

Step 4. Deploy on Kubernetes

Finally, we may proceed to the last step: deployment. The deployment manifest is visible below. Since our application connects to the MongoDB database, we inject its address and login credentials using ConfigMap and Secret references.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-spring-boot-on-kubernetes-deployment
spec:
  selector:
    matchLabels:
      app: sample-spring-boot-on-kubernetes
  template:
    metadata:
      labels:
        app: sample-spring-boot-on-kubernetes
    spec:
      containers:
      - name: sample-spring-boot-on-kubernetes
        image: piomin/sample-spring-boot-on-kubernetes
        ports:
        - containerPort: 8080
        env:
          - name: MONGO_DATABASE
            valueFrom:
              configMapKeyRef:
                name: mongodb
                key: database-name
          - name: MONGO_USERNAME
            valueFrom:
              secretKeyRef:
                name: mongodb
                key: database-user
          - name: MONGO_PASSWORD
            valueFrom:
              secretKeyRef:
                name: mongodb
                key: database-password

Let's deploy our Spring Boot application using the following command. With the port-forward option enabled, we will be able to test the HTTP endpoints on local port 8080.

$ skaffold dev --port-forward

As you can see below, our sample application has successfully started on Kubernetes.

spring-boot-buildpacks-skaffold

Let’s verify a list of deployments.

Finally, we may send some test requests using the http://localhost:8080 address.

The post Spring Boot on Kubernetes with Buildpacks and Skaffold appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2020/12/18/spring-boot-on-kubernetes-with-buildpacks-and-skaffold/feed/ 2 9268