Canary Release on Kubernetes with Knative and Tekton
Piotr's TechBlog – Java, Spring, Kotlin, microservices, Kubernetes, containers
Tue, 29 Mar 2022 – https://piotrminkowski.com/2022/03/29/canary-release-on-kubernetes-with-knative-and-tekton/

In this article, you will learn how to prepare a canary release in your CI/CD process with Knative and Tekton. Since Knative supports running multiple versions of the same service, it seems to be the right tool for canary releases. We will use its gradual rollouts feature to shift the traffic to the latest version in progressive steps. As an exercise, we are going to compile natively (with GraalVM) and run a simple REST service built on top of Spring Boot. We will use Cloud Native Buildpacks as the build tool on Kubernetes. Let’s begin!

If you are interested in more details about Spring Boot native compilation please refer to my article Microservices on Knative with Spring Boot and GraalVM.

Prerequisites

Native compilation for Java is a memory-intensive process. Therefore, we need to reserve at least 8GB for our Kubernetes cluster. We also have to install Tekton and Knative there, so it is worth having even more memory.

1. Install Knative Serving – we will use the latest version of Knative (1.3). Go to the following site for the installation manual. Once you have done that, you can verify that it works with the following command:

$ kubectl get pods -n knative-serving

2. Install Tekton – you can go to that site for more details. However, there is just a single command to install it:

$ kubectl apply --filename https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml

Source Code

If you would like to try it yourself, you may always take a look at my source code. In order to do that, you need to clone my GitHub repository. Then go to the callme-service directory. After that, just follow my instructions.

Spring Boot for Native GraalVM

In order to expose the REST endpoints, we need to include Spring Boot Starter Web. Our service also stores data in the H2 database, so we include Spring Boot Starter Data JPA. The last dependency adds native compilation support. The current version of Spring Native is 0.11.3.

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.experimental</groupId>
  <artifactId>spring-native</artifactId>
  <version>0.11.3</version>
</dependency>

Let’s switch to the code. Here is our model class, exposed by the REST endpoint. It contains the current date, the name of the Kubernetes pod, and the version number.

@Entity
public class Callme {

   @Id
   @GeneratedValue
   private Integer id;
   @Temporal(TemporalType.TIMESTAMP)
   private Date addDate;
   private String podName;
   private String version;

   // getters, setters, constructor ...

}

There is a single endpoint that creates an event, stores it in the database, and returns it as a result. The names of the pod and the namespace are taken directly from the Kubernetes Deployment. We use the version number from the Maven pom.xml as the application version.

@RestController
@RequestMapping("/callme")
public class CallmeController {

   @Value("${spring.application.name}")
   private String appName;
   @Value("${POD_NAME}")
   private String podName;
   @Value("${POD_NAMESPACE}")
   private String podNamespace;
   @Autowired
   private CallmeRepository repository;
   @Autowired(required = false)
   BuildProperties buildProperties;

   @GetMapping("/ping")
   public String ping() {
      Callme c = repository.save(new Callme(new Date(), podName,
            buildProperties != null ? buildProperties.getVersion() : null));
      return appName +
            " v" + c.getVersion() +
            " (id=" + c.getId() + "): " +
            podName +
            " in " + podNamespace;
   }

}

In order to use the Maven version, we need to generate the build-info.properties file during the build. Therefore, we should add the build-info goal to the spring-boot-maven-plugin execution. If you would like to build a native image locally, just set the BP_NATIVE_IMAGE environment property to true in the image configuration. Then you can run the command mvn spring-boot:build-image.

<plugin>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-maven-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>build-info</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <image>
      <builder>paketobuildpacks/builder:tiny</builder>
      <env>
        <BP_NATIVE_IMAGE>true</BP_NATIVE_IMAGE>
      </env>
    </image>
  </configuration>
</plugin>

Create Tekton Pipelines

Our pipeline consists of three tasks. In the first of them, we clone the source code repository with the Spring Boot application. In the second step, we build the image natively on Kubernetes using the Cloud Native Buildpacks task. After building the image, we push it to the remote registry. Finally, we run the image on Kubernetes as a Knative Service.

[Image: knative-canary-release-pipeline]

Firstly, we need to install the Tekton git-clone task:

$ kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/master/task/git-clone/0.4/git-clone.yaml

We also need the buildpacks task that allows us to run Cloud Native Buildpacks on Kubernetes:

$ kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/master/task/buildpacks/0.3/buildpacks.yaml

Finally, we are installing the kubernetes-actions task to deploy Knative Service using the YAML manifest.

$ kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/main/task/kubernetes-actions/0.2/kubernetes-actions.yaml

All our tasks are ready. Let’s verify them:

$ kubectl get task                     
NAME                 AGE
buildpacks           1m
git-clone            1m
kubernetes-actions   33s

Finally, we are going to create a Tekton pipeline. In the buildpacks task reference, we need to set several parameters. Since we have two Maven modules in the repository, we first need to set the working directory to callme-service (1). Also, because we use Paketo Buildpacks, we change the default builder image to paketobuildpacks/builder:base (2). Finally, we need to enable a native build for Cloud Native Buildpacks by setting the environment variable BP_NATIVE_IMAGE to true (3). After building and pushing the image, we can deploy it on Kubernetes (4).

apiVersion: tekton.dev/v1alpha1
kind: Pipeline
metadata:
  name: build-spring-boot-pipeline
spec:
  params:
    - description: image URL to push
      name: image
      type: string
  tasks:
    - name: fetch-repository
      params:
        - name: url
          value: 'https://github.com/piomin/sample-spring-boot-graalvm.git'
        - name: subdirectory
          value: ''
        - name: deleteExisting
          value: 'true'
      taskRef:
        kind: Task
        name: git-clone
      workspaces:
        - name: output
          workspace: source-workspace
    - name: buildpacks
      params:
        - name: APP_IMAGE
          value: $(params.image)
        - name: SOURCE_SUBPATH # (1)
          value: callme-service
        - name: BUILDER_IMAGE # (2)
          value: 'paketobuildpacks/builder:base'
        - name: ENV_VARS # (3)
          value:
            - BP_NATIVE_IMAGE=true
      runAfter:
        - fetch-repository
      taskRef:
        kind: Task
        name: buildpacks
      workspaces:
        - name: source
          workspace: source-workspace
        - name: cache
          workspace: cache-workspace
    - name: deploy
      params:
        - name: args # (4)
          value: 
            - apply -f callme-service/k8s/
      runAfter:
        - buildpacks
      taskRef:
        kind: Task
        name: kubernetes-actions
      workspaces:
        - name: manifest-dir
          workspace: source-workspace
  workspaces:
    - name: source-workspace
    - name: cache-workspace

In order to push the image into the remote secure registry, you need to create a Secret containing your username and password.

$ kubectl create secret docker-registry docker-user-pass \
    --docker-username=<USERNAME> \
    --docker-password=<PASSWORD> \
    --docker-server=https://index.docker.io/v1/

After that, you should create a ServiceAccount that uses the newly created Secret. As you have probably noticed, our pipeline run uses that ServiceAccount.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: buildpacks-service-account
secrets:
  - name: docker-user-pass

Deploy Knative Service with gradual rollouts

Here’s the YAML manifest with our Knative Service for the 1.0 version. It is a very simple definition. The only additional thing we need to do is inject the name of the pod and the namespace into the container.

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: callme-service
spec:
  template:
    spec:
      containers:
      - name: callme
        image: piomin/callme-service:1.0
        ports:
          - containerPort: 8080
        env:
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace

In order to inject the name of the pod and the namespace into the container, we use the Downward API. This Kubernetes feature is disabled by default in Knative. To enable it, we need to add the property kubernetes.podspec-fieldref with the value enabled in the config-features ConfigMap.

apiVersion: v1
kind: ConfigMap
metadata:
  name: config-features
  namespace: knative-serving
data:
  kubernetes.podspec-fieldref: enabled

Now, let’s run our pipeline for the 1.0 version of our sample application. Before you run it, you should also create a PersistentVolumeClaim with the name tekton-workspace-pvc.

apiVersion: tekton.dev/v1alpha1
kind: PipelineRun
metadata:
  generateName: build-spring-boot-pipeline-run-
spec:
  params:
    - name: image
      value: 'piomin/callme-service:1.0'
  pipelineRef:
    name: build-spring-boot-pipeline
  serviceAccountName: buildpacks-service-account
  workspaces:
    - name: source-workspace
      persistentVolumeClaim:
        claimName: tekton-workspace-pvc
      subPath: source
    - name: cache-workspace
      persistentVolumeClaim:
        claimName: tekton-workspace-pvc
      subPath: cache

Finally, it is time to release a new version of our application – 1.1. Firstly, you should change the version number in Maven pom.xml.

We should also change the version number in the k8s/ksvc.yaml manifest. However, the most important change is the annotation serving.knative.dev/rollout-duration, which enables gradual rollouts for the Knative Service. The value 300s means that the rollout to the latest revision will take exactly 300 seconds. Knative first shifts 1% of the traffic to the new revision, and then moves the rest of the assigned traffic in equal incremental steps. In this case, it increases the traffic to the latest revision by 1% every 3 seconds.

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: callme-service
  annotations:
    serving.knative.dev/rollout-duration: "300s"
spec:
  template:
    spec:
      containers:
      - name: callme
        image: piomin/callme-service:1.1
        ports:
          - containerPort: 8080
        env:
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
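
The arithmetic behind the rollout-duration annotation can be sketched in a few lines of Java. This is a hand-rolled approximation of the schedule described above, not Knative’s actual implementation:

```java
public class RolloutCalc {

    // Approximate share of traffic routed to the latest revision after
    // `elapsed` seconds, for the given rollout duration. Knative starts
    // at 1% and then shifts the remaining 99% in equal incremental steps.
    static int trafficPercent(int elapsed, int rolloutDuration) {
        if (elapsed >= rolloutDuration) {
            return 100;
        }
        return 1 + (elapsed * 99) / rolloutDuration;
    }

    public static void main(String[] args) {
        // rollout-duration: "300s" -> roughly 1% more every 3 seconds
        System.out.println(trafficPercent(0, 300));   // 1
        System.out.println(trafficPercent(150, 300)); // 50
        System.out.println(trafficPercent(300, 300)); // 100
    }
}
```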

Verify Canary Release with Knative

Let’s verify our canary release process built with Knative and Tekton. Once you have run the pipeline for the 1.0 and 1.1 versions of our sample application, you can display the list of Knative Services. As you can see, the latest revision is callme-service-00002, but the rollout is still in progress:

[Image: knative-canary-release-services]

We can send some test requests to our Knative Service. For me, Knative is available on localhost:80, since I’m using Kubernetes on Docker Desktop. The only thing I need to do is set the URL of the service in the Host header.

$ curl http://localhost:80/callme/ping \
  -H "Host:callme-service.default.example.com"

Here are some responses. As you can see, the first two requests were processed by the 1.0 version of our application, while the last request was handled by the 1.1 version.
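
You can get a rough picture of the split by tallying the version token from a batch of such responses. Here is a minimal sketch; the sample strings below are made up, and in practice they would come from repeated curl calls like the one above:

```java
import java.util.Map;
import java.util.TreeMap;

public class CanaryCheck {

    // Extracts the version (e.g. "1.1") from a response such as
    // "callme-service v1.1 (id=3): callme-service-00002-... in default"
    static String versionOf(String response) {
        for (String token : response.split(" ")) {
            if (token.length() > 1 && token.charAt(0) == 'v'
                    && Character.isDigit(token.charAt(1))) {
                return token.substring(1);
            }
        }
        return "unknown";
    }

    public static void main(String[] args) {
        // Hypothetical responses collected during the rollout
        String[] samples = {
            "callme-service v1.0 (id=1): pod-a in default",
            "callme-service v1.1 (id=2): pod-b in default",
            "callme-service v1.1 (id=3): pod-b in default"
        };
        Map<String, Integer> counts = new TreeMap<>();
        for (String s : samples) {
            counts.merge(versionOf(s), 1, Integer::sum);
        }
        System.out.println(counts); // {1.0=1, 1.1=2}
    }
}
```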

Let’s verify the current traffic distribution between the two revisions. Currently, it is 52% for 1.1 and 48% for 1.0.

[Image: knative-canary-release-rollout]

Finally, after the rollout procedure is finished, the whole traffic is handled by the 1.1 version, and every request returns a response from the latest revision.

Final Thoughts

As you can see, the process of a canary release with Knative is very simple. You only need to set a single annotation on the Knative Service. You can compare it, for example, with Argo Rollouts, which allows us to perform progressive traffic shifting for a standard Kubernetes Deployment.

Spring Boot on Knative
Mon, 01 Mar 2021 – https://piotrminkowski.com/2021/03/01/spring-boot-on-knative/

In this article, I’ll explain what Knative is and how to use it with Spring Boot. Although Knative is a serverless platform, we can run any type of application on it (not just functions). Therefore, we are going to run a standard Spring Boot application there that exposes a REST API and connects to a database.

Knative introduces a new way of managing your applications on Kubernetes. It extends Kubernetes with some new key features. One of the most significant of them is “scale to zero”: if Knative detects that a service is not being used, it scales the number of running instances down to zero. It also provides built-in autoscaling based on concurrency or the number of requests per second. We may also take advantage of revision tracking, which is responsible for switching from one version of your application to another. With Knative you just have to focus on your core logic.
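
As a toy model of the idea (my own simplification, not Knative’s actual autoscaler algorithm), the scale-to-zero decision can be sketched like this:

```java
public class ScaleToZeroSketch {

    // Toy model: with no requests observed in the window, the desired
    // replica count drops to zero; otherwise it is the observed
    // concurrency divided by the per-pod target, rounded up.
    static int desiredReplicas(int observedConcurrency, int target) {
        if (observedConcurrency == 0) {
            return 0; // scale to zero
        }
        return (observedConcurrency + target - 1) / target;
    }

    public static void main(String[] args) {
        System.out.println(desiredReplicas(0, 100));   // 0 -> no pods running
        System.out.println(desiredReplicas(150, 100)); // 2
        System.out.println(desiredReplicas(40, 100));  // 1
    }
}
```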

All the features I described above are provided by the component called “Knative Serving”. There are also two other components: “Eventing” and “Build”. The Build component is deprecated and has been replaced by Tekton. The Eventing component deserves attention as well, but I’ll discuss it in more detail in a separate article.

Source Code

If you would like to try it yourself, you may always take a look at my source code. In order to do that, you need to clone my GitHub repository. Then you should just follow my instructions 🙂

I used the same application as in some of my previous articles about Spring Boot and Kubernetes. I just want to emphasize that you don’t have to change anything in the source code to run it on Knative. The only required change is in the YAML manifest.

Since Knative provides built-in autoscaling you may want to compare it with the horizontal pod autoscaler (HPA) on Kubernetes. To do that you may read the article Spring Boot Autoscaling on Kubernetes. If you are interested in how to easily deploy applications on Kubernetes read the following article about the Okteto platform.

Install Knative on Kubernetes

Of course, before we start Spring Boot development we need to install Knative on Kubernetes. We can do it using the kubectl CLI or an operator. You can find the detailed installation instructions here. I decided to try it on OpenShift, which is probably the fastest way: I could do it with one click using the OpenShift Serverless Operator. No matter which type of installation you choose, the further steps apply everywhere.

Using Knative CLI

This step is optional. You can deploy and manage applications on Knative with the CLI. To download the CLI, go to the site https://knative.dev/docs/install/install-kn/. Then you can deploy the application using a Docker image.

$ kn service create sample-spring-boot-on-kubernetes \
   --image piomin/sample-spring-boot-on-kubernetes:latest

We can also verify a list of running services with the following command.

$ kn service list

For more advanced deployments it will be more suitable to use a YAML manifest. We will build from the source code with Skaffold and Jib. Firstly, let’s take a brief look at our Spring Boot application.

Spring Boot application for Knative

As I mentioned before, we are going to create a typical Spring Boot REST-based application that connects to a Mongo database. The database is deployed on Kubernetes. Our model class uses the person collection in MongoDB. Let’s take a look at it.

@Document(collection = "person")
@Getter
@Setter
@AllArgsConstructor
@NoArgsConstructor
public class Person {

   @Id
   private String id;
   private String firstName;
   private String lastName;
   private int age;
   private Gender gender;
}

We use Spring Data MongoDB to integrate our application with the database. In order to simplify this integration we take advantage of its “repositories” feature.

public interface PersonRepository extends CrudRepository<Person, String> {
   Set<Person> findByFirstNameAndLastName(String firstName, String lastName);
   Set<Person> findByAge(int age);
   Set<Person> findByAgeGreaterThan(int age);
}

Our application exposes several REST endpoints for adding, searching and updating data. Here’s the controller class implementation.

@RestController
@RequestMapping("/persons")
public class PersonController {

   private PersonRepository repository;
   private PersonService service;

   PersonController(PersonRepository repository, PersonService service) {
      this.repository = repository;
      this.service = service;
   }

   @PostMapping
   public Person add(@RequestBody Person person) {
      return repository.save(person);
   }

   @PutMapping
   public Person update(@RequestBody Person person) {
      return repository.save(person);
   }

   @DeleteMapping("/{id}")
   public void delete(@PathVariable("id") String id) {
      repository.deleteById(id);
   }

   @GetMapping
   public Iterable<Person> findAll() {
      return repository.findAll();
   }

   @GetMapping("/{id}")
   public Optional<Person> findById(@PathVariable("id") String id) {
      return repository.findById(id);
   }

   @GetMapping("/first-name/{firstName}/last-name/{lastName}")
   public Set<Person> findByFirstNameAndLastName(@PathVariable("firstName") String firstName,
			@PathVariable("lastName") String lastName) {
      return repository.findByFirstNameAndLastName(firstName, lastName);
   }

   @GetMapping("/age-greater-than/{age}")
   public Set<Person> findByAgeGreaterThan(@PathVariable("age") int age) {
      return repository.findByAgeGreaterThan(age);
   }

   @GetMapping("/age/{age}")
   public Set<Person> findByAge(@PathVariable("age") int age) {
      return repository.findByAge(age);
   }

}

We inject database connection settings and credentials using environment variables. Our application exposes endpoints for liveness and readiness health checks. The readiness endpoint verifies a connection with the Mongo database. Of course, we use the built-in feature from Spring Boot Actuator for that.

spring:
  application:
    name: sample-spring-boot-on-kubernetes
  data:
    mongodb:
      uri: mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@mongodb/${MONGO_DATABASE}

management:
  endpoints:
    web:
      exposure:
        include: "*"
  endpoint:
    health:
      show-details: always
      group:
        readiness:
          include: mongo
      probes:
        enabled: true

Defining Knative Service in YAML

Firstly, we need to define a YAML manifest with the Knative Service definition. It sets an autoscaling strategy using the Knative Pod Autoscaler (KPA). In order to do that, we add the annotation autoscaling.knative.dev/target with the number of simultaneous requests that can be processed by each instance of the application. By default, it is 100; we decrease that limit to 20 requests. Of course, we also set liveness and readiness probes for the container, and we refer to the Secret to inject the MongoDB settings.

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: sample-spring-boot-on-kubernetes
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/target: "20"
    spec:
      containers:
        - image: piomin/sample-spring-boot-on-kubernetes
          livenessProbe:
            httpGet:
              path: /actuator/health/liveness
          readinessProbe:
            httpGet:
              path: /actuator/health/readiness
          env:
            - name: MONGO_DATABASE
              valueFrom:
                secretKeyRef:
                  name: mongodb
                  key: database-name
            - name: MONGO_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mongodb
                  key: database-user
            - name: MONGO_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongodb
                  key: database-password

Configure Skaffold and Jib for Knative deployment

We will use Skaffold to automate the deployment of our application on Knative. Skaffold is a command-line tool that lets you run an application on Kubernetes using a single command. You may read more about it in the article Local Java Development on Kubernetes. It can be easily integrated with the Jib Maven plugin: we just need to set jib as the build option in the build section of the Skaffold configuration. We can also define a list of YAML manifests applied during the deploy phase. The skaffold.yaml file should be placed in the project root directory. Here’s the current Skaffold configuration. As you can see, it applies the manifest with the Knative Service definition.

apiVersion: skaffold/v2beta5
kind: Config
metadata:
  name: sample-spring-boot-on-kubernetes
build:
  artifacts:
    - image: piomin/sample-spring-boot-on-kubernetes
      jib:
        args:
          - -Pjib
  tagPolicy:
    gitCommit: {}
deploy:
  kubectl:
    manifests:
      - k8s/mongodb-deployment.yaml
      - k8s/knative-service.yaml

Skaffold activates the jib Maven profile during the build. Within this profile, we place the jib-maven-plugin. Jib is useful for building images in dockerless mode.

<profile>
   <id>jib</id>
   <activation>
      <activeByDefault>false</activeByDefault>
   </activation>
   <build>
      <plugins>
         <plugin>
            <groupId>com.google.cloud.tools</groupId>
            <artifactId>jib-maven-plugin</artifactId>
            <version>2.8.0</version>
         </plugin>
      </plugins>
   </build>
</profile>

Finally, all we need to do is run the following command. It builds our application, creates and pushes a Docker image, and runs it on Knative using knative-service.yaml.

$ skaffold run

Verify Spring Boot deployment on Knative

Now, we can verify our deployment on Knative. To do that let’s execute the command kn service list as shown below. We have a single Knative Service with the name sample-spring-boot-on-kubernetes.

[Image: spring-boot-knative-services]

Then, let’s imagine we deploy three versions (revisions) of our application. To do that, let’s just make some changes in the source code and redeploy our service using skaffold run. Each run creates a new revision of our Knative Service. However, the whole traffic is forwarded to the latest revision (with the -vlskg suffix).

[Image: spring-boot-knative-revisions]

With Knative we can easily split traffic between multiple revisions of a single service. To do that, we need to add a traffic section to the Knative Service YAML configuration. We define a percentage of the whole traffic per revision, as shown below.

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: sample-spring-boot-on-kubernetes
spec:
  template:
    ...
  traffic:
    - revisionName: sample-spring-boot-on-kubernetes-vlskg
      percent: 60
    - revisionName: sample-spring-boot-on-kubernetes-t9zrd
      percent: 20
    - revisionName: sample-spring-boot-on-kubernetes-9xhbw
      percent: 20

Let’s take a look at the graphical representation of our current architecture. 60% of the traffic is forwarded to the latest revision, while each of the two previous revisions receives 20% of the traffic.

[Image: spring-boot-knative-openshift]

Autoscaling and scale to zero

By default, Knative supports autoscaling. We may choose between two types of targets: concurrency and requests-per-second (RPS). The default target is concurrency. As you probably remember, I have overridden the default value of 100 with the autoscaling.knative.dev/target annotation set to 20. So, our goal now is to verify autoscaling. To do that, we need to send many simultaneous requests to the application. Of course, the incoming traffic is distributed across the three revisions of the Knative Service.

Fortunately, we can easily simulate heavy traffic with the siege tool. We will call the GET /persons endpoint that returns all available persons. We send 150 concurrent requests with the command visible below.

$ siege http://sample-spring-boot-on-kubernetes-pminkows-serverless.apps.cluster-7260.7260.sandbox1734.opentlc.com/persons \
   -i -v -r 1000  -c 150 --no-parser

Under the hood, Knative still creates a Deployment and scales the number of running pods up or down. So, if you have three revisions of a single Service, there are three different Deployments created. Finally, I have 10 running pods for the latest revision, which receives 60% of the traffic, and 3 and 2 running pods for the two previous revisions.
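
A naive back-of-the-envelope estimate for those pod counts looks like this. Note that it understates the numbers I observed, since the real Knative Pod Autoscaler also applies a target-utilization factor and panic-mode scaling:

```java
public class RevisionLoad {

    // Naive pod estimate: concurrency divided by the per-pod target,
    // rounded up. The real KPA typically scales more aggressively.
    static int estimatePods(int concurrency, int target) {
        return (concurrency + target - 1) / target;
    }

    public static void main(String[] args) {
        int totalConcurrent = 150;         // siege -c 150
        int[] splitPercent = {60, 20, 20}; // traffic split across revisions
        int target = 20;                   // autoscaling.knative.dev/target
        for (int p : splitPercent) {
            int share = totalConcurrent * p / 100;
            System.out.println(p + "% -> " + share
                    + " concurrent requests -> ~"
                    + estimatePods(share, target) + " pods");
        }
    }
}
```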

What will happen if there is no traffic coming to the service? Knative will scale down the number of running pods for all the deployments to zero.

Conclusion

In this article, you learned how to deploy a Spring Boot application as a Knative Service using Skaffold and Jib. I explained with examples how to create a new revision of the Service and distribute traffic across revisions. We also tested autoscaling based on concurrent requests, and scale to zero in case of no incoming traffic. You may expect more articles about Knative soon! Not only with Spring Boot 🙂
