Validate Kubernetes Deployment in CI/CD with Tekton and Datree

In this article, you will learn how to use Datree to validate Kubernetes manifests in the CI/CD process with Tekton. In order to do that, we will first create a simple Java application with Quarkus. Then we will build a pipeline with a step that runs the datree CLI command. This command interacts with our account on app.datree.io, which performs validation against the policies and rules defined there. In this article, I'm describing just a specific part of the CI/CD process. To read more about the whole CI/CD process on Kubernetes, see my article about Tekton and Argo CD.

Introduction

Our pipeline consists of three steps. In the first step, it clones the source code from the Git repository. Then it builds the application using Maven. Thanks to the Quarkus Kubernetes features, we don't need to create YAML manifests ourselves. Quarkus will generate them based on the source code and configuration properties during the build. It can also build and publish images in the same step (but that's just an option, disabled by default). This feature may not be suitable for production, but we can use it here to simplify our pipeline. Anyway, once we generate the Kubernetes manifest, we may proceed to the next step – validation with Datree. Here's a picture to illustrate our process.

[Image: datree-kubernetes-pipeline]

Now, a few words about Datree. We can easily install it locally. After that, we may validate one or several YAML manifests using the datree CLI. Of course, the main goal is to include Datree's policy check as part of our CI/CD pipeline. Thanks to that, we hope to prevent Kubernetes misconfigurations from reaching production. Let's begin!

Source Code

If you would like to try it yourself, you may always take a look at my source code. In order to do that, you need to clone my GitHub repository. Then go to the person-service directory. After that, you should just follow my instructions 🙂

Using Datree with Kubernetes Manifests

As a quick intro to Datree, we will first try it locally. You need to install the datree CLI. I installed it on macOS with Homebrew.

$ brew tap datreeio/datree
$ brew install datreeio/datree/datree

You also need to have JDK and Maven on your machine in order to build the application from the source code. Now, go to the person-service directory. Build the application with the following command:

$ mvn clean package

Since our application includes the quarkus-kubernetes Maven module we don’t need to create a YAML manifest manually. Quarkus generates it during the build. You can find the generated kubernetes.yml file in the target/kubernetes directory. Now, let’s just verify it using the datree test command as shown below.

$ datree test target/kubernetes/kubernetes.yml

Here's the result. It doesn't look very good for our current configuration 🙂 What's important for now is the link to your Datree account at the bottom. Just click it and then sign up. You will see the list of your validation policies.

[Image: datree-kubernetes-local-report]

By default, there are 34 rules available in the Default policy. Not all of them are active. For example, we may enable the rule that verifies whether the owner label exists.

Moreover, we may create our own custom rule. To do that we first need to navigate to the account settings and enable the option Policy as Code. Then download the policy file.

Let’s say we would like to enable Prometheus metrics for our Deployment on Kubernetes. To enable scraping we should add the following annotation to the Deployment manifest:

metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: /q/metrics
    prometheus.io/port: "8080"

Now, let’s create the rule that verifies if those annotations have been added to the Deployment object. In order to do that, we need to add the following custom rule to the policies.yaml file. I think it is quite intuitive and doesn’t require a detailed explanation. If you are interested in more details please refer to the documentation.

customRules:
  - identifier: ENABLE_PROMETHEUS_LABELS
    name: Ensure the Prometheus labels are set
    defaultMessageOnFailure: Prometheus scraping not enabled!
    schema:
      properties:
        metadata:
          properties:
            annotations:
              properties:
                prometheus.io/scrape:
                  enum:
                    - "true"
              required:
                - prometheus.io/scrape
                - prometheus.io/path
                - prometheus.io/port
          required:
            - annotations

Finally, we need to add our custom rule to the Default policy.

policies:
  - name: Default
    isDefault: true
    rules:
      - identifier: ENABLE_PROMETHEUS_LABELS
        messageOnFailure: Prometheus scraping not enabled!
      - ...

The modified policy should be published to our Datree account using the following command:

$ datree publish policies.yaml
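
After publishing, you can rerun the local check against the updated policy. The --policy flag (the same one used later by our Tekton task) selects which policy to execute:

$ datree test target/kubernetes/kubernetes.yml --policy Default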

Create Datree Tekton Task

There are three tasks used in our example Tekton pipeline. The first two of them, git-clone and maven, are standard tasks that we can easily get from the Tekton Hub. You won't find a task dedicated to Datree in the hub. However, we can easily create it ourselves. I'll use a very minimalistic version of that task, with only the options required in our case. If you need a more advanced definition, you can find an example here.

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: datree
spec:
  description: >-
    The Datree (datree.io) task
  workspaces:
    - name: source
  params:
    - name: yamlSrc
      description: Path for the yaml files relative to the workspace path
      type: string
      default: "./*.yaml"
    - name: policy
      description: Specify which policy to execute
      type: string
      default: "Default"
    - name: token
      description: The Datree token to the account on datree.io
      type: string
  steps:
    - name: datree-test
      image: datree/datree
      workingDir: $(workspaces.source.path)
      env:
        - name: DATREE_TOKEN
          value: $(params.token)
        - name: WORKSPACE_PATH
          value: $(workspaces.source.path)
        - name: PARAM_YAMLSRC
          value: $(params.yamlSrc)
        - name: PARAM_POLICY
          value: $(params.policy)
      script: |
        #!/usr/bin/env sh
        POLICY_FLAG=""

        if [ "${PARAM_POLICY}" != "" ] ; then
          POLICY_FLAG="--policy $PARAM_POLICY"
        fi

        /datree test $WORKSPACE_PATH/$PARAM_YAMLSRC $POLICY_FLAG

Now, we can apply the task to the Kubernetes cluster. You can find the file with the task definition in the repository here: .tekton/datree-task.yaml. First, let’s create a test namespace for our current example.

$ kubectl create ns datree
$ kubectl apply -f .tekton/datree-task.yaml -n datree
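
Before composing the whole pipeline, you can optionally smoke-test the task in isolation with a TaskRun. The manifest below is just my sketch (it is not part of the repository); it assumes the same workspace PVC that the pipeline uses later, with a generated manifest already present on the volume:

apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  generateName: datree-task-run-
  namespace: datree
spec:
  taskRef:
    name: datree
  params:
    - name: yamlSrc
      value: /person-service/target/kubernetes/kubernetes.yml
    - name: token
      value: ${YOUR_TOKEN} # put your Datree token here
  workspaces:
    - name: source
      persistentVolumeClaim:
        claimName: tekton-workspace-pvc

Since the manifest uses generateName, start it with kubectl create -f rather than kubectl apply.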

Build and Run Tekton Pipeline with Datree

Now, we have all tasks required to compose our pipeline.

apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: datree-pipeline
  namespace: datree
spec:
  tasks:
    - name: git-clone
      params:
        - name: url
          value: 'https://github.com/piomin/sample-quarkus-applications.git'
        - name: userHome
          value: /tekton/home
      taskRef:
        kind: Task
        name: git-clone
      workspaces:
        - name: output
          workspace: source-dir
    - name: maven
      params:
        - name: GOALS
          value:
            - clean
            - package
        - name: CONTEXT_DIR
          value: /person-service
      runAfter:
        - git-clone
      taskRef:
        kind: Task
        name: maven
      workspaces:
        - name: source
          workspace: source-dir
        - name: maven-settings
          workspace: maven-settings
    - name: datree
      params:
        - name: yamlSrc
          value: /person-service/target/kubernetes/kubernetes.yml
        - name: policy
          value: Default
        - name: token
          value: ${YOUR_TOKEN} # put your Datree token here
      runAfter:
        - maven
      taskRef:
        kind: Task
        name: datree
      workspaces:
        - name: source
          workspace: source-dir
  workspaces:
    - name: source-dir
    - name: maven-settings

Before running the pipeline you need to obtain your token from the Datree account. Once again, go to your settings and copy the token.

Now, you need to set it as the value of the token parameter of the datree task. We also need to pass the location of the Kubernetes manifest inside the workspace. As I mentioned before, it is automatically generated by Quarkus under the path target/kubernetes/kubernetes.yml. In order to run the Tekton pipeline on Kubernetes, we should create the PipelineRun object.

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: datree-pipeline-run-
  namespace: datree
  labels:
    tekton.dev/pipeline: datree-pipeline
spec:
  pipelineRef:
    name: datree-pipeline
  serviceAccountName: pipeline
  workspaces:
    - name: source-dir
      persistentVolumeClaim:
        claimName: tekton-workspace-pvc
    - configMap:
        name: maven-settings
      name: maven-settings
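
The source-dir workspace above is backed by a PersistentVolumeClaim named tekton-workspace-pvc, which has to exist in the datree namespace before the run starts. A minimal claim could look like this (the storage size is my assumption):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tekton-workspace-pvc
  namespace: datree
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi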

That's all. Let's just run our pipeline by creating the Tekton PipelineRun object. Since the manifest uses generateName, we create it with kubectl create instead of kubectl apply.

$ kubectl create -f .tekton/pipeline-run.yaml -n datree

Personally, I'm using OpenShift for running Tekton pipelines. That's why I can easily verify the result with the OpenShift Console.

Let’s just see how it looks. The pipeline has failed due to Kubernetes manifest validation errors detected by the Datree task. That’s exactly what we wanted to achieve. Now, let’s try to fix some errors to make the pipeline run finish successfully.

[Image: datree-kubernetes-pipeline-run]

Customize Kubernetes Deployment with Quarkus

We will analyze a report generated by the Datree task issue by issue. But before we do that, let’s see the Deployment file generated by Quarkus during the build.

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    app.quarkus.io/commit-id: 130b6957fa6b21fac87fe65a40f4dee75e0e39c3
    app.quarkus.io/build-timestamp: 2022-02-17 - 12:12:10 +0000
  labels:
    app.kubernetes.io/name: person-service
    app.kubernetes.io/version: "1.0"
  name: person-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: person-service
      app.kubernetes.io/version: "1.0"
  template:
    metadata:
      annotations:
        app.quarkus.io/commit-id: 130b6957fa6b21fac87fe65a40f4dee75e0e39c3
        app.quarkus.io/build-timestamp: 2022-02-17 - 12:12:10 +0000
      labels:
        app.kubernetes.io/name: person-service
        app.kubernetes.io/version: "1.0"
    spec:
      containers:
        - env:
            - name: KUBERNETES_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  key: database-user
                  name: person-db
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: database-password
                  name: person-db
            - name: POSTGRES_DB
              valueFrom:
                secretKeyRef:
                  key: database-name
                  name: person-db
          image: piomin/person-service:1.0
          imagePullPolicy: Always
          name: person-service
          ports:
            - containerPort: 8080
              name: http
              protocol: TCP

Let’s take a look at a report generated by the Datree task after analyzing the manifest shown above.

(1) Ensure each container has a configured memory / CPU request – to fix that, we will add the following two properties to the application.properties file:

quarkus.kubernetes.resources.requests.memory=64Mi
quarkus.kubernetes.resources.requests.cpu=250m

(2) Ensure each container has a configured memory / CPU limit – that's a very similar situation to the issue from point 1, but this time related to memory and CPU limits. We need to add the following properties to the application.properties file:

quarkus.kubernetes.resources.limits.memory=512Mi
quarkus.kubernetes.resources.limits.cpu=500m

(3) Ensure each container has a configured liveness and readiness probe – we just need to add a single dependency to the Maven pom.xml to enable health checks with Quarkus:

<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-smallrye-health</artifactId>
</dependency>
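
With this dependency in place, Quarkus exposes liveness and readiness endpoints under /q/health, which back the probes in the generated manifest shown later. You can check them locally while the application is running:

$ curl http://localhost:8080/q/health/live
$ curl http://localhost:8080/q/health/ready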

(4) Ensure Deployment has more than one replica configured – let's set 2 replicas for our application:

quarkus.kubernetes.replicas = 2

(5) Ensure workload has a configured owner label – we can also use the Quarkus Kubernetes module to add a label to the Deployment manifest:

quarkus.kubernetes.labels.owner = piotr.minkowski

(6) Ensure the Prometheus labels are set – finally, our custom rule for checking Prometheus annotations. Let's add the module that automatically exposes the Prometheus endpoint for the Quarkus application and adds the annotations to the Deployment manifest:

<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-smallrye-metrics</artifactId>
</dependency>
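
This extension exposes metrics at /q/metrics, which matches the prometheus.io/path annotation required by our custom rule. You can verify it locally:

$ curl http://localhost:8080/q/metrics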

If you run the pipeline once again, it should finish successfully.

Here’s the final version of the Kubernetes Deployment analyzed with Datree.

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    app.quarkus.io/commit-id: 65b6ffded40ecac3297ae77c63c148920efedf0f
    app.quarkus.io/build-timestamp: 2022-02-18 - 07:57:34 +0000
    prometheus.io/scrape: "true"
    prometheus.io/path: /q/metrics
    prometheus.io/port: "8080"
    prometheus.io/scheme: http
  labels:
    owner: piotr.minkowski
    app.kubernetes.io/name: person-service
    app.kubernetes.io/version: "1.0"
  name: person-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: person-service
      app.kubernetes.io/version: "1.0"
  template:
    metadata:
      annotations:
        app.quarkus.io/commit-id: 65b6ffded40ecac3297ae77c63c148920efedf0f
        app.quarkus.io/build-timestamp: 2022-02-18 - 07:57:34 +0000
        prometheus.io/scrape: "true"
        prometheus.io/path: /q/metrics
        prometheus.io/port: "8080"
        prometheus.io/scheme: http
      labels:
        owner: piotr.minkowski
        app.kubernetes.io/name: person-service
        app.kubernetes.io/version: "1.0"
    spec:
      containers:
        - env:
            - name: KUBERNETES_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  key: database-user
                  name: person-db
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: database-password
                  name: person-db
            - name: POSTGRES_DB
              valueFrom:
                secretKeyRef:
                  key: database-name
                  name: person-db
          image: pminkows/person-service:1.0
          imagePullPolicy: Always
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /q/health/live
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 0
            periodSeconds: 30
            successThreshold: 1
            timeoutSeconds: 10
          name: person-service
          ports:
            - containerPort: 8080
              name: http
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /q/health/ready
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 0
            periodSeconds: 30
            successThreshold: 1
            timeoutSeconds: 10
          resources:
            limits:
              cpu: 1000m
              memory: 512Mi
            requests:
              cpu: 250m
              memory: 64Mi

Knative Eventing with Kafka and Quarkus

In this article, you will learn how to run eventing applications on Knative using Kafka and Quarkus. Previously, I described the same approach for Kafka and Spring Cloud. If you want to compare the two, read my article Knative Eventing with Kafka and Spring Cloud. We will deploy exactly the same architecture. However, instead of Spring Cloud Function we will use Quarkus Funqy. Also, Spring Cloud Stream is replaced with the Quarkus Kafka support. Before we start, let's clarify some things.

Knative and Quarkus Concepts

Quarkus supports Knative in several ways. First of all, we may use the Quarkus Kubernetes module to simplify deployment on Knative. We can also use the Quarkus Funqy Knative Events extension to route and process cloud events within functions. That's not all. Quarkus supports a serverless functional style. With the Quarkus Funqy module, we can write functions deployable to various FaaS platforms (including Knative). These functions can be invoked through HTTP. Finally, we may integrate our application with Kafka topics using annotations from the Quarkus Kafka extension.

The Quarkus Funqy Knative Events module is based on the Knative broker and triggers. Since we will use KafkaSource instead of a broker and trigger, we won't include that module. However, we can still take advantage of Quarkus Funqy and its HTTP binding.

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. Then you should just follow my instructions.

As I mentioned before, we will use the same architecture and scenario as in my previous article about Knative eventing. Let's briefly describe it.

Today we will implement an eventual consistency pattern (also known as the SAGA pattern). How does it work? The sample system consists of three services. The order-service creates a new order that is related to customers and products. That order is sent to a Kafka topic. Then, our two other applications, customer-service and product-service, receive the order event. After that, they perform a reservation. The customer-service reserves the order's amount on the customer's account. Meanwhile, the product-service reserves the number of products specified in the order. Both of these services send a response to the order-service through the Kafka topic. If the order-service receives positive reservations from both services, it confirms the order. Then, it sends an event with that information. Both customer-service and product-service receive the event and confirm the reservations. You can verify it in the picture below.

[Image: quarkus-knative-eventing-arch]

Prerequisites

There are several requirements we need to meet before starting. I described them all in my previous article about Knative Eventing. Here's just a brief reminder:

  1. A Kubernetes cluster with at least version 1.17. I'm using a local cluster. If you use a remote cluster, replace dev.local in the image name with your Docker account name.
  2. Install Knative Serving and Eventing on your cluster. You may find the detailed installation instructions here.
  3. Install the Kafka Eventing Broker. Here's the link to the releases site. You don't need everything – we will only use the KafkaSource and KafkaBinding CRDs.
  4. Install Kafka cluster with the Strimzi operator. I installed it in the kafka namespace. The name of my cluster is my-cluster.
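
One more note on topics: if your Kafka cluster does not auto-create them, you can declare the order-events and reserve-events topics used below as Strimzi KafkaTopic resources. Here's a sketch for one of them, assuming the my-cluster name and kafka namespace from point 4 (partition and replica counts are my assumptions, and the API version may differ depending on your Strimzi release):

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
  name: order-events
  namespace: kafka
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 1
  replicas: 1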

Step 1. Installing Kafka Knative components

Assuming you have already installed all the required elements to run Knative Eventing on your Kubernetes cluster, we may create some components dedicated to our applications. You may find the YAML manifests with the object declarations in the k8s directory inside each application's directory. Firstly, let's create a KafkaBinding. It is responsible for injecting the address of the Kafka cluster into the application container. Thanks to the KafkaBinding, that address is visible inside the container as the KAFKA_BOOTSTRAP_SERVERS environment variable. Here's an example of the YAML declaration for the customer-saga application. We should create similar objects for the two other applications.

apiVersion: bindings.knative.dev/v1beta1
kind: KafkaBinding
metadata:
  name: kafka-binding-customer-saga
spec:
  subject:
    apiVersion: serving.knative.dev/v1
    kind: Service
    name: customer-saga
  bootstrapServers:
    - my-cluster-kafka-bootstrap.kafka:9092

In the next step, we create the KafkaSource object. It reads events from a particular topic and passes them to the consumer. It calls the HTTP POST endpoint exposed by the application. We can override the default context path of the HTTP endpoint. For the customer-saga, the target URL is /reserve. It should receive events from the order-events topic. Because both customer-saga and product-saga listen for events from the order-events topic, we need to create a similar KafkaSource object for product-saga as well.

apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: kafka-source-orders-customer
spec:
  bootstrapServers:
    - my-cluster-kafka-bootstrap.kafka:9092
  topics:
    - order-events
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: customer-saga
    uri: /reserve

On the other hand, the order-saga listens for events on the reserve-events topic. If you want to verify our scenario once again please refer to the diagram in the Source Code section. This time the target URL is /confirm.

apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: kafka-source-orders-confirm
spec:
  bootstrapServers:
    - my-cluster-kafka-bootstrap.kafka:9092
  topics:
    - reserve-events
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: order-saga
    uri: /confirm

Let's verify the list of Kafka sources. In our case, there is a single KafkaSource per application. Note that the Kafka sources won't become ready until we deploy our Quarkus applications on Knative.
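
You can list them with kubectl (the plural form of the KafkaSource CRD is kafkasources):

$ kubectl get kafkasources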

[Image: quarkus-knative-eventing-kafkasource]

Step 2. Integrating Quarkus with Kafka

In order to integrate Quarkus with Apache Kafka, we may use the SmallRye Reactive Messaging library. Thanks to that, we may define an input and output topic for each method using annotations. The messages are serialized to JSON. We can also automatically expose the Kafka connection status in the health check. Here's the list of dependencies we need to include in the Maven pom.xml.

<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-jackson</artifactId>
</dependency>
<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-smallrye-reactive-messaging-kafka</artifactId>
</dependency>
<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-smallrye-health</artifactId>
</dependency>

Before we start with the source code, we need to provide some configuration settings in the application.properties file. Of course, Kafka requires a connection URL to the cluster. We use the environment variable injected by the KafkaBinding object. Also, the output topic name should be configured. Here’s a list of required properties for the order-saga application.

kafka.bootstrap.servers = ${KAFKA_BOOTSTRAP_SERVERS}

mp.messaging.outgoing.order-events.connector = smallrye-kafka
mp.messaging.outgoing.order-events.topic = order-events
mp.messaging.outgoing.order-events.value.serializer = io.quarkus.kafka.client.serialization.ObjectMapperSerializer

Finally, we may switch to the code. Let's start with order-saga. It will continuously send orders to the order-events topic. Those events are received by both the customer-saga and product-saga applications. The method responsible for generating and sending events returns a reactive stream using Mutiny's Multi. It sends an event every second. We need to annotate the method with the @Outgoing annotation, passing the name of the output defined in the application properties. Also, the @Broadcast annotation indicates that the event is dispatched to all subscribers. Before sending, every order is persisted in a database (we use the H2 in-memory database).

@ApplicationScoped
@Slf4j
public class OrderPublisher {

   private static int num = 0;

   @Inject
   private OrderRepository repository;
   @Inject
   private UserTransaction transaction;

   @Outgoing("order-events")
   @Broadcast
   public Multi<Order> orderEventsPublish() {
      return Multi.createFrom().ticks().every(Duration.ofSeconds(1))
           .map(tick -> {
              Order o = new Order(++num, num%10+1, num%10+1, 100, 1, OrderStatus.NEW);
              try {
                 transaction.begin();
                 repository.persist(o);
                 transaction.commit();
              } catch (Exception e) {
                 log.error("Error in transaction", e);
              }

              log.info("Order published: {}", o);
              return o;
           });
   }

}

Step 3. Handling Knative events with Quarkus Funqy

Ok, in the previous step we already implemented the part of the code responsible for sending events to the Kafka topic. We also have the KafkaSource that is responsible for dispatching events from the Kafka topic to the application's HTTP endpoint. Now, we just need to handle them. It is very simple with Quarkus Funqy. It allows us to create functions according to the serverless FaaS approach. We can also easily bind each function to an HTTP endpoint with the Quarkus Funqy HTTP extension. Let's include it in our dependencies.

 <dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-funqy-http</artifactId>
 </dependency>

In order to create a function with Quarkus Funqy, we just need to annotate the particular method with @Funq. The name of the method is reserve, so it is automatically bound to the HTTP endpoint POST /reserve. It takes a single input parameter, which represents the incoming order and is automatically deserialized from JSON.
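
When the application runs locally, you can exercise such a function directly over HTTP. Here's a sketch of a test request; the JSON field names are my assumption, inferred from the getters used in the code samples below:

$ curl -X POST http://localhost:8080/reserve \
    -H "Content-Type: application/json" \
    -d '{"id":1,"customerId":1,"productId":1,"amount":100,"productsCount":1,"status":"NEW"}'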

In the fragment of code visible below, we implement order handling in the customer-saga application. Once it receives an order, it performs a reservation on the customer account. Then it needs to send a response to the order-saga. To do that, we may use Quarkus reactive messaging support once again. We define an Emitter object that allows us to send a single event to the topic. We may use it inside a method that does not return any output to be sent to a topic (with @Outgoing). The Emitter bean should be annotated with @Channel, which works similarly to @Outgoing. We also need to define an output topic related to the name of the channel.

@Slf4j
public class OrderReserveFunction {

   @Inject
   private CustomerRepository repository;
   @Inject
   @Channel("reserve-events")
   Emitter<Order> orderEmitter;

   @Funq
   public void reserve(Order order) {
      log.info("Received order: {}", order);
      doReserve(order);
   }

   private void doReserve(Order order) {
      Customer customer = repository.findById(order.getCustomerId());
      log.info("Customer: {}", customer);
      if (order.getStatus() == OrderStatus.NEW) {
         customer.setAmountReserved(customer.getAmountReserved() + order.getAmount());
         customer.setAmountAvailable(customer.getAmountAvailable() - order.getAmount());
         order.setStatus(OrderStatus.IN_PROGRESS);
         log.info("Order reserved: {}", order);
         orderEmitter.send(order);
      } else if (order.getStatus() == OrderStatus.CONFIRMED) {
         customer.setAmountReserved(customer.getAmountReserved() - order.getAmount());
      }
      repository.persist(customer);
   }
}

Here are the configuration properties for the integration between Kafka and the Emitter. The same configuration properties should be created for both customer-saga and product-saga.

kafka.bootstrap.servers = ${KAFKA_BOOTSTRAP_SERVERS}

mp.messaging.outgoing.reserve-events.connector = smallrye-kafka
mp.messaging.outgoing.reserve-events.topic = reserve-events
mp.messaging.outgoing.reserve-events.value.serializer = io.quarkus.kafka.client.serialization.ObjectMapperSerializer

Finally, let’s take a look at the implementation of the Quarkus function inside the product-saga application. It also sends a response to the reserve-events topic using the Emitter object. It handles incoming orders and performs a reservation for the requested number of products.

@Slf4j
public class OrderReserveFunction {

   @Inject
   private ProductRepository repository;

   @Inject
   @Channel("reserve-events")
   Emitter<Order> orderEmitter;

   @Funq
   public void reserve(Order order) {
      log.info("Received order: {}", order);
      doReserve(order);
   }

   private void doReserve(Order order) {
      Product product = repository.findById(order.getProductId());
      log.info("Product: {}", product);
      if (order.getStatus() == OrderStatus.NEW) {
         product.setReservedItems(product.getReservedItems() + order.getProductsCount());
         product.setAvailableItems(product.getAvailableItems() - order.getProductsCount());
         order.setStatus(OrderStatus.IN_PROGRESS);
         orderEmitter.send(order);
      } else if (order.getStatus() == OrderStatus.CONFIRMED) {
         product.setReservedItems(product.getReservedItems() - order.getProductsCount());
      }
      repository.persist(product);
   }
}

Step 4. Deploying Quarkus applications on Knative

Finally, we can deploy all our applications on Knative. To simplify that process, we may use Quarkus Kubernetes support. It is able to automatically generate deployment manifests based on the source code and application properties. Quarkus also supports building images with Jib. So first, let's add the following dependencies to the Maven pom.xml.

 <dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-kubernetes</artifactId>
 </dependency>
 <dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-container-image-jib</artifactId>
 </dependency>

In the next step, we need to add some configuration settings to the application.properties file. To enable automatic deployment on Kubernetes, the property quarkus.kubernetes.deploy must be set to true. Then we should change the target platform to Knative. Thanks to that, Quarkus will generate a Knative Service instead of a standard Kubernetes Deployment. The last property, quarkus.container-image.group, is responsible for setting the name of the image owner group. For local development with Knative, we should set the dev.local value there.

quarkus.kubernetes.deploy = true
quarkus.kubernetes.deployment-target = knative
quarkus.container-image.group = dev.local

After setting all the values visible above, we just need to execute the Maven build to deploy the application.

$ mvn clean package
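
Besides deploying, the build also writes the generated manifests to the target/kubernetes directory (knative.yml and knative.json). Stripped to its essence, the generated Knative Service looks roughly like this (the service name matches the application, while the exact image tag depends on your Maven artifactId and version – the values below are my assumptions):

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: order-saga
spec:
  template:
    spec:
      containers:
        - image: dev.local/order-saga:1.0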

After running the Maven build for all the applications, let's verify the list of Knative Services.
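
You can do that with kubectl, using the ksvc short name, or with the kn CLI:

$ kubectl get ksvc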

Once the order-saga application starts, it begins sending orders continuously. It also receives order events sent by customer-saga and product-saga. Those events are processed by the Quarkus function. Here are the logs printed by order-saga.

Final Thoughts

As you see, we can easily implement and deploy Quarkus applications on Knative. Quarkus provides several extensions that simplify integration with the Knative Eventing model and Kafka broker. We can use Quarkus Funqy to implement the serverless FaaS approach or SmallRye Reactive Messaging to integrate with Apache Kafka. You can compare that Quarkus support with Spring Boot in my previous article: Knative Eventing with Kafka and Spring Cloud.

Guide to Quarkus on Kubernetes

Quarkus is usually described as a Kubernetes-native Java framework. It allows us to automatically generate Kubernetes resources based on defaults and user-provided configuration. It also provides an extension for building and pushing container images. Quarkus can create a container image and push it to a registry before deploying the application to the target platform. It also provides an extension that allows developers to use Kubernetes ConfigMaps as a configuration source, without having to mount them into the pod. We may use the fabric8 Kubernetes Client directly to interact with the cluster, for example during JUnit tests.
In this guide, you will learn how to:
In this guide, you will learn how to:

  • Use the Quarkus Dekorate extension to automatically generate Kubernetes manifests based on the source code and configuration
  • Build and push images to a Docker registry with the Jib extension
  • Deploy your application on Kubernetes without any manually created YAML in one click
  • Use Quarkus Kubernetes Config to inject configuration properties from ConfigMap

This guide is the second in a series about the Quarkus framework. If you are interested in an introduction to building Quarkus REST applications with Kotlin, you may refer to my article Guide to Quarkus with Kotlin.

Source Code

The source code with the sample Quarkus applications is available on GitHub. First, you need to clone the following repository: https://github.com/piomin/sample-quarkus-applications.git. Then, you need to go to the employee-service directory. We use the same repository as in my previous article about Quarkus.

1. Dependencies

Quarkus does not implement mechanisms for generating Kubernetes manifests, deploying them on the platform, or building images. It adds some logic on top of existing tools. To enable the Dekorate and Jib extensions, we should include the following dependencies.

<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-kubernetes</artifactId>
</dependency>
<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-container-image-jib</artifactId>
</dependency>

Jib builds optimized images for Java applications without a Docker daemon and without deep mastery of Docker best practices. It is available as plugins for Maven and Gradle, and as a Java library. Dekorate is a Java library that makes generating and decorating Kubernetes manifests as simple as adding a dependency to your project. It may generate manifests based on the source code, annotations, and configuration properties.

2. Preparation

In the first part of my guide to Kotlin, we were running our application in development mode with an embedded H2 database. In this part of the tutorial, we will integrate our application with Postgres deployed on Kubernetes. To do that, we first need to change the configuration settings for the data source. The H2 database will be active only in dev and test modes. The configuration of the PostgreSQL data source is based on environment variables.


# kubernetes
quarkus.datasource.db-kind=postgresql
quarkus.datasource.username=${POSTGRES_USER}
quarkus.datasource.password=${POSTGRES_PASSWORD}
quarkus.datasource.jdbc.url=jdbc:postgresql://${POSTGRES_HOST}:5432/${POSTGRES_DB}
# dev
%dev.quarkus.datasource.db-kind=h2
%dev.quarkus.datasource.username=sa
%dev.quarkus.datasource.password=password
%dev.quarkus.datasource.jdbc.url=jdbc:h2:mem:testdb
# test
%test.quarkus.datasource.db-kind=h2
%test.quarkus.datasource.username=sa
%test.quarkus.datasource.password=password
%test.quarkus.datasource.jdbc.url=jdbc:h2:mem:testdb

3. Configure Kubernetes extension

With the Quarkus Kubernetes extension, we may customize the behavior of the manifest generator. To do that, we need to provide configuration settings with the prefix quarkus.kubernetes.*. There are quite a few options, like defining labels, annotations, environment variables, Secret and ConfigMap references, or mounting volumes. First, let's take a look at the Secret and ConfigMap prepared for Postgres.

apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  labels:
    app: postgres
data:
  POSTGRES_DB: quarkus
  POSTGRES_USER: quarkus
  POSTGRES_HOST: postgres
---
apiVersion: v1
kind: Secret
metadata:
  name: postgres-secret
  labels:
    app: postgres
data:
  POSTGRES_PASSWORD: YWRtaW4xMjM=
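
Note that Secret values are Base64-encoded: YWRtaW4xMjM= decodes to admin123. You can produce such a value yourself:

$ echo -n 'admin123' | base64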

In this fragment of configuration, besides a simple label and annotation, we add references to all the keys inside postgres-config and postgres-secret.

quarkus.kubernetes.labels.app-type=demo
quarkus.kubernetes.annotations.app-type=demo
quarkus.kubernetes.env.secrets=postgres-secret
quarkus.kubernetes.env.configmaps=postgres-config

4. Build image and deploy

Before executing the build and deployment, we need to apply the Postgres manifests. Here's the Deployment definition for Postgres.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:latest
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              valueFrom:
                configMapKeyRef:
                  key: POSTGRES_DB
                  name: postgres-config
            - name: POSTGRES_USER
              valueFrom:
                configMapKeyRef:
                  key: POSTGRES_USER
                  name: postgres-config
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: POSTGRES_PASSWORD
                  name: postgres-secret
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb
      volumes:
        - name: postgredb
          persistentVolumeClaim:
            claimName: postgres-claim

Let's apply it on Kubernetes together with the required ConfigMap, Secret, PersistentVolume, and PersistentVolumeClaim. All the objects are available inside the example repository in the file employee-service/k8s/postgres-deployment.yaml.

$ kubectl apply -f employee-service/k8s/postgres-deployment.yaml

After deploying Postgres, we may proceed to the main task. In order to build a Docker image with the application, we need to enable the option quarkus.container-image.build during the Maven build. If you also want to deploy and run a container with the application on your local Kubernetes instance, you need to enable the option quarkus.kubernetes.deploy.

$ mvn clean package -Dquarkus.container-image.build=true -Dquarkus.kubernetes.deploy=true

If your Kubernetes cluster is located in a hosted cloud, you should push the image to a remote Docker registry before deployment. To do that, we should also activate the option quarkus.container-image.push during the Maven build. If you do not push to the default Docker registry, you have to set the parameter quarkus.container-image.registry (for example, quarkus.container-image.registry=gcr.io) inside the application.properties file. The only thing I need to set for building images is the following property, which is the same as my login to the docker.io site.

quarkus.container-image.group=piomin

After running the required Maven command, our application is deployed on Kubernetes. Let's take a look at what happened during the Maven build. Here's a fragment of the logs from that build. You can see that the Quarkus extension generated two files, kubernetes.yaml and kubernetes.json, inside the target/kubernetes directory. Then it proceeded to build a Docker image with our application. Because we didn't specify any base image, it takes the default one for Java 11 – fabric8/java-alpine-openjdk11-jre.

[Image: quarkus-build-image]

Let's take a look at the Deployment definition automatically generated by Quarkus.

  1. It adds some annotations, like the port and path of the metrics endpoint used by Prometheus to monitor the application, and enables scraping. It also adds the Git commit id, the repository URL, and our custom annotation defined in application.properties.
  2. It adds labels with the application name, version (taken from Maven pom.xml), and our custom label app-type.
  3. It injects Kubernetes namespace name into the container.
  4. It injects the reference to the postgres-secret defined in application.properties.
  5. It injects the reference to the postgres-config defined in application.properties.
  6. The name of the image is automatically created. It is based on Maven artifactId and version.
  7. The definitions of the liveness and readiness probes are generated if the Maven module quarkus-smallrye-health is present.

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations: # (1)
    prometheus.io/path: /metrics
    prometheus.io/port: 8080
    app.quarkus.io/commit-id: f6ae37288ed445177f23291c921c6099cfc58c6e
    app.quarkus.io/vcs-url: https://github.com/piomin/sample-quarkus-applications.git
    app.quarkus.io/build-timestamp: 2020-08-10 - 13:22:32 +0000
    app-type: demo
    prometheus.io/scrape: "true"
  labels: # (2)
    app.kubernetes.io/name: employee-service
    app.kubernetes.io/version: 1.1
    app-type: demo
  name: employee-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: employee-service
      app.kubernetes.io/version: 1.1
  template:
    metadata:
      annotations:
        prometheus.io/path: /metrics
        prometheus.io/port: 8080
        app.quarkus.io/commit-id: f6ae37288ed445177f23291c921c6099cfc58c6e
        app.quarkus.io/vcs-url: https://github.com/piomin/sample-quarkus-applications.git
        app.quarkus.io/build-timestamp: 2020-08-10 - 13:22:32 +0000
        app-type: demo
        prometheus.io/scrape: "true"
      labels:
        app.kubernetes.io/name: employee-service
        app.kubernetes.io/version: 1.1
        app-type: demo
    spec:
      containers:
      - env:
        - name: KUBERNETES_NAMESPACE # (3)
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        envFrom:
        - secretRef: # (4)
            name: postgres-secret
        - configMapRef: # (5)
            name: postgres-config
        image: piomin/employee-service:1.1 # (6)
        imagePullPolicy: IfNotPresent
        livenessProbe: # (7)
          failureThreshold: 3
          httpGet:
            path: /health/live
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 0
          periodSeconds: 30
          successThreshold: 1
          timeoutSeconds: 10
        name: employee-service
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /health/ready
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 0
          periodSeconds: 30
          successThreshold: 1
          timeoutSeconds: 10
      serviceAccount: employee-service

Once the image has been built, it is available in the local registry. Quarkus automatically deploys it to the current cluster using the already generated Kubernetes manifests.

[Image: quarkus-build-maven]

Here's the list of pods in the default namespace.

[Image: quarkus-pods]

5. Using Kubernetes Config extension

With the Kubernetes Config extension, you can use ConfigMaps as a configuration source without having to mount them into the pod with the application. To use that extension, we need to include the following Maven dependency.


<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-kubernetes-config</artifactId>
</dependency>

This extension works directly with the Kubernetes API using the fabric8 KubernetesClient. That's why we should set the proper permissions for the ServiceAccount. Fortunately, all the required configuration is automatically generated by the Quarkus Kubernetes extension. The RoleBinding object is applied automatically if the quarkus-kubernetes-config module is present.

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  annotations:
    prometheus.io/path: /metrics
    prometheus.io/port: 8080
    app.quarkus.io/commit-id: a5da459af01637657ebb0ec3a606eb53d13b8524
    app.quarkus.io/vcs-url: https://github.com/piomin/sample-quarkus-applications.git
    app.quarkus.io/build-timestamp: 2020-08-10 - 14:25:20 +0000
    app-type: demo
    prometheus.io/scrape: "true"
  labels:
    app.kubernetes.io/name: employee-service
    app.kubernetes.io/version: 1.1
    app-type: demo
  name: employee-service:view
roleRef:
  kind: ClusterRole
  apiGroup: rbac.authorization.k8s.io
  name: view
subjects:
- kind: ServiceAccount
  name: employee-service

Here’s our example ConfigMap that contains a single property property1.


apiVersion: v1
kind: ConfigMap
metadata:
  name: employee-config
data:
  application.properties: |-
    property1=one

The same property is defined inside application.properties available on the classpath, but there it has a different value.

property1=test

Before deploying a new version of the application, we need to add the following properties. The first of them enables Kubernetes ConfigMap injection, while the second specifies the name of the injected ConfigMap.

quarkus.kubernetes-config.enabled=true
quarkus.kubernetes-config.config-maps=employee-config

Finally, we just need to implement a simple endpoint that injects and returns the configuration property.

@ConfigProperty(name = "property1")
lateinit var property1: String

@GET
@Path("/property1")
fun property1(): String = property1

The properties obtained from the ConfigMap have a higher priority than any properties of the same name that are found in application.properties available on the classpath. Let’s test it.

[Image: quarkus-config]
