Manage Kubernetes Operators with ArgoCD – Piotr's TechBlog
https://piotrminkowski.com/2023/05/05/manage-kubernetes-operators-with-argocd/
Fri, 05 May 2023 11:59:32 +0000

The post Manage Kubernetes Operators with ArgoCD appeared first on Piotr's TechBlog.

In this article, you will learn how to install and configure operators on Kubernetes with ArgoCD automatically. A Kubernetes operator is a method of packaging, deploying, and managing applications on Kubernetes. It has its own lifecycle managed by the OLM. It also uses custom resources (CR) to manage applications and their components. The Kubernetes operator watches a CR object and takes actions to ensure the current state matches the desired state of that resource. Assuming we want to manage our Kubernetes cluster in the GitOps way, we want to keep the list of operators, their configuration, and CR objects definitions in the Git repository. Here comes Argo CD.

In this article, I'm describing several more advanced Argo CD features. If you are looking for the basics, you can find a lot of other articles about Argo CD on my blog. For example, you may read about Kubernetes CI/CD with Tekton and ArgoCD in the following article.

Introduction

The main goal of this exercise is to run a scenario in which we can automatically install and use operators on Kubernetes in the GitOps way. Therefore, the state of the Git repository should be automatically applied to the target Kubernetes cluster. We will define a single Argo CD Application that performs all the required steps. In the first step, it will trigger the operator installation process. It may take some time, since we need to install the controller application and the Kubernetes CRDs. Then we may define some CR objects to run our apps on the cluster.

We cannot create a CR object before installing the operator. Fortunately, with ArgoCD we can divide the sync process into multiple separate phases. This ArgoCD feature is called sync waves. In order to proceed to the next phase, ArgoCD first needs to finish the previous sync wave. ArgoCD checks the health status of all objects created during a particular phase. If all of those checks succeed, the phase is considered finished. Argo CD provides built-in health check implementations for several standard Kubernetes types. However, in this exercise, we will have to override the health check for the main operator CR – the Subscription object.

Source Code

If you would like to try it yourself, you may always take a look at my source code. In order to do that, you need to clone my GitHub repository. After that, go to the global directory. Then you should just follow my instructions. Let's begin.

Prerequisites

Before starting the exercise, you need to have a running Kubernetes cluster with ArgoCD and Operator Lifecycle Manager (OLM) installed. You can install Argo CD using the Helm chart or with the operator. For the installation details, please refer to the Argo CD docs.
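If you need a quick start, both prerequisites can also be installed with the commands from their official docs. This is just one possible way (the upstream install manifest and release script); the versions below are examples, so pin whatever release you need:

```shell
# Install Argo CD into the argocd namespace (official install manifest)
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# Install Operator Lifecycle Manager (OLM) using its release script
# (v0.28.0 is an example version)
curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.28.0/install.sh -o install.sh
bash install.sh v0.28.0
```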

Install Operators with Argo CD

In the first step, we will define templates responsible for the operators' installation. If you have OLM installed on the Kubernetes cluster, that process comes down to the creation of the Subscription object (1). In some cases, we also have to create the OperatorGroup object. It provides multitenant configuration to OLM-installed operators. An Operator group selects target namespaces in which to generate the required RBAC access for its members. Before installing into a namespace other than openshift-operators, we have to create the OperatorGroup in that namespace (2). We use the argocd.argoproj.io/sync-wave annotation to configure the sync phases (3). The lower the value of that parameter, the higher the priority of the object (before the OperatorGroup we need to create the namespace).

{{- range .Values.subscriptions }}
apiVersion: operators.coreos.com/v1alpha1 # (1)
kind: Subscription
metadata:
  name: {{ .name }}
  namespace: {{ .namespace }}
  annotations:
    argocd.argoproj.io/sync-wave: "2" # (3)
spec:
  channel: {{ .channel }}
  installPlanApproval: Automatic
  name: {{ .name }}
  source: {{ .source }}
  sourceNamespace: openshift-marketplace
---
{{- if ne .namespace "openshift-operators" }}
apiVersion: v1
kind: Namespace
metadata:
  name: {{ .namespace }}
  annotations:
    argocd.argoproj.io/sync-wave: "1" # (3)
---
apiVersion: operators.coreos.com/v1alpha2 # (2)
kind: OperatorGroup
metadata:
  name: {{ .name }}
  namespace: {{ .namespace }}
  annotations:
    argocd.argoproj.io/sync-wave: "2" # (3)
spec: {}
---
{{- end }}
{{- end }}

I'm using Helm for templating the YAML manifests. Thanks to that, we can apply several Subscription and OperatorGroup objects at once. Our Helm template iterates over the subscriptions list. In order to define a list of operators, we just need to provide a similar configuration in the values.yaml file visible below. The following operators are installed in this example: Kiali, Service Mesh (Istio), AMQ Streams (Strimzi Kafka), Patch Operator, and Serverless (Knative).

subscriptions:
  - name: kiali-ossm
    namespace: openshift-operators
    channel: stable
    source: redhat-operators
  - name: servicemeshoperator
    namespace: openshift-operators
    channel: stable
    source: redhat-operators
  - name: amq-streams
    namespace: openshift-operators
    channel: stable
    source: redhat-operators
  - name: patch-operator
    namespace: patch-operator
    channel: alpha
    source: community-operators
  - name: serverless-operator
    namespace: openshift-serverless
    channel: stable
    source: redhat-operators
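Before pushing the chart to Git, it is worth rendering it locally to inspect the generated Subscription and OperatorGroup manifests. Assuming the chart lives in the global directory, a quick check could look like this:

```shell
# Render the chart with its default values.yaml and show the generated Subscriptions
helm template ./global | grep -A 3 "kind: Subscription"
```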

Override Argo CD Health Check

As I mentioned before, we need to override the default Argo CD health check for the Subscription CR. Normally, Argo CD just creates the Subscription objects and doesn't wait until the operator is installed on the cluster. To detect the end of the installation, we need to verify the value of the status.state field. If it equals AtLatestKnown, the operator has been successfully installed. In that case, we can set the value of the Argo CD health check to Healthy. We can also override the default health check message to display the current version of the operator (the status.currentCSV field). If you installed Argo CD using the Helm chart, you can provide your health check implementation directly in the argocd-cm ConfigMap.

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/name: argocd-cm
    app.kubernetes.io/part-of: argocd
data:
  resource.customizations: |
    operators.coreos.com/Subscription:
      health.lua: |
        hs = {}
        hs.status = "Progressing"
        hs.message = ""
        if obj.status ~= nil then
          if obj.status.state ~= nil then
            if obj.status.state == "AtLatestKnown" then
              hs.message = obj.status.state .. " - " .. obj.status.currentCSV
              hs.status = "Healthy"
            end
          end
        end
        return hs
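You can evaluate the Lua script against a live Subscription manifest without waiting for a full sync. The argocd CLI provides an admin subcommand for exactly this; the resource name and file paths below are examples from my setup:

```shell
# Dump a Subscription object created by the sync to a local file
kubectl get subscription amq-streams -n openshift-operators -o yaml > subscription.yaml

# Dump the Argo CD ConfigMap containing the custom health check
kubectl get configmap argocd-cm -n argocd -o yaml > argocd-cm.yaml

# Evaluate the custom health check against the saved resource
argocd admin settings resource-overrides health ./subscription.yaml --argocd-cm-path ./argocd-cm.yaml
```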

For those of you who installed Argo CD using the operator (including me), there is another way to override the health check. We need to provide it inside the extraConfig field in the ArgoCD CR.

apiVersion: argoproj.io/v1alpha1
kind: ArgoCD
metadata:
  name: openshift-gitops
  namespace: openshift-gitops
spec:
  ...
  extraConfig:
    resource.customizations: |
      operators.coreos.com/Subscription:
        health.lua: |
          hs = {}
          hs.status = "Progressing"
          hs.message = ""
          if obj.status ~= nil then
            if obj.status.state ~= nil then
              if obj.status.state == "AtLatestKnown" then
                hs.message = obj.status.state .. " - " .. obj.status.currentCSV
                hs.status = "Healthy"
              end
            end
          end
          return hs

After the steps described so far, we have achieved two things. We divided our sync process into multiple phases with the Argo CD sync waves feature. We also forced Argo CD to wait until the operator installation process is finished before going to the next phase. Let's proceed to the next step – creating custom resources.

Create Custom Resources with Argo CD

In the previous steps, we successfully installed Kubernetes operators with ArgoCD. Now, it is time to use them. We will do everything in a single synchronization process. In the previous phase (wave=2), we installed the Kafka operator (Strimzi). In this phase, we will run the Kafka cluster using the CRD provided by the Strimzi project. To be sure that we apply it after the Strimzi operator installation, we will do it in the third phase (1). That's not all. Since our CRD is created by the operator, it is not part of the sync process. By default, Argo CD performs a dry run for every resource during the sync and fails with the error "the server could not find the requested resource" if the CRD does not exist yet. To avoid that, we will skip the dry run for missing resource types (2) during the sync.

apiVersion: v1
kind: Namespace
metadata:
  name: kafka
  annotations:
    argocd.argoproj.io/sync-wave: "1"
---
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  namespace: kafka
  annotations:
    argocd.argoproj.io/sync-wave: "3" # (1)
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true # (2)
spec:
  kafka:
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
      default.replication.factor: 3
      min.insync.replicas: 2
      inter.broker.protocol.version: '3.2'
    storage:
      type: persistent-claim
      size: 5Gi
      deleteClaim: true
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
    version: 3.2.3
    replicas: 3
  entityOperator:
    topicOperator: {}
    userOperator: {}
  zookeeper:
    storage:
      type: persistent-claim
      deleteClaim: true
      size: 2Gi
    replicas: 3
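Once the third sync wave has been applied, you can verify from the command line that the Strimzi operator has reconciled the cluster. A possible check (adjust the timeout to your environment):

```shell
# Wait until the Strimzi operator reports the Kafka CR as Ready
kubectl wait kafka/my-cluster --for=condition=Ready --timeout=300s -n kafka

# List the broker and ZooKeeper pods created by the operator
kubectl get pods -n kafka
```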

We can also install Knative Serving on our cluster, since we previously installed the Knative operator. As before, we set wave=3 and skip the dry run on missing resources during the sync.

apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
  annotations:
    argocd.argoproj.io/sync-wave: "3"
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
spec: {}

Finally, let’s create the Argo CD Application that manages all the defined manifests and automatically applies them to the Kubernetes cluster. We need to define the source Git repository and the directory containing our YAMLs (global).

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cluster-config
spec:
  destination:
    server: 'https://kubernetes.default.svc'
  project: default
  source:
    path: global
    repoURL: 'https://github.com/piomin/openshift-cluster-config.git'
    targetRevision: HEAD
    helm:
      valueFiles:
        - values.yaml
  syncPolicy:
    automated:
      selfHeal: true

Helm Unit Testing

Just to ensure that we defined all the Helm templates properly, we can include some unit tests. We will use the helm-unittest plugin for that. We place the test sources inside the global/tests directory. Here's our test defined in the subscription_tests.yaml file:

suite: test main
values:
  - ./values/test.yaml
templates:
  - templates/subscriptions.yaml
chart:
  version: 1.0.0+test
  appVersion: 1.0
tests:
  - it: subscription default ns
    template: templates/subscriptions.yaml
    documentIndex: 0
    asserts:
      - equal:
          path: metadata.namespace
          value: openshift-operators
      - equal:
          path: metadata.name
          value: test1
      - equal:
          path: spec.channel
          value: ch1
      - equal:
          path: spec.source
          value: src1
      - isKind:
          of: Subscription
      - isAPIVersion:
          of: operators.coreos.com/v1alpha1
  - it: subscription custom ns
    template: templates/subscriptions.yaml
    documentIndex: 1
    asserts:
      - equal:
          path: metadata.namespace
          value: custom-ns
      - equal:
          path: metadata.name
          value: test2
      - equal:
          path: spec.channel
          value: ch2
      - equal:
          path: spec.source
          value: src2
      - isKind:
          of: Subscription
      - isAPIVersion:
          of: operators.coreos.com/v1alpha1
  - it: custom ns
    template: templates/subscriptions.yaml
    documentIndex: 2
    asserts:
      - equal:
          path: metadata.name
          value: custom-ns
      - isKind:
          of: Namespace
      - isAPIVersion:
          of: v1

We need to define test values:

subscriptions:
  - name: test1
    namespace: openshift-operators
    channel: ch1
    source: src1
  - name: test2
    namespace: custom-ns
    channel: ch2
    source: src2

We can also prepare a build process for our repository. Here's a sample CircleCI configuration for that. If you are interested in more details about Helm unit testing and releasing, please refer to my article.

version: 2.1

orbs:
  helm: circleci/helm@2.0.1

jobs:
  build:
    docker:
      - image: cimg/base:2023.04
    steps:
      - checkout
      - helm/install-helm-client
      - run:
          name: Install Helm unit-test
          command: helm plugin install https://github.com/helm-unittest/helm-unittest
      - run:
          name: Run unit tests
          command: helm unittest global

workflows:
  helm_test:
    jobs:
      - build
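The same checks can be run locally before pushing, which is usually faster than waiting for CI:

```shell
# Install the helm-unittest plugin (one-time step)
helm plugin install https://github.com/helm-unittest/helm-unittest

# Run all test suites found under global/tests
helm unittest global
```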

Synchronize Configuration with Argo CD

Once we create the new Argo CD Application responsible for synchronization, the process starts. In the first step, Argo CD creates the required namespaces. Then, it proceeds to the operators' installation phase. It may take some time.

Once ArgoCD installs all the Kubernetes operators, you can verify their health checks. Here's the value of a health check during the installation phase:

(Figure: kubernetes-operators-argocd-healthcheck)

Here’s the result after successful installation.

Now, Argo CD proceeds to the CR creation phase. It runs the Kafka cluster and enables Knative. Let's switch to the OpenShift cluster console. We can display a list of installed operators:

(Figure: kubernetes-operators-argocd-operators)

We can also verify if the Kafka cluster is running in the kafka namespace:

Final Thoughts

With Argo CD we can manage the whole Kubernetes cluster configuration. It supports Helm charts, but there is another way of installing apps on Kubernetes – operators. I focused on the features and approach that allow us to install and manage operators in the GitOps way. I showed a practical example of how to use sync waves and apply custom resources not managed directly by Argo CD. With all those mechanisms, we can easily handle Kubernetes operators with ArgoCD.

Manage Kubernetes Cluster with Terraform and Argo CD – Piotr's TechBlog
https://piotrminkowski.com/2022/06/28/manage-kubernetes-cluster-with-terraform-and-argo-cd/
Tue, 28 Jun 2022 07:52:23 +0000

The post Manage Kubernetes Cluster with Terraform and Argo CD appeared first on Piotr's TechBlog.

In this article, you will learn how to create a Kubernetes cluster with Terraform and then manage it with Argo CD. Terraform is very useful for automating infrastructure. On the other hand, Argo CD helps us implement GitOps and continuous delivery for our applications. It seems that we can successfully combine both these tools. Let’s consider how they can help us to work with Kubernetes in the GitOps style.

For a basic introduction to using Argo CD on Kubernetes, you may refer to this article.

Introduction

First of all, I would like to define the whole cluster and store its configuration in Git. I can't use only Argo CD to achieve it, because Argo CD must run on an existing Kubernetes cluster. That's why I need a tool that is able to create a cluster and then install Argo CD on it. In that case, Terraform seems to be a natural choice. On the other hand, I don't want to use Terraform to manage apps running on Kubernetes. It is perfect for a one-time activity like creating a cluster, but not for continuous tasks like app delivery and configuration management.

Here’s the list of things we are going to do:

  1. In the first step, we will create a local Kubernetes cluster using Terraform
  2. Then we will install OLM (Operator Lifecycle Manager) on the cluster. We need it to install Kafka with Strimzi (Step 5)
  3. We will use Terraform to install Argo CD from the Helm chart and create a single Argo CD Application responsible for the whole cluster configuration based on Git
  4. After that, Argo CD Application installs Strimzi Operator, creates Argo CD Project dedicated to Kafka installation and Argo CD Application that runs Kafka on Kubernetes
  5. Finally, the Argo CD Application automatically creates all the CRD objects required for running Kafka

The most important thing here is that everything should happen after running the terraform apply command. Terraform installs Argo CD, and then Argo CD installs Kafka, which is our sample app in that scenario. Let’s see how it works.

(Figure: terraform-kubernetes-arch)

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. After that, you should just follow my instructions. Let’s begin.

1. Create Kubernetes Cluster with Terraform

In order to easily create a Kubernetes cluster, we will use Kind. There is a dedicated Terraform provider for Kind available here. Of course, you can run Kubernetes on any cloud, and you will also find Terraform providers for that.

Our cluster consists of three worker nodes and a single master node. We need three worker nodes because we will finally install a Kafka cluster running in three instances. Each of them will be deployed on a different node. Here's our Terraform main.tf file for that step. We need to define the latest version of the tehcyx/kind provider (which is 0.0.12 at the time of writing) in the required_providers section. The name of our cluster is cluster-1. We also enable the wait_for_ready option to proceed to the next steps only after the cluster is ready.

terraform {
  required_providers {
    kind = {
      source = "tehcyx/kind"
      version = "0.0.12"
    }
  }
}

provider "kind" {}

resource "kind_cluster" "default" {
  name = "cluster-1"
  wait_for_ready = true
  kind_config {
    kind = "Cluster"
    api_version = "kind.x-k8s.io/v1alpha4"

    node {
      role = "control-plane"
    }

    node {
      role = "worker"
      image = "kindest/node:v1.23.4"
    }

    node {
      role = "worker"
      image = "kindest/node:v1.23.4"
    }

    node {
      role = "worker"
      image = "kindest/node:v1.23.4"
    }
  }
}

Just to verify the configuration, you can run the command terraform init, and then terraform plan. After that, you could apply the configuration using terraform apply, but as you probably remember, we will do it once the whole configuration is ready, to apply everything with a single command.

2. Install OLM on Kubernetes

As I mentioned before, Operator Lifecycle Manager (OLM) is a prerequisite for installing the Strimzi Kafka operator. You can find the latest release of OLM here. In fact, the installation comes down to applying two YAML manifests to Kubernetes. The first of them, crds.yaml, contains the CRD definitions. The second of them, olm.yaml, provides all the Kubernetes objects required to install OLM. Let's just copy both these files into a local directory inside our Terraform repository. In order to apply them to Kubernetes, we first need to enable the Terraform kubectl provider.

terraform {
  ...

  required_providers {
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = ">= 1.7.0"
    }
  }
}

Why do we use the kubectl provider instead of the official Terraform Kubernetes provider? The crds.yaml file contains pretty large CRDs that go over the size limits of client-side apply. We can easily solve that problem by enabling server-side apply on the kubectl provider. The next issue is that there are multiple Kubernetes objects defined inside both YAML files. The kubectl provider supports that via the for_each parameter.

data "kubectl_file_documents" "crds" {
  content = file("olm/crds.yaml")
}

resource "kubectl_manifest" "crds_apply" {
  for_each  = data.kubectl_file_documents.crds.manifests
  yaml_body = each.value
  wait = true
  server_side_apply = true
}

data "kubectl_file_documents" "olm" {
  content = file("olm/olm.yaml")
}

resource "kubectl_manifest" "olm_apply" {
  depends_on = [kubectl_manifest.crds_apply]
  for_each  = data.kubectl_file_documents.olm.manifests
  yaml_body = each.value
}

Let's consider the last case in this section. Before applying any YAML, we are creating a new Kubernetes cluster in the previous step. Therefore, we cannot use an existing kubeconfig context. Fortunately, we can use the output attributes of the kind_cluster resource, which contain the Kubernetes API address and auth credentials.

provider "kubectl" {
  host = "${kind_cluster.default.endpoint}"
  cluster_ca_certificate = "${kind_cluster.default.cluster_ca_certificate}"
  client_certificate = "${kind_cluster.default.client_certificate}"
  client_key = "${kind_cluster.default.client_key}"
}

3. Install Argo CD with Helm

This is the last step on the Terraform side. We are going to install Argo CD using its Helm chart. We also need to create a single Argo CD Application responsible for the cluster management. This Application will install the Kafka Strimzi operator and create other Argo CD Applications used, e.g., for running the Kafka cluster. In the first step, we need to do the same thing as before: define a provider and set the Kubernetes cluster address. Here's our definition in Terraform:

provider "helm" {
  kubernetes {
    host = "${kind_cluster.default.endpoint}"
    cluster_ca_certificate = "${kind_cluster.default.cluster_ca_certificate}"
    client_certificate = "${kind_cluster.default.client_certificate}"
    client_key = "${kind_cluster.default.client_key}"
  }
}

The tricky thing here is that we need to create the Application just after the Argo CD installation. By default, Terraform verifies whether the required CRD objects exist on Kubernetes. In that case, it requires the Application CRD from argoproj.io/v1alpha1. Fortunately, we can use a Helm chart parameter that allows us to pass the declaration of additional Applications. In order to do that, we have to set a custom values.yaml file. Here's the Terraform declaration for the Argo CD installation:

resource "helm_release" "argocd" {
  name  = "argocd"

  repository       = "https://argoproj.github.io/argo-helm"
  chart            = "argo-cd"
  namespace        = "argocd"
  version          = "4.9.7"
  create_namespace = true

  values = [
    file("argocd/application.yaml")
  ]
}

In order to create an initial Application, we need to use the Helm chart's server.additionalApplications parameter, as shown below. Here's the whole argocd/application.yaml file. To simplify, the configuration used by Argo CD is located in the same repository as the Terraform configuration. You can find all the required YAMLs in the argocd/manifests directory.

server:
  additionalApplications:
   - name: cluster-config
     namespace: argocd
     project: default
     source:
       repoURL: https://github.com/piomin/sample-terraform-kubernetes-argocd.git
       targetRevision: HEAD
       path: argocd/manifests/cluster
       directory:
         recurse: true
     destination:
       server: https://kubernetes.default.svc
     syncPolicy:
       automated:
         prune: false
         selfHeal: false

4. Configure Kubernetes cluster with Argo CD

The last two steps are managed by Argo CD. We have successfully completed the Kubernetes cluster installation process. Now, it's time to install our first application there. Our example app is Kafka, so firstly we need to install the Kafka Strimzi operator. To do that, we just need to define a Subscription object managed by the previously installed OLM. The definition is available in the repository as the strimzi.yaml file.

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-strimzi-kafka-operator
  namespace: operators
spec:
  channel: stable
  name: strimzi-kafka-operator
  source: operatorhubio-catalog
  sourceNamespace: olm
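After Argo CD syncs this Subscription, OLM resolves and installs the operator. You can track the progress through the OLM resources themselves:

```shell
# The InstallPlan created by OLM for the subscription
kubectl get installplan -n operators

# The ClusterServiceVersion should eventually reach the Succeeded phase
kubectl get csv -n operators
```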

We could configure a lot of aspects related to the whole cluster here. However, we just need to create a dedicated Argo CD Project and Application for Kafka configuration. Here’s our Project definition:

apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: kafka
  namespace: argocd
spec:
  clusterResourceWhitelist:
    - group: '*'
      kind: '*'
  destinations:
    - name: '*'
      namespace: '*'
      server: '*'
  sourceRepos:
    - '*'

Let’s place the kafka ArgoCD Application inside the newly created Project.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: kafka
  namespace: argocd
spec:
  destination:
    namespace: kafka
    server: https://kubernetes.default.svc
  project: kafka
  source:
    path: argocd/manifests/kafka
    repoURL: https://github.com/piomin/sample-terraform-kubernetes-argocd.git
    targetRevision: HEAD
  syncPolicy:
    syncOptions:
      - CreateNamespace=true

5. Create Kafka Cluster using GitOps

Finally, the last part of our exercise. We will create and run a 3-node Kafka cluster on Kind. Here's the Kafka object definition we store in Git. We are setting 3 replicas for both Kafka and ZooKeeper (used by the Kafka cluster). This manifest is available in the repository under the path argocd/manifests/kafka/cluster.yaml. We are exposing the cluster on the 9092 (plain) and 9093 (TLS) ports. The Kafka cluster has persistent storage mounted into its pods as PVCs.

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    version: 3.2.0
    logging:
      type: inline
      loggers:
        kafka.root.logger.level: "INFO"
    config:
      auto.create.topics.enable: "false"
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
      default.replication.factor: 3
      min.insync.replicas: 2
      inter.broker.protocol.version: "3.2"
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
    storage:
      type: jbod
      volumes:
        - id: 0
          type: persistent-claim
          size: 30Gi
          deleteClaim: true
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 10Gi
      deleteClaim: true
  entityOperator:
    topicOperator: {}
    userOperator: {}

We will also define a single KafkaTopic inside the argocd/manifests/kafka/cluster.yaml manifest.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 10
  replicas: 3

Execution on Kubernetes

Terraform

We have already prepared all the required scripts. Let's proceed to the execution phase. If you still haven't cloned the Git repository, it's time to do it:

$ git clone https://github.com/piomin/sample-terraform-kubernetes-argocd.git
$ cd sample-terraform-kubernetes-argocd

Firstly, let’s initialize our working directory containing Terraform configuration:

$ terraform init

Once we do it, we may preview a list of actions to perform:

$ terraform plan

You should receive pretty large output in response. Here's the last part of my result:

If everything looks fine and there are no errors, we may proceed to the next (final) step. Let's begin the process:

$ terraform apply

All 24 objects should be successfully applied. Here’s the last part of the logs:

Now, you should have your cluster ready and running. Let’s display a list of Kind clusters:

$ kind get clusters
cluster-1

The name of our cluster is cluster-1, but the name of the Kubernetes context is kind-cluster-1:

Let's display a list of applications deployed on the Kind cluster. You should have at least Argo CD and OLM installed. After some time, Argo CD applies the configuration stored in the Git repository. Then, you should see the Kafka Strimzi operator installed in the operators namespace.

(Figure: terraform-kubernetes-apps)

Argo CD

After that, we can go to the Argo CD web console. To access it easily on a local port, let's enable port forwarding:

$ kubectl port-forward service/argocd-server 8443:443 -n argocd

Now, you can open the Argo CD web console at https://localhost:8443. The default username is admin. The password is auto-generated by Argo CD. You can find it inside the Kubernetes Secret argocd-initial-admin-secret.

$ kubectl get secret argocd-initial-admin-secret -n argocd --template='{{.data.password}}' | base64 -d

On macOS, use base64 -D instead of base64 -d.
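In recent Argo CD versions, the CLI also offers a dedicated subcommand for this, so you can skip the manual base64 decoding:

```shell
# Prints the auto-generated admin password (available since Argo CD v2.6)
argocd admin initial-password -n argocd
```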

Here's the list of our Argo CD Applications. The cluster-config Application has the auto-sync option enabled. It installs the Strimzi operator and creates the kafka Argo CD Application. I could also enable auto-sync for the kafka Application, but just for demo purposes, I left manual approval there. So, let's run Kafka on our cluster. To do that, click the Sync button on the kafka tile.

(Figure: terraform-kubernetes-argocd)

Once you do, the Kafka installation starts. Finally, you should have the whole cluster ready and running. Each Kafka and ZooKeeper node runs on a different Kubernetes worker node:

That's all. We created everything using a single Terraform command and one click in the Argo CD web console. Of course, we could enable auto-sync for the kafka Application, so we wouldn't even need to log in to the Argo CD web console for the final effect.
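As a final check, you can confirm from the command line that the brokers really landed on different worker nodes:

```shell
# The NODE column should show a different worker for each Kafka broker
kubectl get pods -n kafka -o wide

# The Kafka CR as reported by the Strimzi operator
kubectl get kafkas -n kafka
```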
