Continuous Delivery Archives - Piotr's TechBlog
https://piotrminkowski.com/tag/continuous-delivery/
Java, Spring, Kotlin, microservices, Kubernetes, containers

A Book: Hands-On Java with Kubernetes
https://piotrminkowski.com/2025/12/08/a-book-hands-on-java-with-kubernetes/
Mon, 08 Dec 2025 16:05:58 +0000

The post A Book: Hands-On Java with Kubernetes appeared first on Piotr's TechBlog.

My book about Java and Kubernetes has finally been published! The book “Hands-On Java with Kubernetes” is the result of several months of work and, in fact, a summary of my experiences over the last few years of research and development. In this post, I want to share my thoughts on this book, explain why I chose to write and publish it, and briefly outline its content and concept. To purchase the latest version, go to this link.

Here is a brief overview of all my published books.

Motivation

I won’t hide that this post is mainly directed at my blog subscribers and people who enjoy reading it and value my writing style. As you know, all posts and content on my blog, along with sample application repositories on GitHub, are always accessible to you for free. Over the past eight years, I have worked to publish high-quality content on my blog, and I plan to keep doing so. It is a part of my life, a significant time commitment, but also a lot of fun and a hobby.

I want to explain why I decided to write this book, why now, and why in this way. But first, a bit of background. I wrote my previous book, which was also my first one, over seven years ago. It focused on topics I was mainly involved with at the time, specifically Spring Boot and Spring Cloud. Since then, a lot of time has passed, and much has changed – not only in the technology itself but also a little in my personal life. Today, I am more involved in Kubernetes and container topics than, for example, Spring Cloud. For years, I have been helping various organizations transition from traditional application architectures to cloud-native models based on Kubernetes. Of course, Java remains my main area of expertise. Besides Spring Boot, I also really like the Quarkus framework. You can read a lot about both in my book on Kubernetes.

Based on my experience over the past few years, involving development teams is a key factor in the success of the Kubernetes platform within an organization. Ultimately, it is the applications developed by these teams that are deployed there. For developers to be willing to use Kubernetes, it must be easy for them to do so. That is why I persuade organizations to remove barriers to using Kubernetes and to design it in a way that makes it easier for development teams. On my blog and in this book, I aim to demonstrate how to quickly and simply launch applications on Kubernetes using frameworks such as Spring Boot and Quarkus.

It’s an unusual time to publish a book. AI agents are producing more and more technical content online. More often than not, instead of grabbing a book, people turn to an AI chatbot for a quick answer, though not always the best one. Still, a book that thoroughly introduces a topic and offers a step-by-step guide remains highly valuable.

Content of the Book

This book demonstrates that Java is an excellent choice for building applications that run on Kubernetes. In the first chapter, I’ll show you how to quickly build your application, create its image, and run it on Kubernetes without writing a single line of YAML or Dockerfile. This chapter also covers the minimum Kubernetes architecture you must understand to manage applications effectively in this environment. The second chapter, on the other hand, demonstrates how to effectively organize your local development environment to work with a Kubernetes cluster. You’ll see several options for running a distribution of your cluster locally and learn about the essential set of tools you should have. The third chapter outlines best practices for building applications on the Kubernetes platform. Most of the presented requirements are supported by simple examples and explanations of the benefits of meeting them. The fourth chapter presents the most valuable tools for the inner development loop with Kubernetes. After reading the first four chapters, you will understand the main Kubernetes components related to application management, enabling you to navigate the platform efficiently. You’ll also learn to leverage Spring Boot and Quarkus features to adapt your application to Kubernetes requirements.

In the following chapters, I will focus on the benefits of migrating applications to Kubernetes. The first area to cover is security. Chapter five discusses mechanisms and tools for securing applications running in a cluster. Chapter six describes Spring and Quarkus projects that enable native integration with the Kubernetes API from within applications. In chapter seven, you’ll learn about the service mesh tool and the benefits of using it to manage HTTP traffic between microservices. Chapter eight addresses the performance and scalability of Java applications in a Kubernetes environment. The next chapter demonstrates how to design a CI/CD process that runs entirely within the cluster, leveraging Kubernetes-native tools for pipeline building and the GitOps approach. This book also covers AI. In the final chapter, you’ll learn how to run a simple Java application that integrates with an AI model deployed on Kubernetes.

Publication

I decided to publish my book on Leanpub. Leanpub is a platform for writing, publishing, and selling books, especially popular among technical content authors. I previously published a book with Packt, but honestly, I was largely on my own during the writing process. The writing process on Leanpub is similar, but the platform offers several key advantages over traditional publishers like Packt. First, it allows you to update content collaboratively with readers and keep it current. Even though my book is finished, I don’t rule out adding more chapters, such as one on AI on Kubernetes. I also look forward to your feedback and plan to continuously improve the content and the examples in the repository. Overall, this has been another exciting experience related to publishing technical content.

And when you buy such a book, you can be sure that most of the royalties go to me as the author, unlike with other publishers, where most of the royalties go to them as promoters. So, I’m looking forward to improving my book with you!

Conclusion

My book aims to bring together all the most interesting elements surrounding Java application development on Kubernetes. It is intended not only for developers but also for architects and DevOps teams who want to move to the Kubernetes platform.

The Art of Argo CD ApplicationSet Generators with Kubernetes
https://piotrminkowski.com/2025/03/20/the-art-of-argo-cd-applicationset-generators-with-kubernetes/
Thu, 20 Mar 2025 09:40:46 +0000

The post The Art of Argo CD ApplicationSet Generators with Kubernetes appeared first on Piotr's TechBlog.

This article will teach you how to use the Argo CD ApplicationSet generators to manage your Kubernetes cluster using a GitOps approach. An Argo CD ApplicationSet is a Kubernetes resource that allows us to manage and deploy multiple Argo CD Applications. It dynamically generates multiple Argo CD Applications based on a given template. As a result, we can deploy applications across multiple Kubernetes clusters, create applications for different environments (e.g., dev, staging, prod), and manage many repositories or branches. Everything can be easily achieved with a minimal source code effort.

Argo CD ApplicationSet supports several different generators. In this article, we will focus on the Git generator type. It generates Argo CD Applications based on the directory structure or branch changes in a Git repository. It has two subtypes: the Git directory generator and the Git file generator. If you are interested in other Argo CD ApplicationSet generators, you can find some articles on my blog. For example, the following post shows how to use the List generator to promote images between environments. You can also find a post about the Cluster Decision Resource generator, which shows how to spread applications dynamically across multiple Kubernetes clusters.
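To picture how the Git directory generator works before we get to the full setup, here is a minimal, illustrative ApplicationSet (the repository URL and paths are placeholders, not part of this exercise). It creates one Application per subdirectory of apps/:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: demo-apps
  namespace: argocd
spec:
  goTemplate: true        # enables the {{ .path.basename }} dot notation
  generators:
    - git:
        repoURL: https://github.com/example/config-repo.git
        revision: HEAD
        directories:
          - path: apps/*  # one Application per matched directory
  template:
    metadata:
      name: '{{ .path.basename }}'
    spec:
      project: default
      source:
        repoURL: https://github.com/example/config-repo.git
        targetRevision: HEAD
        path: '{{ .path.path }}'
      destination:
        server: https://kubernetes.default.svc
        namespace: '{{ .path.basename }}'
```

The controller re-evaluates the generator on every repository change, so adding a new subdirectory under apps/ is enough to get a new Application.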

Source Code

Feel free to use my source code if you’d like to try it out yourself. To do that, clone my sample GitHub repository. Then go to the appset-helm-demo directory, which contains the whole configuration required for this exercise, and follow my instructions.

Argo CD Installation

Argo CD is the only tool we need to install on our Kubernetes cluster for this exercise. We can use the official Helm chart to install it on Kubernetes. First, let’s add the following Helm repository:

helm repo add argo https://argoproj.github.io/argo-helm
ShellSession

After that, we can install Argo CD in the current Kubernetes cluster in the argocd namespace using the following command:

helm install my-argo-cd argo/argo-cd -n argocd
ShellSession

I use OpenShift in this exercise. With the OpenShift Console, I can easily install Argo CD on the cluster using the OpenShift GitOps operator.

Once it is installed, we can access the Argo CD dashboard.

We can sign in there using OpenShift credentials.

Motivation

Our goal in this exercise is to deploy and run some applications (a simple Java app and a Postgres database) on Kubernetes with minimal source code effort. These two applications merely illustrate how to create a standard that can easily be applied to any type of application deployed on our cluster. In this standard, a directory structure determines how and where our applications are deployed on Kubernetes. My example configuration is stored in a single Git repository. However, we can easily extend it with multiple repositories, where Argo CD switches between the central repository and other Git repositories containing the configuration for concrete applications.

Here’s a directory structure and files for deploying our two applications. Both the custom app and Postgres database are deployed in three environments: dev, test, and prod. We use Helm charts for deploying them. Each environment directory contains a Helm values file with installation parameters. The configuration distinguishes two different types of installation: apps and components. Each app is installed using the same Helm chart dedicated to a standard deployment. Each component is installed using a custom Helm chart provided by that component. For example, for Postgres, we will use the following Bitnami chart.

.
├── apps
│   ├── aaa-1
│   │   └── basic
│   │       ├── prod
│   │       │   └── values.yaml
│   │       ├── test
│   │       │   └── values.yaml
│   │       ├── uat
│   │       │   └── values.yaml
│   │       └── values.yaml
│   ├── aaa-2
│   └── aaa-3
└── components
    └── aaa-1
        └── postgresql
            ├── prod
            │   ├── config.yaml
            │   └── values.yaml
            ├── test
            │   ├── config.yaml
            │   └── values.yaml
            └── uat
                ├── config.yaml
                └── values.yaml
ShellSession

Before deploying the application, we should prepare namespaces with quotas, Argo CD projects, and ApplicationSet generators for managing application deployments. Here’s the structure of the global configuration repository. It also uses a Helm chart to apply that part of the manifests to the Kubernetes cluster. Each directory inside the projects directory determines our project name. On the other hand, a project contains several Kubernetes namespaces. Each project may contain several different Kubernetes Deployments.

.
└── projects
    ├── aaa-1
    │   └── values.yaml
    ├── aaa-2
    │   └── values.yaml
    └── aaa-3
        └── values.yaml
ShellSession

Prepare Global Cluster Configuration

Helm Template for Namespaces and Quotas

Here’s the Helm template for creating namespaces and quotas for each namespace. We will create a project namespace for each environment (stage).

{{- range .Values.stages }}
---
apiVersion: v1
kind: Namespace
metadata:
  name: {{ $.Values.projectName }}-{{ .name }}
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: default-quota
  namespace: {{ $.Values.projectName }}-{{ .name }}
spec:
  hard:
    {{- if .config }}
    {{- with .config.quotas }}
    pods: {{ .pods | default "10" }}
    requests.cpu: {{ .cpuRequest | default "2" }}
    requests.memory: {{ .memoryRequest | default "2Gi" }}
    limits.cpu: {{ .cpuLimit | default "8" }}
    limits.memory: {{ .memoryLimit | default "8Gi" }}
    {{- end }}
    {{- else }}
    pods: "10"
    requests.cpu: "2"
    requests.memory: "2Gi"
    limits.cpu: "8"
    limits.memory: "8Gi"
    {{- end }}
{{- end }}
chart/templates/namespace.yaml

Helm Template for the Argo CD AppProject

The Helm chart will also create a dedicated Argo CD AppProject object for our project.

apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: {{ .Values.projectName }}
  namespace: {{ .Values.argoNamespace | default "argocd" }}
spec:
  clusterResourceWhitelist:
    - group: '*'
      kind: '*'
  destinations:
    - namespace: '*'
      server: '*'
  sourceRepos:
    - '*'
chart/templates/appproject.yaml

Helm Template for Argo CD ApplicationSet

After that, we can proceed to the trickiest part of our exercise. The Helm chart also defines a template for creating the Argo CD ApplicationSet. This ApplicationSet must analyze the repository structure, which contains the configuration of apps and components. We define two ApplicationSets for each project. The first uses the Git directory generator to determine the structure of the apps catalog and deploy the apps in all environments using my custom spring-boot-api-app chart. The chart parameters can be overridden with Helm values placed in each app directory.

The second ApplicationSet uses the Git Files generator to determine the structure of the components catalog. It reads the contents of the config.yaml file in each directory. The config.yaml file sets the repository, name, and version of the Helm chart that must be used to install the component on Kubernetes.

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: '{{ .Values.projectName }}-apps-config'
  namespace: {{ .Values.argoNamespace | default "argocd" }}
spec:
  goTemplate: true
  generators:
    - git:
        repoURL: https://github.com/piomin/argocd-showcase.git
        revision: HEAD
        directories:
          {{- range .Values.stages }}
          - path: appset-helm-demo/apps/{{ $.Values.projectName }}/*/{{ .name }}
          {{- end }}
  template:
    metadata:
      name: '{{`{{ index .path.segments 3 }}`}}-{{`{{ index .path.segments 4 }}`}}'
    spec:
      destination:
        namespace: '{{`{{ index .path.segments 2 }}`}}-{{`{{ index .path.segments 4 }}`}}'
        server: 'https://kubernetes.default.svc'
      project: '{{ .Values.projectName }}'
      sources:
        - chart: spring-boot-api-app
          repoURL: 'https://piomin.github.io/helm-charts/'
          targetRevision: 0.3.8
          helm:
            valueFiles:
              - $values/appset-helm-demo/apps/{{ .Values.projectName }}/{{`{{ index .path.segments 3 }}`}}/{{`{{ index .path.segments 4 }}`}}/values.yaml
            parameters:
              - name: appName
                value: '{{ .Values.projectName }}'
        - repoURL: 'https://github.com/piomin/argocd-showcase.git'
          targetRevision: HEAD
          ref: values
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
---
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: '{{ .Values.projectName }}-components-config'
  namespace: {{ .Values.argoNamespace | default "argocd" }}
spec:
  goTemplate: true
  generators:
    - git:
        repoURL: https://github.com/piomin/argocd-showcase.git
        revision: HEAD
        files:
          {{- range .Values.stages }}
          - path: appset-helm-demo/components/{{ $.Values.projectName }}/*/{{ .name }}/config.yaml
          {{- end }}
  template:
    metadata:
      name: '{{`{{ index .path.segments 3 }}`}}-{{`{{ index .path.segments 4 }}`}}'
    spec:
      destination:
        namespace: '{{`{{ index .path.segments 2 }}`}}-{{`{{ index .path.segments 4 }}`}}'
        server: 'https://kubernetes.default.svc'
      project: '{{ .Values.projectName }}'
      sources:
        - chart: '{{`{{ .chart.name }}`}}'
          repoURL: '{{`{{ .chart.repository }}`}}'
          targetRevision: '{{`{{ .chart.version }}`}}'
          helm:
            valueFiles:
              - $values/appset-helm-demo/components/{{ .Values.projectName }}/{{`{{ index .path.segments 3 }}`}}/{{`{{ index .path.segments 4 }}`}}/values.yaml
            parameters:
              - name: appName
                value: '{{ .Values.projectName }}'
        - repoURL: 'https://github.com/piomin/argocd-showcase.git'
          targetRevision: HEAD
          ref: values
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
chart/templates/applicationsets.yaml

There are several essential elements in this configuration that we should pay attention to. Both Helm and ApplicationSet use templating engines based on {{ ... }} placeholders. So, to avoid conflicts, we must escape the Argo CD ApplicationSet templating elements from the Helm templating engine. The part of the template responsible for generating the Argo CD Application name is a good example of that approach: '{{`{{ index .path.segments 3 }}`}}-{{`{{ index .path.segments 4 }}`}}'. Here, the ApplicationSet Git generator expression index .path.segments 3 returns the fourth segment (index 3, zero-based) of the directory path. These expressions are wrapped in backticks (`) so Helm emits them literally instead of trying to evaluate them.
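A minimal illustration of this escaping, using a hypothetical template fragment: when Helm renders the first line below, the backtick-quoted part is emitted literally, leaving the inner placeholder for the ApplicationSet controller to resolve later.

```yaml
# In the Helm template (chart source):
name: '{{ .Values.projectName }}-{{`{{ .path.basename }}`}}'

# After 'helm template' with projectName=aaa-1, the ApplicationSet manifest contains:
name: 'aaa-1-{{ .path.basename }}'

# The ApplicationSet controller then resolves .path.basename
# separately for each generated Argo CD Application.
```

In other words, the two templating passes run at different times: Helm at chart-render time, the ApplicationSet controller at Application-generation time.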

Helm Chart Structure

Our ApplicationSets use the “Multiple Sources for Application” feature to read parameters from Helm values files and inject them into the Helm chart from a remote repository. Thanks to that, our configuration repositories for apps and components contain only values.yaml files in the standardized directory structure. The only chart we store in the sample repository has been described above and is responsible for creating the configuration required to run app Deployments on the cluster.
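The pattern behind this feature can be sketched with a minimal, illustrative Application (the second repository URL is a placeholder): one source provides the chart, while a second source, tagged with ref: values, only exposes the values file referenced via the $values prefix.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo-app
  namespace: argocd
spec:
  project: default
  destination:
    server: https://kubernetes.default.svc
    namespace: demo
  sources:
    - chart: spring-boot-api-app
      repoURL: https://piomin.github.io/helm-charts/
      targetRevision: 0.3.8
      helm:
        valueFiles:
          - $values/apps/demo/values.yaml  # resolved against the 'values' ref below
    - repoURL: https://github.com/example/config-repo.git
      targetRevision: HEAD
      ref: values   # contributes no manifests, only files for $values
```

This is why the configuration repository can hold nothing but values.yaml files: the chart itself always comes from the remote Helm repository.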

.
└── chart
    ├── Chart.yaml
    ├── templates
    │   ├── additional.yaml
    │   ├── applicationsets.yaml
    │   ├── appproject.yaml
    │   └── namespaces.yaml
    └── values.yaml
ShellSession

By default, each project defines three environments (stages): test, uat, prod.

stages:
  - name: test
    additionalObjects: {}
  - name: uat
    additionalObjects: {}
  - name: prod
    additionalObjects: {}
chart/values.yml

We can override the default behavior for a specific project in Helm values. Each project directory contains a values.yaml file. Here are the Helm parameters for the aaa-3 project that override the CPU request quota from 2 CPUs to 4 CPUs, but only for the test environment.

stages:
  - name: test
    config:
      quotas:
        cpuRequest: 4
    additionalObjects: {}
  - name: uat
    additionalObjects: {}
  - name: prod
    additionalObjects: {}
projects/aaa-3/values.yaml

Run the Synchronization Process

Generate Global Structure on the Cluster

To start the process, we must create the ApplicationSet that reads the structure of the projects directory. Each subdirectory in the projects directory indicates the name of our project. Our ApplicationSet uses a Git directory generator to create an Argo CD Application for each project. Its name consists of the subdirectory name and the -config suffix. Each generated Application uses the previously described Helm chart to create all namespaces, quotas, and other resources requested by the project. It also leverages the “Multiple Sources for Application” feature to allow us to override default Helm chart settings. It reads the project name from the directory name and passes it as a parameter to the generated Argo CD Application.

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: global-config
  namespace: openshift-gitops
spec:
  goTemplate: true
  generators:
    - git:
        repoURL: https://github.com/piomin/argocd-showcase.git
        revision: HEAD
        directories:
          - path: appset-helm-demo/projects/*
  template:
    metadata:
      name: '{{.path.basename}}-config'
    spec:
      destination:
        namespace: '{{.path.basename}}'
        server: 'https://kubernetes.default.svc'
      project: default
      sources:
        - path: appset-helm-demo/chart
          repoURL: 'https://github.com/piomin/argocd-showcase.git'
          targetRevision: HEAD
          helm:
            valueFiles:
              - $values/appset-helm-demo/projects/{{.path.basename}}/values.yaml
            parameters:
              - name: projectName
                value: '{{.path.basename}}'
              - name: argoNamespace
                value: openshift-gitops
        - repoURL: 'https://github.com/piomin/argocd-showcase.git'
          targetRevision: HEAD
          ref: values
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
YAML

Once we create the global-config ApplicationSet object, the magic happens. Here’s the list of Argo CD Applications generated from our directories in the Git configuration repository.

[Image: argo-cd-applicationset-all-apps]

First, there are three Argo CD Applications with the projects’ configuration. That happens because we defined three subdirectories in the projects directory, named aaa-1, aaa-2, and aaa-3.

The configuration applied by those Argo CD Applications is pretty similar since they are using the same Helm chart. We can look at the list of resources managed by the aaa-3-config Application. There are three namespaces (aaa-3-test, aaa-3-uat, aaa-3-prod) with resource quotas, a single Argo CD AppProject, and two ApplicationSet objects responsible for generating Argo CD Applications for apps and components directories.

[Image: argo-cd-applicationset-global-config]

In this configuration, we can verify whether the value of requests.cpu in the ResourceQuota object has been overridden from 2 CPUs to 4 CPUs.

Let’s analyze what happened. Here’s a list of Argo CD ApplicationSets. The global-config ApplicationSet generated an Argo CD Application for each project detected inside the projects directory. Then, each of these Applications applied two ApplicationSet objects to the cluster using the Helm template.

$ kubectl get applicationset
NAME                      AGE
aaa-1-components-config   29m
aaa-1-apps-config         29m
aaa-2-components-config   29m
aaa-2-apps-config         29m
aaa-3-components-config   29m
aaa-3-apps-config         29m
global-config             29m
ShellSession

There’s also a list of created namespaces:

$ kubectl get ns
NAME                                               STATUS   AGE
aaa-1-prod                                         Active   34m
aaa-1-test                                         Active   34m
aaa-1-uat                                          Active   34m
aaa-2-prod                                         Active   34m
aaa-2-test                                         Active   34m
aaa-2-uat                                          Active   34m
aaa-3-prod                                         Active   34m
aaa-3-test                                         Active   34m
aaa-3-uat                                          Active   34m
ShellSession

Generate and Apply Deployments

Our sample configuration contains only two Deployments. We defined the basic subdirectory in the apps directory and the postgresql subdirectory in the components directory inside the aaa-1 project. The aaa-2 and aaa-3 projects don’t contain any Deployments, for simplicity. However, the more subdirectories with a values.yaml file we create there, the more applications will be deployed on the cluster. Here’s a typical values.yaml file for a simple app deployed with the standard Helm chart. It defines the image repository, name, and tag. It also sets the Deployment name and environment.

image:
  repository: piomin/basic
  tag: 1.0.0
app:
  name: basic
  environment: prod
YAML

For the postgresql component, we must set more parameters in the Helm values. Here’s the final list:

global:
  compatibility:
    openshift:
      adaptSecurityContext: force

image:
  tag: 1-54
  registry: registry.redhat.io
  repository: rhel9/postgresql-15

primary:
  containerSecurityContext:
    readOnlyRootFilesystem: false
  persistence:
    mountPath: /var/lib/pgsql
  extraEnvVars:
    - name: POSTGRESQL_ADMIN_PASSWORD
      value: postgresql123

postgresqlDataDir: /var/lib/pgsql/data
YAML

The following Argo CD Application has been generated by the aaa-1-apps-config ApplicationSet. It detected the basic subdirectory in the apps directory. The basic subdirectory contained three subdirectories: test, uat, and prod, each with a values.yaml file. As a result, we have one Argo CD Application per environment, responsible for deploying the basic app in the target namespaces.

[Image: argo-cd-applicationset-basic-apps]

Here’s a list of resources managed by the basic-prod Application. It uses my custom Helm chart and applies Deployment and Service objects to the cluster.

The following Argo CD Application has been generated by the aaa-1-components-config ApplicationSet. It detected the postgresql subdirectory in the components directory. That subdirectory contained three subdirectories: test, uat, and prod, each with values.yaml and config.yaml files. The ApplicationSet Git file generator reads the chart repository, name, and version from the config.yaml file.

Here’s the config.yaml file with the Bitnami Postgres chart settings. We could reference any other chart here to install something else on the cluster.

chart:
  repository: https://charts.bitnami.com/bitnami
  name: postgresql
  version: 15.5.38
components/aaa-1/postgresql/prod/config.yaml

Here’s the list of resources installed by the Bitnami Helm chart used by the generated Argo CD Applications.

[Image: argo-cd-applicationset-postgres]

Final Thoughts

This article proves that Argo CD ApplicationSets and Helm templates can be used together to create advanced configuration structures. It shows how to use the ApplicationSet Git directory and file generators to analyze the structure of directories and files in a Git config repository. With that approach, we can propose a standardized configuration structure across the whole organization and propagate it in the same way to all the applications deployed in the Kubernetes clusters. Everything can be easily managed at the cluster admin level with a single global Argo CD ApplicationSet that accesses many different repositories with configuration.

Continuous Promotion on Kubernetes with GitOps
https://piotrminkowski.com/2025/01/14/continuous-promotion-on-kubernetes-with-gitops/
Tue, 14 Jan 2025 12:36:44 +0000

The post Continuous Promotion on Kubernetes with GitOps appeared first on Piotr's TechBlog.

This article will teach you how to continuously promote application releases between environments on Kubernetes using the GitOps approach. Promotion between environments is one of the most challenging aspects of a continuous delivery process realized according to GitOps principles. That’s because we typically manage that process independently for each environment by committing changes to the Git configuration repository. If we use Argo CD, it comes down to creating three Application CRDs that refer to different places or files inside a repository. Each Application is responsible for synchronizing changes to, e.g., a specified namespace in the Kubernetes cluster. In that case, a promotion is driven by a commit in the part of the repository responsible for managing a given environment. Here’s a diagram that illustrates the described scenario.
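The per-environment setup described above can be sketched as three nearly identical Applications that differ only in the source path and target namespace (repository URL and names are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sample-app-dev
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/config-repo.git
    targetRevision: HEAD
    path: envs/dev   # the staging/prod Applications point at envs/staging and envs/prod
  destination:
    server: https://kubernetes.default.svc
    namespace: sample-app-dev
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```

With this layout, "promotion" means copying or updating manifests under the next environment's path, which is exactly the manual step the rest of the article automates.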

[Image: kubernetes-promote-arch]

The solution to these challenges is Kargo, an open-source tool implementing continuous promotion within CI/CD pipelines. It provides a structured mechanism for promoting changes in complex environments involving Kubernetes and GitOps. I’ve been following Kargo for several months. It’s an interesting tool that provides stage-to-stage promotion using GitOps principles with Argo CD. It reached its first stable version in October 2024. Let’s take a closer look at it.

If you are interested in promotion between environments on Kubernetes using the GitOps approach, you can read about another tool that tackles that challenge – Devtron. Here’s the link to my article that explains its concept.

Understand the Concept

Before we start with Kargo, we need to understand the concept around that tool. Let’s analyze several basic terms defined and implemented by Kargo.

A project is a collection of related Kargo resources that describe one or more delivery pipelines. It’s the basic unit of organization and multi-tenancy in Kargo. Every Kargo project has its own cluster-scoped Kubernetes resource of type Project. We should put all the resources related to a given project into the same Kubernetes namespace.

A stage represents environments in Kargo. Stages are the most important concept in Kargo. We can link them together in a directed acyclic graph to describe a delivery pipeline. Typically, a delivery pipeline starts with a test or dev stage and ends with one or more prod stages.

A Freight object represents resources that Kargo promotes from one stage to another. It can reference one or more versioned artifacts, such as container images, Kubernetes manifests loaded from Git repositories, or Helm charts from chart repositories. 

A warehouse is a source of freight. It can refer to container image repositories, Git, or Helm chart repositories.

In that context, we should treat a promotion as a request to move a piece of freight into a specified stage.
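To make this concrete, a promotion request is itself a Kubernetes resource. Here is a minimal sketch (field names follow the Kargo v1alpha1 API as I understand it, and the freight ID is a hypothetical value, so treat this as an illustration rather than a canonical manifest):

```yaml
apiVersion: kargo.akuity.io/v1alpha1
kind: Promotion
metadata:
  name: test-promotion
  namespace: demo
spec:
  # The target stage for the promotion
  stage: test
  # The ID of the Freight object to promote (hypothetical value)
  freight: 0501f8d8018a953821ea437078c1ec34e6db5a6b
```

In practice, we rarely create Promotion objects by hand. The Kargo dashboard or CLI creates them for us, as we will see later.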

Source Code

If you want to try out this exercise, go ahead and take a look at my source code. To do that, just clone my GitHub repository. It contains the sample Spring Boot application in the basic directory. The application exposes a single REST endpoint that returns the application's Maven version number. Go to that directory. Then, you can follow my further instructions.

Kargo Installation

A few things must be ready before installing Kargo. We must have a Kubernetes cluster and the helm CLI installed on our laptop. I use Minikube. Kargo integrates with Cert-Manager, Argo CD, and Argo Rollouts. We can install all those tools using official Helm charts. Let’s begin with Cert-Manager. First, we must add the jetstack Helm repository:

helm repo add jetstack https://charts.jetstack.io
ShellSession

Here's the helm command that installs it in the cert-manager namespace:

helm install cert-manager --namespace cert-manager jetstack/cert-manager \
  --create-namespace \
  --set crds.enabled=true \
  --set crds.keep=true
ShellSession

To install Argo CD, we must first add the following Helm repository:

helm repo add argo https://argoproj.github.io/argo-helm
ShellSession

Then, we can install Argo CD in the argocd namespace.

helm install argo-cd argo/argo-cd --namespace argocd --create-namespace
ShellSession
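Argo CD generates an initial admin password during installation. If you want to check the results in its UI later, you can read that password with the standard command from the Argo CD getting-started guide (assuming Argo CD landed in the argocd namespace):

```shell
$ kubectl get secret argocd-initial-admin-secret -n argocd \
  -o jsonpath="{.data.password}" | base64 -d
```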

The Argo Rollouts chart is located in the same Helm repository as the Argo CD chart. We will also install it in the argocd namespace:

helm install my-argo-rollouts argo/argo-rollouts --namespace argocd
ShellSession

Finally, we can proceed to the Kargo installation. We will install it in the kargo namespace. The installation command sets two Helm parameters for the admin account: the bcrypt password hash and a key used to sign JWT tokens:

helm install kargo \
  oci://ghcr.io/akuity/kargo-charts/kargo \
  --namespace kargo \
  --create-namespace \
  --set api.adminAccount.passwordHash='$2y$10$xu2U.Ux5nV5wKmerGcrDlO261YeiTlRrcp2ngDGPxqXzDyiPQvDXC' \
  --set api.adminAccount.tokenSigningKey=piomin \
  --wait
ShellSession

We can generate and print the password hash using, for example, htpasswd. Here's a sample command:

htpasswd -nbB admin 123456
ShellSession
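Note that htpasswd prints its output in the form admin:&lt;hash&gt;; only the part after the colon is the bcrypt hash that goes into the api.adminAccount.passwordHash parameter. A small sketch of stripping the prefix (the hash below is a made-up placeholder, not a real digest):

```shell
# htpasswd-style output line (placeholder hash, not a real bcrypt digest)
line='admin:$2y$10$examplehashonly'
# Remove everything up to and including the first colon to keep only the hash
hash="${line#*:}"
echo "$hash"
```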

After the installation finishes, we can verify it by displaying the list of pods running in the kargo namespace.

$ kubectl get po -n kargo
NAME                                          READY   STATUS    RESTARTS   AGE
kargo-api-dbb4d5cb7-zvnc6                     1/1     Running   0          44s
kargo-controller-c4964bbb7-4ngnv              1/1     Running   0          44s
kargo-management-controller-dc5569759-596ch   1/1     Running   0          44s
kargo-webhooks-server-6df6dd58c-g5jlp         1/1     Running   0          44s
ShellSession

Kargo provides a UI dashboard that allows us to display and manage the continuous promotion configuration. Let's expose it locally on port 8443 using the port-forward feature:

kubectl port-forward svc/kargo-api -n kargo 8443:443
ShellSession

Once we sign in to the dashboard using the admin password set during the installation, we can create a new kargo-demo project.

Sample Application

Our sample application is simple. It exposes a single GET /basic/ping endpoint that returns the version number read from Maven pom.xml.

@RestController
@RequestMapping("/basic")
public class BasicController {

    @Autowired
    Optional<BuildProperties> buildProperties;

    @GetMapping("/ping")
    public String ping() {
        return "I'm basic:" + buildProperties.orElseThrow().getVersion();
    }
}
Java
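One caveat worth remembering: the BuildProperties bean only exists when Spring Boot's build-info.properties file is generated during the build. The sample repository presumably configures this already; if you recreate the app from scratch, the spring-boot-maven-plugin needs the build-info goal (a standard Spring Boot snippet, shown here as a reminder):

```xml
<plugin>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-maven-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <!-- generates META-INF/build-info.properties with the Maven version -->
        <goal>build-info</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

Without it, the buildProperties.orElseThrow() call in the controller would fail at runtime.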

We will build the application image using the Jib Maven Plugin. It is already configured in the Maven pom.xml. I set my Docker Hub account as the target registry, but you can change it to point to your own account.

<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <version>3.4.4</version>
  <configuration>
    <container>
      <user>1001</user>
    </container>
    <to>
      <image>piomin/basic:${project.version}</image>
    </to>
  </configuration>
</plugin>
XML

The following Maven command builds the application and its image from the source code. Before each build, we should increase the version number in the project.version field in pom.xml. We begin with version 1.0.0, which should be pushed to your registry before proceeding.

mvn clean package -DskipTests jib:build
ShellSession

Here’s the result of my initial build.

Let’s switch to the Docker Registry dashboard after pushing the 1.0.0 version.

Configure Kargo for Promotion on Kubernetes

First, we will create the Kargo Warehouse object. It refers to the piomin/basic repository containing the image of our sample app. The Warehouse object is responsible for discovering new image tags pushed to the registry. We also use my Helm chart to deploy the image to Kubernetes. However, we will only use the latest version of that chart. Otherwise, we would also have to add that chart as a subscription in the basic Warehouse object to enable new chart version discovery.

apiVersion: kargo.akuity.io/v1alpha1
kind: Warehouse
metadata:
 name: basic
 namespace: demo
spec:
 subscriptions:
 - image:
     discoveryLimit: 5
     repoURL: piomin/basic
YAML

Then, we will create the Argo CD ApplicationSet to generate an application per environment. There are three environments: test, uat, and prod. Each Argo CD application must be annotated with kargo.akuity.io/authorized-stage, which contains the project and stage name. Each application uses multiple sources: the argocd-showcase repository contains Helm values files with parameters for each stage, while the piomin.github.io/helm-charts repository provides the spring-boot-api-app Helm chart that refers to those values.

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
 name: demo
 namespace: argocd
spec:
 generators:
 - list:
     elements:
     - stage: test
     - stage: uat
     - stage: prod
 template:
   metadata:
     name: demo-{{stage}}
     annotations:
       kargo.akuity.io/authorized-stage: demo:{{stage}}
   spec:
     project: default
     sources:
       - chart: spring-boot-api-app
         repoURL: 'https://piomin.github.io/helm-charts/'
         targetRevision: 0.3.8
         helm:
           valueFiles:
             - $values/values/values-{{stage}}.yaml
       - repoURL: 'https://github.com/piomin/argocd-showcase.git'
         targetRevision: HEAD
         ref: values
     destination:
       server: https://kubernetes.default.svc
       namespace: demo-{{stage}}
     syncPolicy:
       syncOptions:
       - CreateNamespace=true
YAML

Now, we can proceed to the most complex element of our exercise – stage creation. The Stage object refers to the previously created Warehouse object to request freight to promote. Then, it defines the steps to perform during the promotion process. Kargo's promotion steps define the workflow of a promotion: they perform the operations needed to promote a piece of freight into the next stage. We can use several built-in steps that cover the most common operations, like cloning a Git repo, updating Helm values, or pushing changes to a remote repository. Our stage definition contains five steps. After cloning the repository with Helm values, we update the image.tag parameter in the values-test.yaml file with the tag value read from the basic Warehouse. Then, Kargo commits and pushes the changes to the configuration repository and triggers the Argo CD application synchronization.

apiVersion: kargo.akuity.io/v1alpha1
kind: Stage
metadata:
 name: test
 namespace: demo
spec:
 requestedFreight:
 - origin:
     kind: Warehouse
     name: basic
   sources:
     direct: true
 promotionTemplate:
   spec:
     vars:
     - name: gitRepo
       value: https://github.com/piomin/argocd-showcase.git
     - name: imageRepo
       value: piomin/basic
     steps:
       - uses: git-clone
         config:
           repoURL: ${{ vars.gitRepo }}
           checkout:
           - branch: master
             path: ./out
       - uses: helm-update-image
         as: update-image
         config:
           path: ./out/values/values-${{ ctx.stage }}.yaml
           images:
           - image: ${{ vars.imageRepo }}
             key: image.tag
             value: Tag
       - uses: git-commit
         as: commit
         config:
           path: ./out
           messageFromSteps:
           - update-image
       - uses: git-push
         config:
           path: ./out
       - uses: argocd-update
         config:
           apps:
           - name: demo-${{ ctx.stage }}
             sources:
             - repoURL: ${{ vars.gitRepo }}
               desiredRevision: ${{ outputs.commit.commit }}
YAML

Here’s the values-test.yaml file in the Argo CD configuration repository.

image:
  repository: piomin/basic
  tag: 1.0.0
app:
  name: basic
  environment: test
values-test.yaml

Here's the Stage definition for the uat environment. It is pretty similar to the definition of the test environment. The main difference is that, instead of taking freight directly from the Warehouse, it requests freight coming from the upstream test stage.

apiVersion: kargo.akuity.io/v1alpha1
kind: Stage
metadata:
  name: uat
  namespace: demo
spec:
 requestedFreight:
 - origin:
     kind: Warehouse
     name: basic
   sources:
     stages:
       - test
 promotionTemplate:
   spec:
     vars:
     - name: gitRepo
       value: https://github.com/piomin/argocd-showcase.git
     - name: imageRepo
       value: piomin/basic
     steps:
       - uses: git-clone
         config:
           repoURL: ${{ vars.gitRepo }}
           checkout:
           - branch: master
             path: ./out
       - uses: helm-update-image
         as: update-image
         config:
           path: ./out/values/values-${{ ctx.stage }}.yaml
           images:
           - image: ${{ vars.imageRepo }}
             key: image.tag
             value: Tag
       - uses: git-commit
         as: commit
         config:
           path: ./out
           messageFromSteps:
           - update-image
       - uses: git-push
         config:
           path: ./out
       - uses: argocd-update
         config:
           apps:
           - name: demo-${{ ctx.stage }}
             sources:
             - repoURL: ${{ vars.gitRepo }}
               desiredRevision: ${{ outputs.commit.commit }}
YAML

Perform Promotion Process

Once a new image tag is published to the registry, it becomes visible in the Kargo Dashboard. We must click the “Promote into Stage” button to promote a selected version to the target stage.

Then we should specify the source image tag. The choice is obvious, since we only have the image tagged 1.0.0. After approving the selection by clicking the “Yes” button, Kargo starts the promotion process on Kubernetes.

[Image: kubernetes-promote-initial-deploy]

After a while, we should have a new image promoted to the test stage.

Let's repeat the promotion of the 1.0.0 version for the other two stages. Each stage should report a Healthy status, which is read directly from the corresponding Argo CD Application.

[Image: kubernetes-promote-all-initial]

Let’s switch to the Argo CD dashboard. There are three applications.

We can make a test call to the sample application's HTTP endpoint. Currently, all the environments run the same 1.0.0 version of the app. Let's enable port forwarding for the basic service in the demo-prod namespace.

kubectl port-forward svc/basic -n demo-prod 8080:8080
ShellSession

The endpoint returns the application name and version as a response.

$ curl http://localhost:8080/basic/ping
I'm basic:1.0.0
ShellSession

Then, we will build several more versions of the basic application, from 1.0.1 up to 1.0.5.
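Bumping the version and rebuilding for each tag can be scripted. The sketch below only prints the Maven commands to run for each version (pipe the output to sh to actually execute them); it assumes the Maven Versions Plugin is available, which is not confirmed by the sample repository:

```shell
# Print the build commands for each new version of the app
for v in 1.0.1 1.0.2 1.0.3 1.0.4 1.0.5; do
  echo "mvn -q versions:set -DnewVersion=$v -DgenerateBackupPoms=false"
  echo "mvn clean package -DskipTests jib:build"
done
```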

Once we push each version to the registry, we can refresh the list of images. With the default configuration, Kargo should add the latest version to the list. After pushing the 1.0.3 version, I promoted it to the test stage. Then I refreshed the list after pushing the 1.0.4 tag. Now, the 1.0.3 tag can be promoted to a higher environment. In the illustration below, I'm promoting it to the uat stage.

[Image: kubernetes-promote-accept]

After that, we can promote the 1.0.4 version to the test stage and refresh the list of images once again to see the freshly pushed 1.0.5 tag.
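As far as I know, Kargo's default image selection strategy orders tags by semantic version rather than push time, which is why 1.0.5 ends up at the top of the list. GNU sort can illustrate this ordering:

```shell
# Semantic-version ordering: 1.0.5 is the newest tag regardless of push order
printf '1.0.3\n1.0.5\n1.0.4\n' | sort -V | tail -n 1
# → 1.0.5
```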

Here’s another promotion. This time, I moved the 1.0.3 version to the prod stage.

After clicking on an image tag tile, we will see its details. For example, the 1.0.3 tag has been verified in the test and uat stages. There is also an approval section. However, we haven't approved any freight yet. To do that, we need to switch to the kargo CLI.

The kargo CLI binary for a particular OS is available on the project's GitHub releases page. We must download it and copy it to a directory on the PATH. Then, we can sign in to the Kargo server running on Kubernetes using the admin credentials.

kargo login https://localhost:8443 --admin \
  --password 123456 \
  --insecure-skip-tls-verify
ShellSession

We can approve a specific freight. Let’s display a list of Freight objects.

$ kubectl get freight -n demo
NAME                                       ALIAS            ORIGIN (KIND)   ORIGIN (NAME)   AGE
0501f8d8018a953821ea437078c1ec34e6db5a6b   ideal-horse      Warehouse       basic           8m7s
683be41b1cef57ed755fc7a0f8e8d7776f90c63a   wiggly-tuatara   Warehouse       basic           11m
6a1219425ddeceabfe94f0605d7a5f6d9d20043e   ulterior-zebra   Warehouse       basic           13m
f737178af6492ea648f85fc7d082e34b7a085927   eager-snail      Warehouse       basic           7h49m
ShellSession

The kargo approve command takes the Freight ID as an input parameter.

kargo approve --freight 6a1219425ddeceabfe94f0605d7a5f6d9d20043e \
  --stage uat \
  --project demo
ShellSession

Now, the image tag details window should display the approved stage name.

[Image: kubernetes-promote-approved]

Let’s enable port forwarding for the basic service in the demo-uat namespace.

kubectl port-forward svc/basic -n demo-uat 8080:8080
ShellSession

Then we call the /basic/ping endpoint to check out the current version.

$ curl http://localhost:8080/basic/ping
I'm basic:1.0.3
ShellSession

Final Thoughts

This article explains the idea behind continuous application promotion between environments on Kubernetes with Kargo and Argo CD. Kargo is a relatively new project in the Kubernetes ecosystem that smoothly addresses the challenges related to GitOps promotion. It seems promising, and I will closely monitor the project's further development.

The post Continuous Promotion on Kubernetes with GitOps appeared first on Piotr's TechBlog.

Azure DevOps with OpenShift https://piotrminkowski.com/2024/09/12/azure-devops-with-openshift/ Thu, 12 Sep 2024 10:38:24 +0000

This article will teach you how to integrate Azure DevOps with an OpenShift cluster to build and deploy your app there. You will learn how to run Azure Pipelines self-hosted agents on OpenShift and use the oc client from your pipelines. If you are interested in Azure DevOps, you can read my previous article about using that platform together with Terraform to prepare the environment and run a Spring Boot app on the Azure cloud.

Before we begin, let me clarify some things and explain my decisions. If you were searching for information about Azure DevOps and OpenShift integration, you probably came across several articles about the Red Hat Azure DevOps extension for OpenShift. I won't use that extension. In my opinion, it is not actively developed right now, and therefore it introduces some limitations that may complicate our integration process. On the other hand, it doesn't offer many useful features either, so we can do just as well without it.

You will also find articles that show how to prepare a self-hosted agent image based on the Red Hat dotnet-runtime base image (e.g., this one). I won't use that approach either. Instead, I'm going to leverage the image built on top of UBI9 provided by the tool called Blue Agent (formerly Azure Pipelines Agent). It is a self-hosted Azure Pipelines agent for Kubernetes: easy to run, secure, and auto-scaled. We will have to modify that image slightly, but more on that later.

Prerequisites

In order to proceed with the exercise, we need an active subscription to the Azure cloud and an instance of Azure DevOps. We also have to run an OpenShift cluster that is accessible to our pipelines running on Azure DevOps. I'm running that cluster on the Azure cloud as well, using the Azure Red Hat OpenShift (ARO) managed service. The details of Azure DevOps creation or OpenShift installation are out of the scope of this article.

Source Code

If you would like to try this exercise by yourself, you may always take a look at my source code. Today you will have to clone two sample Git repositories. The first one contains the sample Spring Boot app used in our exercise. We will build that app with Azure Pipelines and deploy it on OpenShift. The second repository is a fork of the official Blue Agent repository. It contains a new version of the Dockerfile for our sample self-hosted agent image based on Red Hat UBI9. Once you clone both repositories, you just need to follow my instructions.

Azure DevOps Self-Hosted Agent on OpenShift with Blue Agent

Build the Agent Image

In this section, we will build the image of the self-hosted agent based on UBI9. Then, we will run it on the OpenShift cluster. We need to open the Dockerfile located at the src/docker/Dockerfile-ubi9 path in the repository. We don't need to change much inside that file. It already installs several clients, e.g., for AWS or Azure interaction. We will add a line that installs the oc client, which allows us to interact with OpenShift.

RUN curl -s https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/stable/openshift-client-linux.tar.gz -o - | tar zxvf - -C /usr/bin/
Dockerfile
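To catch a broken or incomplete download at image build time rather than inside a running pipeline, a simple verification step can follow that line. This is my own suggestion, not part of the upstream Dockerfile:

```dockerfile
# Fail the image build early if the oc binary is missing or not executable
RUN oc version --client
```

The oc version --client command does not require a cluster connection, so it is safe to run in a build stage.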

After that, we need to build the new image. You can completely omit this step and pull the final image published on my Docker Hub account, available under the piomin/blue-agent:ubi9 tag.

$ docker build -t piomin/blue-agent:ubi9 -f Dockerfile-ubi9 \
  --build-arg JQ_VERSION=1.6 \
  --build-arg AZURE_CLI_VERSION=2.63.0 \
  --build-arg AWS_CLI_VERSION=2.17.42 \
  --build-arg GCLOUD_CLI_VERSION=490.0.0 \
  --build-arg POWERSHELL_VERSION=7.2.23 \
  --build-arg TINI_VERSION=0.19.0 \
  --build-arg BUILDKIT_VERSION=0.15.2  \
  --build-arg AZP_AGENT_VERSION=3.243.1 \
  --build-arg ROOTLESSKIT_VERSION=2.3.1 \
  --build-arg GO_VERSION=1.22.7 \
  --build-arg YQ_VERSION=4.44.3 .
ShellSession

We can use a Helm chart to install Blue Agent on Kubernetes. The project is still under active development, and I could not customize the chart through parameters alone to make it suitable for OpenShift. So, I just set some of them inside the values.yaml file:

pipelines:
  organizationURL: https://dev.azure.com/pminkows
  personalAccessToken: <AZURE_DEVOPS_PERSONAL_ACCESS_TOKEN>
  poolName: Default

image:
  repository: piomin/blue-agent
  version: 2
  flavor: ubi9
YAML

Deploy Agent on OpenShift

We can generate the YAML manifests for the defined parameters, without installing anything, with the following command:

$ helm template -f blue-agent-values.yaml --dry-run .
ShellSession

In order to run it on OpenShift, we need to customize the Deployment object. First of all, I had to grant the privileged SCC (Security Context Constraint) to the container and remove some fields from the securityContext section. Here’s our Deployment object. It refers to the objects previously generated by the Helm chart: agent-blue-agent ServiceAccount and the Secret with the same name.

kind: Deployment
apiVersion: apps/v1
metadata:
  name: agent-blue-agent    
  labels:
    app.kubernetes.io/component: agent
    app.kubernetes.io/instance: agent
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: blue-agent
    app.kubernetes.io/part-of: blue-agent
    app.kubernetes.io/version: 3.243.1
    helm.sh/chart: blue-agent-7.0.3
spec:
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/instance: agent
      app.kubernetes.io/name: blue-agent
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: agent
        app.kubernetes.io/name: blue-agent
      annotations:
        cluster-autoscaler.kubernetes.io/safe-to-evict: 'false'
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      restartPolicy: Always
      serviceAccountName: agent-blue-agent
      schedulerName: default-scheduler
      terminationGracePeriodSeconds: 3600
      securityContext: {}
      containers:
        - resources:
            limits:
              cpu: '2'
              ephemeral-storage: 8Gi
              memory: 4Gi
            requests:
              cpu: '1'
              ephemeral-storage: 2Gi
              memory: 2Gi
          terminationMessagePath: /dev/termination-log
          lifecycle:
            preStop:
              exec:
                command:
                  - bash
                  - '-c'
                  - ''
                  - 'rm -rf ${AZP_WORK};'
                  - 'rm -rf ${TMPDIR};'
          name: azp-agent
          env:
            - name: AGENT_DIAGLOGPATH
              value: /app-root/azp-logs
            - name: VSO_AGENT_IGNORE
              value: AZP_TOKEN
            - name: AGENT_ALLOW_RUNASROOT
              value: '1'
            - name: AZP_AGENT_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
            - name: AZP_URL
              valueFrom:
                secretKeyRef:
                  name: agent-blue-agent
                  key: organizationURL
            - name: AZP_POOL
              value: Default
            - name: AZP_TOKEN
              valueFrom:
                secretKeyRef:
                  name: agent-blue-agent
                  key: personalAccessToken
            - name: flavor_ubi9
            - name: version_7.0.3
          securityContext:
            privileged: true
          imagePullPolicy: Always
          volumeMounts:
            - name: azp-logs
              mountPath: /app-root/azp-logs
            - name: azp-work
              mountPath: /app-root/azp-work
            - name: local-tmp
              mountPath: /app-root/.local/tmp
          terminationMessagePolicy: File
          image: 'piomin/blue-agent:ubi9'
      serviceAccount: agent-blue-agent
      volumes:
        - name: azp-logs
          emptyDir:
            sizeLimit: 1Gi
        - name: azp-work
          ephemeral:
            volumeClaimTemplate:
              spec:
                accessModes:
                  - ReadWriteOnce
                resources:
                  requests:
                    storage: 10Gi
                storageClassName: managed-csi
                volumeMode: Filesystem
        - name: local-tmp
          ephemeral:
            volumeClaimTemplate:
              spec:
                accessModes:
                  - ReadWriteOnce
                resources:
                  requests:
                    storage: 1Gi
                storageClassName: managed-csi
                volumeMode: Filesystem
      dnsPolicy: ClusterFirst
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 50%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
YAML

After verifying the rendered output, we can install the solution on OpenShift by replacing helm template with helm install. It's worth noting that there were some problems with the Deployment object in the Helm chart, so we will apply that object directly later.

[Image: azure-devops-openshift-blue-agent]

Before applying the Deployment, we have to add the privileged SCC to the ServiceAccount used by the agent. By the way, granting the privileged SCC is not the best practice for managing containers on OpenShift. However, the oc client installed in the image creates a configuration directory during the login procedure, which fails under the more restrictive default SCC.

$ oc adm policy add-scc-to-user privileged -z agent-blue-agent
ShellSession

Once we deploy the agent on OpenShift, we should see three running pods. They connect to the Default pool defined in Azure DevOps for our self-hosted agents.

$ oc get po
NAME                                READY   STATUS    RESTARTS   AGE
agent-blue-agent-5458b9b76d-2gk2z   1/1     Running   0          13h
agent-blue-agent-5458b9b76d-nwcxh   1/1     Running   0          13h
agent-blue-agent-5458b9b76d-tcpnp   1/1     Running   0          13h
ShellSession

Connect Self-hosted Agent to Azure DevOps

Let's switch to the Azure DevOps instance and go to our project. After that, we need to go to Organization Settings -> Agent pools and create a new agent pool. The pool name must match the name configured for the Blue Agent deployed on OpenShift. In our case, this name is Default.

Once we create the pool, we should see all three agent instances in the following list.

[Image: azure-devops-openshift-agents]

Let's switch back to OpenShift and take a look at the logs printed by one of the agents. As you can see, the agent has started successfully and is listening for incoming jobs.

If the agents are running on OpenShift and successfully connect to Azure DevOps, we have finished the first part of our exercise. Now, we can create a pipeline for our sample Spring Boot app.

Create Azure DevOps Pipeline for OpenShift

Azure Pipeline Definition

Firstly, we need to go to the sample-spring-boot-web repository. The pipeline is configured in the azure-pipelines.yml file in the repository's root directory. Let's take a look at it. It's very simple. I'm just using the Azure DevOps command-line task to interact with OpenShift through the oc client. Of course, the pipeline has to define the target agent pool used for running its jobs (1). In the first step, we log in to the OpenShift cluster (2). We can use the internal address of the Kubernetes Service, since the agent is running inside the cluster. All the actions are performed inside our sample myapp project (3). We create the BuildConfig and Deployment objects using the template from the repository (4). The pipeline uses its BuildId parameter to tag the output image. Finally, it starts the build configured by the previously applied BuildConfig object (5).

trigger:
- master

# (1)
pool:
  name: Default

steps:
# (2)
- task: CmdLine@2
  inputs:
    script: 'oc login https://172.30.0.1:443 -u kubeadmin -p $(OCP_PASSWORD) --insecure-skip-tls-verify=true'
  displayName: Login to OpenShift
# (3)
- task: CmdLine@2
  inputs:
    script: 'oc project myapp'
  displayName: Switch to project
# (4)
- task: CmdLine@2
  inputs:
    script: 'oc process -f ocp/openshift.yaml -o yaml -p IMAGE_TAG=v1.0-$(Build.BuildId) -p NAMESPACE=myapp | oc apply -f -'
  displayName: Create build
# (5)
- task: CmdLine@2
  inputs:
    script: 'oc start-build sample-spring-boot-web-bc -w'
    failOnStderr: true
  displayName: Start build
  timeoutInMinutes: 5
- task: CmdLine@2
  inputs:
    script: 'oc status'
  displayName: Check status
YAML
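The pipeline above only starts the build and checks the project status. If you also want it to fail when the new Deployment does not become ready, an extra step could wait for the rollout. This is my own suggestion, not present in the repository:

```yaml
- task: CmdLine@2
  inputs:
    # Blocks until the new pods are rolled out; fails the job on timeout
    script: 'oc rollout status deployment/sample-spring-boot-web -n myapp --timeout=120s'
  displayName: Wait for rollout
```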

Integrate Azure Pipelines with OpenShift

We use the OpenShift Template object to define the YAML manifest with the Deployment and BuildConfig. The BuildConfig object manages the image build process on OpenShift. In order to build the image directly from the source code, it uses the Source-to-Image (S2I) tool. We need to set at least three parameters to configure the process properly. The first of them is the address of the output image (1). We can use the internal OpenShift registry available at image-registry.openshift-image-registry.svc:5000 or any external registry like Quay or Docker Hub. We should also set the name of the builder image (2); our app requires at least Java 21. Of course, the process requires the source code repository as the input (3). At the same time, we define the Deployment object. It uses the image previously built by the BuildConfig object (4). The whole template takes two input parameters: IMAGE_TAG and NAMESPACE (5).

kind: Template
apiVersion: template.openshift.io/v1
metadata:
  name: sample-spring-boot-web-tmpl
objects:
  - kind: BuildConfig
    apiVersion: build.openshift.io/v1
    metadata:
      name: sample-spring-boot-web-bc
      labels:
        build: sample-spring-boot-web-bc
    spec:
      # (1)
      output:
        to:
          kind: DockerImage
          name: 'image-registry.openshift-image-registry.svc:5000/${NAMESPACE}/sample-spring-boot-web:${IMAGE_TAG}'
      # (2)
      strategy:
        type: Source
        sourceStrategy:
          from:
            kind: ImageStreamTag
            namespace: openshift
            name: 'openjdk-21:stable'
      # (3)
      source:
        type: Git
        git:
          uri: 'https://github.com/piomin/sample-spring-boot-web.git'
  - kind: Deployment
    apiVersion: apps/v1
    metadata:
      name: sample-spring-boot-web
      labels:
        app: sample-spring-boot-web
        app.kubernetes.io/component: sample-spring-boot-web
        app.kubernetes.io/instance: sample-spring-boot-web
    spec:
      replicas: 1
      selector:
        matchLabels:
          deployment: sample-spring-boot-web
      template:
        metadata:
          labels:
            deployment: sample-spring-boot-web
        spec:
          containers:
            - name: sample-spring-boot-web
              # (4)
              image: 'image-registry.openshift-image-registry.svc:5000/${NAMESPACE}/sample-spring-boot-web:${IMAGE_TAG}'
              ports:
                - containerPort: 8080
                  protocol: TCP
                - containerPort: 8443
                  protocol: TCP
# (5)
parameters:
  - name: IMAGE_TAG
    displayName: Image tag
    description: The output image tag
    value: v1.0
    required: true
  - name: NAMESPACE
    displayName: Namespace
    description: The OpenShift Namespace where the ImageStream resides
    value: openshift
YAML

Currently, OpenShift doesn't provide an OpenJDK 21 image by default, so we need to manually import it before running the pipeline. Note that the template references the ImageStreamTag in the openshift namespace, so the import should target that namespace:

$ oc import-image openjdk-21:stable \
  --from=registry.access.redhat.com/ubi9/openjdk-21:1.20-2.1725851045 \
  -n openshift \
  --confirm
ShellSession

Now, we can create a pipeline in Azure DevOps. In order to do it, we need to go to the Pipelines section, and then click the New pipeline button. After that, we just need to pass the address of our repository with the pipeline definition.

Our pipeline requires the OCP_PASSWORD input parameter with the OpenShift admin user password. We can set it as the pipeline secret variable. In order to do that, we need to edit the pipeline and then click the Variables button.

Run the Pipeline

Finally, we can run our pipeline. If everything finishes successfully, the status of the job is Success. It takes around one minute to execute all the steps defined in the pipeline.

We can see detailed logs for each step.

[Image: azure-devops-openshift-pipeline-logs]

Each time we run the pipeline, a new build starts on OpenShift. Note that the pipeline updates the BuildConfig object with the new version of the output image.

[Image: azure-devops-openshift-builds]

We can take a look at the detailed logs of each build. For each build, OpenShift starts a new pod, which performs the whole process. It uses the S2I approach to build the image from the source code and push it to the internal OpenShift registry.

Finally, let's take a look at the Deployment. As you can see, it was reloaded with the latest version of the image and works fine.

$ oc describe deploy sample-spring-boot-web -n myapp
Name:                   sample-spring-boot-web
Namespace:              myapp
CreationTimestamp:      Wed, 11 Sep 2024 16:09:11 +0200
Labels:                 app=sample-spring-boot-web
                        app.kubernetes.io/component=sample-spring-boot-web
                        app.kubernetes.io/instance=sample-spring-boot-web
Annotations:            deployment.kubernetes.io/revision: 3
Selector:               deployment=sample-spring-boot-web
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  deployment=sample-spring-boot-web
  Containers:
   sample-spring-boot-web:
    Image:        image-registry.openshift-image-registry.svc:5000/myapp/sample-spring-boot-web:v1.0-134
    Ports:        8080/TCP, 8443/TCP
    Host Ports:   0/TCP, 0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  sample-spring-boot-web-78559697c9 (0/0 replicas created), sample-spring-boot-web-7985d6b844 (0/0 replicas created)
NewReplicaSet:   sample-spring-boot-web-5584447f5d (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  19m   deployment-controller  Scaled up replica set sample-spring-boot-web-5584447f5d to 1
  Normal  ScalingReplicaSet  17m   deployment-controller  Scaled down replica set sample-spring-boot-web-7985d6b844 to 0 from 1
ShellSession

By the way, Azure Pipelines jobs are “load-balanced” between the agents. Here’s the name of the agent used for the first pipeline run.

Here’s the name of the agent used for the second pipeline run.

There are three instances of the agent running on OpenShift, so Azure DevOps can process at most three pipeline runs in parallel. By the way, we could enable autoscaling for Blue Agent based on KEDA.

azure-devops-openshift-jobs
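KEDA provides an azure-pipelines scaler that grows the agent workload based on the pool's queue length. The sketch below is only illustrative — the Deployment name, namespace, pool ID, and environment variable names are assumptions, not values from this setup:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: azure-pipelines-agent   # hypothetical name
  namespace: azure-devops       # hypothetical namespace
spec:
  scaleTargetRef:
    name: blue-agent            # hypothetical agent Deployment
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
    - type: azure-pipelines
      metadata:
        poolID: "1"                         # placeholder agent pool ID
        organizationURLFromEnv: AZP_URL     # env var holding the org URL
        personalAccessTokenFromEnv: AZP_TOKEN
```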

Final Thoughts

In this article, I showed the simplest and most OpenShift-native approach to building CI/CD pipelines on Azure DevOps. We just need to use the oc client on the agent image and the OpenShift BuildConfig object to orchestrate the whole process of building and deploying the Spring Boot app on the cluster. Of course, we could implement the same process in several different ways. For example, we could leverage the plugins for Kubernetes and completely omit the tools provided by OpenShift. On the other hand, it is possible to use Argo CD for the delivery phase, and Azure Pipelines for the integration phase.

The post Azure DevOps with OpenShift appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2024/09/12/azure-devops-with-openshift/feed/ 2 15375
Migrate from Kubernetes to OpenShift in the GitOps Way https://piotrminkowski.com/2024/04/15/migrate-from-kubernetes-to-openshift-in-the-gitops-way/ https://piotrminkowski.com/2024/04/15/migrate-from-kubernetes-to-openshift-in-the-gitops-way/#comments Mon, 15 Apr 2024 12:09:50 +0000 https://piotrminkowski.com/?p=15190 In this article, you will learn how to migrate your apps from Kubernetes to OpenShift in the GitOps way using tools like Kustomize, Helm, operators, and Argo CD. We will discuss the best practices in that area. This requires us to avoid approaches like starting a pod in the privileged mode. We will focus not […]

The post Migrate from Kubernetes to OpenShift in the GitOps Way appeared first on Piotr's TechBlog.

]]>
In this article, you will learn how to migrate your apps from Kubernetes to OpenShift in the GitOps way using tools like Kustomize, Helm, operators, and Argo CD. We will discuss the best practices in that area. This requires us to avoid approaches like starting a pod in the privileged mode. We will focus not just on running your custom apps, but mostly on the popular pieces of cloud-native or legacy software including:

  • Argo CD
  • Istio
  • Apache Kafka
  • Postgres
  • HashiCorp Vault
  • Prometheus
  • Redis
  • Cert Manager

Finally, we will migrate our sample Spring Boot app. I will also show you how to build such an app on Kubernetes and OpenShift in the same way using the Shipwright tool. However, before we start, let’s discuss some differences between “vanilla” Kubernetes and OpenShift.

Introduction

What are the key differences between Kubernetes and OpenShift? That’s probably the first question you will ask yourself when considering migration from Kubernetes. Today, I will focus only on those aspects that impact running the apps from our list. First of all, OpenShift is built on top of Kubernetes and is fully compatible with Kubernetes APIs and resources. If you can do something on Kubernetes, you can do it on OpenShift in the same way, unless it compromises the security policy. OpenShift comes with additional security policies out of the box. For example, by default, it won’t allow you to run containers as the root user.

Security reasons aside, the mere fact that you can do something doesn’t mean that you should do it that way. So, you can run images from Docker Hub, but Red Hat provides many supported container images built from Red Hat Enterprise Linux. You can find a full list of supported images here. Although you can install popular software on OpenShift using Helm charts, Red Hat provides various supported Kubernetes operators for that. With those operators, you can be sure that the installation will go smoothly and that the solution is better integrated with OpenShift. We will analyze all those things based on the examples from the tools list.

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. I will explain the structure of our sample in detail later. So after cloning the Git repository you should just follow my instructions.

Install Argo CD

Use Official Helm Chart

In the first step, we will install Argo CD on OpenShift. I’m assuming that on Kubernetes, you’re using the official Helm chart for that. In order to install that chart, we need to add the following Helm repository:

$ helm repo add argo https://argoproj.github.io/argo-helm
ShellSession

Then, we can install the Argo CD in the argocd namespace on OpenShift with the following command. The Argo CD Helm chart provides some parameters dedicated to OpenShift. We need to enable arbitrary uid for the repo server by setting the openshift.enabled property to true. If we want to access the Argo CD dashboard outside of the cluster we should expose it as the Route. In order to do that, we need to enable the server.route.enabled property and set the hostname using the server.route.hostname parameter (piomin.eastus.aroapp.io is my OpenShift domain).

$ helm install argocd argo/argo-cd -n argocd --create-namespace \
    --set openshift.enabled=true \
    --set server.route.enabled=true \
    --set server.route.hostname=argocd.apps.piomin.eastus.aroapp.io
ShellSession

After that, we can access the Argo CD dashboard using the Route address as shown below. The admin user password may be taken from the argocd-initial-admin-secret Secret generated by the Helm chart.
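The Secret value is base64-encoded, so it has to be decoded after retrieval. Here's a sketch — the oc query assumes the Helm-generated Secret name mentioned above:

```shell
# With a live cluster you would run:
#   oc get secret argocd-initial-admin-secret -n argocd \
#     -o jsonpath='{.data.password}' | base64 -d
# The stored value is base64-encoded; decoding works like this:
echo -n 'cGFzc3dvcmQxMjM=' | base64 -d   # prints: password123
```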

Use the OpenShift GitOps Operator (Recommended Way)

The solution presented in the previous section works fine. However, it is not the optimal approach for OpenShift. In that case, the better idea is to use OpenShift GitOps Operator. Firstly, we should find the “Red Hat GitOps Operator” inside the “Operator Hub” section in the OpenShift Console. Then, we have to install the operator.

During the installation, the operator automatically creates the Argo CD instance in the openshift-gitops namespace.

OpenShift GitOps operator automatically exposes the Argo CD dashboard through the Route. It is also integrated with OpenShift auth, so we can use cluster credentials to sign in there.

kubernetes-to-openshift-argocd

Install Redis, Postgres and Apache Kafka

OpenShift Support in Bitnami Helm Charts

Firstly, let’s assume that we use Bitnami Helm charts to install all three tools from the chapter title (Redis, Postgres, Kafka) on Kubernetes. Fortunately, the latest versions of Bitnami Helm charts provide out-of-the-box compatibility with the OpenShift platform. Let’s analyze what it means.

Beginning with version 4.11, OpenShift introduces a new Security Context Constraint (SCC) called restricted-v2. In OpenShift, security context constraints allow us to control the permissions assigned to pods. The restricted-v2 SCC includes a minimal set of privileges usually required for a generic workload to run. It is the most restrictive policy that matches the current pod security standards. As I mentioned before, the latest versions of the most popular Bitnami Helm charts support the restricted-v2 SCC. We can check which of the charts support that feature by checking if they provide the global.compatibility.openshift.adaptSecurityContext parameter. The default value of that parameter is auto, which means it is applied only if the detected running cluster is OpenShift.
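If auto-detection is not desired, the behavior can be pinned explicitly in the chart values; besides the default auto, the Bitnami chart documentation also lists the force and disabled values:

```yaml
global:
  compatibility:
    openshift:
      adaptSecurityContext: force   # auto (default) | force | disabled
```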

So, in short, we don’t have to change anything in the Helm chart configuration used on Kubernetes to make it work on OpenShift as well. However, it doesn’t mean that we won’t change that configuration. Let’s analyze it tool by tool.

Install Redis on OpenShift with Helm Chart

In the first step, let’s add the Bitnami Helm repository with the following command:

$ helm repo add bitnami https://charts.bitnami.com/bitnami
ShellSession

Then, we can install and run a Redis cluster with a single master node and three replicas in the redis namespace using the following command:

$ helm install redis bitnami/redis -n redis --create-namespace
ShellSession

After installing the chart, we can display a list of pods running in the redis namespace:

$ oc get po
NAME               READY   STATUS    RESTARTS   AGE
redis-master-0     1/1     Running   0          5m31s
redis-replicas-0   1/1     Running   0          5m31s
redis-replicas-1   1/1     Running   0          4m44s
redis-replicas-2   1/1     Running   0          4m3s
ShellSession

Let’s take a look at the securityContext section inside one of the Redis cluster pods. It contains fields characteristic of the restricted-v2 SCC, which removes runAsUser, runAsGroup, and fsGroup and lets the platform assign the allowed default IDs.

kubernetes-to-openshift-security-context
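For reference, a container admitted under restricted-v2 typically ends up with a securityContext along these lines (illustrative only — the concrete UID is assigned per namespace by OpenShift, which is why the chart leaves runAsUser unset):

```yaml
securityContext:
  allowPrivilegeEscalation: false
  runAsNonRoot: true
  capabilities:
    drop:
      - ALL
  seccompProfile:
    type: RuntimeDefault
```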

However, let’s stop for a moment to analyze the current situation. We installed Redis on OpenShift using the Bitnami Helm chart. By default, this chart is based on the Debian-based Redis image provided by Bitnami on Docker Hub.

On the other hand, Red Hat provides its own build of the Redis image based on RHEL 9. Consequently, this image is more suitable for running on OpenShift.

kubernetes-to-openshift-redis

In order to use a different Redis image with the Bitnami Helm chart, we need to override the registry, repository, and tag fields in the image section. The full address of the current latest Red Hat Redis image is registry.redhat.io/rhel9/redis-7:1-16. In order to make the Bitnami chart work with that image, we need to override the default data path to /var/lib/redis/data and disable the read-only root filesystem in the container’s Security Context for the replica pods.

image:
  tag: 1-16
  registry: registry.redhat.io
  repository: rhel9/redis-7

master:
  persistence:
    path: /var/lib/redis/data

replica:
  persistence:
    path: /var/lib/redis/data
  containerSecurityContext:
    readOnlyRootFilesystem: false
YAML

Install Postgres on OpenShift with Helm Chart

With Postgres, everything is very similar to what we did before with Redis. The Bitnami Helm chart also supports the OpenShift restricted-v2 SCC, and Red Hat provides a Postgres image based on RHEL 9. Once again, we need to override some chart parameters to adapt to a different image than the default one provided by Bitnami.

image:
  tag: 1-54
  registry: registry.redhat.io
  repository: rhel9/postgresql-15

primary:
  containerSecurityContext:
    readOnlyRootFilesystem: false
  persistence:
    mountPath: /var/lib/pgsql
  extraEnvVars:
    - name: POSTGRESQL_ADMIN_PASSWORD
      value: postgresql123

postgresqlDataDir: /var/lib/pgsql/data
YAML

Of course, we can consider switching to one of the available Postgres operators. From the “Operator Hub” section we can install Postgres using e.g. the Crunchy or EDB operators. However, these are not operators provided by Red Hat. Of course, you can use them on “vanilla” Kubernetes as well. In that case, the migration to OpenShift also won’t be complicated.

Install Kafka on OpenShift with the Strimzi Operator

The situation is slightly different in the case of Apache Kafka. Of course, we can use the Kafka Helm chart provided by Bitnami. However, Red Hat provides a supported version of Kafka through the Strimzi operator. This operator is a part of the Red Hat product ecosystem and is available commercially as AMQ Streams. In order to install Kafka with AMQ Streams on OpenShift, we need to install the operator first.

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: amq-streams
  namespace: openshift-operators
  annotations:
    argocd.argoproj.io/sync-wave: "2"
spec:
  channel: stable
  installPlanApproval: Automatic
  name: amq-streams
  source: redhat-operators
  sourceNamespace: openshift-marketplace
YAML

Once we install the operator with the Strimzi CRDs, we can provision a Kafka instance on OpenShift. In order to do that, we need to define the Kafka object. The name of the cluster is my-cluster. We should create it only after a successful installation of the operator CRDs, so we set a higher value of the Argo CD sync-wave parameter than for the amq-streams Subscription object. Thanks to the SkipDryRunOnMissingResource option, Argo CD also ignores CRDs installed by the operator that are still missing during the sync.

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  namespace: kafka
  annotations:
    argocd.argoproj.io/sync-wave: "3"
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
spec:
  kafka:
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
      default.replication.factor: 3
      min.insync.replicas: 2
      inter.broker.protocol.version: '3.6'
    storage:
      type: persistent-claim
      size: 5Gi
      deleteClaim: true
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
    version: 3.6.0
    replicas: 3
  entityOperator:
    topicOperator: {}
    userOperator: {}
  zookeeper:
    storage:
      type: persistent-claim
      deleteClaim: true
      size: 2Gi
    replicas: 3
YAML
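Since the Kafka CR above enables the Entity Operator, topics can also be managed declaratively. A minimal KafkaTopic for the my-cluster instance could look like this (the topic name is just an example, not part of this setup):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic                  # example topic name
  namespace: kafka
  labels:
    strimzi.io/cluster: my-cluster   # ties the topic to the Kafka CR above
spec:
  partitions: 3
  replicas: 3
```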

GitOps Strategy for Kubernetes and OpenShift

In this section, we will focus on comparing the differences in the GitOps manifests between Kubernetes and OpenShift. We will use Kustomize to configure two overlays: openshift and kubernetes. Here’s the structure of our configuration repository:

.
├── base
│   ├── kustomization.yaml
│   └── namespaces.yaml
└── overlays
    ├── kubernetes
    │   ├── kustomization.yaml
    │   ├── namespaces.yaml
    │   ├── values-cert-manager.yaml
    │   └── values-vault.yaml
    └── openshift
        ├── cert-manager-operator.yaml
        ├── kafka-operator.yaml
        ├── kustomization.yaml
        ├── service-mesh-operator.yaml
        ├── values-postgres.yaml
        ├── values-redis.yaml
        └── values-vault.yaml
ShellSession

Configuration for Kubernetes

In addition to the previously discussed tools, we will also install “cert-manager”, Prometheus, and Vault using Helm charts. Kustomize allows us to define a list of managed charts using the helmCharts section. Here’s the kustomization.yaml file containing a full set of installed charts:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../../base
  - namespaces.yaml

helmCharts:
  - name: redis
    repo: https://charts.bitnami.com/bitnami
    releaseName: redis
    namespace: redis
  - name: postgresql
    repo: https://charts.bitnami.com/bitnami
    releaseName: postgresql
    namespace: postgresql
  - name: kafka
    repo: https://charts.bitnami.com/bitnami
    releaseName: kafka
    namespace: kafka
  - name: cert-manager
    repo: https://charts.jetstack.io
    releaseName: cert-manager
    namespace: cert-manager
    valuesFile: values-cert-manager.yaml
  - name: vault
    repo: https://helm.releases.hashicorp.com
    releaseName: vault
    namespace: vault
    valuesFile: values-vault.yaml
  - name: prometheus
    repo: https://prometheus-community.github.io/helm-charts
    releaseName: prometheus
    namespace: prometheus
  - name: istio
    repo: https://istio-release.storage.googleapis.com/charts
    releaseName: istio
    namespace: istio-system
overlays/kubernetes/kustomization.yaml

For some of them, we need to override default Helm parameters. Here’s the values-vault.yaml file with the parameters for Vault. We enable development mode and UI dashboard:

server:
  dev:
    enabled: true
ui:
  enabled: true
overlays/kubernetes/values-vault.yaml

Let’s also customize the default behavior of the “cert-manager” chart with the following values:

installCRDs: true
startupapicheck:
  enabled: false
overlays/kubernetes/values-cert-manager.yaml
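Once the chart installs the CRDs, cert-manager is driven by Issuer or ClusterIssuer objects. Purely as an illustration (this object is not part of the repository), a minimal self-signed ClusterIssuer looks like this:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer
spec:
  selfSigned: {}
```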

Configuration for OpenShift

Then, we can switch to the configuration for OpenShift. Vault has to be installed with the Helm chart, but for “cert-manager” we can use the operator provided by Red Hat. Since OpenShift comes with built-in Prometheus, we don’t need to install it. We will also replace the Istio Helm chart with the Red Hat-supported OpenShift Service Mesh operator. Here’s the kustomization.yaml for OpenShift:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../../base
  - kafka-operator.yaml
  - cert-manager-operator.yaml
  - service-mesh-operator.yaml

helmCharts:
  - name: redis
    repo: https://charts.bitnami.com/bitnami
    releaseName: redis
    namespace: redis
    valuesFile: values-redis.yaml
  - name: postgresql
    repo: https://charts.bitnami.com/bitnami
    releaseName: postgresql
    namespace: postgresql
    valuesFile: values-postgres.yaml
  - name: vault
    repo: https://helm.releases.hashicorp.com
    releaseName: vault
    namespace: vault
    valuesFile: values-vault.yaml
overlays/openshift/kustomization.yaml

For Vault, we should enable integration with OpenShift and support for the Route object. Red Hat provides a Vault image based on UBI in the registry.connect.redhat.com/hashicorp/vault registry. Here’s the values-vault.yaml file for OpenShift:

server:
  dev:
    enabled: true
  route:
    enabled: true
    host: ""
    tls: null
  image:
    repository: "registry.connect.redhat.com/hashicorp/vault"
    tag: "1.16.1-ubi"
global:
  openshift: true
injector:
  enabled: false
overlays/openshift/values-vault.yaml

In order to install operators we need to define at least the Subscription object. Here’s the subscription for the OpenShift Service Mesh. After installing the operator we can create a control plane in the istio-system namespace using the ServiceMeshControlPlane CRD object. In order to apply the CRD after installing the operator, we need to use the Argo CD sync waves and define the SkipDryRunOnMissingResource parameter:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: servicemeshoperator
  namespace: openshift-operators
  annotations:
    argocd.argoproj.io/sync-wave: "2"
spec:
  channel: stable
  installPlanApproval: Automatic
  name: servicemeshoperator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
---
apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: basic
  namespace: istio-system
  annotations:
    argocd.argoproj.io/sync-wave: "3"
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
spec:
  tracing:
    type: None
    sampling: 10000
  policy:
    type: Istiod
  addons:
    grafana:
      enabled: false
    jaeger:
      install:
        storage:
          type: Memory
    kiali:
      enabled: false
    prometheus:
      enabled: false
  telemetry:
    type: Istiod
  version: v2.5
overlays/openshift/service-mesh-operator.yaml

Since the “cert-manager” operator is installed in a different namespace than openshift-operators, we also need to define the OperatorGroup object.

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-cert-manager-operator
  namespace: cert-manager
  annotations:
    argocd.argoproj.io/sync-wave: "2"
spec:
  channel: stable-v1
  installPlanApproval: Automatic
  name: openshift-cert-manager-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
---
apiVersion: operators.coreos.com/v1alpha2
kind: OperatorGroup
metadata:
  name: cert-manager-operator
  namespace: cert-manager
  annotations:
    argocd.argoproj.io/sync-wave: "2"
spec:
  targetNamespaces:
    - cert-manager
overlays/openshift/cert-manager-operator.yaml

Finally, OpenShift comes with built-in Prometheus monitoring, so we don’t need to install it.

Apply the Configuration with Argo CD

Here’s the Argo CD Application responsible for installing our sample configuration on OpenShift. We should create it in the openshift-gitops namespace.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: install
  namespace: openshift-gitops
spec:
  destination:
    server: 'https://kubernetes.default.svc'
  project: default
  source:
    path: overlays/openshift
    repoURL: 'https://github.com/piomin/kubernetes-to-openshift-argocd.git'
    targetRevision: HEAD
YAML

Before that, we need to enable the use of the Helm chart inflator generator with Kustomize in Argo CD. In order to do that, we can add the kustomizeBuildOptions parameter in the openshift-gitops ArgoCD object as shown below.

apiVersion: argoproj.io/v1beta1
kind: ArgoCD
metadata:
  name: openshift-gitops
  namespace: openshift-gitops
spec:
  # ...
  kustomizeBuildOptions: '--enable-helm'
YAML

After creating the Argo CD Application and triggering the sync process, the installation starts on OpenShift.

kubernetes-to-openshift-gitops

Build App Images

We installed several software solutions, including the most popular databases, message brokers, and security tools. However, now we want to build and run our own apps. How do we migrate them from Kubernetes to OpenShift? Of course, we can run the app images exactly the same way as on Kubernetes. On the other hand, we can build them on OpenShift using the Shipwright project. We can install it on OpenShift using the “Builds for Red Hat OpenShift Operator”.

kubernetes-to-openshift-shipwright

After that, we need to create the ShipwrightBuild object. It needs to contain the name of the target namespace for running Shipwright in the targetNamespace field. In my case, the target namespace is builds-demo. For a detailed description of the Shipwright build, you can refer to that article on my blog.

apiVersion: operator.shipwright.io/v1alpha1
kind: ShipwrightBuild
metadata:
  name: openshift-builds
spec:
  targetNamespace: builds-demo
YAML

With Shipwright, we can easily switch between multiple build strategies on Kubernetes as well as on OpenShift. For example, on OpenShift we can use the built-in source-to-image (S2I) strategy, while on Kubernetes we can use e.g. Kaniko or Cloud Native Buildpacks.

apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: sample-spring-kotlin-build
  namespace: builds-demo
spec:
  output:
    image: quay.io/pminkows/sample-kotlin-spring:1.0-shipwright
    pushSecret: pminkows-piomin-pull-secret
  source:
    git:
      url: https://github.com/piomin/sample-spring-kotlin-microservice.git
  strategy:
    name: source-to-image
    kind: ClusterBuildStrategy
YAML
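A Build object alone only declares the configuration; to actually trigger a build, Shipwright expects a BuildRun referencing it, e.g.:

```yaml
apiVersion: shipwright.io/v1beta1
kind: BuildRun
metadata:
  generateName: sample-spring-kotlin-build-   # a new name per run
  namespace: builds-demo
spec:
  build:
    name: sample-spring-kotlin-build          # references the Build above
```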

Final Thoughts

Migration from Kubernetes to OpenShift is not a painful process. Many popular Helm charts support the OpenShift restricted-v2 SCC. Thanks to that, in some cases, you don’t need to change anything. However, sometimes it’s worth switching to a version of the particular tool supported by Red Hat.

The post Migrate from Kubernetes to OpenShift in the GitOps Way appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2024/04/15/migrate-from-kubernetes-to-openshift-in-the-gitops-way/feed/ 2 15190
GitOps on Kubernetes for Postgres and Vault with Argo CD https://piotrminkowski.com/2024/04/05/gitops-on-kubernetes-for-postgres-and-vault-with-argo-cd/ https://piotrminkowski.com/2024/04/05/gitops-on-kubernetes-for-postgres-and-vault-with-argo-cd/#respond Fri, 05 Apr 2024 09:01:41 +0000 https://piotrminkowski.com/?p=15149 In this article, you will learn how to prepare the GitOps process on Kubernetes for the Postgres database and Hashicorp Vault with Argo CD. I guess that you are using Argo CD widely on your Kubernetes clusters for managing standard objects like deployment, services, or secrets. However, our configuration around the apps usually contains several […]

The post GitOps on Kubernetes for Postgres and Vault with Argo CD appeared first on Piotr's TechBlog.

]]>
In this article, you will learn how to prepare the GitOps process on Kubernetes for the Postgres database and HashiCorp Vault with Argo CD. I guess that you are using Argo CD widely on your Kubernetes clusters for managing standard objects like deployments, services, or secrets. However, our configuration around the apps usually contains several other additional tools like databases, message brokers, or secrets engines. Today, we will consider how to implement the GitOps approach for such tools.

We will do the same thing as described in that article, but fully with the GitOps approach applied by Argo CD. The main goal here is to integrate Postgres with the Vault database secrets engine to generate database credentials dynamically and to initialize the DB schema for the sample Spring Boot app. In order to achieve these goals, we are going to install two Kubernetes operators: Atlas and Vault Config. Atlas is a tool for managing the database schema as code. Its Kubernetes operator allows us to define the schema and apply it to our database using CRD objects. The Vault Config Operator, provided by the Red Hat Community of Practice, does a very similar thing, but for HashiCorp Vault.
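To give a feel for the Atlas operator's declarative style: a schema definition is just a CRD object. The sketch below is illustrative — the Secret name and the table are assumptions; the real manifests live in the apps/postgresql directory of the config repository:

```yaml
apiVersion: db.atlasgo.io/v1alpha1
kind: AtlasSchema
metadata:
  name: sample-schema
spec:
  urlFrom:
    secretKeyRef:
      name: postgres-credentials   # hypothetical Secret holding the DB URL
      key: url
  schema:
    sql: |
      create table if not exists person (
        id serial primary key,
        name varchar(255) not null
      );
```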

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. I will explain the structure of our sample in detail later. So after cloning the Git repository you should just follow my instructions 🙂

How It Works

Before we start, let’s describe our sample scenario. Thanks to the database secrets engine Vault integrates with Postgres and generates its credentials dynamically based on configured roles. On the other hand, our sample Spring Boot app integrates with Vault and uses its database engine to authenticate against Postgres. All the aspects of that scenario are managed in the GitOps style. Argo CD installs Vault, Postgres, and additional operators on Kubernetes via their Helm charts. Then, it applies all the required CRD objects to configure both Vault and Postgres. We keep the whole configuration in a single Git repository in the form of YAML manifests.

Argo CD prepares the configuration on Vault and creates a table on Postgres for the sample Spring Boot app. Our app integrates with Vault through the Spring Cloud Vault project. It also uses Spring Data JPA to interact with the database. Here’s the illustration of our scenario.

argo-cd-vault-postgres-arch
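On the application side, the Spring Cloud Vault integration boils down to a few properties. Here is a hedged sketch of an application.yml — the Vault address, auth method, and role names are assumptions, not values taken from the sample app:

```yaml
spring:
  cloud:
    vault:
      uri: http://vault.vault:8200   # hypothetical in-cluster Vault address
      authentication: KUBERNETES
      kubernetes:
        role: sample-app             # hypothetical Vault Kubernetes auth role
      database:
        enabled: true
        backend: database
        role: sample-db-role         # hypothetical database secrets engine role
```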

Install Argo CD on Kubernetes

Traditionally, we need to start our GitOps exercise by installing Argo CD on the Kubernetes cluster. Of course, we can do it using the Helm chart. In the first step, we need to add the following repository:

$ helm repo add argo https://argoproj.github.io/argo-helm
ShellSession

We will add one parameter to the argocd-cm ConfigMap to ignore the MutatingWebhookConfiguration kind. This step is not strictly necessary. It allows us to ignore a specific resource generated by one of the Helm charts used in the further steps. Thanks to that, we will have everything in Argo CD in the “green” color 🙂 Here’s the Helm values.yaml file with the required configuration:

configs:
  cm:
    resource.exclusions: |
      - apiGroups:
        - admissionregistration.k8s.io
        kinds:
        - MutatingWebhookConfiguration
        clusters:
        - "*"
YAML

Now, we can install Argo CD in the argocd namespace using the configuration previously defined in the values.yaml file:

$ helm install argo-cd argo/argo-cd \
    --version 6.7.8 \
    -f values.yaml \
    -n argocd \
    --create-namespace
ShellSession

That’s not all. Since the Atlas operator is available in an OCI-type Helm repository, we need to apply the following Secret in the argocd namespace. By default, Argo CD doesn’t allow OCI-type repos, so we need to include the enableOCI parameter in the definition.

apiVersion: v1
kind: Secret
metadata:
  name: ghcr-io-helm-oci
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  name: ariga
  url: ghcr.io/ariga
  enableOCI: "true"
  type: helm
YAML

Let’s take a look at the list of repositories in the Argo CD UI dashboard. You should see the “Successful” connection status.

Prepare Configuration Manifests for Argo CD

Config Repository Structure

Let me first explain the structure of our Git config repository. The additional configuration is stored in the apps directory. It includes the CRD objects required to initialize the database schema or the Vault engines. In the bootstrap directory, we keep the values.yaml file for each Helm chart managed by Argo CD. That’s all we need. The bootstrap-via-appset/bootstrap.yaml file contains the definition of the Argo CD ApplicationSet we need to apply to the Kubernetes cluster. This ApplicationSet will generate all the required Argo CD applications responsible for installing the charts and creating the CRD objects.

.
├── apps
│   ├── postgresql
│   │   ├── database.yaml
│   │   ├── policies.yaml
│   │   ├── roles.yaml
│   │   └── schema.yaml
│   └── vault
│       └── job.yaml
├── bootstrap
│   ├── values
│   │   ├── atlas
│   │   │   └── values.yaml
│   │   ├── cert-manager
│   │   │   └── values.yaml
│   │   ├── postgresql
│   │   │   └── values.yaml
│   │   ├── vault
│   │   │   └── values.yaml
│   │   └── vault-config-operator
│   │       └── values.yaml
└── bootstrap-via-appset
    └── bootstrap.yaml
ShellSession

Bootstrap with the Argo CD ApplicationSet

Let’s take a look at the ApplicationSet. It’s pretty interesting (I hope :)). I’m using here some relatively new Argo CD features, like multiple sources (Argo CD 2.6) or the application set template patch (Argo CD 2.10). We need to generate an Argo CD Application for each tool we want to install on Kubernetes (1). In the generators section, we define parameters for Vault, PostgreSQL, the Atlas Operator, the Vault Config Operator, and Cert Manager (which is required by the Vault Config Operator). In the templatePatch section, we prepare a list of source repositories used by each Argo CD Application (2). There is always a Helm chart repo, plus a reference to our Git repository containing the dedicated values.yaml files. For the Vault and PostgreSQL charts, we include another source containing CRDs or additional Kubernetes objects. We will discuss it later.

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: bootstrap-config
  namespace: argocd
spec:
  goTemplate: true
  generators:
  - list:
      elements:
        - chart: vault
          name: vault
          repo: https://helm.releases.hashicorp.com
          revision: 0.27.0
          namespace: vault
          postInstall: true
        - chart: postgresql
          name: postgresql
          repo: https://charts.bitnami.com/bitnami
          revision: 12.12.10
          namespace: default
          postInstall: true
        - chart: cert-manager
          name: cert-manager
          repo: https://charts.jetstack.io
          revision: v1.14.4
          namespace: cert-manager
          postInstall: false
        - chart: vault-config-operator
          name: vault-config-operator
          repo: https://redhat-cop.github.io/vault-config-operator
          revision: v0.8.25
          namespace: vault-config-operator
          postInstall: false
        - chart: charts/atlas-operator
          name: atlas
          repo: ghcr.io/ariga
          revision: 0.4.2
          namespace: atlas
          postInstall: false
  template:
    metadata:
      name: '{{.name}}'
      annotations:
        argocd.argoproj.io/sync-wave: "1"
    spec:
      syncPolicy:
        automated: {}
        syncOptions:
          - CreateNamespace=true
      destination:
        namespace: '{{.namespace}}'
        server: https://kubernetes.default.svc
      project: default
  templatePatch: |
    spec:
      sources:
        - repoURL: '{{ .repo }}'
          chart: '{{ .chart }}'
          targetRevision: '{{ .revision }}'
          helm:
            valueFiles:
              - $values/bootstrap/values/{{ .name }}/values.yaml
        - repoURL: https://github.com/piomin/kubernetes-config-argocd.git
          targetRevision: HEAD
          ref: values
        {{- if .postInstall }}
        - repoURL: https://github.com/piomin/kubernetes-config-argocd.git
          targetRevision: HEAD
          path: apps/{{ .name }}
        {{- end }}
YAML

Once we apply the bootstrap-config ApplicationSet to the argocd namespace, all the magic just happens. You should see five applications in the Argo CD UI dashboard. All of them are automatically synchronized with the cluster (Argo CD autoSync enabled). That does the whole job. Now, let’s analyze step by step what we have to put into that configuration.
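Applying the ApplicationSet is a single kubectl command. Here’s a sketch, assuming the config repository has been cloned locally and Argo CD is already installed in the argocd namespace:

```shell
# Apply the ApplicationSet from the cloned config repository
kubectl apply -f bootstrap-via-appset/bootstrap.yaml -n argocd

# Watch the generated Argo CD Applications appear (five in total)
kubectl get applications -n argocd -w
```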


The Argo CD ApplicationSet generates five applications for installing all required tools. Here’s the Application generated for installing Vault with Helm charts and applying an additional configuration stored in the apps/vault directory.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: vault
  namespace: argocd
spec:
  destination:
    namespace: vault
    server: https://kubernetes.default.svc
  project: default
  sources:
    - chart: vault
      helm:
        valueFiles:
          - $values/bootstrap/values/vault/values.yaml
      repoURL: https://helm.releases.hashicorp.com
      targetRevision: 0.27.0
    - ref: values
      repoURL: https://github.com/piomin/kubernetes-config-argocd.git
      targetRevision: HEAD
    - path: apps/vault
      repoURL: https://github.com/piomin/kubernetes-config-argocd.git
      targetRevision: HEAD
  syncPolicy:
    automated: {}
    syncOptions:
      - CreateNamespace=true
YAML

Configure Vault on Kubernetes

Customize Helm Charts

Let’s take a look at the Vault values.yaml file. We run it in the development mode (single, in-memory node, no unseal needed). We will also enable the UI dashboard.

server:
  dev:
    enabled: true
ui:
  enabled: true
bootstrap/values/vault/values.yaml

With the parameters visible above, Argo CD installs Vault in the vault namespace. Here’s the list of running pods:

$ kubectl get po -n vault
NAME                                    READY   STATUS      RESTARTS      AGE
vault-0                                 1/1     Running     0            1h
vault-agent-injector-7f7f68d457-fvsd2   1/1     Running     0            1h
ShellSession

It also exposes the Vault API on port 8200 through the vault Kubernetes Service.

$ kubectl get svc -n vault
NAME                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
vault                      ClusterIP   10.110.69.159    <none>        8200/TCP,8201/TCP   21h
vault-agent-injector-svc   ClusterIP   10.111.24.183    <none>        443/TCP             21h
vault-internal             ClusterIP   None             <none>        8200/TCP,8201/TCP   21h
vault-ui                   ClusterIP   10.110.160.239   <none>        8200/TCP            21h
ShellSession
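Since we enabled the UI in the chart values, we can reach the dashboard locally with port forwarding. Here’s a sketch; in dev mode the login token is simply root:

```shell
# Forward the Vault UI/API port to localhost
kubectl port-forward svc/vault-ui -n vault 8200:8200

# Then open http://localhost:8200/ui and sign in with the dev root token
```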

For the Vault Config Operator, we need to override the default address of the Vault API to http://vault.vault:8200, since the operator runs in a different namespace than Vault. In order to do that, we set the VAULT_ADDR env variable in the values.yaml file. We also disable Prometheus monitoring and enable integration with cert-manager. Thanks to cert-manager, we don’t need to generate any certificates or keys manually.

enableMonitoring: false
enableCertManager: true
env:
  - name: VAULT_ADDR
    value: http://vault.vault:8200
bootstrap/values/vault-config-operator/values.yaml

Enable Vault Config Operator

The Vault Config Operator needs to authenticate against the Vault API using Kubernetes authentication. So we need to configure a root Kubernetes authentication mount point and role first. Then we can create more roles and other Vault objects via the operator. Here’s the Kubernetes Job responsible for configuring the Kubernetes mount point and role. It uses the Vault image and the vault CLI available inside that image. As you see, it creates the vault-admin role bound to the default ServiceAccount in the default namespace.

apiVersion: batch/v1
kind: Job
metadata:
  name: vault-admin-initializer
  annotations:
    argocd.argoproj.io/sync-wave: "3"
spec:
  template:
    spec:
      containers:
        - name: vault-admin-initializer
          image: hashicorp/vault:1.15.2
          env:
            - name: VAULT_ADDR
              value: http://vault.vault.svc:8200
          command:
            - /bin/sh
            - -c
            - |
              export VAULT_TOKEN=root
              sleep 10
              vault auth enable kubernetes
              vault secrets enable database
              vault write auth/kubernetes/config kubernetes_host=https://kubernetes.default.svc:443 kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
              vault write auth/kubernetes/role/vault-admin bound_service_account_names=default bound_service_account_namespaces=default policies=vault-admin ttl=1h
              vault policy write vault-admin - <<EOF
                path "/*" {
                  capabilities = ["create", "read", "update", "delete", "list","sudo"]
                }          
              EOF
      restartPolicy: Never
apps/vault/job.yaml

Argo CD applies such a Job after installing the Vault chart.

$ kubectl get job -n vault
NAME                      COMPLETIONS   DURATION   AGE
vault-admin-initializer   1/1           15s        1h
ShellSession
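To double-check what the Job configured, we can run the vault CLI inside the vault-0 pod. This is only a sketch, assuming the dev-mode root token root:

```shell
# List enabled auth methods — 'kubernetes/' should be present
kubectl exec -n vault vault-0 -- env VAULT_TOKEN=root vault auth list

# List enabled secrets engines — 'database/' should be present
kubectl exec -n vault vault-0 -- env VAULT_TOKEN=root vault secrets list
```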

Configure Vault via CRDs

Once the root Kubernetes authentication is ready, we can proceed to the CRD object creation. In the first step, we create the objects responsible for configuring a connection to the Postgres database. In the DatabaseSecretEngineConfig, we set the connection URL, credentials, and the name of the Vault plugin used to interact with the database (postgresql-database-plugin). We also define a list of allowed roles (postgresql-default-role). In the next step, we define the postgresql-default-role DatabaseSecretEngineRole object. Of course, the name of that role has to match the name passed in the allowedRoles list in the previous step. The role defines the target database connection name in Vault and the SQL statements for creating new users with privileges.

kind: DatabaseSecretEngineConfig
apiVersion: redhatcop.redhat.io/v1alpha1
metadata:
  name: postgresql-database-config
  annotations:
    argocd.argoproj.io/sync-wave: "3"
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
spec:
  allowedRoles:
    - postgresql-default-role
  authentication:
    path: kubernetes
    role: vault-admin
  connectionURL: 'postgresql://{{username}}:{{password}}@postgresql.default:5432?sslmode=disable'
  path: database
  pluginName: postgresql-database-plugin
  rootCredentials:
    passwordKey: postgres-password
    secret:
      name: postgresql
  username: postgres
---
apiVersion: redhatcop.redhat.io/v1alpha1
kind: DatabaseSecretEngineRole
metadata:
  name: postgresql-default-role
  annotations:
    argocd.argoproj.io/sync-wave: "3"
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
spec:
  creationStatements:
    - CREATE ROLE "{{name}}" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}'; GRANT SELECT ON ALL TABLES IN SCHEMA public TO "{{name}}"; GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA public TO "{{name}}";
  maxTTL: 10m0s
  defaultTTL: 1m0s
  authentication:
    path: kubernetes
    role: vault-admin
  dBName: postgresql-database-config
  path: database
apps/postgresql/database.yaml

Once Argo CD applies both the DatabaseSecretEngineConfig and DatabaseSecretEngineRole objects, we can verify that it works by generating database credentials with the vault read command. We need to pass the name of the previously created role (postgresql-default-role). Our sample app will do the same thing, but through the Spring Cloud Vault module.
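For example, we can generate a set of dynamic credentials directly from the vault CLI inside the vault-0 pod. A sketch, again assuming the dev-mode root token; the read path combines the database mount with the role name:

```shell
# Generate short-lived Postgres credentials using the postgresql-default-role
kubectl exec -n vault vault-0 -- env VAULT_TOKEN=root \
  vault read database/creds/postgresql-default-role
```

The response should contain a generated username and password together with the lease ID and lease duration.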


Finally, we can create a policy and role for our sample Spring Boot app. The policy requires only the privilege to generate new credentials:

kind: Policy
apiVersion: redhatcop.redhat.io/v1alpha1
metadata:
  name: database-creds-view
  annotations:
    argocd.argoproj.io/sync-wave: "3"
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
spec:
  authentication:
    path: kubernetes
    role: vault-admin
  policy: |
    path "database/creds/default" {
      capabilities = ["read"]
    }
apps/postgresql/policies.yaml

Now, we have everything to proceed to the last step in this section. We need to create a Vault role with the Kubernetes authentication method dedicated to our sample app. In this role, we set the name and location of the Kubernetes ServiceAccount and the name of the Vault policy created in the previous step.

kind: KubernetesAuthEngineRole
apiVersion: redhatcop.redhat.io/v1alpha1
metadata:
  name: database-engine-creds-role
  annotations:
    argocd.argoproj.io/sync-wave: "3"
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
spec:
  authentication:
    path: kubernetes
    role: vault-admin
  path: kubernetes
  policies:
    - database-creds-view
  targetServiceAccounts:
    - default
  targetNamespaces:
    targetNamespaces:
      - default
apps/postgresql/roles.yaml

Managing Postgres Schema with Atlas Operator

Finally, we can proceed to the last step in the configuration part. We will use the AtlasSchema CRD object to configure the database schema for our sample app. The object contains two sections: credentials and schema. In the credentials section, we refer to the PostgreSQL Secret to obtain a password. In the schema section, we create the person table with the id primary key.

apiVersion: db.atlasgo.io/v1alpha1
kind: AtlasSchema
metadata:
  name: sample-spring-cloud-vault
  annotations:
    argocd.argoproj.io/sync-wave: "4"
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
spec:
  credentials:
    scheme: postgres
    host: postgresql.default
    user: postgres
    passwordFrom:
      secretKeyRef:
        key: postgres-password
        name: postgresql
    database: postgres
    port: 5432
    parameters:
      sslmode: disable
  schema:
    sql: |
      create table person (
        id serial primary key,
        name varchar(255),
        gender varchar(255),
        age int,
        external_id int
      );
apps/postgresql/schema.yaml

Here’s the corresponding app @Entity model class in the sample Spring Boot app.

@Entity
public class Person {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Integer id;
    private String name;
    private int age;
    @Enumerated(EnumType.STRING)
    private Gender gender;
    private Integer externalId;   
    
   // GETTERS AND SETTERS ...
   
}
Java

Once Argo CD applies the AtlasSchema object, we can verify its status. It should show that the schema has been successfully applied to the target database.

We can log in to the database using the psql CLI and verify that the person table exists in the postgres database:
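One way to do it is a one-off psql client pod. A sketch, assuming the Bitnami chart created the postgresql Secret with the admin password under the postgres-password key:

```shell
# Read the admin password from the chart-managed Secret
export PGPASSWORD=$(kubectl get secret postgresql -n default \
  -o jsonpath='{.data.postgres-password}' | base64 -d)

# Run a disposable psql client pod and list the tables — expect 'person'
kubectl run psql-client --rm -it --restart=Never \
  --image=bitnami/postgresql:latest \
  --env="PGPASSWORD=$PGPASSWORD" -- \
  psql -h postgresql.default -U postgres -d postgres -c '\dt'
```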

Run Sample Spring Boot App

Dependencies

For this demo, I created a simple Spring Boot application. It exposes a REST API and connects to a PostgreSQL database. It uses Spring Data JPA to interact with the database. Here are the most important dependencies of our app in the Maven pom.xml:

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-bootstrap</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-vault-config-databases</artifactId>
</dependency>
<dependency>
  <groupId>org.postgresql</groupId>
  <artifactId>postgresql</artifactId>
  <scope>runtime</scope>
</dependency>
XML

The first of them enables bootstrap.yml processing on application startup. The third one adds Spring Cloud Vault database engine support.

Integrate with Vault using Spring Cloud Vault

The only thing we need to do is to provide the right configuration settings. Here’s the minimal set of required properties to make it work without any errors. The following configuration is provided in the bootstrap.yml file:

spring:
  application:
    name: sample-db-vault
  datasource:
    url: jdbc:postgresql://postgresql:5432/postgres #(1)
  jpa:
    hibernate:
      ddl-auto: update
  cloud:
    vault:
      config.lifecycle: #(2)
        enabled: true
        min-renewal: 10s
        expiry-threshold: 30s
      kv.enabled: false #(3)
      uri: http://vault.vault:8200 #(4)
      authentication: KUBERNETES #(5)
      postgresql: #(6)
        enabled: true
        role: postgresql-default-role
        backend: database
      kubernetes: #(7)
        role: database-engine-creds-role
YAML

Let’s analyze the configuration visible above in detail:

(1) First, we need to set the database connection URL without any credentials. Our application uses the standard properties for authentication against the database (spring.datasource.username and spring.datasource.password), which Spring Cloud Vault fills in for us. Thanks to that, we don’t need to do anything else.

(2) As you probably remember, the maximum TTL for the database lease is 10 minutes. We enable the lease lifecycle with a 30-second expiry threshold, just for demo purposes. You will see that Spring Cloud Vault keeps obtaining fresh credentials from Vault, and the application still works without any errors.

(3) The Vault KV engine is not needed here, since we use only the database engine.

(4) The application is deployed in the default namespace, while Vault is running in the vault namespace, so the Vault address has to include the namespace name.

(5) (7) Our application uses the Kubernetes authentication method to access Vault. We just need to set the role name, which is database-engine-creds-role. All other settings can be left at their default values.

(6) We also need to enable the PostgreSQL database backend support. The name of the backend in Vault is database, and the name of the Vault role used for that engine is postgresql-default-role.
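We can also observe the credential rotation from the Vault side by listing the active database leases. A sketch, using the dev root token; each lease corresponds to one set of generated Postgres credentials:

```shell
# List active leases for the postgresql-default-role database role
kubectl exec -n vault vault-0 -- env VAULT_TOKEN=root \
  vault list sys/leases/lookup/database/creds/postgresql-default-role
```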

Run the App on Kubernetes

Finally, we can run our sample app on Kubernetes by applying the following YAML manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app-deployment
spec:
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
        - name: sample-app
          image: piomin/sample-app:1.0-gitops
          ports:
            - containerPort: 8080
      serviceAccountName: default
---
apiVersion: v1
kind: Service
metadata:
  name: sample-app
spec:
  type: ClusterIP
  selector:
    app: sample-app
  ports:
  - port: 8080
YAML

Our app exposes a REST API under the /persons path. We can easily test it with curl after enabling port forwarding, as shown below:

$ kubectl port-forward svc/sample-app 8080:8080
$ curl http://localhost:8080/persons
ShellSession

Final Thoughts

This article proves that we can effectively configure and manage tools like the Postgres database or HashiCorp Vault on Kubernetes with Argo CD. The database schema and Vault configuration can be stored in a Git repository in the form of YAML manifests, thanks to the Atlas and Vault Config Kubernetes operators. Argo CD applies all the required CRD objects automatically, which results in the integration between Vault, Postgres, and our sample Spring Boot app.

The post GitOps on Kubernetes for Postgres and Vault with Argo CD appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2024/04/05/gitops-on-kubernetes-for-postgres-and-vault-with-argo-cd/feed/ 0 15149
Azure DevOps and Terraform for Spring Boot https://piotrminkowski.com/2024/01/03/azure-devops-and-terraform-for-spring-boot/ https://piotrminkowski.com/2024/01/03/azure-devops-and-terraform-for-spring-boot/#respond Wed, 03 Jan 2024 13:50:27 +0000 https://piotrminkowski.com/?p=14759 This article will teach you how to automate your Spring Boot app deployment with Azure DevOps and Terraform. In the previous article in this series, we created a simple Spring Boot RESTful app. Then we integrated it with the popular Azure services like Cosmos DB or App Configuration using the Spring Cloud Azure project. We […]

The post Azure DevOps and Terraform for Spring Boot appeared first on Piotr's TechBlog.

]]>
This article will teach you how to automate your Spring Boot app deployment with Azure DevOps and Terraform. In the previous article in this series, we created a simple Spring Boot RESTful app. Then we integrated it with popular Azure services like Cosmos DB and App Configuration using the Spring Cloud Azure project. We also leveraged the Azure Spring Apps service to deploy, run, and manage our app on the Azure cloud. All the required steps were performed with the az CLI and the Azure Portal.

Today, we are going to design the CI/CD process for building and deploying the app created in the previous article on Azure. In order to configure the required services like Azure Spring Apps or Cosmos DB automatically, we will use Terraform. We will use Azure DevOps and Azure Pipelines to build and deploy the app.

Preparation

For the purpose of this exercise, we need an account on Azure and another one on the Azure DevOps platform. Once you install the az CLI and log in to Azure, you can execute the following command for verification:

az account show
ShellSession

In the next step, you should set up an account on the Azure DevOps platform. In order to do that, go to the Azure DevOps site and click the “Start free” button, or “Sign in” if you already have an account there. After that, you should see the Azure DevOps main page. Before we start, we need to create an organization and a project. I’m using the “pminkows” name in both cases.

Source Code

If you would like to try it by yourself, you can always take a look at my source code. In order to do that, you need to clone my GitHub repository. The Spring Boot app used in this article is located in the microservices/account-service directory. You will also find the Terraform manifests inside the microservices/terraform directory and the azure-pipelines.yml file in the repository root directory. Once you go to that directory, you should just follow my further instructions.

Create Azure Resources with Terraform

Terraform is a great tool for defining resources according to the “Infrastructure as Code” approach. The official azurerm Terraform provider allows us to configure infrastructure on Azure through the Azure Resource Manager APIs. In order to use it, we need to include the azurerm provider in the Terraform manifest. We will put all the required objects into the spring-apps resource group, defined below as the spring-group resource.

terraform {
  required_version = ">= 1.0"
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">=3.3.0"
    }
  }
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "spring-group" {
  location = "eastus"
  name     = "spring-apps"
}
HCL

Configure the Azure Cosmos DB Service

In the first step, we are going to configure the Cosmos DB instance required by our sample Spring Boot app. It requires a new database account (1). The name of my account is sample-pminkows-cosmosdb. It is placed inside our spring-apps resource group. We also need to define a default consistency level (consistency_policy) and a replication policy (geo_location). Once the database account is ready, we can create a database instance (2). The name of our database is sampledb. Of course, it has to be placed in the previously created sample-pminkows-cosmosdb Cosmos DB account. Finally, we need to create a container inside our database (3). The name of the container should be the same as the value of the containerName field declared in the model class. We also have to set the partition key path. It corresponds to the name of the field inside the model class annotated with @PartitionKey.

# (1)
resource "azurerm_cosmosdb_account" "sample-db-account" {
  name                = "sample-pminkows-cosmosdb"
  location            = azurerm_resource_group.spring-group.location
  resource_group_name = azurerm_resource_group.spring-group.name
  offer_type          = "Standard"

  consistency_policy {
    consistency_level = "Session"
  }

  geo_location {
    failover_priority = 0
    location          = "eastus"
  }
}

# (2)
resource "azurerm_cosmosdb_sql_database" "sample-db" {
  name                = "sampledb"
  resource_group_name = azurerm_cosmosdb_account.sample-db-account.resource_group_name
  account_name        = azurerm_cosmosdb_account.sample-db-account.name
}

# (3)
resource "azurerm_cosmosdb_sql_container" "sample-db-container" {
  name                  = "accounts"
  resource_group_name   = azurerm_cosmosdb_account.sample-db-account.resource_group_name
  account_name          = azurerm_cosmosdb_account.sample-db-account.name
  database_name         = azurerm_cosmosdb_sql_database.sample-db.name
  partition_key_paths   = ["/customerId"]
  partition_key_version = 1
  throughput            = 400
}
HCL

Just for the record, here’s a model Java class inside our sample Spring Boot app:

@Container(containerName = "accounts")
public class Account {
   @Id
   @GeneratedValue
   private String id;
   private String number;
   @PartitionKey
   private String customerId;

   // GETTERS AND SETTERS ...
}
Java
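Once the manifest is applied later on, we can verify the created Cosmos DB objects with the az CLI. A sketch, assuming you are logged in to the right subscription:

```shell
# List SQL databases in the account — 'sampledb' should be present
az cosmosdb sql database list \
  --account-name sample-pminkows-cosmosdb \
  --resource-group spring-apps \
  --query '[].name' -o tsv

# List containers inside 'sampledb' — expect 'accounts'
az cosmosdb sql container list \
  --account-name sample-pminkows-cosmosdb \
  --resource-group spring-apps \
  --database-name sampledb \
  --query '[].name' -o tsv
```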

Install the Azure App Configuration Service

In the next step, we need to enable the Azure App Configuration service and put some properties into the store (1). The name of that instance is sample-spring-cloud-config. Our sample Spring Boot app uses the Spring Cloud Azure project to interact with the App Configuration service. In order to take advantage of that integration, we need to give proper names to all the configuration keys. They should be prefixed with /application/. They should also contain the names of the properties automatically recognized by Spring Cloud. In our case, these are the properties used for establishing a connection with the Cosmos DB instance. We need to define three Spring Cloud properties: spring.cloud.azure.cosmos.key (2), spring.cloud.azure.cosmos.database (3), and spring.cloud.azure.cosmos.endpoint (4). We can retrieve the values of the Cosmos DB instance primary key and endpoint URL from the previously created azurerm_cosmosdb_account resource.

# (1)
resource "azurerm_app_configuration" "sample-config" {
  name                = "sample-spring-cloud-config"
  resource_group_name = azurerm_resource_group.spring-group.name
  location            = azurerm_resource_group.spring-group.location
}

# (2)
resource "azurerm_app_configuration_key" "cosmosdb-key" {
  configuration_store_id = azurerm_app_configuration.sample-config.id
  key                    = "/application/spring.cloud.azure.cosmos.key"
  value                  = azurerm_cosmosdb_account.sample-db-account.primary_key
}

# (3)
resource "azurerm_app_configuration_key" "cosmosdb-database" {
  configuration_store_id = azurerm_app_configuration.sample-config.id
  key                    = "/application/spring.cloud.azure.cosmos.database"
  value                  = "sampledb"
}

# (4)
resource "azurerm_app_configuration_key" "cosmosdb-endpoint" {
  configuration_store_id = azurerm_app_configuration.sample-config.id
  key                    = "/application/spring.cloud.azure.cosmos.endpoint"
  value                  = azurerm_cosmosdb_account.sample-db-account.endpoint
}
HCL

Info: the App Configuration service requires some additional permissions. First of all, you may need to register the resource provider with the ‘az provider register –namespace Microsoft.AppConfiguration’ command. Also, the Terraform script in the sample Git repository assigns the additional role ‘App Configuration Data Owner’ to the client.
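After the manifest has been applied, the stored keys can be listed with the az CLI. A sketch, assuming your identity has read access (e.g. the App Configuration Data Owner role) on the store:

```shell
# List configuration keys in the store — expect the three /application/... entries
az appconfig kv list \
  --name sample-spring-cloud-config \
  --query '[].key' -o tsv
```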

Create the Azure Spring Apps Instance

In the last step, we need to configure the Azure Spring Apps service instance used for running Spring Boot apps. The name of our instance is sample-spring-cloud-apps (1). We will enable tracing for the Azure Spring Apps instance with the Application Insights service (2). After that, we will create a single app inside sample-spring-cloud-apps with the account-service name (3). This app requires a basic configuration containing the amount of requested resources, the version of the Java runtime, and some environment variables, including the address of the Azure App Configuration service instance. All those things are set inside the deployment object represented by the azurerm_spring_cloud_java_deployment resource (4).

resource "azurerm_application_insights" "spring-insights" {
  name                = "spring-insights"
  location            = azurerm_resource_group.spring-group.location
  resource_group_name = azurerm_resource_group.spring-group.name
  application_type    = "web"
}

# (1)
resource "azurerm_spring_cloud_service" "spring-cloud-apps" {
  name                = "sample-spring-cloud-apps"
  location            = azurerm_resource_group.spring-group.location
  resource_group_name = azurerm_resource_group.spring-group.name
  sku_name            = "S0"

  # (2)
  trace {
    connection_string = azurerm_application_insights.spring-insights.connection_string
    sample_rate       = 10.0
  }

  tags = {
    Env = "Staging"
  }
}

# (3)
resource "azurerm_spring_cloud_app" "account-service" {
  name                = "account-service"
  resource_group_name = azurerm_resource_group.spring-group.name
  service_name        = azurerm_spring_cloud_service.spring-cloud-apps.name

  identity {
    type = "SystemAssigned"
  }
}

# (4)
resource "azurerm_spring_cloud_java_deployment" "slot-staging" {
  name                = "dep1"
  spring_cloud_app_id = azurerm_spring_cloud_app.account-service.id
  instance_count      = 1
  jvm_options         = "-XX:+PrintGC"
  runtime_version     = "Java_17"

  quota {
    cpu    = "500m"
    memory = "1Gi"
  }

  environment_variables = {
    "Env" : "Staging",
    "APP_CONFIGURATION_CONNECTION_STRING": azurerm_app_configuration.sample-config.primary_read_key[0].connection_string
  }
}

resource "azurerm_spring_cloud_active_deployment" "dep-staging" {
  spring_cloud_app_id = azurerm_spring_cloud_app.account-service.id
  deployment_name     = azurerm_spring_cloud_java_deployment.slot-staging.name
}
HCL

Apply the Terraform Manifest to Azure

Our Terraform configuration is ready. Finally, we can apply it to the target Azure account. Go to the microservices/terraform directory and then run the following commands:

$ terraform init
$ terraform apply -auto-approve
ShellSession

It can take several minutes until the command finishes. In the end, you should see a similar result: Terraform successfully created 15 resources on Azure.

We can switch to the Azure Portal for a moment. Let’s take a look at the list of resources inside our spring-apps resource group. As you see, all the required resources, including Cosmos DB, App Configuration, and Azure Spring Apps, are ready.
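The same list is available from the command line. A sketch:

```shell
# Show all resources created by Terraform in the spring-apps resource group
az resource list --resource-group spring-apps \
  --query '[].{name:name, type:type}' -o table
```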

Build And Deploy the App with Azure Pipelines

After preparing the required infrastructure on Azure, we may proceed to the creation of a CI/CD pipeline for the app. Assuming you have already logged in to the Azure DevOps portal, you should find the “Pipelines” item in the left-side menu. Once you expand it, you will see several options.

Create Environment

Let’s start with the “Environments”. We will prepare just a single staging environment as shown below. We don’t need to choose any resources now (the “None” option).


Thanks to environments, we can add approval checks to our pipelines. In order to do that, go to your environment details and switch to the “Approvals and checks” tab. There are several options available. Let’s choose a simple approval, which requires someone to manually approve running a particular stage of the pipeline.


After clicking the “Next” button, you will be redirected to the next page containing a list of approvers. We can set a single person responsible for it or a whole group. For me, it doesn’t matter, since I have only one user in the project. After defining the list of approvers, click the “Create” button.

Define the Azure Pipeline

Now, let’s switch to the “Pipelines” view. We can create a pipeline manually with the GUI editor or just provide the azure-pipelines.yml file in the repository root directory. Of course, the GUI editor also creates and commits the YAML manifest with the pipeline definition to the Git repository.

Let’s analyze our pipeline step by step. It is triggered by a commit to the master branch of the Git repository (1). We choose a standard agent pool (2). Our pipeline consists of two stages: Build_Test and Deploy_Stage (3). In the Build_Test stage, we build the app with Maven (4) and publish the JAR file to an Azure Artifacts feed (5). Thanks to that, we will be able to use that artifact in the next stage.

The next stage, Deploy_Stage (6), waits until the previous stage finishes successfully (7). However, it won’t continue until we review and approve the pipeline. In order to enforce that, the job must refer to the previously defined staging environment (8) that contains the approval check. Once we approve the pipeline, it proceeds to the step responsible for downloading the artifact from the Azure Artifacts feed (9). After that, it starts the deployment process (10). We use the AzureSpringCloud task responsible for deploying to the Azure Spring Apps service.

The deployment task requires several inputs. We need to set the Azure subscription (11), the ID of the Azure Spring Apps instance (12), the name of the app inside Azure Spring Apps (13), and the name of the target deployment slot (14). Finally, we set the path to the JAR file downloaded in the previous step of the job (15). The pipeline reads the values of the Azure subscription and the Azure Spring Apps instance ID from the input variables: subscription and serviceName.

# (1)
trigger:
- master

# (2)
pool:
  vmImage: ubuntu-latest

# (3)
stages:
- stage: Build_Test
  jobs:
  - job: Maven_Package
    steps:
    - task: MavenAuthenticate@0
      inputs:
        artifactsFeeds: 'pminkows'
        mavenServiceConnections: 'pminkows'
      displayName: 'Maven Authenticate'
    # (4)
    - task: Maven@3
      inputs:
        mavenPomFile: 'microservices/account-service/pom.xml'
        mavenOptions: '-Xmx3072m'
        javaHomeOption: 'JDKVersion'
        jdkVersionOption: '1.17'
        jdkArchitectureOption: 'x64'
        publishJUnitResults: true
        testResultsFiles: '**/surefire-reports/TEST-*.xml'
        goals: 'deploy'
        mavenAuthenticateFeed: true # (5)
      displayName: 'Build'

# (6)
- stage: Deploy_Stage
  dependsOn: Build_Test
  condition: succeeded() # (7)
  jobs:
    - deployment: Deployment_Staging
      environment:
        name: staging # (8) 
      strategy:
        runOnce:
          deploy:
            steps:
            # (9)
            - task: DownloadPackage@1
              inputs:
                packageType: 'maven'
                feed: 'pminkows'
                view: 'Local'
                definition: 'pl.piomin:account-service'
                version: '1.0'
                downloadPath: '$(System.ArtifactsDirectory)'
            - script: 'ls -la $(System.ArtifactsDirectory)' 
            # (10)
            - task: AzureSpringCloud@0
              inputs:
                azureSubscription: $(subscription) # (11)
                Action: 'Deploy'
                AzureSpringCloud: $(serviceName) # (12)
                AppName: 'account-service' # (13)
                DeploymentName: dep1 # (14)
                Package: '$(System.ArtifactsDirectory)/account-service-1.0.jar' # (15)

Run the Azure Pipeline

Let’s import our pipeline into the Azure DevOps platform. Azure DevOps provides a simple wizard for that. We need to choose the Git repository containing the pipeline definition.

After selecting the repository we will see the review page. We can change the definition of our pipeline taken from the azure-pipelines.yml file. If there is no need for any changes, we may add some variables or run (and save) the pipeline.

[Image: azure-devops-pipeline-yaml]

However, before running the pipeline, we should define the required variables. The serviceName variable needs to contain the fully qualified ID of the Azure Spring Apps resource, e.g. /subscriptions/d4cde383-3611-4557-b2b1-b64b50378c9d/resourceGroups/spring-apps/providers/Microsoft.AppPlatform/Spring/sample-spring-cloud-apps.
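
If it helps to see how that fully qualified ID breaks down, here is a small shell sketch that assembles it from its components. The subscription, resource group, and service name values are taken from the example above; substitute your own.

```shell
# Assembling the fully qualified Azure Spring Apps resource ID from its parts.
# All values below are placeholders taken from the example in the text.
SUBSCRIPTION_ID="d4cde383-3611-4557-b2b1-b64b50378c9d"
RESOURCE_GROUP="spring-apps"
SERVICE_NAME="sample-spring-cloud-apps"

SERVICE_ID="/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${RESOURCE_GROUP}/providers/Microsoft.AppPlatform/Spring/${SERVICE_NAME}"
echo "$SERVICE_ID"
```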

[Image: azure-devops-pipeline-variables]

We also need to create the Azure Artifact Feed. The pipeline uses it to cache and store artifacts during the Maven build. We should go to the “Artifacts” section. Then click the “Create Feed” button. The name of my feed is pminkows.
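
Since the pipeline runs the Maven deploy goal, the project must also declare where to publish. The repository presumably contains a distributionManagement section pointing at the feed; a sketch of what that typically looks like for an Azure Artifacts feed is shown below. The ORGANIZATION placeholder and the exact URL are assumptions, and the repository id must match the feed name used by the MavenAuthenticate task.

```xml
<!-- Hypothetical pom.xml fragment. Replace ORGANIZATION with your own
     Azure DevOps organization; the feed name here is pminkows. -->
<distributionManagement>
  <repository>
    <id>pminkows</id>
    <url>https://pkgs.dev.azure.com/ORGANIZATION/_packaging/pminkows/maven/v1</url>
  </repository>
</distributionManagement>
```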

Once we run the pipeline, it publishes the app artifact to the target feed. The artifact name is determined by its Maven group ID and artifact ID. The current version number is 1.0.

Let’s run the pipeline. It starts with the build phase.

[Image: azure-devops-pipeline-run]

After the build phase finishes successfully, the pipeline proceeds to the deployment phase. However, it requires us to review and approve the move to the next stage.

We need to click the Deploy_Stage tile. After that, you should see an approval screen where you can approve or reject the changes.

After approval, the pipeline starts the deployment phase. After around one minute it should deploy our app into the target Azure Spring Apps instance. Here’s the successfully finished run of the pipeline.

We can switch to the Azure Portal once again. Go to the sample-spring-cloud-apps Azure Spring Apps instance, then choose “Apps” and “account-service”. Finally, go to the “Deployments” section and choose dep1. It is the deployment slot used by our pipeline. As you can see, our app is running in the staging environment.

[Image: azure-devops-spring-apps]

Note: before running the pipeline you should set dep1 as the staging deployment (the “Set as staging” option).

Final Thoughts

This article shows a holistic approach to app deployment on Azure. We can use Terraform to define all the resources and services required by the app. After that, we can define the CI/CD pipeline with Azure DevOps. As a result, we have a fully automated way of managing all the aspects of running our Spring Boot app in the Azure cloud.

The post Azure DevOps and Terraform for Spring Boot appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2024/01/03/azure-devops-and-terraform-for-spring-boot/feed/ 0 14759
Kubernetes Testing with CircleCI, Kind, and Skaffold https://piotrminkowski.com/2023/11/28/kubernetes-testing-with-circleci-kind-and-skaffold/ https://piotrminkowski.com/2023/11/28/kubernetes-testing-with-circleci-kind-and-skaffold/#respond Tue, 28 Nov 2023 13:04:18 +0000 https://piotrminkowski.com/?p=14706 In this article, you will learn how to use tools like Kind or Skaffold to build integration tests on CircleCI for apps running on Kubernetes. Our main goal in this exercise is to build the app image and verify the Deployment on Kubernetes in the CircleCI pipeline. Skaffold and Jib Maven plugin build the image […]

The post Kubernetes Testing with CircleCI, Kind, and Skaffold appeared first on Piotr's TechBlog.

]]>
In this article, you will learn how to use tools like Kind or Skaffold to build integration tests on CircleCI for apps running on Kubernetes. Our main goal in this exercise is to build the app image and verify the Deployment on Kubernetes in the CircleCI pipeline. Skaffold and Jib Maven plugin build the image from the source and deploy it on Kind using YAML manifests. Finally, we will run some load tests on the deployed app using the Grafana k6 tool and its integration with CircleCI.

If you want to build and run tests against Kubernetes, you can read my article about integration tests with JUnit. On the other hand, if you are looking for other tools for testing in a Kubernetes-native environment, you can refer to the article about Testkube.

Introduction

Before we start, let’s do a brief introduction. There are three simple Spring Boot apps that communicate with each other. The first-service app calls the endpoint exposed by the caller-service app, and then the caller-service app calls the endpoint exposed by the callme-service app. The diagram visible below illustrates that architecture.

[Image: kubernetes-circleci-arch]

So in short, our goal is to deploy all the sample apps on Kind during the CircleCI build and then test the communication by calling the endpoint exposed by the first-service through the Kubernetes Service.

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that, you need to clone my GitHub repository. It contains three apps: first-service, caller-service, and callme-service. The main Skaffold config manifest is available in the project root directory. The required Kubernetes YAML manifests are placed inside the k8s directory of each module. Once you take a look at the source code, you should just follow my instructions. Let’s begin.

Our sample Spring Boot apps are very simple. They are exposing a single “ping” endpoint over HTTP and call “ping” endpoints exposed by other apps. Here’s the @RestController in the first-service app:

@RestController
@RequestMapping("/first")
public class FirstController {

   private static final Logger LOGGER = LoggerFactory
      .getLogger(FirstController.class);

   @Autowired
   Optional<BuildProperties> buildProperties;
   @Autowired
   RestTemplate restTemplate;
   @Value("${VERSION}")
   private String version;

   @GetMapping("/ping")
   public String ping() {
      LOGGER.info("Ping: name={}, version={}", buildProperties.isPresent() 
         ? buildProperties.get().getName() : "first-service", version);
      String response = restTemplate.getForObject(
         "http://caller-service:8080/caller/ping", String.class);
      LOGGER.info("Calling: response={}", response);
      return "I'm first-service " + version + ". Calling... " + response;
   }

}

Here’s the @RestController inside the caller-service app. The endpoint is called by the first-service app through the RestTemplate bean.

@RestController
@RequestMapping("/caller")
public class CallerController {

   private static final Logger LOGGER = LoggerFactory
      .getLogger(CallerController.class);

   @Autowired
   Optional<BuildProperties> buildProperties;
   @Autowired
   RestTemplate restTemplate;
   @Value("${VERSION}")
   private String version;

   @GetMapping("/ping")
   public String ping() {
      LOGGER.info("Ping: name={}, version={}", buildProperties.isPresent() 
         ? buildProperties.get().getName() : "caller-service", version);
      String response = restTemplate.getForObject(
         "http://callme-service:8080/callme/ping", String.class);
      LOGGER.info("Calling: response={}", response);
      return "I'm caller-service " + version + ". Calling... " + response;
   }

}

Finally, here’s the @RestController inside the callme-service app. It also exposes a single GET /callme/ping endpoint called by the caller-service app:

@RestController
@RequestMapping("/callme")
public class CallmeController {

   private static final Logger LOGGER = LoggerFactory
      .getLogger(CallmeController.class);
   private static final String INSTANCE_ID = UUID.randomUUID().toString();
   private Random random = new Random();

   @Autowired
   Optional<BuildProperties> buildProperties;
   @Value("${VERSION}")
   private String version;

   @GetMapping("/ping")
   public String ping() {
      LOGGER.info("Ping: name={}, version={}", buildProperties.isPresent() 
         ? buildProperties.get().getName() : "callme-service", version);
      return "I'm callme-service " + version;
   }

}
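
All three controllers autowire a RestTemplate bean, which Spring Boot does not register automatically. The repository presumably declares it in a configuration class; a minimal sketch could look like the following (the class name is an assumption, not taken from the source):

```java
// Minimal configuration sketch. The actual class name and location
// in the repository may differ.
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class AppConfig {

   // Registers the RestTemplate injected into the controllers above.
   @Bean
   public RestTemplate restTemplate() {
      return new RestTemplate();
   }

}
```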

Build and Deploy Images with Skaffold and Jib

Firstly, let’s take a look at the main Maven pom.xml in the project root directory. We use the latest version of Spring Boot and the latest LTS version of Java for compilation. All three app modules inherit settings from the parent pom.xml. In order to build the image with Maven, we include jib-maven-plugin. Since its default base image still uses Java 17, we need to override this behavior with the <from>.<image> tag and declare eclipse-temurin:21-jdk-ubi9-minimal as the base image. Note that jib-maven-plugin is activated only if we enable the jib Maven profile during the build.

<modelVersion>4.0.0</modelVersion>

<parent>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-parent</artifactId>
  <version>3.2.0</version>
  <relativePath />
</parent>

<groupId>pl.piomin.services</groupId>
<artifactId>sample-istio-services</artifactId>
<version>1.1.0</version>
<packaging>pom</packaging>

<properties>
  <java.version>21</java.version>
</properties>

<modules>
  <module>caller-service</module>
  <module>callme-service</module>
  <module>first-service</module>
</modules>

<profiles>
  <profile>
    <id>jib</id>
    <build>
      <plugins>
        <plugin>
          <groupId>com.google.cloud.tools</groupId>
          <artifactId>jib-maven-plugin</artifactId>
          <version>3.4.0</version>
          <configuration>
            <from>
              <image>eclipse-temurin:21-jdk-ubi9-minimal</image>
            </from>
          </configuration>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>

Now, let’s take a look at the main skaffold.yaml file. Skaffold builds the images using its Jib support and deploys all three apps on Kubernetes using the manifests available in the k8s/deployment.yaml file inside each app module. Skaffold disables JUnit tests for Maven and activates the jib profile. It is also able to deploy Istio objects after activating the istio Skaffold profile. However, we won’t use that today.

apiVersion: skaffold/v4beta5
kind: Config
metadata:
  name: simple-istio-services
build:
  artifacts:
    - image: piomin/first-service
      jib:
        project: first-service
        args:
          - -Pjib
          - -DskipTests
    - image: piomin/caller-service
      jib:
        project: caller-service
        args:
          - -Pjib
          - -DskipTests
    - image: piomin/callme-service
      jib:
        project: callme-service
        args:
          - -Pjib
          - -DskipTests
  tagPolicy:
    gitCommit: {}
manifests:
  rawYaml:
    - '*/k8s/deployment.yaml'
deploy:
  kubectl: {}
profiles:
  - name: istio
    manifests:
      rawYaml:
        - k8s/istio-*.yaml
        - '*/k8s/deployment-versions.yaml'
        - '*/k8s/istio-*.yaml'

Here’s a typical Deployment for our apps. The app listens on port 8080.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: first-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: first-service
  template:
    metadata:
      labels:
        app: first-service
    spec:
      containers:
        - name: first-service
          image: piomin/first-service
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          env:
            - name: VERSION
              value: "v1"

For testing purposes, we need to expose the first-service outside of the Kind cluster. In order to do that, we will use a Kubernetes Service of the NodePort type. Our app will be available on port 30000.

apiVersion: v1
kind: Service
metadata:
  name: first-service
  labels:
    app: first-service
spec:
  type: NodePort
  ports:
  - port: 8080
    name: http
    nodePort: 30000
  selector:
    app: first-service

Note that all other Kubernetes Services (caller-service and callme-service) are exposed only internally using the default ClusterIP type.
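
For comparison, an internal-only Service for one of those apps is a plain ClusterIP definition. The exact manifests live in each module’s k8s directory; a sketch along these lines would do:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: callme-service
  labels:
    app: callme-service
spec:
  type: ClusterIP   # the default type; reachable only inside the cluster
  ports:
  - port: 8080
    name: http
  selector:
    app: callme-service
```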

How It Works

In this section, we will discuss how to run the whole process locally. Of course, our final goal is to configure it as a CircleCI pipeline. In order to expose the Kubernetes Service outside Kind, we need to define the extraPortMappings section in the cluster configuration manifest. As you probably remember, we are exposing our app on port 30000. The following file is available in the repository under the k8s/kind-cluster-test.yaml path:

apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 30000
        hostPort: 30000
        listenAddress: "0.0.0.0"
        protocol: tcp

Assuming we already installed kind CLI on our machine, we need to execute the following command to create a new cluster:

$ kind create cluster --name c1 --config k8s/kind-cluster-test.yaml

You should have the same result as visible on my screen:

We have a single-node Kind cluster ready. There is a single c1-control-plane container running on Docker. As you can see, it exposes port 30000 outside of the cluster:

The Kubernetes context is automatically switched to kind-c1. So now, we just need to run the following command from the repository root directory to build and deploy the apps:

$ skaffold run

If you see a similar output in the skaffold run logs, it means that everything works fine.

[Image: kubernetes-circleci-skaffold]

We can verify the list of Kubernetes Services. The first-service is exposed on port 30000 as expected.

$ kubectl get svc
NAME             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
caller-service   ClusterIP   10.96.47.193   <none>        8080/TCP         2m24s
callme-service   ClusterIP   10.96.98.53    <none>        8080/TCP         2m24s
first-service    NodePort    10.96.241.11   <none>        8080:30000/TCP   2m24s

Assuming you have already installed the Grafana k6 tool locally, you may run load tests using the following command:

$ k6 run first-service/src/test/resources/k6/load-test.js

That’s all. Now, let’s define the same actions with the CircleCI workflow.

Test Kubernetes Deployment with the CircleCI Workflow

The CircleCI config.yml file should be placed in the .circleci directory. We do two things in our pipeline. In the first step, we execute Maven unit tests without the Kubernetes cluster. That’s why we need a standard Docker executor with OpenJDK 21 and the maven orb. In order to run Kind during the CircleCI build, we need access to the Docker daemon. Therefore, we use the latest version of the ubuntu-2204 machine image.

version: 2.1

orbs:
  maven: circleci/maven@1.4.1

executors:
  jdk:
    docker:
      - image: 'cimg/openjdk:21.0'
  machine_executor_amd64:
    machine:
      image: ubuntu-2204:2023.10.1
    environment:
      architecture: "amd64"
      platform: "linux/amd64"

After that, we can proceed to the job declaration. The name of our job is deploy-k8s. It uses the already-defined machine executor. Let’s discuss the required steps after running a standard checkout command:

  1. We need to install the kubectl CLI and copy it to the /usr/local/bin directory. Skaffold uses kubectl to interact with the Kubernetes cluster.
  2. After that, we have to install the skaffold CLI.
  3. Our job also requires the kind CLI to be able to create or delete Kind clusters on Docker…
  4. … and the Grafana k6 CLI to run load tests against the app deployed on the cluster.
  5. There is a good chance that this step won’t be required once CircleCI releases a new version of the ubuntu-2204 machine image (probably 2024.1.1 according to the release strategy). For now, ubuntu-2204 provides OpenJDK 17, so we need to install OpenJDK 21 to successfully build the app from the source code.
  6. After installing all the required tools, we can create a new Kubernetes cluster with the kind create cluster command.
  7. Once the cluster is ready, we can deploy our apps using the skaffold run command.
  8. Once the apps are running on the cluster, we can proceed to the test phase. We run the test defined inside the first-service/src/test/resources/k6/load-test.js file.
  9. After all the required steps, it is important to remove the Kind cluster.

jobs:
  deploy-k8s:
    executor: machine_executor_amd64
    steps:
      - checkout
      - run: # (1)
          name: Install Kubectl
          command: |
            curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
            chmod +x kubectl
            sudo mv ./kubectl /usr/local/bin/kubectl
      - run: # (2)
          name: Install Skaffold
          command: |
            curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64
            chmod +x skaffold
            sudo mv skaffold /usr/local/bin
      - run: # (3)
          name: Install Kind
          command: |
            [ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64
            chmod +x ./kind
            sudo mv ./kind /usr/local/bin/kind
      - run: # (4)
          name: Install Grafana K6
          command: |
            sudo gpg -k
            sudo gpg --no-default-keyring --keyring /usr/share/keyrings/k6-archive-keyring.gpg --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys C5AD17C747E3415A3642D57D77C6C491D6AC1D69
            echo "deb [signed-by=/usr/share/keyrings/k6-archive-keyring.gpg] https://dl.k6.io/deb stable main" | sudo tee /etc/apt/sources.list.d/k6.list
            sudo apt-get update
            sudo apt-get install k6
      - run: # (5)
          name: Install OpenJDK 21
          command: |
            java -version
            sudo apt-get update && sudo apt-get install openjdk-21-jdk
            sudo update-alternatives --set java /usr/lib/jvm/java-21-openjdk-amd64/bin/java
            sudo update-alternatives --set javac /usr/lib/jvm/java-21-openjdk-amd64/bin/javac
            java -version
            export JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64
      - run: # (6)
          name: Create Kind Cluster
          command: |
            kind create cluster --name c1 --config k8s/kind-cluster-test.yaml
      - run: # (7)
          name: Deploy to K8s
          command: |
            export JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64
            skaffold run
      - run: # (8)
          name: Run K6 Test
          command: |
            kubectl get svc
            k6 run first-service/src/test/resources/k6/load-test.js
      - run: # (9)
          name: Delete Kind Cluster
          command: |
            kind delete cluster --name c1

Here’s the definition of our load test. It has to be written in JavaScript. It defines thresholds such as the maximum rate of failed requests and the maximum response time for 95% of requests. As you see, we are testing the http://localhost:30000/first/ping endpoint:

import { sleep } from 'k6';
import http from 'k6/http';

export const options = {
  duration: '60s',
  vus: 10,
  thresholds: {
    http_req_failed: ['rate<0.25'],
    http_req_duration: ['p(95)<1000'],
  },
};

export default function () {
  http.get('http://localhost:30000/first/ping');
  sleep(2);
}

Finally, here’s the last part of the CircleCI config file. It defines the pipeline workflow. In the first step, we run tests with Maven. After that, we proceed to the deploy-k8s job.

workflows:
  build-and-deploy:
    jobs:
      - maven/test:
          name: test
          executor: jdk
      - deploy-k8s:
          requires:
            - test

Once we push a change to the sample Git repository, we trigger a new CircleCI build. You can verify it yourself on my CircleCI project page.

As you see, all the pipeline steps have finished successfully.

[Image: kubernetes-circleci-build]

We can display logs for every single step. Here are the logs from the k6 load test step.

There were some errors during the warm-up. However, the test shows that our scenario works on the Kubernetes cluster.

Final Thoughts

CircleCI is one of the most popular CI/CD platforms. Personally, I’m using it to run builds and tests for all my demo repositories on GitHub. For the sample projects dedicated to the Kubernetes cluster, I want to verify steps such as building images with Jib, Kubernetes deployment scripts, or Skaffold configuration. This article shows how to easily perform such tests with CircleCI and a Kubernetes cluster running on Kind. Hope it helps 🙂

The post Kubernetes Testing with CircleCI, Kind, and Skaffold appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2023/11/28/kubernetes-testing-with-circleci-kind-and-skaffold/feed/ 0 14706
Preview Environments on Kubernetes with ArgoCD https://piotrminkowski.com/2023/06/19/preview-environments-on-kubernetes-with-argocd/ https://piotrminkowski.com/2023/06/19/preview-environments-on-kubernetes-with-argocd/#comments Mon, 19 Jun 2023 09:57:47 +0000 https://piotrminkowski.com/?p=14252 In this article, you will learn how to create preview environments for development purposes on Kubernetes with ArgoCD. Preview environments are quickly gaining popularity. This approach allows us to generate an on-demand namespace for testing a specific git branch before it’s merged. Sometimes we are also calling that approach “ephemeral environments” since they are provisioned […]

The post Preview Environments on Kubernetes with ArgoCD appeared first on Piotr's TechBlog.

]]>
In this article, you will learn how to create preview environments for development purposes on Kubernetes with ArgoCD. Preview environments are quickly gaining popularity. This approach allows us to generate an on-demand namespace for testing a specific Git branch before it’s merged. It is sometimes also called “ephemeral environments”, since such environments are provisioned only for a limited time. Several ways and tools may help in creating preview environments on Kubernetes, but if we use the GitOps approach in the CI/CD process, it is worth considering ArgoCD. With ArgoCD and Helm charts, it is possible to organize that process in a fully automated and standardized way.

You can find several posts on my blog about ArgoCD and continuous delivery on Kubernetes. For a quick intro to CI/CD process with Tekton and ArgoCD, you can refer to the following article. For a more advanced approach dedicated to database management in the CD process see the following post.

Prerequisites

In order to do the exercise, you need to have a Kubernetes cluster. Then you need to install the tools we will use today: ArgoCD and Tekton. Here are the installation instructions for Tekton Pipelines and Tekton Triggers. Tekton is optional in our exercise; we will just use it to build the application image after pushing a commit to the repository.

ArgoCD is the key tool today. We can use the official Helm chart to install it on Kubernetes. Firstly, let’s add the following Helm repository:

$ helm repo add argo https://argoproj.github.io/argo-helm

After that, we can install ArgoCD in the current Kubernetes cluster in the argocd namespace using the following command:

$ helm install my-argo-cd argo/argo-cd -n argocd

I’m using OpenShift to run this exercise. With the OpenShift Console, I can easily install both Tekton and ArgoCD using operators. Tekton can be installed with the OpenShift Pipelines operator, and ArgoCD with the OpenShift GitOps operator.

Once we install ArgoCD, we can display a list of running pods. You should have a similar result to mine:

$ kubectl get pod
openshift-gitops-application-controller-0                     1/1     Running     0          1m
openshift-gitops-applicationset-controller-654f99c9b4-pwnc2   1/1     Running     0          1m
openshift-gitops-dex-server-5dc77fcb7d-6tkg5                  1/1     Running     0          1m
openshift-gitops-redis-87698688c-r59zf                        1/1     Running     0          1m
openshift-gitops-repo-server-5f6f7f4996-rfdg8                 1/1     Running     0          1m
openshift-gitops-server-dcf746865-tlmlp                       1/1     Running     0          1m

Finally, you also need to have an account on GitHub. In our scenario, ArgoCD requires access to the repository to obtain the list of opened pull requests. Therefore, we need to create a personal access token for authentication against GitHub. In your GitHub profile, go to Settings > Developer Settings > Personal access tokens. Choose Tokens (classic) and then click the Generate new token button. Then enable the repo scope. Of course, you need to save the value of the generated token. We will create a Secret on Kubernetes using that value.

Source Code

If you would like to try it by yourself, you may always take a look at my source code. To do that, you need to clone two of my GitHub repositories. The first contains the source code of our sample app written in Kotlin. The second contains the configuration managed by ArgoCD, with YAML manifests for creating preview environments on Kubernetes. Then you should just follow my instructions.

How it works

Let’s describe our scenario. There are two repositories on GitHub. In the repository with the app source code, we create branches for working on new features. Usually, when we start a new branch, we are just at the beginning of our work. Therefore, we don’t want to deploy it anywhere. Once we make progress and have a version for testing, we create a pull request. A pull request represents the relation between the source and target branches. We may still push commits to the source branch. Once we merge a pull request, all the commits from the source branch are merged as well.

After creating a new pull request, we want ArgoCD to provision a new preview environment on Kubernetes. Once we merge the pull request, we want ArgoCD to remove the preview environment automatically. Fortunately, ArgoCD can monitor pull requests with ApplicationSet generators. Our ApplicationSet will connect to the app source repository to detect new pull requests. However, it will use the YAML manifests stored in a different, config repository. Those manifests contain a generic definition of our preview environments. They are written as Helm templates and may be shared across several different apps and scenarios. Here’s a diagram that illustrates our scenario. Let’s proceed to the technical details.

[Image: kubernetes-preview-environments-arch]

Using ArgoCD ApplicationSet and Helm Templates

ArgoCD requires access to the GitHub API to detect the current list of opened pull requests. Therefore, we will create a Kubernetes Secret that contains our GitHub personal access token:

$ kubectl create secret generic github-token \
  --from-literal=token=<YOUR_GITHUB_PERSONAL_ACCESS_TOKEN>

In the config repository, we will define a template for our sample preview environment. It is available inside the preview directory. It contains the namespace declaration:

apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Values.namespace }}

We are also defining Kubernetes Deployment for a sample app:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.name }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ .Values.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.name }}
    spec:
      containers:
      - name: {{ .Values.name }}
        image: quay.io/pminkows/{{ .Values.image }}:{{ .Values.version }}
        resources:
          requests:
            memory: "128Mi"
            cpu: "100m"
          limits:
            memory: "1024Mi"
            cpu: "1000m"
        ports:
        - containerPort: 8080

Let’s also add the Kubernetes Service:

apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.name }}-service
spec:
  type: ClusterIP
  selector:
    app: {{ .Values.name }}
  ports:
  - port: 8080
    name: http-port
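
The templates above expect four values: namespace, name, image, and version. In our setup they are supplied as Helm parameters by the ApplicationSet (shown below in this section), but a values.yaml with sample entries makes the contract explicit. The concrete values here are hypothetical:

```yaml
# Hypothetical default values; in practice they are overridden per pull request.
namespace: preview-my-branch
name: sample-spring-kotlin
image: sample-kotlin-spring
version: abc1234   # image tag, e.g. a commit hash
```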

Finally, we can create the ArgoCD ApplicationSet with the Pull Request Generator. We are monitoring the app source code repository (1). In order to authenticate over GitHub, we are injecting the Secret containing access token (2). While the ApplicationSet targets the source code repository, the generated ArgoCD Application refers to the config repository (3). It also sets several Helm parameters. The name of the preview namespace is the same as the name of the branch with the preview prefix (4). The app image is tagged with the commit hash (5). We are also setting the name of the app image (6). All the configuration settings are applied automatically by ArgoCD (7).

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: sample-spring-preview
spec:
  generators:
    - pullRequest:
        github:
          owner: piomin
          repo: sample-spring-kotlin-microservice # (1)
          tokenRef:
            key: token
            secretName: github-token # (2)
        requeueAfterSeconds: 60
  template:
    metadata:
      name: 'sample-spring-{{branch}}-{{number}}'
    spec:
      destination:
        namespace: 'preview-{{branch}}'
        server: 'https://kubernetes.default.svc'
      project: default
      source:
        # (3)
        path: preview/
        repoURL: 'https://github.com/piomin/openshift-cluster-config.git'
        targetRevision: HEAD
        helm:
          parameters:
            # (4)
            - name: namespace
              value: 'preview-{{branch}}'
            # (5)
            - name: version
              value: '{{head_sha}}'
            # (6)
            - name: image
              value: sample-kotlin-spring
            - name: name
              value: sample-spring-kotlin
      # (7)
      syncPolicy:
        automated:
          selfHeal: true
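
To see how the {{branch}}, {{number}}, and {{head_sha}} placeholders resolve, here is a quick shell sketch with hypothetical pull-request values:

```shell
# Hypothetical pull-request metadata, as exposed by the Pull Request Generator.
BRANCH="new-feature"
NUMBER="7"
HEAD_SHA="abc1234"

APP_NAME="sample-spring-${BRANCH}-${NUMBER}"   # Application name template
NAMESPACE="preview-${BRANCH}"                  # destination namespace
echo "${APP_NAME} ${NAMESPACE} ${HEAD_SHA}"
```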

Build Image with Tekton

ArgoCD is responsible for creating a preview environment on Kubernetes and applying the Deployment manifest there. However, we still need to build the image after a push to the source branch. In order to do that, we will create a Tekton pipeline. It’s a very simple pipeline: it just clones the repository and builds the image with the commit hash as the tag.

apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: sample-kotlin-pipeline
spec:
  params:
    - description: branch
      name: git-revision
      type: string
  tasks:
    - name: git-clone
      params:
        - name: url
          value: 'https://github.com/piomin/sample-spring-kotlin-microservice.git'
        - name: revision
          value: $(params.git-revision)
        - name: sslVerify
          value: 'false'
      taskRef:
        kind: ClusterTask
        name: git-clone
      workspaces:
        - name: output
          workspace: source-dir
    - name: s2i-java-preview
      params:
        - name: PATH_CONTEXT
          value: .
        - name: TLSVERIFY
          value: 'false'
        - name: MAVEN_CLEAR_REPO
          value: 'false'
        - name: IMAGE
          value: >-
            quay.io/pminkows/sample-kotlin-spring:$(tasks.git-clone.results.commit)
      runAfter:
        - git-clone
      taskRef:
        kind: ClusterTask
        name: s2i-java
      workspaces:
        - name: source
          workspace: source-dir
  workspaces:
    - name: source-dir

This pipeline should be triggered by a push to the app repository. Therefore, we have to create the TriggerTemplate and EventListener objects.

apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerTemplate
metadata:
  name: sample-kotlin-spring-trigger-template
  namespace: pminkows-cicd
spec:
  params:
    - default: master
      description: The git revision
      name: git-revision
    - description: The git repository url
      name: git-repo-url
  resourcetemplates:
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        generateName: sample-kotlin-spring-pipeline-run-
      spec:
        params:
          - name: git-revision
            value: $(tt.params.git-revision)
        pipelineRef:
          name: sample-kotlin-pipeline
        serviceAccountName: pipeline
        workspaces:
          - name: source-dir
            persistentVolumeClaim:
              claimName: kotlin-pipeline-pvc
---
apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: sample-kotlin-spring
spec:
  serviceAccountName: pipeline
  triggers:
    - bindings:
        - kind: ClusterTriggerBinding
          ref: github-push
      name: trigger-1
      template:
        ref: sample-kotlin-spring-trigger-template

After that, Tekton automatically creates a Kubernetes Service exposing the webhook endpoint for triggering the pipeline.

Since I’m using OpenShift Pipelines, I can create a Route object that exposes the Kubernetes Service outside of the cluster. Thanks to that, it is possible to easily set a webhook in the GitHub repository that triggers the pipeline after a push. On plain Kubernetes, you would need to configure an Ingress provider instead, e.g. the NGINX controller.
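For reference, such a Route might look like the sketch below. The Service name follows the Tekton Triggers convention of prefixing the EventListener name with el-, but the exact name, namespace, and port name here are assumptions; verify them with kubectl get svc on your cluster.

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: el-sample-kotlin-spring
  namespace: pminkows-cicd
spec:
  to:
    kind: Service
    # Service generated by Tekton Triggers for the EventListener
    name: el-sample-kotlin-spring
  port:
    # port name assumed from the generated Service; verify on your cluster
    targetPort: http-listener
```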

Finally, we need to set the webhook URL in our GitHub app repository. That’s all that we need to do. Let’s see how it works.
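The webhook can also be created through the GitHub REST API instead of the UI. Here is a sketch: the route URL is a placeholder, and the actual curl call is commented out because it requires a personal access token with the admin:repo_hook scope.

```shell
# Hypothetical route URL exposed for the EventListener -- replace with your own
WEBHOOK_URL="http://el-sample-kotlin-spring-pminkows-cicd.apps.example.com"

# Build the webhook payload: deliver push events as JSON to the Tekton listener
PAYLOAD=$(printf '{"name":"web","active":true,"events":["push"],"config":{"url":"%s","content_type":"json"}}' "$WEBHOOK_URL")
echo "$PAYLOAD"

# Requires a token with the admin:repo_hook scope:
# curl -s -X POST -H "Authorization: Bearer $GITHUB_TOKEN" \
#   https://api.github.com/repos/piomin/sample-spring-kotlin-microservice/hooks \
#   -d "$PAYLOAD"
```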

Kubernetes Preview Environments in Action

Creating environment

In the first step, we will create two branches in our sample GitHub repository: branch-a and branch-c. If we push some changes into each of those branches, our pipeline should be triggered by the webhook. It will build the image from the source branch and push it to the remote registry.

kubernetes-preview-environments-branches

As you can see, our pipeline ran twice.

Here’s the Quay registry with our sample app images. They are tagged using the commit hash.

kubernetes-preview-environments-images

Now, we can create pull requests for our branches. As you can see, there are two pull requests in the sample repository.

kubernetes-preview-environments-pull-requests

Let’s take a look at one of our pull requests. Firstly, pay attention to the pull request id (55) and a list of commits assigned to the pull request.

ArgoCD monitors the list of open PRs via the ApplicationSet. Each time it detects a new PR, it creates a dedicated ArgoCD Application that synchronizes the YAML manifests stored in the Git config repository with the target Kubernetes cluster. We have two open PRs, so there are two applications in ArgoCD.

kubernetes-preview-environments-argocd

We can take a look at the ArgoCD Application details. As you can see, it creates the namespace containing the Kubernetes Deployment and Service for our app.

Let’s display a list of pods running in one of our preview namespaces:

$ kubectl get po -n preview-branch-a
NAME                                    READY   STATUS    RESTARTS   AGE
sample-spring-kotlin-5c7cc45bc7-wck78   1/1     Running   0          22m

Let’s verify the tag of the image used in the pod:

kubernetes-preview-environments-pod
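The same check can be done from the command line; the namespace matches the preview environment above, and the jsonpath simply prints the image reference of the first pod:

```shell
# Print the image (including the commit-hash tag) used by the preview pod
kubectl get pods -n preview-branch-a \
  -o jsonpath='{.items[0].spec.containers[0].image}{"\n"}'
```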

Adding commits to the existing PR

What about making some changes in one of our preview branches? Our latest commit, titled “Make some changes”, will be automatically included in the PR.

The ArgoCD ApplicationSet will detect the new commit in the pull request. Then it will update the ArgoCD Application with the latest commit hash (71d05d8). As a result, it will try to run a new pod containing the latest version of the app. In the meantime, our pipeline is still building the new image tagged with that commit hash. As you can see, the image is not available yet.

Let’s display a list of running pods in the preview-branch-c namespace:

$ kubectl get po -n preview-branch-c
NAME                                    READY   STATUS             RESTARTS   AGE
sample-spring-kotlin-67f6947c89-xn2r8   1/1     Running            0          29m
sample-spring-kotlin-6d844c8c94-qjrr4   0/1     ImagePullBackOff   0          24s

Once the pipeline finishes the build, it pushes the image to the Quay registry.
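If you prefer the terminal over the Quay UI, a tool like skopeo can confirm that the new tag exists in the registry; the tag below is the sample commit hash from this walkthrough.

```shell
# Inspect the remote image without pulling it; fails if the tag is not there yet
skopeo inspect --format '{{.Digest}}' \
  docker://quay.io/pminkows/sample-kotlin-spring:71d05d8
```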

And the latest version of the app from branch-c is available on Kubernetes.

Now, you can close or merge the pull request. As a result, ArgoCD automatically removes our preview namespace together with the app.

Final Thoughts

In this article, I showed you how to create and manage preview environments on Kubernetes in the GitOps way with ArgoCD. In this concept, a preview environment exists on Kubernetes as long as the particular pull request lives in the GitHub repository. ArgoCD uses a global, generic template for creating such an environment. Thanks to that, we can have a single, shared process across the whole organization.

The post Preview Environments on Kubernetes with ArgoCD appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2023/06/19/preview-environments-on-kubernetes-with-argocd/feed/ 5 14252
Manage Kubernetes Operators with ArgoCD https://piotrminkowski.com/2023/05/05/manage-kubernetes-operators-with-argocd/ https://piotrminkowski.com/2023/05/05/manage-kubernetes-operators-with-argocd/#comments Fri, 05 May 2023 11:59:32 +0000 https://piotrminkowski.com/?p=14151 In this article, you will learn how to install and configure operators on Kubernetes with ArgoCD automatically. A Kubernetes operator is a method of packaging, deploying, and managing applications on Kubernetes. It has its own lifecycle managed by the OLM. It also uses custom resources (CR) to manage applications and their components. The Kubernetes operator watches […]

The post Manage Kubernetes Operators with ArgoCD appeared first on Piotr's TechBlog.

]]>
In this article, you will learn how to automatically install and configure operators on Kubernetes with ArgoCD. A Kubernetes operator is a method of packaging, deploying, and managing applications on Kubernetes. It has its own lifecycle managed by the Operator Lifecycle Manager (OLM). It also uses custom resources (CR) to manage applications and their components. The Kubernetes operator watches a CR object and takes actions to ensure the current state matches the desired state of that resource. Assuming we want to manage our Kubernetes cluster in the GitOps way, we want to keep the list of operators, their configuration, and CR object definitions in the Git repository. Here comes Argo CD.

In this article, I’m describing several more advanced Argo CD features. If you are looking for the basics, you can find a lot of other articles about Argo CD on my blog. For example, you may read about Kubernetes CI/CD with Tekton and ArgoCD in the following article.

Introduction

The main goal of this exercise is to run a scenario in which we can automatically install and use operators on Kubernetes in the GitOps way. Therefore, the state of the Git repository should be automatically applied to the target Kubernetes cluster. We will define a single Argo CD Application that performs all the required steps. In the first step, it will trigger the operator installation process. It may take some time, since we need to install the controller application and Kubernetes CRDs. Then we may define some CR objects to run our apps on the cluster.

We cannot create a CR object before installing an operator. Fortunately, with ArgoCD we can divide the sync process into multiple separate phases. This ArgoCD feature is called sync waves. In order to proceed to the next phase, ArgoCD first needs to finish the previous sync wave. ArgoCD checks the health status of all objects created during a particular phase. If all of those checks succeed, the phase is considered finished. Argo CD provides built-in health check implementations for several standard Kubernetes types. However, in this exercise, we will have to override the health check for the main operator CR: the Subscription object.
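A sync wave is configured with a single annotation on each manifest. A minimal sketch:

```yaml
metadata:
  annotations:
    # Lower values are synced first; all objects in wave "1" must be
    # healthy before Argo CD proceeds to wave "2"
    argocd.argoproj.io/sync-wave: "1"
```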

Source Code

If you would like to try it yourself, you may always take a look at my source code. In order to do that, you need to clone my GitHub repository. After that, go to the global directory. Then you should just follow my instructions. Let’s begin.

Prerequisites

Before starting the exercise you need to have a running Kubernetes cluster with ArgoCD and Operator Lifecycle Manager (OLM) installed. You can install Argo CD using Helm chart or with the operator. In order to read about the installation details please refer to the Argo CD docs.
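For example, a Helm-based Argo CD installation could look like the commands below; the release name and namespace are arbitrary choices here, and chart values may need adjusting for your cluster.

```shell
# Install Argo CD from the community Helm chart into a dedicated namespace
helm repo add argo https://argoproj.github.io/argo-helm
helm repo update
helm install argocd argo/argo-cd -n argocd --create-namespace
```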

Install Operators with Argo CD

In the first step, we will define templates responsible for the operators’ installation. If you have OLM installed on the Kubernetes cluster, that process comes down to creating the Subscription object (1). In some cases, we also have to create the OperatorGroup object. It provides multitenant configuration to OLM-installed operators. An operator group selects target namespaces in which to generate the required RBAC access for its members. Before installing in a namespace other than openshift-operators, we have to create the OperatorGroup in that namespace (2). We use the argocd.argoproj.io/sync-wave annotation to configure sync phases (3). The lower the value of that parameter, the higher the object’s priority (the namespace needs to be created before the OperatorGroup).

{{- range .Values.subscriptions }}
apiVersion: operators.coreos.com/v1alpha1 # (1)
kind: Subscription
metadata:
  name: {{ .name }}
  namespace: {{ .namespace }}
  annotations:
    argocd.argoproj.io/sync-wave: "2" # (3)
spec:
  channel: {{ .channel }}
  installPlanApproval: Automatic
  name: {{ .name }}
  source: {{ .source }}
  sourceNamespace: openshift-marketplace
---
{{- if ne .namespace "openshift-operators" }}
apiVersion: v1
kind: Namespace
metadata:
  name: {{ .namespace }}
  annotations:
    argocd.argoproj.io/sync-wave: "1" # (3)
---
apiVersion: operators.coreos.com/v1alpha2 # (2)
kind: OperatorGroup
metadata:
  name: {{ .name }}
  namespace: {{ .namespace }}
  annotations:
    argocd.argoproj.io/sync-wave: "2" # (3)
spec: {}
---
{{- end }}
{{- end }}

I’m using Helm for templating the YAML manifests. Thanks to that, we can apply several Subscription and OperatorGroup objects at once. Our Helm template iterates over the subscriptions list. In order to define the list of operators, we just need to provide a configuration similar to the values.yaml file visible below. The following operators are installed in this example: Kiali, Service Mesh (Istio), AMQ Streams (Strimzi Kafka), Patch Operator, and Serverless (Knative).

subscriptions:
  - name: kiali-ossm
    namespace: openshift-operators
    channel: stable
    source: redhat-operators
  - name: servicemeshoperator
    namespace: openshift-operators
    channel: stable
    source: redhat-operators
  - name: amq-streams
    namespace: openshift-operators
    channel: stable
    source: redhat-operators
  - name: patch-operator
    namespace: patch-operator
    channel: alpha
    source: community-operators
  - name: serverless-operator
    namespace: openshift-serverless
    channel: stable
    source: redhat-operators
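Before letting Argo CD sync the chart, it is worth rendering it locally to inspect the generated manifests. Assuming the chart sits in the global/ directory, as in the sample repository:

```shell
# Render all Subscription, Namespace, and OperatorGroup manifests to stdout
helm template cluster-config ./global -f ./global/values.yaml
```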

Override Argo CD Health Check

As I mentioned before, we need to override the default Argo CD health check for the Subscription CR. Normally, Argo CD just creates the Subscription object and doesn’t wait until the operator is installed on the cluster. To change that, we need to verify the value of the status.state field. If it equals AtLatestKnown, the operator has been successfully installed. In that case, we can set the Argo CD health status to Healthy. We can also override the default health check description to display the current version of the operator (the status.currentCSV field). If you installed Argo CD using the Helm chart, you can provide your health check implementation directly in the argocd-cm ConfigMap.

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/name: argocd-cm
    app.kubernetes.io/part-of: argocd
data:
  resource.customizations: |
    operators.coreos.com/Subscription:
      health.lua: |
        hs = {}
        hs.status = "Progressing"
        hs.message = ""
        if obj.status ~= nil then
          if obj.status.state ~= nil then
            if obj.status.state == "AtLatestKnown" then
              hs.message = obj.status.state .. " - " .. obj.status.currentCSV
              hs.status = "Healthy"
            end
          end
        end
        return hs

For those of you who installed Argo CD using the operator (like me), there is another way to override the health check. We need to provide it inside the extraConfig field in the ArgoCD CR.

apiVersion: argoproj.io/v1alpha1
kind: ArgoCD
metadata:
  name: openshift-gitops
  namespace: openshift-gitops
spec:
  ...
  extraConfig:
    resource.customizations: |
      operators.coreos.com/Subscription:
        health.lua: |
          hs = {}
          hs.status = "Progressing"
          hs.message = ""
          if obj.status ~= nil then
            if obj.status.state ~= nil then
              if obj.status.state == "AtLatestKnown" then
                hs.message = obj.status.state .. " - " .. obj.status.currentCSV
                hs.status = "Healthy"
              end
            end
          end
          return hs

With the steps described so far, we achieved two things. We divided our sync process into multiple phases with the Argo CD sync waves feature. We also forced Argo CD to wait until the operator installation process finishes before moving to the next phase. Let’s proceed to the next step: defining CR objects.
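You can inspect the exact field the custom health check evaluates with kubectl; the operator name and namespace below come from the earlier values file, and the fully qualified resource name disambiguates the OLM Subscription from other kinds with the same short name.

```shell
# The custom health check reports Healthy once this prints AtLatestKnown
kubectl get subscriptions.operators.coreos.com amq-streams \
  -n openshift-operators -o jsonpath='{.status.state}{"\n"}'
```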

Create Custom Resources with Argo CD

In the previous steps, we successfully installed Kubernetes operators with ArgoCD. Now, it is time to use them. We will do everything in a single synchronization process. In the previous phase (wave=2), we installed the Kafka operator (Strimzi). In this phase, we will run a Kafka cluster using the Kafka CRD provided by the Strimzi project. To be sure that we apply it after the Strimzi operator installation, we will do it in the third phase (1). That’s not all. Since our CRD has been created by the operator, it is not part of the sync process. By default, Argo CD tries to find the CRD during the sync and fails with the error “the server could not find the requested resource”. To avoid that, we skip the dry run for missing resource types (2) during the sync.

apiVersion: v1
kind: Namespace
metadata:
  name: kafka
  annotations:
    argocd.argoproj.io/sync-wave: "1"
---
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  namespace: kafka
  annotations:
    argocd.argoproj.io/sync-wave: "3" # (1)
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true # (2)
spec:
  kafka:
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
      default.replication.factor: 3
      min.insync.replicas: 2
      inter.broker.protocol.version: '3.2'
    storage:
      type: persistent-claim
      size: 5Gi
      deleteClaim: true
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
    version: 3.2.3
    replicas: 3
  entityOperator:
    topicOperator: {}
    userOperator: {}
  zookeeper:
    storage:
      type: persistent-claim
      deleteClaim: true
      size: 2Gi
    replicas: 3

We can also install Knative Serving on our cluster, since we previously installed the Knative operator. As before, we set wave=3 and skip the dry run on missing resources during the sync.

apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
  annotations:
    argocd.argoproj.io/sync-wave: "3"
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
spec: {}

Finally, let’s create the Argo CD Application that manages all the defined manifests and automatically applies them to the Kubernetes cluster. We need to define the source Git repository and the directory containing our YAMLs (global).

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cluster-config
spec:
  destination:
    server: 'https://kubernetes.default.svc'
  project: default
  source:
    path: global
    repoURL: 'https://github.com/piomin/openshift-cluster-config.git'
    targetRevision: HEAD
    helm:
      valueFiles:
        - values.yaml
  syncPolicy:
    automated:
      selfHeal: true

Helm Unit Testing

Just to ensure that we defined all the Helm templates properly, we can include some unit tests. We can use the helm-unittest plugin for that. We will place the test sources inside the global/tests directory. Here’s our test, defined in the subscription_tests.yaml file:

suite: test main
values:
  - ./values/test.yaml
templates:
  - templates/subscriptions.yaml
chart:
  version: 1.0.0+test
  appVersion: 1.0
tests:
  - it: subscription default ns
    template: templates/subscriptions.yaml
    documentIndex: 0
    asserts:
      - equal:
          path: metadata.namespace
          value: openshift-operators
      - equal:
          path: metadata.name
          value: test1
      - equal:
          path: spec.channel
          value: ch1
      - equal:
          path: spec.source
          value: src1
      - isKind:
          of: Subscription
      - isAPIVersion:
          of: operators.coreos.com/v1alpha1
  - it: subscription custom ns
    template: templates/subscriptions.yaml
    documentIndex: 1
    asserts:
      - equal:
          path: metadata.namespace
          value: custom-ns
      - equal:
          path: metadata.name
          value: test2
      - equal:
          path: spec.channel
          value: ch2
      - equal:
          path: spec.source
          value: src2
      - isKind:
          of: Subscription
      - isAPIVersion:
          of: operators.coreos.com/v1alpha1
  - it: custom ns
    template: templates/subscriptions.yaml
    documentIndex: 2
    asserts:
      - equal:
          path: metadata.name
          value: custom-ns
      - isKind:
          of: Namespace
      - isAPIVersion:
          of: v1

We need to define test values:

subscriptions:
  - name: test1
    namespace: openshift-operators
    channel: ch1
    source: src1
  - name: test2
    namespace: custom-ns
    channel: ch2
    source: src2

We can prepare a build process for our repository. Here’s a sample CircleCI configuration for that. If you are interested in more details about Helm unit testing and releasing, please refer to my article.

version: 2.1

orbs:
  helm: circleci/helm@2.0.1

jobs:
  build:
    docker:
      - image: cimg/base:2023.04
    steps:
      - checkout
      - helm/install-helm-client
      - run:
          name: Install Helm unit-test
          command: helm plugin install https://github.com/helm-unittest/helm-unittest
      - run:
          name: Run unit tests
          command: helm unittest global

workflows:
  helm_test:
    jobs:
      - build

Synchronize Configuration with Argo CD

Once we create the new Argo CD Application responsible for synchronization, the process starts. In the first step, Argo CD creates the required namespaces. Then, it proceeds to the operators’ installation phase. It may take some time.

Once ArgoCD installs all the Kubernetes operators, you can verify their health checks. Here’s the value of a health check during the installation phase.

kubernetes-operators-argocd-healthcheck

Here’s the result after successful installation.

Now, Argo CD proceeds to the CR creation phase. It runs the Kafka cluster and enables Knative. Let’s switch to the OpenShift cluster console. We can display a list of installed operators:

kubernetes-operators-argocd-operators

We can also verify that the Kafka cluster is running in the kafka namespace.
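From the command line, the same verification looks like this (run against your cluster; the names come from the Kafka manifest above):

```shell
# The Kafka CR and the broker/zookeeper pods should all be ready
kubectl get kafka my-cluster -n kafka
kubectl get pods -n kafka
```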

Final Thoughts

With Argo CD we can manage the entire Kubernetes cluster configuration. It supports Helm charts, but there is another way of installing apps on Kubernetes: operators. I focused on the features and approach that allow us to install and manage operators in the GitOps way. I showed a practical example of how to use sync waves and how to apply custom resources whose CRDs are not managed directly by Argo CD. With all of these mechanisms in place, we can easily handle Kubernetes operators with ArgoCD.

The post Manage Kubernetes Operators with ArgoCD appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2023/05/05/manage-kubernetes-operators-with-argocd/feed/ 14 14151