Migrate from Kubernetes to OpenShift in the GitOps Way

In this article, you will learn how to migrate your apps from Kubernetes to OpenShift in the GitOps way using tools like Kustomize, Helm, operators, and Argo CD. We will discuss the best practices in that area. This requires us to avoid approaches like starting a pod in the privileged mode. We will focus not just on running your custom apps, but mostly on the popular pieces of cloud-native or legacy software including:

  • Argo CD
  • Istio
  • Apache Kafka
  • Postgres
  • HashiCorp Vault
  • Prometheus
  • Redis
  • Cert Manager

Finally, we will migrate our sample Spring Boot app. I will also show you how to build such an app on Kubernetes and OpenShift in the same way using the Shipwright tool. However, before we start, let’s discuss some differences between “vanilla” Kubernetes and OpenShift.

Introduction

What are the key differences between Kubernetes and OpenShift? That’s probably the first question you will ask yourself when considering migration from Kubernetes. Today, I will focus only on those aspects that impact running the apps from our list. First of all, OpenShift is built on top of Kubernetes and is fully compatible with Kubernetes APIs and resources. If you can do something on Kubernetes, you can do it on OpenShift in the same way, as long as it doesn’t violate the security policy. OpenShift comes with additional security policies out of the box. For example, by default, it won’t allow you to run containers as the root user.

Security aside, the mere fact that you can do something doesn’t mean you should do it that way. So, while you can run images from Docker Hub, Red Hat provides many supported container images built from Red Hat Enterprise Linux. You can find a full list of supported images here. Although you can install popular software on OpenShift using Helm charts, Red Hat provides various supported Kubernetes operators for that. With those operators, you can be sure that the installation will go without any problems, and the solution may be better integrated with OpenShift. We will analyze all those things based on the examples from the tools list.

Source Code

If you would like to try it yourself, you can always take a look at my source code. To do that, you need to clone my GitHub repository. I will explain the structure of our sample in detail later, so after cloning the Git repository, just follow my instructions.

Install Argo CD

Use Official Helm Chart

In the first step, we will install Argo CD on OpenShift. I’m assuming that on Kubernetes, you’re using the official Helm chart for that. In order to install that chart, we need to add the following Helm repository:

$ helm repo add argo https://argoproj.github.io/argo-helm
ShellSession

Then, we can install Argo CD in the argocd namespace on OpenShift with the following command. The Argo CD Helm chart provides some parameters dedicated to OpenShift. We need to enable an arbitrary UID for the repo server by setting the openshift.enabled property to true. If we want to access the Argo CD dashboard from outside the cluster, we should expose it as a Route. To do that, we need to enable the server.route.enabled property and set the hostname using the server.route.hostname parameter (piomin.eastus.aroapp.io is my OpenShift domain).

$ helm install argocd argo/argo-cd -n argocd --create-namespace \
    --set openshift.enabled=true \
    --set server.route.enabled=true \
    --set server.route.hostname=argocd.apps.piomin.eastus.aroapp.io
ShellSession

After that, we can access the Argo CD dashboard using the Route address. The admin user password may be taken from the argocd-initial-admin-secret Secret generated by the Helm chart.
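The password can be decoded with a single command, for example:

$ oc get secret argocd-initial-admin-secret -n argocd \
    -o jsonpath='{.data.password}' | base64 -d
ShellSession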

Use the OpenShift GitOps Operator (Recommended Way)

The solution presented in the previous section works fine. However, it is not the optimal approach for OpenShift. In that case, a better idea is to use the OpenShift GitOps operator. Firstly, we should find the “Red Hat OpenShift GitOps” operator in the “OperatorHub” section of the OpenShift Console. Then, we have to install the operator.
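Alternatively, the operator can be installed declaratively with a Subscription object, in line with the GitOps approach. Here’s a minimal sketch (the channel name may differ between OpenShift versions):

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-gitops-operator
  namespace: openshift-operators
spec:
  channel: latest
  installPlanApproval: Automatic
  name: openshift-gitops-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
YAML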

During the installation, the operator automatically creates the Argo CD instance in the openshift-gitops namespace.

The OpenShift GitOps operator automatically exposes the Argo CD dashboard through a Route. It is also integrated with OpenShift authentication, so we can use cluster credentials to sign in there.

(Screenshot: the Argo CD dashboard running on OpenShift.)

Install Redis, Postgres and Apache Kafka

OpenShift Support in Bitnami Helm Charts

Firstly, let’s assume that we use Bitnami Helm charts to install all three tools from the chapter title (Redis, Postgres, Kafka) on Kubernetes. Fortunately, the latest versions of Bitnami Helm charts provide out-of-the-box compatibility with the OpenShift platform. Let’s analyze what it means.

Beginning with version 4.11, OpenShift introduces a new Security Context Constraint (SCC) called restricted-v2. In OpenShift, security context constraints allow us to control permissions assigned to pods. The restricted-v2 SCC includes a minimal set of privileges usually required for a generic workload to run. It is the most restrictive policy that matches the current pod security standards. As I mentioned before, the latest versions of the most popular Bitnami Helm charts support the restricted-v2 SCC. We can check which of the charts support that feature by checking if they provide the global.compatibility.openshift.adaptSecurityContext parameter. The default value of that parameter is auto, meaning it is applied only if the detected running cluster is OpenShift.
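We can quickly inspect whether a given chart exposes that parameter, e.g. for the Redis chart (assuming the Bitnami repository is already added, as shown in the next section):

$ helm show values bitnami/redis | grep -A 3 adaptSecurityContext
ShellSession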

So, in short, we don’t have to change anything in the Helm chart configuration used on Kubernetes to make it also work on OpenShift. However, it doesn’t mean that we won’t change that configuration. Let’s analyze it tool by tool.

Install Redis on OpenShift with Helm Chart

In the first step, let’s add the Bitnami Helm repository with the following command:

$ helm repo add bitnami https://charts.bitnami.com/bitnami
ShellSession

Then, we can install and run a Redis cluster with a single master node and three replicas in the redis namespace using the following command:

$ helm install redis bitnami/redis -n redis --create-namespace
ShellSession

After installing the chart, we can display the list of pods running in the redis namespace:

$ oc get po
NAME               READY   STATUS    RESTARTS   AGE
redis-master-0     1/1     Running   0          5m31s
redis-replicas-0   1/1     Running   0          5m31s
redis-replicas-1   1/1     Running   0          4m44s
redis-replicas-2   1/1     Running   0          4m3s
ShellSession

Let’s take a look at the securityContext section inside one of the Redis cluster pods. It contains fields characteristic of the restricted-v2 SCC, which removes runAsUser, runAsGroup, and fsGroup and lets the platform assign its allowed default IDs.

(Screenshot: the securityContext of a Redis pod under the restricted-v2 SCC.)
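For reference, here’s a sketch of how such a rendered securityContext typically looks under restricted-v2 (the exact UID is assigned from a range allocated to the namespace, so it is omitted here):

securityContext:
  allowPrivilegeEscalation: false
  capabilities:
    drop:
      - ALL
  runAsNonRoot: true
  seccompProfile:
    type: RuntimeDefault
YAML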

However, let’s stop for a moment to analyze the current situation. We installed Redis on OpenShift using the Bitnami Helm chart. By default, this chart is based on the Redis Debian image provided by Bitnami in the Docker Hub.

On the other hand, Red Hat provides its own build of the Redis image based on RHEL 9. Consequently, this image is more suitable for running on OpenShift.

(Screenshot: the Red Hat build of the Redis image based on RHEL 9 in the catalog.)

In order to use a different Redis image with the Bitnami Helm chart, we need to override the registry, repository, and tag fields in the image section. The full address of the current latest Red Hat Redis image is registry.redhat.io/rhel9/redis-7:1-16. To make the Bitnami chart work with that image, we also need to override the default data path to /var/lib/redis/data and disable the read-only root filesystem in the container’s security context for the replica pods.

image:
  tag: 1-16
  registry: registry.redhat.io
  repository: rhel9/redis-7

master:
  persistence:
    path: /var/lib/redis/data

replica:
  persistence:
    path: /var/lib/redis/data
  containerSecurityContext:
    readOnlyRootFilesystem: false
YAML
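Assuming we save the overrides above as values-redis.yaml (the file name used later in our Kustomize overlay), we can apply them to the release:

$ helm upgrade --install redis bitnami/redis -n redis --create-namespace \
    -f values-redis.yaml
ShellSession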

Install Postgres on OpenShift with Helm Chart

With Postgres, the situation is very similar to Redis. The Bitnami Helm chart also supports the OpenShift restricted-v2 SCC, and Red Hat provides a Postgres image based on RHEL 9. Once again, we need to override some chart parameters to adapt to an image different from the default one provided by Bitnami.

image:
  tag: 1-54
  registry: registry.redhat.io
  repository: rhel9/postgresql-15

primary:
  containerSecurityContext:
    readOnlyRootFilesystem: false
  persistence:
    mountPath: /var/lib/pgsql
  extraEnvVars:
    - name: POSTGRESQL_ADMIN_PASSWORD
      value: postgresql123

postgresqlDataDir: /var/lib/pgsql/data
YAML
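Similarly, assuming the values above are saved as values-postgres.yaml, we can install the chart with the following command:

$ helm upgrade --install postgresql bitnami/postgresql -n postgresql --create-namespace \
    -f values-postgres.yaml
ShellSession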

Of course, we can consider switching to one of the available Postgres operators. From the “OperatorHub” section, we can install Postgres using e.g. the Crunchy or EDB operators. However, these are not operators provided by Red Hat. Of course, you can use them on “vanilla” Kubernetes as well. In that case, the migration to OpenShift also won’t be complicated.

Install Kafka on OpenShift with the Strimzi Operator

The situation is slightly different in the case of Apache Kafka. Of course, we can use the Kafka Helm chart provided by Bitnami. However, Red Hat provides a supported version of Kafka through the Strimzi operator. This operator is a part of the Red Hat product ecosystem and is available commercially as AMQ Streams. In order to install Kafka with AMQ Streams on OpenShift, we need to install the operator first.

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: amq-streams
  namespace: openshift-operators
  annotations:
    argocd.argoproj.io/sync-wave: "2"
spec:
  channel: stable
  installPlanApproval: Automatic
  name: amq-streams
  source: redhat-operators
  sourceNamespace: openshift-marketplace
YAML

Once we install the operator with the Strimzi CRDs, we can provision a Kafka instance on OpenShift. In order to do that, we need to define the Kafka object. The name of the cluster is my-cluster. We should install it after a successful installation of the operator CRDs, so we set a higher value of the Argo CD sync-wave parameter than for the amq-streams Subscription object. Thanks to the SkipDryRunOnMissingResource option, Argo CD will also ignore CRDs not yet installed by the operator during sync.

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  namespace: kafka
  annotations:
    argocd.argoproj.io/sync-wave: "3"
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
spec:
  kafka:
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
      default.replication.factor: 3
      min.insync.replicas: 2
      inter.broker.protocol.version: '3.6'
    storage:
      type: persistent-claim
      size: 5Gi
      deleteClaim: true
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
    version: 3.6.0
    replicas: 3
  entityOperator:
    topicOperator: {}
    userOperator: {}
  zookeeper:
    storage:
      type: persistent-claim
      deleteClaim: true
      size: 2Gi
    replicas: 3
YAML
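Once the operator reconciles the Kafka object, we can wait until the cluster reports readiness, for example:

$ oc wait kafka/my-cluster --for=condition=Ready --timeout=300s -n kafka
ShellSession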

GitOps Strategy for Kubernetes and OpenShift

In this section, we will focus on comparing the differences in the GitOps manifests between Kubernetes and OpenShift. We will use Kustomize to configure two overlays: openshift and kubernetes. Here’s the structure of our configuration repository:

.
├── base
│   ├── kustomization.yaml
│   └── namespaces.yaml
└── overlays
    ├── kubernetes
    │   ├── kustomization.yaml
    │   ├── namespaces.yaml
    │   ├── values-cert-manager.yaml
    │   └── values-vault.yaml
    └── openshift
        ├── cert-manager-operator.yaml
        ├── kafka-operator.yaml
        ├── kustomization.yaml
        ├── service-mesh-operator.yaml
        ├── values-postgres.yaml
        ├── values-redis.yaml
        └── values-vault.yaml
ShellSession

Configuration for Kubernetes

In addition to the previously discussed tools, we will also install “cert-manager”, Prometheus, Istio, and Vault using Helm charts. Kustomize allows us to define a list of managed charts in the helmCharts section. Here’s the kustomization.yaml file containing the full set of installed charts:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../../base
  - namespaces.yaml

helmCharts:
  - name: redis
    repo: https://charts.bitnami.com/bitnami
    releaseName: redis
    namespace: redis
  - name: postgresql
    repo: https://charts.bitnami.com/bitnami
    releaseName: postgresql
    namespace: postgresql
  - name: kafka
    repo: https://charts.bitnami.com/bitnami
    releaseName: kafka
    namespace: kafka
  - name: cert-manager
    repo: https://charts.jetstack.io
    releaseName: cert-manager
    namespace: cert-manager
    valuesFile: values-cert-manager.yaml
  - name: vault
    repo: https://helm.releases.hashicorp.com
    releaseName: vault
    namespace: vault
    valuesFile: values-vault.yaml
  - name: prometheus
    repo: https://prometheus-community.github.io/helm-charts
    releaseName: prometheus
    namespace: prometheus
  - name: base
    repo: https://istio-release.storage.googleapis.com/charts
    releaseName: istio-base
    namespace: istio-system
  - name: istiod
    repo: https://istio-release.storage.googleapis.com/charts
    releaseName: istiod
    namespace: istio-system
overlays/kubernetes/kustomization.yaml

For some of them, we need to override the default Helm parameters. Here’s the values-vault.yaml file with the parameters for Vault. We enable the development mode and the UI dashboard:

server:
  dev:
    enabled: true
ui:
  enabled: true
overlays/kubernetes/values-vault.yaml

Let’s also customize the default behavior of the “cert-manager” chart with the following values:

installCRDs: true
startupapicheck:
  enabled: false
overlays/kubernetes/values-cert-manager.yaml

Configuration for OpenShift

Then, we can switch to the configuration for OpenShift. Vault still has to be installed with the Helm chart, but for “cert-manager” we can use the operator provided by Red Hat. Since OpenShift comes with built-in Prometheus, we don’t need to install it. We will also replace the Istio Helm charts with the Red Hat-supported OpenShift Service Mesh operator. Here’s the kustomization.yaml for OpenShift:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../../base
  - kafka-operator.yaml
  - cert-manager-operator.yaml
  - service-mesh-operator.yaml

helmCharts:
  - name: redis
    repo: https://charts.bitnami.com/bitnami
    releaseName: redis
    namespace: redis
    valuesFile: values-redis.yaml
  - name: postgresql
    repo: https://charts.bitnami.com/bitnami
    releaseName: postgresql
    namespace: postgresql
    valuesFile: values-postgres.yaml
  - name: vault
    repo: https://helm.releases.hashicorp.com
    releaseName: vault
    namespace: vault
    valuesFile: values-vault.yaml
overlays/openshift/kustomization.yaml
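Before committing, we can render the final manifests locally. Kustomize requires the --enable-helm flag to inflate the Helm charts:

$ kustomize build overlays/openshift --enable-helm
ShellSession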

For Vault, we should enable the integration with OpenShift and support for the Route object. Red Hat provides a Vault image based on UBI in the registry.connect.redhat.com/hashicorp/vault registry. Here’s the values-vault.yaml file for OpenShift:

server:
  dev:
    enabled: true
  route:
    enabled: true
    host: ""
    tls: null
  image:
    repository: "registry.connect.redhat.com/hashicorp/vault"
    tag: "1.16.1-ubi"
global:
  openshift: true
injector:
  enabled: false
overlays/openshift/values-vault.yaml

In order to install operators, we need to define at least the Subscription object. Here’s the subscription for OpenShift Service Mesh. After installing the operator, we can create a control plane in the istio-system namespace using the ServiceMeshControlPlane CRD object. In order to apply that object after installing the operator, we need to use Argo CD sync waves and define the SkipDryRunOnMissingResource parameter:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: servicemeshoperator
  namespace: openshift-operators
  annotations:
    argocd.argoproj.io/sync-wave: "2"
spec:
  channel: stable
  installPlanApproval: Automatic
  name: servicemeshoperator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
---
apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: basic
  namespace: istio-system
  annotations:
    argocd.argoproj.io/sync-wave: "3"
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
spec:
  tracing:
    type: None
    sampling: 10000
  policy:
    type: Istiod
  addons:
    grafana:
      enabled: false
    jaeger:
      install:
        storage:
          type: Memory
    kiali:
      enabled: false
    prometheus:
      enabled: false
  telemetry:
    type: Istiod
  version: v2.5
overlays/openshift/service-mesh-operator.yaml

Since the “cert-manager” operator is installed in a different namespace than openshift-operators, we also need to define the OperatorGroup object.

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-cert-manager-operator
  namespace: cert-manager
  annotations:
    argocd.argoproj.io/sync-wave: "2"
spec:
  channel: stable-v1
  installPlanApproval: Automatic
  name: openshift-cert-manager-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
---
apiVersion: operators.coreos.com/v1alpha2
kind: OperatorGroup
metadata:
  name: cert-manager-operator
  namespace: cert-manager
  annotations:
    argocd.argoproj.io/sync-wave: "2"
spec:
  targetNamespaces:
    - cert-manager
overlays/openshift/cert-manager-operator.yaml

Finally, as mentioned before, OpenShift comes with built-in Prometheus monitoring, so we don’t need to install it ourselves.
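If we also want the built-in stack to scrape metrics from our own apps, it’s enough to enable user workload monitoring with the following ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true
YAML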

Apply the Configuration with Argo CD

Here’s the Argo CD Application responsible for installing our sample configuration on OpenShift. We should create it in the openshift-gitops namespace.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: install
  namespace: openshift-gitops
spec:
  destination:
    server: 'https://kubernetes.default.svc'
  project: default
  source:
    path: overlays/openshift
    repoURL: 'https://github.com/piomin/kubernetes-to-openshift-argocd.git'
    targetRevision: HEAD
YAML

Before that, we need to enable the Helm chart inflator generator for Kustomize in Argo CD. In order to do that, we can add the kustomizeBuildOptions parameter to the openshift-gitops ArgoCD object as shown below.

apiVersion: argoproj.io/v1beta1
kind: ArgoCD
metadata:
  name: openshift-gitops
  namespace: openshift-gitops
spec:
  # ...
  kustomizeBuildOptions: '--enable-helm'
YAML
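Instead of editing the object in the OpenShift Console, we can patch it with a single command:

$ oc patch argocd openshift-gitops -n openshift-gitops --type merge \
    -p '{"spec":{"kustomizeBuildOptions":"--enable-helm"}}'
ShellSession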

After creating the Argo CD Application and triggering the sync process, the installation starts on OpenShift.

(Screenshot: the sync progress of the Argo CD Application on OpenShift.)

Build App Images

We have installed several software solutions, including the most popular databases, message brokers, and security tools. However, now we want to build and run our own apps. How do we migrate them from Kubernetes to OpenShift? Of course, we can run the app images exactly the same way as on Kubernetes. On the other hand, we can build them on OpenShift using the Shipwright project. We can install it on OpenShift using the “Builds for Red Hat OpenShift Operator”.

(Screenshot: the Builds for Red Hat OpenShift Operator in OperatorHub.)

After that, we need to create the ShipwrightBuild object. It needs to contain the name of the target namespace for running Shipwright builds in the targetNamespace field. In my case, the target namespace is builds-demo. For a detailed description of Shipwright builds, you can refer to this article on my blog.

apiVersion: operator.shipwright.io/v1alpha1
kind: ShipwrightBuild
metadata:
  name: openshift-builds
spec:
  targetNamespace: builds-demo
YAML

With Shipwright, we can easily switch between multiple build strategies, on Kubernetes as well as on OpenShift. For example, on OpenShift we can use the built-in source-to-image (S2I) strategy, while on Kubernetes we can use e.g. Kaniko or Cloud Native Buildpacks.

apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: sample-spring-kotlin-build
  namespace: builds-demo
spec:
  output:
    image: quay.io/pminkows/sample-kotlin-spring:1.0-shipwright
    pushSecret: pminkows-piomin-pull-secret
  source:
    git:
      url: https://github.com/piomin/sample-spring-kotlin-microservice.git
  strategy:
    name: source-to-image
    kind: ClusterBuildStrategy
YAML
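The Build object is only a definition. To actually execute it, we create a BuildRun that references the Build. Here’s a minimal sketch:

apiVersion: shipwright.io/v1beta1
kind: BuildRun
metadata:
  generateName: sample-spring-kotlin-build-
  namespace: builds-demo
spec:
  build:
    name: sample-spring-kotlin-build
YAML

Since it uses generateName, we should submit it with oc create rather than oc apply.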

Final Thoughts

Migration from Kubernetes to OpenShift is not a painful process. Many popular Helm charts support the OpenShift restricted-v2 SCC. Thanks to that, in some cases, you don’t need to change anything. However, sometimes it’s worth switching to the version of a particular tool supported by Red Hat.

Continuous Delivery on Kubernetes with Database using ArgoCD and Liquibase

In this article, you will learn how to design a continuous delivery process on Kubernetes with ArgoCD and Liquibase. We will consider an application that connects to a database and updates its schema on each new release. How to do it properly for a cloud-native application? Moreover, how to do it properly on Kubernetes?

Fortunately, there are two types of tools that perfectly fit the described process. Firstly, we need a tool that allows us to easily deploy applications in multiple environments. That’s what we may achieve on Kubernetes with ArgoCD. In the next step, we need a tool that automatically updates a database schema on demand. There are many tools for that. Since we need something lightweight and easy to containerize, my choice fell on Liquibase. It is not my first article about Liquibase on Kubernetes. You can also read more about the blue-green deployment approach with Liquibase here.

However, in this article, we will focus on a slightly different problem. First, let’s describe it.

Introduction

I think that one of the biggest challenges around continuous delivery is the integration with databases. We should treat this integration the same as standard configuration: it’s time to treat database code like application code. Otherwise, our CI/CD process fails at the database.

Usually, when we consider the CI/CD process for an application, we have multiple target environments. According to cloud-native patterns, each application has its own separate database. Moreover, it is unique per environment. So each time we run our delivery pipeline, we have to update the database instance in the particular environment. We should do it just before running a new version of the application (e.g. with the blue-green approach).

Here’s the visualization of our scenario. We are releasing the Spring Boot application in three different environments. Those environments are just different namespaces on Kubernetes: dev, test and prod. The whole process is managed by ArgoCD and Liquibase. In the dev environment, we don’t use any migration tool. Let’s say we leave it to developers. Our mechanism is active for the test and prod namespaces.

(Diagram: ArgoCD and Liquibase managing releases across the dev, test, and prod namespaces.)

Modern Java frameworks like Spring Boot offer built-in integration with Liquibase. In that approach, we just need to create a Liquibase changelog and set its location in the Spring configuration. The framework then runs such a script on application startup. While it is a very useful approach in development, I would not recommend it for production deployments, especially if you deploy your application on Kubernetes. Why? You will find a detailed explanation in the article I have already mentioned in the first paragraph.
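For reference, that development-time integration boils down to a couple of properties in the Spring configuration. A minimal sketch, assuming the changelog is placed on the application classpath:

spring:
  liquibase:
    enabled: true
    change-log: classpath:/db/changelog/changelog.xml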

Source Code

If you would like to try it yourself, you may always take a look at my source code. In order to do that, you need to clone my GitHub repository. Then you should just follow my instructions 🙂

Also, one thing before we start. Here I described the whole process of building and deploying applications on Kubernetes with ArgoCD and Tekton. Typically, there is a continuous integration phase, which is realized with Tekton. In this phase, we are just building and pushing the image. We are not releasing changes to the database, especially since we have multiple environments.

The picture visible below illustrates our approach in that context. ArgoCD synchronizes the configuration and applies the changes to the Kubernetes cluster and to a database using Liquibase. It doesn’t matter if the database is running on Kubernetes or not. However, in our case, we assume it is deployed in the same namespace as the application.

(Diagram: ArgoCD synchronizing manifests and applying database changes through Liquibase.)

Docker Image with Liquibase

There is an official Liquibase image on Docker Hub. We need to execute the update command using this image. To do that, I prepared a custom image based on the official Liquibase image. You can see the Dockerfile below. But you can just as well pull the image I published in my Docker registry: docker.io/piomin/liquibase:latest.

FROM liquibase/liquibase
ENV URL=jdbc:postgresql://postgresql:5432/test
ENV USERNAME=postgres
ENV PASSWORD=postgres
ENV CHANGELOGFILE=changelog.xml
CMD ["sh", "-c", "docker-entrypoint.sh --url=${URL} --username=${USERNAME} --password=${PASSWORD} --classpath=/liquibase/changelog --changeLogFile=${CHANGELOGFILE} update"]

We will run that image as an init container inside the pod with our application. Thanks to that approach, we can be sure that it updates the database schema just before the container with the application starts.

Sample Spring Boot application

We will use one of my sample Spring Boot applications in this exercise. You may find it in my GitHub repository here. It connects to the PostgreSQL database:

spring:
  application:
    name: person-service
  datasource:
    url: jdbc:postgresql://person-db:5432/${DATABASE_NAME}
    username: ${DATABASE_USER}
    password: ${DATABASE_PASSWORD}

You can clone the application source code by yourself. But you can as well pull the ready image located here: quay.io/pminkows/person-app. The application is compiled with Java 17 and uses Spring Data JPA as an ORM layer to integrate with the database.

<properties>
  <java.version>17</java.version>
</properties>
<dependencies>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
  </dependency>
  <dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <scope>runtime</scope>
  </dependency>
</dependencies>

Use Liquibase with ArgoCD and Kustomize

In our scenario, the Liquibase init container should be included in the Deployment only for test and prod namespaces. On the dev environment, the pod should just contain a single container with the Spring Boot application. In order to implement this behavior with ArgoCD, we may use Kustomize. Kustomize has the concepts of bases and overlays. A base is a directory with a kustomization.yaml, which contains a set of resources and associated customization. Thanks to overlays, we may include additional elements into the base manifests. So, here’s the structure of our configuration repository in GitHub:
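.
├── base
│   ├── changelog.yaml
│   ├── deployment.yaml
│   └── kustomization.yaml
└── overlays
    └── liquibase
        ├── kustomization.yaml
        └── liquibase-container.yaml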

Here’s our base Deployment file. It’s pretty simple. The only thing we need to do is to inject database credentials from secrets:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-spring-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-spring
  template:
    metadata:
      labels:
        app: sample-spring
    spec:
      containers:
        - name: sample-spring
          image: quay.io/pminkows/person-app:1.0
          ports:
            - containerPort: 8080
          env:
            - name: DATABASE_USER
              valueFrom:
                secretKeyRef:
                  name: postgres
                  key: database-user
            - name: DATABASE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres
                  key: database-password
            - name: DATABASE_NAME
              valueFrom:
                secretKeyRef:
                  name: postgres
                  key: database-name

We also have the Liquibase changeLog.sql file in the base directory. We should place it inside a Kubernetes ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: changelog-cm
data:
  changeLog.sql: |
    --liquibase formatted sql
    --changeset piomin:1
    create table person (
      id serial primary key,
      name varchar(255),
      gender varchar(255),
      age int,
      externalId int
    );
    insert into person(name, age, gender) values('John Smith', 25, 'MALE');
    insert into person(name, age, gender) values('Paul Walker', 65, 'MALE');
    insert into person(name, age, gender) values('Lewis Hamilton', 35, 'MALE');
    insert into person(name, age, gender) values('Veronica Jones', 20, 'FEMALE');
    insert into person(name, age, gender) values('Anne Brown', 60, 'FEMALE');
    insert into person(name, age, gender) values('Felicia Scott', 45, 'FEMALE');

Also, let’s take a look at the base/kustomization.yaml file:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - changelog.yaml

In the overlays/liquibase/liquibase-container.yaml file, we are defining the init container that should be included in our base Deployment. There are four parameters available to override: the address of the target database, the username, the password, and the location of the Liquibase changelog file. The changeLog.sql file is available to the container as a volume mounted under /liquibase/changelog. Of course, I’m using the image described in the Docker Image with Liquibase section.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-spring-deployment
spec:
  template:
    spec:
      initContainers:
        - name: liquibase
          image: docker.io/piomin/liquibase:latest
          env:
            - name: URL
              value: jdbc:postgresql://person-db:5432/sampledb
            - name: USERNAME
              valueFrom:
                secretKeyRef:
                  name: postgres
                  key: database-user
            - name: PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres
                  key: database-password
            - name: CHANGELOGFILE
              value: changeLog.sql
          volumeMounts:
            - mountPath: /liquibase/changelog
              name: changelog
      volumes:
        - name: changelog
          configMap:
            name: changelog-cm

And here’s the last manifest in the repository, the overlay kustomization.yaml file. It uses the whole structure from the base catalog and adds the init container to the application Deployment:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
  - ../../base
patchesStrategicMerge:
  - liquibase-container.yaml

Create ArgoCD applications

Since our configuration is ready, we may proceed to the last step. Let’s create the ArgoCD applications responsible for synchronization between the Git repository and both the Kubernetes cluster and the target database. In order to create an ArgoCD application, we need to apply the following manifest. It refers to the Kustomize overlay defined in the /overlays/liquibase directory. The declaration for the test or prod environment looks as shown below:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: liquibase-prod
spec:
  destination:
    namespace: prod
    server: 'https://kubernetes.default.svc'
  project: default
  source:
    path: overlays/liquibase
    repoURL: 'https://github.com/piomin/sample-argocd-liquibase-kustomize.git'
    targetRevision: HEAD

On the other hand, the dev environment doesn’t require an init container with Liquibase. Therefore, it uses the base directory:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: liquibase-dev
spec:
  destination:
    namespace: dev
    server: 'https://kubernetes.default.svc'
  project: default
  source:
    path: base
    repoURL: 'https://github.com/piomin/sample-argocd-liquibase-kustomize.git'
    targetRevision: HEAD
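Assuming we saved the manifests above as liquibase-dev.yaml and liquibase-prod.yaml (hypothetical file names), we can create both applications with kubectl:

$ kubectl apply -f liquibase-dev.yaml -f liquibase-prod.yaml -n argocd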

Since ArgoCD supports Kustomize, we just need to create the applications. Alternatively, we could have created them using the ArgoCD UI. After triggering the sync process (the Sync button), ArgoCD applies the changes to the cluster and, through the Liquibase init container, to the Postgres database in each environment.

After the ArgoCD synchronization, we can verify the results directly in the cluster. Let’s display the list of running pods in one of the target namespaces, e.g. prod.

$ kubectl get pod                                               
NAME                                        READY   STATUS    RESTARTS   AGE
person-db-1-vn8kw                           1/1     Running   0          82m
sample-spring-deployment-7695d64c54-9hf75   1/1     Running   0          7m16s

We can also print out the Liquibase init container logs using kubectl:

$ kubectl logs sample-spring-deployment-7695d64c54-9hf75 -c liquibase
