liquibase kubernetes Archives - Piotr's TechBlog
https://piotrminkowski.com/tag/liquibase-kubernetes/

Continuous Delivery on Kubernetes with Database using ArgoCD and Liquibase
https://piotrminkowski.com/2021/12/13/continuous-delivery-on-kubernetes-with-database-using-argocd-and-liquibase/
Mon, 13 Dec 2021 14:14:46 +0000

In this article, you will learn how to design a continuous delivery process on Kubernetes with ArgoCD and Liquibase. We will consider an application that connects to a database and updates the schema on each new release. How to do it properly for a cloud-native application? Moreover, how to do it properly on Kubernetes?

Fortunately, there are two types of tools that perfectly fit the described process. Firstly, we need a tool that allows us to easily deploy applications to multiple environments. That's what we can achieve on Kubernetes with ArgoCD. Secondly, we need a tool that automatically updates a database schema on demand. There are many tools for that. Since we need something lightweight and easy to containerize, my choice fell on Liquibase. This is not my first article about Liquibase on Kubernetes. You can also read more about the blue-green deployment approach with Liquibase here.

However, in this article, we will focus on a slightly different problem. First, let's describe it.

Introduction

I think that one of the biggest challenges around continuous delivery is integration with databases. Consequently, we should treat database changes the same way we treat standard configuration. It's time to treat database code like application code. Otherwise, our CI/CD process fails at the database.

Usually, when we consider the CI/CD process for an application, we have multiple target environments. According to cloud-native patterns, each application has its own separate database. Moreover, it is unique to each environment. So each time we run our delivery pipeline, we have to update the database instance in the particular environment. We should do it just before running the new version of the application (e.g. with the blue-green approach).

Here’s the visualization of our scenario. We are releasing the Spring Boot application in three different environments. Those environments are just different namespaces on Kubernetes: dev, test and prod. The whole process is managed by ArgoCD and Liquibase. In the dev environment, we don’t use any migration tool. Let’s say we leave it to developers. Our mechanism is active for the test and prod namespaces.

argocd-liquibase-arch

Modern Java frameworks like Spring Boot offer built-in integration with Liquibase. In that approach, we just need to create a Liquibase changelog and set its location in the Spring configuration. The framework runs such a script on application startup. While it is a very useful approach in development, I would not recommend it for production deployment, especially if you deploy your application on Kubernetes. Why? You will find a detailed explanation in the article I have already mentioned in the first paragraph.
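For context, that built-in integration boils down to a couple of Spring properties. The snippet below is an illustrative sketch — the changelog path is a hypothetical example, not taken from this article's repository:

```yaml
spring:
  liquibase:
    # Location of the master changelog on the classpath (example path)
    change-log: classpath:db/changelog/db.changelog-master.xml
    # Setting this to false is a common way to disable startup migrations
    enabled: true
```

With `enabled: true`, Spring Boot runs the changelog against the configured datasource every time the application starts — which is precisely the behavior we want to avoid in production.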

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. Then you should just follow my instructions πŸ™‚

Also, one thing before we start. Here I described the whole process of building and deploying applications on Kubernetes with ArgoCD and Tekton. Typically, there is a continuous integration phase, which is realized with Tekton. In this phase, we are just building and pushing the image. We are not releasing changes to the database, especially since we have multiple environments.

The picture below illustrates our approach in that context. ArgoCD synchronizes configuration and applies the changes to the Kubernetes cluster and the database using Liquibase. It doesn't matter if the database is running on Kubernetes or not. However, in our case, we assume it is deployed in the same namespace as the application.

argocd-liquibase-pipeline

Docker Image with Liquibase

There is an official Liquibase image on Docker Hub. We need to execute the update command using this image. To do that, I prepared a custom image based on the official Liquibase image. You can see the Dockerfile below. Alternatively, you can pull the image I published in my Docker registry: docker.io/piomin/liquibase:latest.

FROM liquibase/liquibase
ENV URL=jdbc:postgresql://postgresql:5432/test
ENV USERNAME=postgres
ENV PASSWORD=postgres
ENV CHANGELOGFILE=changelog.xml
CMD ["sh", "-c", "docker-entrypoint.sh --url=${URL} --username=${USERNAME} --password=${PASSWORD} --classpath=/liquibase/changelog --changeLogFile=${CHANGELOGFILE} update"]

We will run that image as an init container inside the pod with our application. Thanks to that approach, we can be sure that the database schema is updated just before the container with the application starts.

Sample Spring Boot application

We will use one of my sample Spring Boot applications in this exercise. You may find it in my GitHub repository here. It connects to the PostgreSQL database:

spring:
  application:
    name: person-service
  datasource:
    url: jdbc:postgresql://person-db:5432/${DATABASE_NAME}
    username: ${DATABASE_USER}
    password: ${DATABASE_PASSWORD}

You can clone the application source code yourself. Alternatively, you can pull the ready image located here: quay.io/pminkows/person-app. The application is compiled with Java 17 and uses Spring Data JPA as an ORM layer to integrate with the database.

<properties>
  <java.version>17</java.version>
</properties>
<dependencies>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
  </dependency>
  <dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <scope>runtime</scope>
  </dependency>
</dependencies>

Use Liquibase with ArgoCD and Kustomize

In our scenario, the Liquibase init container should be included in the Deployment only for the test and prod namespaces. In the dev environment, the pod should just contain a single container with the Spring Boot application. In order to implement this behavior with ArgoCD, we may use Kustomize. Kustomize has the concepts of bases and overlays. A base is a directory with a kustomization.yaml, which contains a set of resources and associated customizations. Thanks to overlays, we may include additional elements in the base manifests. So, here's the structure of our configuration repository on GitHub:
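Based on the manifests discussed in the rest of this section, the repository layout looks roughly like this (a sketch reconstructed from the files shown below, not a verbatim listing of the repository):

```
.
├── base
│   ├── kustomization.yaml
│   ├── deployment.yaml
│   └── changelog.yaml
└── overlays
    └── liquibase
        ├── kustomization.yaml
        └── liquibase-container.yaml
```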

Here’s our base Deployment file. It’s pretty simple. The only thing we need to do is to inject database credentials from secrets:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-spring-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-spring
  template:
    metadata:
      labels:
        app: sample-spring
    spec:
      containers:
        - name: sample-spring
          image: quay.io/pminkows/person-app:1.0
          ports:
            - containerPort: 8080
          env:
            - name: DATABASE_USER
              valueFrom:
                secretKeyRef:
                  name: postgres
                  key: database-user
            - name: DATABASE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres
                  key: database-password
            - name: DATABASE_NAME
              valueFrom:
                secretKeyRef:
                  name: postgres
                  key: database-name

We also have the Liquibase changeLog.sql file in the base directory. We should place it inside a Kubernetes ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: changelog-cm
data:
  changeLog.sql: |
    --liquibase formatted sql
    --changeset piomin:1
    create table person (
      id serial primary key,
      name varchar(255),
      gender varchar(255),
      age int,
      externalId int
    );
    insert into person(name, age, gender) values('John Smith', 25, 'MALE');
    insert into person(name, age, gender) values('Paul Walker', 65, 'MALE');
    insert into person(name, age, gender) values('Lewis Hamilton', 35, 'MALE');
    insert into person(name, age, gender) values('Veronica Jones', 20, 'FEMALE');
    insert into person(name, age, gender) values('Anne Brown', 60, 'FEMALE');
    insert into person(name, age, gender) values('Felicia Scott', 45, 'FEMALE');

Also, let's take a look at the base/kustomization.yaml file:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - changelog.yaml

In the overlays/liquibase/liquibase-container.yaml file, we define the init container that should be included in our base Deployment. There are four parameters available to override: the address of the target database, the username, the password, and the location of the Liquibase changelog file. The changeLog.sql file is available to the container as a volume mounted at /liquibase/changelog. Of course, I'm using the image described in the Docker Image with Liquibase section.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-spring-deployment
spec:
  template:
    spec:
      initContainers:
        - name: liquibase
          image: docker.io/piomin/liquibase:latest
          env:
            - name: URL
              value: jdbc:postgresql://person-db:5432/sampledb
            - name: USERNAME
              valueFrom:
                secretKeyRef:
                  name: postgres
                  key: database-user
            - name: PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres
                  key: database-password
            - name: CHANGELOGFILE
              value: changeLog.sql
          volumeMounts:
            - mountPath: /liquibase/changelog
              name: changelog
      volumes:
        - name: changelog
          configMap:
            name: changelog-cm

And here's the last manifest in the repository: the overlay kustomization.yaml file. It uses the whole structure from the base directory and adds the init container to the application Deployment:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
  - ../../base
patchesStrategicMerge:
  - liquibase-container.yaml

Create ArgoCD applications

Since our configuration is ready, we may proceed to the last step. Let's create the ArgoCD applications responsible for synchronization between the Git repository and both the Kubernetes cluster and the target database. In order to create an ArgoCD application, we need to apply the following manifest. It refers to the Kustomize overlay defined in the /overlays/liquibase directory. The declaration for the test or prod environment looks as shown below:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: liquibase-prod
spec:
  destination:
    namespace: prod
    server: 'https://kubernetes.default.svc'
  project: default
  source:
    path: overlays/liquibase
    repoURL: 'https://github.com/piomin/sample-argocd-liquibase-kustomize.git'
    targetRevision: HEAD

On the other hand, the dev environment doesn’t require an init container with Liquibase. Therefore, it uses the base directory:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: liquibase-dev
spec:
  destination:
    namespace: dev
    server: 'https://kubernetes.default.svc'
  project: default
  source:
    path: base
    repoURL: 'https://github.com/piomin/sample-argocd-liquibase-kustomize.git'
    targetRevision: HEAD

Since ArgoCD supports Kustomize, we just need to create the applications. Alternatively, we could create them using the ArgoCD UI. Finally, here's the list of ArgoCD applications responsible for applying changes to the Postgres database. Here's the UI view for all the environments after synchronization (the Sync button).
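As a side note, instead of pressing the Sync button manually, an Application can opt into automated synchronization. This is a standard ArgoCD feature, but the fragment below is a sketch of an alternative, not part of the manifests used in this article:

```yaml
spec:
  syncPolicy:
    automated:
      prune: true    # delete cluster resources that were removed from Git
      selfHeal: true # revert manual changes made directly in the cluster
```

With this in place, every commit to the configuration repository would trigger both the Deployment update and the Liquibase init container run, without any manual step.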

We can also verify the logs printed by the Liquibase init container after ArgoCD synchronization:

We may also verify the list of running pods in one of the target namespaces, e.g. prod.

$ kubectl get pod                                               
NAME                                        READY   STATUS    RESTARTS   AGE
person-db-1-vn8kw                           1/1     Running   0          82m
sample-spring-deployment-7695d64c54-9hf75   1/1     Running   0          7m16s

And also print out the Liquibase logs using kubectl:

$ kubectl logs sample-spring-deployment-7695d64c54-9hf75 -c liquibase

Blue-green deployment with a database on Kubernetes
https://piotrminkowski.com/2021/02/18/blue-green-deployment-with-a-database-on-kubernetes/
Thu, 18 Feb 2021 11:03:33 +0000

In this article, you will learn how to use a blue-green deployment strategy on Kubernetes to propagate changes in the database. The database change process is an essential task for every software project. If your application connects to a database, the code with the database schema is as important as the application source code. Therefore, you should store it in your version control system. It should also be part of your CI/CD process. How can Kubernetes help in this process?

First of all, you may easily implement various deployment strategies on Kubernetes. One of them is blue-green deployment. In this approach, you maintain two copies of your production environment: blue and green. As a result, this technique reduces risk and minimizes downtime. It also fits the database schema and application model change process perfectly. Databases can often be a challenge, particularly if you need to change the schema to support a new version of the software.

However, our scenario will be very simple. After changing the data model, we release the second version of our application. Of course, the whole traffic is still forwarded to the first version of the application ("blue"). Then, we migrate the database to the new version. Finally, we switch the whole traffic to the latest version ("green").

Before starting with this article, it is a good idea to read a little bit more about Istio. You can find some interesting information about the technologies used in this example, like Istio, Kubernetes, or Spring Boot, in this article.

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. Then you should just follow my instructions πŸ™‚

Tools used for blue-green deployment and database changes

We need two tools to perform a blue-green deployment with a database on Kubernetes. The first of them is Liquibase. It automates database schema change management. It also allows versioning of those changes. Moreover, with Liquibase we can easily roll back all previously performed modifications of your schema.

The second essential tool is Istio. It allows us to easily switch TCP traffic between various versions of deployments.

Should we use framework integration with Liquibase?

Modern Java frameworks like Spring Boot or Quarkus offer built-in integration with Liquibase. In that approach, we just need to create a Liquibase changelog and set its location. The framework is able to run such a script on application startup. While it is a very useful approach in development, I would not recommend it for production deployment, especially if you deploy your application on Kubernetes. Why?

Firstly, you are not able to determine how long it takes to run such a script on your database. It depends on the size of the database, the number of changes, and the current load. That makes it difficult to set an initial delay on the liveness probe. Defining a liveness probe for a Deployment is obviously good practice on Kubernetes. But if you set too high an initial delay value, it will slow down your redeployment process. On the other hand, if you set too low a value, Kubernetes may kill your pod before the application starts.
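To illustrate the dilemma, here is what the probe tuning looks like when migrations run at application startup. The delay value below is an arbitrary guess — which is exactly the problem being described:

```yaml
livenessProbe:
  httpGet:
    path: /actuator/health/liveness
    port: 8080
  # Too high: every rollout waits the full delay even for trivial changes.
  # Too low: Kubernetes kills the pod before Liquibase finishes the migration.
  initialDelaySeconds: 60
  periodSeconds: 10
```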

Even if everything goes well, you may have downtime after applying changes to the database and before the new version of the application starts. OK, so what's the best solution in that case? We should include the Liquibase script in our deployment pipeline. It has to be executed after deploying the latest version of the application, but just before switching traffic to that version ("green").

Prerequisites

Before proceeding to the further steps you need to:

  1. Start your Kubernetes cluster (local or remote)
  2. Install Istio on Kubernetes with that guide
  3. Run PostgreSQL on Kubernetes. You can use that script from my GitHub repository.
  4. Prepare a Docker image with Liquibase update command (instructions below)

Prepare a Docker image with Liquibase

In the first step, we need to create a Docker image with Liquibase that can easily be run on Kubernetes. It needs to execute the update command. We will use the official Liquibase image as the base image in our Dockerfile. There are four parameters that might be overridden: the address of the target database, the username, the password, and the location of the Liquibase changelog file. Here's our Dockerfile.

FROM liquibase/liquibase
ENV URL=jdbc:postgresql://postgresql:5432/test
ENV USERNAME=postgres
ENV PASSWORD=postgres
ENV CHANGELOGFILE=changelog.xml
CMD ["sh", "-c", "docker-entrypoint.sh --url=${URL} --username=${USERNAME} --password=${PASSWORD} --classpath=/liquibase/changelog --changeLogFile=${CHANGELOGFILE} update"]

Then, we just need to build it. However, you can skip that step: I have already pushed this version of the image to my public Docker repository.

$ docker build -t piomin/liquibase .

Step 1. Create a table in the database with Liquibase

Let's create the first version of the database schema for our application. To do that, we need to define a Liquibase script. We will put that script inside a Kubernetes ConfigMap named liquibase-changelog-v1. It is a simple CREATE TABLE SQL command.

apiVersion: v1
kind: ConfigMap
metadata:
  name: liquibase-changelog-v1
data:
  changelog.sql: |-
    --liquibase formatted sql

    --changeset piomin:1
    create table person (
      id serial primary key,
      firstname varchar(255),
      lastname varchar(255),
      age int
    );
    --rollback drop table person;

Then, let's create a Kubernetes Job that loads the ConfigMap created in the previous step. The Job runs right after it is created with the kubectl apply command.

apiVersion: batch/v1
kind: Job
metadata:
  name: liquibase-job-v1
spec:
  template:
    spec:
      containers:
        - name: liquibase
          image: piomin/liquibase
          env:
            - name: URL
              value: jdbc:postgresql://postgres:5432/bluegreen
            - name: USERNAME
              value: bluegreen
            - name: PASSWORD
              value: bluegreen
            - name: CHANGELOGFILE
              value: changelog.sql
          volumeMounts:
            - name: config-vol
              mountPath: /liquibase/changelog
      restartPolicy: Never
      volumes:
        - name: config-vol
          configMap:
            name: liquibase-changelog-v1

Finally, we can verify that the Job has been executed successfully. To do that, we need to check the logs from the pod created by the liquibase-job-v1 Job.

blue-green-deployment-on-kubernetes-liquibase

Step 2. Deploy the first version of the application

In the next step, we proceed to the deployment of our application. This simple Spring Boot application exposes a REST API and connects to a PostgreSQL database. Here's the entity class that corresponds to the previously created database schema.

@Entity
@Getter
@Setter
@NoArgsConstructor
public class Person {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    @Column(name = "firstname")
    private String firstName;
    @Column(name = "lastname")
    private String lastName;
    private int age;
}

In short, this is the first version (v1) of our application. Let's take a look at the Deployment manifest. We need to inject the database connection settings as environment variables. We will also expose liveness and readiness probes using the Spring Boot Actuator.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: person-v1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: person
      version: v1
  template:
    metadata:
      labels:
        app: person
        version: v1
    spec:
      containers:
      - name: person
        image: piomin/person-service
        ports:
        - containerPort: 8080
        env:
          - name: DATABASE_USER
            valueFrom:
              configMapKeyRef:
                key: POSTGRES_USER
                name: postgres-config
          - name: DATABASE_NAME
            valueFrom:
              configMapKeyRef:
                key: POSTGRES_DB
                name: postgres-config
          - name: DATABASE_PASSWORD
            valueFrom:
              secretKeyRef:
                key: POSTGRES_PASSWORD
                name: postgres-secret
        livenessProbe:
          httpGet:
            port: 8080
            path: /actuator/health/liveness
        readinessProbe:
          httpGet:
            port: 8080
            path: /actuator/health/readiness

The readiness health check exposed by the application includes the status of the connection with the PostgreSQL database. Therefore, you may be sure that it works properly.

spring:
  application:
    name: person-service
  datasource:
    url: jdbc:postgresql://postgres:5432/${DATABASE_NAME}
    username: ${DATABASE_USER}
    password: ${DATABASE_PASSWORD}

management:
  endpoint:
    health:
      show-details: always
      group:
        readiness:
          include: db
      probes:
        enabled: true

We are running 2 instances of our application.

blue-green-deployment-on-kubernetes-kubectl

Just to conclude. Here’s our current status after Step 2.

blue-green-deployment-on-kubernetes-picture-arch

Step 3. Deploy the second version of the application with a blue-green strategy

Firstly, we perform a very trivial modification of our entity model class. We will change the names of two columns in the database using the @Column annotation. We replace firstname with first_name, and lastname with last_name, as shown below.

@Entity
@Getter
@Setter
@NoArgsConstructor
public class Person {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    @Column(name = "first_name")
    private String firstName;
    @Column(name = "last_name")
    private String lastName;
    private int age;
}

The Deployment manifest is very similar to the previous version of our application. Of course, the only difference is in the version label.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: person-v2
spec:
  replicas: 2
  selector:
    matchLabels:
      app: person
      version: v2
  template:
    metadata:
      labels:
        app: person
        version: v2
    spec:
      containers:
      - name: person
        image: piomin/person-service
        ports:
        - containerPort: 8080
        env:
          - name: DATABASE_USER
            valueFrom:
              configMapKeyRef:
                key: POSTGRES_USER
                name: postgres-config
          - name: DATABASE_NAME
            valueFrom:
              configMapKeyRef:
                key: POSTGRES_DB
                name: postgres-config
          - name: DATABASE_PASSWORD
            valueFrom:
              secretKeyRef:
                key: POSTGRES_PASSWORD
                name: postgres-secret
        livenessProbe:
          httpGet:
            port: 8080
            path: /actuator/health/liveness
        readinessProbe:
          httpGet:
            port: 8080
            path: /actuator/health/readiness

However, before deploying that version of the application, we need to apply the Istio rules. Istio should forward the whole traffic to person-v1. Firstly, let's define a DestinationRule with two subsets related to versions v1 and v2.

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: person-destination
spec:
  host: person
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2

Then, we apply the following Istio rule. It forwards 100% of incoming traffic to the person pods labelled with version=v1.

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: person-virtualservice
spec:
  hosts:
    - person
  http:
    - route:
      - destination:
          host: person
          subset: v1
        weight: 100
      - destination:
          host: person
          subset: v2
        weight: 0

Here’s our current status after applying changes in Step 3.

blue-green-deployment-on-kubernetes-picture-arch-new

Also, let’s verify a current list of deployments.

Step 4. Modify database and switch traffic to the latest version

Currently, both versions of our application are running. So, if we modify the database schema and then switch traffic to the latest version, we achieve a zero-downtime deployment. The same as before, we create a ConfigMap that contains the Liquibase changelog file.

apiVersion: v1
kind: ConfigMap
metadata:
  name: liquibase-changelog-v2
data:
  changelog.sql: |-
    --liquibase formatted sql

    --changeset piomin:2
    alter table person rename column firstname to first_name;
    alter table person rename column lastname to last_name;
    --rollback alter table person rename column first_name to firstname;
    --rollback alter table person rename column last_name to lastname;

Then, we create Kubernetes Job that uses a changelog file from the liquibase-changelog-v2 ConfigMap.

apiVersion: batch/v1
kind: Job
metadata:
  name: liquibase-job-v2
spec:
  template:
    spec:
      containers:
        - name: liquibase
          image: piomin/liquibase
          env:
            - name: URL
              value: jdbc:postgresql://postgres:5432/bluegreen
            - name: USERNAME
              value: bluegreen
            - name: PASSWORD
              value: bluegreen
            - name: CHANGELOGFILE
              value: changelog.sql
          volumeMounts:
            - name: config-vol
              mountPath: /liquibase/changelog
      restartPolicy: Never
      volumes:
        - name: config-vol
          configMap:
            name: liquibase-changelog-v2

Once the Kubernetes Job has finished, we just need to update the Istio VirtualService to forward the whole traffic to the v2 version of the application.

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: person-virtualservice
spec:
  hosts:
    - person
  http:
    - route:
      - destination:
          host: person
          subset: v1
        weight: 0
      - destination:
          host: person
          subset: v2
        weight: 100
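Nothing forces an all-or-nothing cutover, by the way. As a variation on this step (not used in this article), the same VirtualService weights could be shifted gradually, e.g. sending only 10% of traffic to v2 at first and watching error rates before the full switch:

```yaml
  http:
    - route:
      - destination:
          host: person
          subset: v1
        weight: 90
      - destination:
          host: person
          subset: v2
        weight: 10
```

Note that this canary-style variation only works here because both schema versions remain readable during the transition; with conflicting column renames, keeping two versions live for long is risky.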

That is the last step in our blue-green deployment process on Kubernetes. The picture below illustrates the current status. The database schema has been updated, and the whole traffic is now sent to the v2 version of person-service.

Also, here’s a current list of deployments.

Testing Blue-green deployment on Kubernetes

In order to easily test our blue-green deployment process, I created a second application: caller-service. It calls the GET /persons/{id} endpoint exposed by person-service.

@RestController
@RequestMapping("/caller")
public class CallerController {

    private RestTemplate restTemplate;

    public CallerController(RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    @GetMapping("/call")
    public String call() {
        ResponseEntity<String> response = restTemplate
                .getForEntity("http://person:8080/persons/1", String.class);
        if (response.getStatusCode().is2xxSuccessful())
            return response.getBody();
        else
            return "Error: HTTP " + response.getStatusCodeValue();
    }
}

Before testing, you should add at least one person to the database. You can use POST /persons for that. In this example, I'm using the port-forwarding feature.

$ curl http://localhost:8081/persons -H "Content-Type: application/json" -d '{"firstName":"John","lastName":"Smith","age":33}'

OK, so here's the list of deployments you need before starting Step 4. You can see that every application has two containers inside the pod (except PostgreSQL). It means that the Istio sidecar has been injected into those pods.

Finally, just before executing Step 4, run the following script that calls the caller-service endpoint. The same as before, I'm using the port-forwarding feature.

$ siege -r 200 -c 1 http://localhost:8080/caller/call

Conclusion

In this article, I described step by step how to update your database and application data model on Kubernetes using a blue-green deployment strategy. I chose a scenario with conflicting changes, like the modification of table column names.
