renovate Archives - Piotr’s TechBlog https://piotrminkowski.com/tag/renovate/ Java, Spring, Kotlin, microservices, Kubernetes, containers

Manage Multiple GitHub Repositories with Renovate and CircleCI https://piotrminkowski.com/2023/01/12/manage-multiple-github-repositories-with-renovate-and-circleci/ Thu, 12 Jan 2023 11:37:55 +0000

The post Manage Multiple GitHub Repositories with Renovate and CircleCI appeared first on Piotr's TechBlog.

In this article, you will learn how to automatically update your GitHub repositories with Renovate and CircleCI. The problem we will try to solve today is strictly related to my blogging. As I always attach code examples to my posts, I have a lot of repositories to manage. I know it is sometimes more convenient to keep a single repo for all the demos, but I prefer not to. Each of my repos relates to a particular technology, or even to the specific case it demonstrates.

Let’s consider the problem with that approach. I usually share the same repository across multiple articles if they are closely related to each other. Despite that, I have more than 100 repositories with code examples. Once I create a repository, I usually don’t have time to keep it up to date. I need a tool that will do that for me automatically. This, however, forces me to improve my automated tests. If I configure a tool that automatically updates the code in my GitHub repositories, I need to verify that each change is valid and will not break the demo app.

There is another related problem. A classic of the genre – a lack of automated tests… I was always focused on creating an example app to show the use case described in the post, not on writing valuable tests. It’s time to fix that! This is my first New Year’s resolution 🙂 As you probably guessed, this work is still in progress. But even now, I can show you which tools I’m using and how to configure them. I will also share some first thoughts. Let’s begin!

First Problem: Not Maintained Repositories

Have you ever tried to run an app from source code created some years ago? In theory, everything should go fine. But in practice, several things may have changed. I may now use a different version of e.g. Java or Maven than before. Even if I have automated tests, they may not work correctly, especially since I didn’t use any tool to run the build and tests remotely. Of course, I don’t have that many old, unmaintained repositories. I sometimes updated them manually, in particular those that are more popular and shared across several articles.

Let’s take a look at this example from the following repository. I’m trying to generate a class definition from a Protocol Buffers schema file. As you see, the plugin used for that is not able to find the protoc executable. Honestly, I don’t remember how it worked before. Maybe I installed something on my laptop… Anyway, the solution was to use another plugin that doesn’t require any additional executables. Of course, I had to do it manually.

Let’s analyze another example. This time an integration test from another repository fails. The test is trying to connect to a Docker container. The problem is that I was using Windows some years ago, and Docker Toolbox was, by default, available under the 192.168.99.100 address. I should not have left such an address in the test. However, once again, I was just running all the tests locally, and at that time they finished successfully.

By the way, moving such a test to the CircleCI pipeline is not a simple thing to do. In order to run some containers (pact-broker with PostgreSQL) before the tests, I decided to use Docker Compose. To run containers with Docker Compose, I had to enable the remote Docker environment for CircleCI as described here.
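A minimal sketch of what such a job could look like, assuming a docker-compose.yml with the pact-broker and PostgreSQL services in the repository root and a job image that ships the Docker CLI and Docker Compose (the job name and file locations are illustrative):

```yaml
jobs:
  integration-test:
    docker:
      - image: 'cimg/openjdk:17.0'
    steps:
      - checkout
      # Provision a remote Docker engine for this job
      - setup_remote_docker
      - run:
          name: Start pact-broker and PostgreSQL
          command: docker-compose up -d
      - run:
          name: Run integration tests
          command: mvn verify
```

Note that with setup_remote_docker the containers run on a separate remote host, which is exactly why this setup differs from running docker-compose locally.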

Second Problem: Updating Dependencies

If you manage application repositories that use several libraries, you probably know that an update is sometimes not just a formality – even if it’s a patch or a minor update. Although my applications are usually not very complicated, an update of the Spring Boot version may be challenging. In the following example of Netflix DGS usage (a GraphQL framework), I tried to update from version 2.4.2 to 2.7.7. Here’s the result.

In that particular case, my app was initializing the H2 database with some data from the data.sql file. But since Spring Boot 2.5, the records from data.sql are loaded before Hibernate initializes the database schema. The solution is to replace that file with an import.sql script or to add the property spring.jpa.defer-datasource-initialization=true to the application properties. After choosing the second option, we solved the problem… and then another one occurred. This time it was related to the Netflix DGS and GraphQL Java libraries, as described here.
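The second option boils down to a single entry in application.properties:

```properties
# Defer loading data.sql until after Hibernate has created the schema
spring.jpa.defer-datasource-initialization=true
```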

Currently, according to the comments, there is no perfect solution to that problem with Maven. I will probably have to wait for the next release of Netflix DGS, or until they propose the right solution.

Let’s analyze another example – once again with a Spring Boot update. This time it is related to Spring Data and Embedded Mongo. The case is very interesting, since it fails only on the remote build. When I run the test on my local machine, everything works perfectly fine.

A similar issue has been described here. However, the described solution no longer helps me. I will probably decide to migrate my tests to Testcontainers. By the way, it is also a very interesting example, since it has an impact only on the tests. So, even with a high level of automation, you will still need to do some manual work.

Third Problem: Lack of Automated Tests

It is some kind of paradox – although I write a lot about continuous delivery and testing, I have a lot of repositories without any tests. Of course, when I was creating real applications for several companies, I added many tests to ensure they would work fine in production. But even for simple demo apps, it is worth adding several tests that verify everything works fine. In that case, I don’t write many small unit tests, but rather a test that runs the whole app and verifies e.g. all the endpoints. Fortunately, frameworks like Spring Boot or Quarkus provide intuitive tools for that. There are helpers for almost all popular solutions. Here’s my @SpringBootTest for GraphQL queries.

@SpringBootTest(webEnvironment = 
      SpringBootTest.WebEnvironment.RANDOM_PORT)
public class EmployeeQueryResolverTests {

    @Autowired
    GraphQLTestTemplate template;

    @Test
    void employees() throws IOException {
        Employee[] employees = template
           .postForResource("employees.graphql")
           .get("$.data.employees", Employee[].class);
        Assertions.assertTrue(employees.length > 0);
    }

    @Test
    void employeeById() throws IOException {
        Employee employee = template
           .postForResource("employeeById.graphql")
           .get("$.data.employee", Employee.class);
        Assertions.assertNotNull(employee);
        Assertions.assertNotNull(employee.getId());
    }

    @Test
    void employeesWithFilter() throws IOException {
        Employee[] employees = template
           .postForResource("employeesWithFilter.graphql")
           .get("$.data.employeesWithFilter", Employee[].class);
        Assertions.assertTrue(employees.length > 0);
    }
}

In the previous test, I’m using an in-memory H2 database in the background. If I want to test something with a “real” database, I can use Testcontainers. This tool runs the required container on Docker during the test. In the following example, we run PostgreSQL. After that, the Spring Boot application automatically connects to the database thanks to the @DynamicPropertySource annotation, which sets the generated URL as a Spring property.

@SpringBootTest(webEnvironment = 
      SpringBootTest.WebEnvironment.RANDOM_PORT)
@Testcontainers
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
public class PersonControllerTests {

   @Autowired
   TestRestTemplate restTemplate;

   @Container
   static PostgreSQLContainer<?> postgres = 
      new PostgreSQLContainer<>("postgres:15.1")
           .withExposedPorts(5432);

   @DynamicPropertySource
   static void registerPostgresProperties(DynamicPropertyRegistry registry) {
       registry.add("spring.datasource.url", 
          postgres::getJdbcUrl);
       registry.add("spring.datasource.username", 
          postgres::getUsername);
       registry.add("spring.datasource.password", 
          postgres::getPassword);
   }

   @Test
   @Order(1)
   void add() {
       Person person = Instancio.of(Person.class)
               .ignore(Select.field("id"))
               .create();
       person = restTemplate
          .postForObject("/persons", person, Person.class);
       Assertions.assertNotNull(person);
       Assertions.assertNotNull(person.getId());
   }

   @Test
   @Order(2)
   void updateAndGet() {
       final Integer id = 1;
       Person person = Instancio.of(Person.class)
               .set(Select.field("id"), id)
               .create();
       restTemplate.put("/persons", person);
       Person updated = restTemplate
          .getForObject("/persons/{id}", Person.class, id);
       Assertions.assertNotNull(updated);
       Assertions.assertNotNull(updated.getId());
       Assertions.assertEquals(id, updated.getId());
   }

   @Test
   @Order(3)
   void getAll() {
       Person[] persons = restTemplate
          .getForObject("/persons", Person[].class);
       Assertions.assertEquals(1, persons.length);
   }

   @Test
   @Order(4)
   void deleteAndGet() {
       restTemplate.delete("/persons/{id}", 1);
       Person person = restTemplate
          .getForObject("/persons/{id}", Person.class, 1);
       Assertions.assertNull(person);
   }

}

In some cases, we may have multiple applications (or microservices) communicating with each other. We can mock that communication with libraries like Mockito. On the other hand, we can simulate real HTTP traffic with libraries like Hoverfly or WireMock. Here’s an example with Hoverfly and the Spring Boot Test module.

@SpringBootTest(properties = { "POD_NAME=abc", "POD_NAMESPACE=default"}, 
   webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@ExtendWith(HoverflyExtension.class)
public class CallerControllerTests {

   @LocalServerPort
   int port;
   @Autowired
   TestRestTemplate restTemplate;

   @Test
   void ping(Hoverfly hoverfly) {
      String msg = "callme-service v1.0-SNAPSHOT (id=1): abc in default";
      hoverfly.simulate(dsl(
            service("http://callme-service.serverless.svc.cluster.local")
               .get("/callme/ping")
               .willReturn(success(msg, "text/plain"))));

      String response = restTemplate
         .getForObject("/caller/ping", String.class);
      assertNotNull(response);

      String c = "caller-service(id=1): abc in default is calling " + msg;
      assertEquals(c, response);
   }
}

Of course, these are just examples. There are a lot of different tests and technologies used across all my repositories. Some others will be added in the near future 🙂 Now, let’s get to the point.

Choosing the Right Tools

As mentioned in the introduction, I will use CircleCI and Renovate for managing my GitHub repositories. CircleCI is probably the most popular choice for running builds of open-source projects stored in GitHub repositories. GitHub also provides its own tool for updating dependencies, called Dependabot. However, Renovate has some significant advantages over Dependabot. It provides a lot of configuration options, may be run anywhere (including Kubernetes – more details here), and can also integrate with GitLab or Bitbucket. We will also use SonarCloud for static code quality analysis.

Renovate is able to analyze not only the descriptors of traditional package managers like npm, Maven, or Gradle, but also e.g. CircleCI configuration files or Docker image tags. Here’s the list of requirements the tool needs to meet:

  1. It should be able to perform different actions depending on the dependency update type (major, patch, or minor)
  2. It needs to create a PR on change and auto-merge it only if the build performed by CircleCI finishes successfully. Therefore, it needs to wait for the status of that build
  3. Auto-merge should not be enabled for major updates. They require approval from the repository admin

Renovate meets all these requirements. We can also easily install Renovate on GitHub and use it to update CircleCI configuration files inside repositories. In order to install Renovate on GitHub, you need to go to the marketplace. After you install it, go to Settings, and then the Applications menu item. In order to set the list of repositories enabled for Renovate, click the Configure button.

Then in the Repository Access section, you can enable all your repositories or choose several from the whole list.

github-renovate-circleci-conf

Configure Renovate and CircleCI inside the GitHub Repository

Each GitHub repository has to contain CircleCI and Renovate configuration files. Renovate tries to detect the renovate.json file in the repository root directory. We don’t need to provide many configuration settings to achieve the expected results. By default, Renovate creates a pull request once it detects a new version of a dependency, but does not auto-merge it. We want to auto-merge all non-major changes. Therefore, we need to set a list of all update types merged automatically (minor, patch, pin, and digest).

By default, Renovate creates a PR just after it creates a branch with the new version of the dependency. Because we are auto-merging all non-major PRs, we need to force Renovate to create them only after the build on CircleCI finishes successfully. Once all the tests on the newly created branch pass, Renovate creates the PR and auto-merges it if it does not contain major changes. Otherwise, it leaves the PR for approval. To achieve this, we need to set the property prCreation to not-pending. Here’s the renovate.json file I’m using for all my GitHub repositories.

{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": [
    "config:base",":dependencyDashboard"
  ],
  "packageRules": [
    {
      "matchUpdateTypes": ["minor", "patch", "pin", "digest"],
      "automerge": true
    }
  ],
  "prCreation": "not-pending"
}

The CircleCI configuration is stored in the .circleci/config.yml file. I mostly use Maven as a build tool. Here’s a typical CircleCI configuration file for my repositories. It defines two jobs: a standard maven/test job for building the project and running unit tests, and a job for running SonarCloud analysis.

version: 2.1

jobs:
  analyze:
    docker:
      - image: 'cimg/openjdk:17.0'
    steps:
      - checkout
      - run:
          name: Analyze on SonarCloud
          command: mvn verify sonar:sonar

executors:
  j17:
    docker:
      - image: 'cimg/openjdk:17.0'

orbs:
  maven: circleci/maven@1.4.0

workflows:
  maven_test:
    jobs:
      - maven/test:
          executor: j17
      - analyze:
          context: SonarCloud

By default, CircleCI runs builds in Docker containers. However, this approach is not suitable everywhere. For Testcontainers, we need a machine executor that has full access to the Docker daemon. Thanks to that, it is able to run additional containers with e.g. databases during the tests.

version: 2.1

jobs:
  analyze:
    docker:
      - image: 'cimg/openjdk:11.0'
    steps:
      - checkout
      - run:
          name: Analyze on SonarCloud
          command: mvn verify sonar:sonar -DskipTests

orbs:
  maven: circleci/maven@1.3.0

executors:
  machine_executor_amd64:
    machine:
      image: ubuntu-2204:2022.04.2
    environment:
      architecture: "amd64"
      platform: "linux/amd64"

workflows:
  maven_test:
    jobs:
      - maven/test:
          executor: machine_executor_amd64
      - analyze:
          context: SonarCloud

Finally, the last part of the configuration – the integration between CircleCI and SonarCloud. We need to add some properties to the Maven pom.xml to enable the SonarCloud context.

<properties>
  <sonar.projectKey>piomin_sample-spring-redis</sonar.projectKey>
  <sonar.organization>piomin</sonar.organization>
  <sonar.host.url>https://sonarcloud.io</sonar.host.url>
</properties>
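For the mvn verify sonar:sonar command to resolve the short sonar: goal prefix, the build can also declare the SonarScanner for Maven plugin – a sketch of that declaration (the version shown is illustrative; check the latest release):

```xml
<build>
  <plugins>
    <plugin>
      <groupId>org.sonarsource.scanner.maven</groupId>
      <artifactId>sonar-maven-plugin</artifactId>
      <version>3.9.1.2184</version>
    </plugin>
  </plugins>
</build>
```

The SONAR_TOKEN credential itself comes from the SonarCloud context referenced in the CircleCI workflow, not from the pom.xml.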

How It Works

Let’s verify how it works. Once you provide the required configuration for Renovate, CircleCI, and SonarCloud in your GitHub repository, the process starts. Renovate initially detects a list of required dependency updates. Since I enabled the dependency dashboard, Renovate immediately creates an issue with a summary of the detected dependency changes, as shown below.

github-renovate-circleci-dashboard

Here’s the list of package managers detected in this repository. Besides Maven and CircleCI, there are also a Dockerfile and a GitLab CI configuration file.

Some pull requests have already been auto-merged by Renovate, because the build on CircleCI finished successfully.

github-renovate-circleci-pr

Some other pull requests are still in the Open state – waiting for approval (a major update from Java 11 to Java 17) or for a fix, because the build on CircleCI failed.

We can go into the details of the selected PR. Let’s do that for the first PR (#11) on the list visible above. Renovate is trying to update Spring Boot from 2.6.1 to the latest 2.7.7. It created the branch renovate/spring-boot that contains the required changes.

github-renovate-circleci-pr-details

The PR could have been merged automatically. However, the build failed, so it didn’t happen.

github-renovate-circleci-pr-checks

We can go to the details of the build. As you see in the CircleCI dashboard, all the tests failed. In this particular case, I have already tried to fix it by updating the version of Embedded Mongo. However, it didn’t solve the problem.

Here’s the list of commits in the master branch. As you see, Renovate automatically updates the repository after the build of a particular branch finishes successfully.

Each time a new branch is created, CircleCI runs a build to verify that it does not break the tests.

github-renovate-circleci-builds

Conclusion

I have some conclusions after making the described changes in my repositories:

  1. Include automated tests in your projects, even if you are creating an app for a demo showcase rather than for production usage. It will help you get back to the project after some time. It will also ensure that everything in your demo works fine, and it helps other people who use it.
  2. Tools like Renovate, CircleCI, or SonarCloud can easily be used with your GitHub project for free. You don’t need to spend a lot of time configuring them, but the effect can be significant.
  3. Keeping the repositories up to date is important. People sometimes write to me that something doesn’t work properly in my examples. Even now, I found some small bugs in the code logic. Thanks to the described approach, I hope to provide better-quality examples to you, my blog readers.

If you see something like that on a repository’s main page, it means that I have already reviewed the project and added all the mechanisms described in this article.

Continuous Development on Kubernetes with GitOps Approach https://piotrminkowski.com/2022/06/06/continuous-development-on-kubernetes-with-gitops-approach/ Mon, 06 Jun 2022 08:53:30 +0000

The post Continuous Development on Kubernetes with GitOps Approach appeared first on Piotr's TechBlog.

In this article, you will learn how to design your app’s continuous development process on Kubernetes with the GitOps approach. In order to deliver an application to staging or production, we should use a standard CI/CD process and tools. This requires separation between the source code and the configuration code. It may result in using dedicated tools for the build phase and for the deployment phase. We are talking about an approach similar to the one described in the following article, where we use Tekton as a CI tool and Argo CD as a delivery tool. With that approach, each time you want to release a new version of the image, you commit it to the repository with the configuration. Then, the tool responsible for the CD process applies the changes to the cluster. Consequently, it performs a deployment of the new version.

I’ll describe four possible approaches here.

If you would like to try this exercise yourself, you can always take a look at my source code. In order to do that, you need to clone my GitHub repository. There is a sample Spring Boot application there. You can also access the following repository with the configuration for that app. Go to apps/simple for a plain Deployment object example, and to apps/helm for the Helm chart. After that, you should just follow my instructions. Let’s begin.

Approach 1: Use the same tag and make a rollout

The first approach is probably the simplest way to achieve our goal – however, not the best one 🙂 Let’s start with the Kubernetes Deployment. That fragment of YAML is a part of the configuration, so Argo CD manages it. There are two important things here. We use dev-latest as the image tag (1). It won’t be changed when we deploy a new version of the image. We also need to pull the latest version of the image each time we do a Deployment rollout. Therefore, we set imagePullPolicy to Always (2).

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-spring-kotlin-microservice
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: sample-spring-kotlin-microservice
  template:
    metadata:
      labels:
        app.kubernetes.io/name: sample-spring-kotlin-microservice
    spec:
      containers:
      - image: piomin/sample-spring-kotlin:dev-latest # (1)
        name: sample-spring-kotlin-microservice
        ports:
        - containerPort: 8080
          name: http
        imagePullPolicy: Always # (2)

Let’s assume we have a pipeline, e.g. in GitLab CI, triggered by every push to the dev branch. We use Maven to build an image from the source code (using jib-maven-plugin) and then push it to the container registry. In the last step (reload-app), we restart the application in order to run the latest version of the image tagged with dev-latest. In order to do that, we should execute the kubectl rollout restart deploy sample-spring-kotlin-microservice command.

image: maven:latest

stages:
  - compile
  - image-build
  - reload-app

build:
  stage: compile
  script:
    - mvn compile

image-build:
  stage: image-build
  script:
    - mvn -s .m2/settings.xml compile jib:build

reload-app:
  image: bitnami/kubectl:latest
  stage: reload-app
  only:
    - dev
  script:
    - kubectl rollout restart deploy sample-spring-kotlin-microservice -n dev

We still have all the versions available in the registry, but just a single one is tagged as dev-latest. Of course, we can use any other image tagging convention, e.g. based on a timestamp or git commit id.
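For example, a commit-id-based tag could be added by passing Jib’s jib.to.tags property together with GitLab’s predefined CI_COMMIT_SHORT_SHA variable – a sketch, assuming the jib-maven-plugin setup from the pipeline above:

```yaml
image-build:
  stage: image-build
  script:
    # jib.to.tags attaches additional tags to the pushed image
    - mvn -s .m2/settings.xml compile jib:build -Djib.to.tags=$CI_COMMIT_SHORT_SHA
```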

In this approach, we still use GitOps to manage the app configuration on Kubernetes. The CI pipeline pushes the latest version to the registry and triggers a reload on Kubernetes.

Approach 2: Commit the latest tag to the repository managed by the CD tool

Configuration

Let’s consider a slightly different approach than the previous one. Argo CD automatically synchronizes changes pushed to the config repository with the Kubernetes cluster. Once the pipeline pushes a changed version of the image tag, Argo CD performs a rollout. Here’s the Argo CD Application manifest.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sample-spring-kotlin-simple
  namespace: argocd
spec:
  destination:
    namespace: apps
    server: https://kubernetes.default.svc
  project: default
  source:
    path: apps/simple
    repoURL: https://github.com/piomin/openshift-cluster-config
    targetRevision: HEAD
  syncPolicy:
    automated: {}

Here’s the first version of our Deployment. As you see we are deploying the image with the 1.0.0 tag.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-spring-kotlin-microservice
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: sample-spring-kotlin-microservice
  template:
    metadata:
      labels:
        app.kubernetes.io/name: sample-spring-kotlin-microservice
    spec:
      containers:
      - image: piomin/sample-spring-kotlin:1.0.0
        name: sample-spring-kotlin-microservice
        ports:
        - containerPort: 8080
          name: http

Now, our pipeline should build a new image, override the image tag in the YAML, and push the latest version to the Git repository. I won’t create a pipeline, but will just show you step by step what should be done. Let’s begin with the tool. Our pipeline may use Skaffold to build, push, and override image tags in YAML. Skaffold is a CLI tool that is very useful for simplifying development on Kubernetes. However, we can also use it for building CI/CD blocks or templating Kubernetes manifests for the GitOps approach. Here’s the Skaffold configuration file. It is very simple. As in the previous example, we use Jib for building and pushing an image. Skaffold supports multiple tag policies for tagging images. We may define e.g. a tagger that uses the current date and time.

apiVersion: skaffold/v2beta22
kind: Config
build:
  artifacts:
  - image: piomin/sample-spring-kotlin
    jib: {}
  tagPolicy:
    dateTime: {}

Skaffold in CI/CD

In the first step, we are going to build and push the image. Thanks to the --file-output option, Skaffold exports information about the build to a file.

$ skaffold build --file-output='/Users/pminkows/result.json' --push

The file with the result is located under the /Users/pminkows/result.json path. It contains the basic information about a build including the image name and tag.

{"builds":[{"imageName":"piomin/sample-spring-kotlin","tag":"piomin/sample-spring-kotlin:2022-06-03_13-53-43.988_CEST@sha256:287572a319ee7a0caa69264936063d003584e026eefeabb828e8ecebca8678a7"}]}

This image has also been pushed to the registry.

Now, let’s run the skaffold render command to override the image tag in the YAML manifest. Assuming we run it in the next pipeline stage, we can just set the /Users/pminkows/result.json file as an input. The output apps/simple/deployment.yaml is a location inside the Git repository managed by Argo CD. We don’t want to include the namespace name; therefore, we should set the parameter --offline=true.

$ skaffold render -a /Users/pminkows/result.json \
    -o apps/simple/deployment.yaml \
    --offline=true

Here’s the final version of our YAML manifest.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-spring-kotlin-microservice
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: sample-spring-kotlin-microservice
  template:
    metadata:
      labels:
        app.kubernetes.io/name: sample-spring-kotlin-microservice
    spec:
      containers:
      - image: piomin/sample-spring-kotlin:2022-06-03_13-53-43.988_CEST@sha256:287572a319ee7a0caa69264936063d003584e026eefeabb828e8ecebca8678a7
        name: sample-spring-kotlin-microservice
        ports:
        - containerPort: 8080
          name: http

Finally, our pipeline needs to commit the latest version of the manifest to the Git repository. Argo CD will then automatically deploy it to the Kubernetes cluster.
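That commit step could be a pipeline job along these lines – a sketch only: the stage name, the CONFIG_REPO_TOKEN variable, and the bot identity are illustrative placeholders (the repository URL is the config repo from the Application manifest above):

```yaml
update-config:
  stage: update-config
  script:
    # Clone the config repository managed by Argo CD
    - git clone https://oauth2:${CONFIG_REPO_TOKEN}@github.com/piomin/openshift-cluster-config.git
    - cd openshift-cluster-config
    # Copy the manifest rendered by skaffold in the previous stage
    - cp ../deployment.yaml apps/simple/deployment.yaml
    - git config user.email "ci-bot@example.com"
    - git config user.name "ci-bot"
    - git commit -am "Update image tag"
    - git push origin main
```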

Approach 3: Detect the latest version of the image and update automatically with Renovate

Concept

Firstly, let’s visualize the current approach. Our pipeline pushes the latest image to the container registry. Renovate continuously monitors the tags of the image in the container registry to detect a change. Once it detects that the image has been updated, it creates a pull request to the Git repository with the configuration. Then Argo CD detects the new image tag committed to the config repository and synchronizes it with the Kubernetes cluster.

kubernetes-gitops-renovate-arch

Configuration

Renovate is a very interesting tool. It continuously runs and detects the latest available versions of dependencies. These can be e.g. Maven dependencies. But in our case, we can use it for monitoring container images in a registry.

In the first step, we will prepare a configuration for Renovate. It accepts JSON format. There are several things we need to set:

(1) platform – our repository is located on GitHub

(2) repository – the location of the configuration repository. Renovate monitors the whole repository and detects files matching filtering criteria

(3) enabledManagers – we are going to monitor Kubernetes YAML manifests and Helm value files. For every single manager, we should set file filtering rules.

(4) packageRules – our goal is to automatically update the configuration repository with the latest tag. Since Renovate creates a pull request after detecting a change, we would like to enable PR auto-merge on GitHub. Auto-merge should be performed only for patch updates (e.g. from 1.0.0 to 1.0.1) or minor updates (e.g. from 1.0.0 to 1.1.0). For other types of updates, the PR needs to be approved manually.

(5) ignoreTests – we need to enable it to perform PR auto-merge. Otherwise, Renovate will require at least one test in the repository before it auto-merges a PR.

{
  "platform": "github",
  "repositories": [
    {
      "repository": "piomin/openshift-cluster-config",
      "enabledManagers": ["kubernetes", "helm-values"],
      "kubernetes" : {
        "fileMatch": ["\\.yaml$"]
      },
      "helm-values": {
        "fileMatch": ["(.*)values.yaml$"]
      },
      "packageRules": [
        {
          "matchUpdateTypes": ["minor", "patch"],
          "automerge": true
        }
      ],
      "ignoreTests": true
    }
  ]
}

In order to create a pull request, Renovate needs to have write access to the GitHub repository. Let’s create a Kubernetes Secret containing the GitHub access token.

apiVersion: v1
kind: Secret
metadata:
  name: renovate-secrets
  namespace: renovate
data:
  RENOVATE_TOKEN: <BASE64_TOKEN>
type: Opaque
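The <BASE64_TOKEN> value is the base64-encoded GitHub token. It can be produced on the command line (the token shown is a made-up placeholder):

```shell
# Encode without a trailing newline (-n), as required for the Secret's data field
echo -n 'ghp_example_token' | base64
```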

Installation

Now we can install Renovate on Kubernetes. The best way to do that is through the Helm chart. Let’s add the Helm repository:

$ helm repo add renovate https://docs.renovatebot.com/helm-charts
$ helm repo update

Then we can install it using the previously prepared config.json file. We also need to pass the name of the Secret containing the GitHub token and set the cron job schedule. We will run the job responsible for detecting changes and creating PRs once per minute.

$ helm install --generate-name \
    --set-file renovate.config=config.json \
    --set cronjob.schedule='*/1 * * * *' \
    --set existingSecret=renovate-secrets \
    renovate/renovate -n renovate

After installation, you should see a CronJob in the renovate namespace:

$ kubectl get cj -n renovate
NAME                  SCHEDULE       SUSPEND   ACTIVE   LAST SCHEDULE   AGE
renovate-1653648026   */1 * * * *    False     0        1m19s           2m19s
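The CronJob fires every minute, but you can also trigger a run on demand by instantiating a Job from it. A sketch, using the generated CronJob name from the listing above (it will differ in your cluster):

```shell
# Run Renovate immediately instead of waiting for the next schedule
kubectl create job renovate-manual --from=cronjob/renovate-1653648026 -n renovate

# Follow the logs of the spawned pod
kubectl logs -f job/renovate-manual -n renovate
```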

Use Case

Let’s consider the following Helm values.yaml. It needs to have a proper structure, i.e. image.repository, image.tag and image.registry fields.

app:
  name: sample-kotlin-spring
  replicas: 1

image:
  repository: 'pminkows/sample-kotlin-spring'
  tag: 1.4.20
  registry: quay.io

Now, let's push the image pminkows/sample-kotlin-spring with the tag 1.4.21 to the registry.
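With Docker, that step could look as follows (a sketch – the build context and Dockerfile location are assumptions, and any OCI-compliant build tool will do):

```shell
# Build the application image with the new tag and push it to the Quay registry
docker build -t quay.io/pminkows/sample-kotlin-spring:1.4.21 .
docker push quay.io/pminkows/sample-kotlin-spring:1.4.21
```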

Once Renovate detects the new image tag in the container registry, it creates a PR with auto-merge enabled:

(Figure: the pull request created automatically by Renovate on GitHub)

Finally, the following Argo CD Application will apply changes automatically to the Kubernetes cluster:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sample-spring-kotlin-helm
  namespace: argocd
spec:
  destination:
    namespace: apps
    server: https://kubernetes.default.svc
  project: default
  source:
    path: apps/helm
    repoURL: https://github.com/piomin/openshift-cluster-config
    targetRevision: HEAD
    helm:
      valueFiles:
        - values.yaml 
  syncPolicy:
    automated: {}

Approach 4: Use Argo CD Image Updater

Finally, we may proceed to the last approach proposed in this article to implement the development process on Kubernetes with GitOps. This option is available only for the container images managed by Argo CD. Let me show you a tool called Argo CD Image Updater. The concept behind this tool is pretty similar to Renovate. It can check for new versions of the container images deployed on Kubernetes and automatically update them. You can read more about it here.

Argo CD Image Updater can work in two modes. Once it detects a new version of the image in the registry, it can update the image version in the Git repository (git) or directly inside the Argo CD Application (argocd). We will use the argocd mode, which is the default option. Firstly, let's install Argo CD Image Updater on Kubernetes in the same namespace as Argo CD. We can use the Helm chart for that:

$ helm repo add argo https://argoproj.github.io/argo-helm
$ helm install argocd-image-updater argo/argocd-image-updater -n argocd
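Before annotating any Application, you can verify that the updater is running and watch its log output. Assuming the Helm release name above, the resulting Deployment should be called argocd-image-updater:

```shell
# Check the updater Deployment and follow its logs
kubectl get deployment argocd-image-updater -n argocd
kubectl logs -f deployment/argocd-image-updater -n argocd
```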

After that, the only thing we need to do is annotate the Argo CD Application with argocd-image-updater.argoproj.io/image-list. The value of the annotation is the list of images to monitor. Assuming the same Argo CD Application as in the previous section, it looks as shown below:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sample-spring-kotlin-helm
  namespace: argocd
  annotations: 
    argocd-image-updater.argoproj.io/image-list: quay.io/pminkows/sample-spring-kotlin
spec:
  destination:
    namespace: apps
    server: https://kubernetes.default.svc
  project: default
  source:
    path: apps/helm
    repoURL: https://github.com/piomin/openshift-cluster-config
    targetRevision: HEAD
    helm:
      valueFiles:
        - values.yaml 
  syncPolicy:
    automated: {}

Once Argo CD Image Updater detects a new version of the image quay.io/pminkows/sample-spring-kotlin, it adds two parameters (or just updates the value of the image.tag parameter) to the Argo CD Application. In fact, it leverages the Argo CD feature that allows overriding the parameters of an Application. You can read more about that feature in the Argo CD documentation. After that, Argo CD will automatically deploy the image with the tag taken from the image.tag parameter.
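The resulting override could look roughly like the fragment below. This is only a sketch: the parameter names image.name and image.tag are the tool's defaults and can be changed via further annotations, and the tag value is hypothetical.

```yaml
# Fragment of the Argo CD Application spec after the override (sketch)
source:
  helm:
    parameters:
      - name: image.name
        value: quay.io/pminkows/sample-spring-kotlin
      - name: image.tag
        value: 1.4.21
```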

Final Thoughts

The main goal of this article is to show you how to design your app development process on Kubernetes in the era of GitOps. I assumed you use Argo CD for GitOps on Kubernetes, but in fact, only the last described approach requires it. Our goal was to build a development pipeline that builds an image after a source code change and pushes it to the registry. Then, following the GitOps model, we run that image on Kubernetes in the development environment. I showed how you can use tools like Skaffold, Renovate, or Argo CD Image Updater to implement the required behavior.

The post Continuous Development on Kubernetes with GitOps Approach appeared first on Piotr's TechBlog.
