Backstage Dynamic Plugins with Red Hat Developer Hub
Piotr's TechBlog (https://piotrminkowski.com) — Java, Spring, Kotlin, microservices, Kubernetes, containers
Fri, 13 Jun 2025 — https://piotrminkowski.com/2025/06/13/backstage-dynamic-plugins-with-red-hat-developer-hub/

The post Backstage Dynamic Plugins with Red Hat Developer Hub appeared first on Piotr's TechBlog.

This article will teach you how to create Backstage dynamic plugins and install them smoothly in Red Hat Developer Hub. One of the most significant pain points in Backstage is the installation of plugins. If you want to run Backstage on Kubernetes, for example, you have to rebuild the project and create a new image containing the added plugin. Red Hat Developer Hub solves this problem with dynamic plugins. In an earlier article about Developer Hub, I demonstrated how to activate selected plugins from the list of built-in extensions. These extensions are available inside the image and only require activation. However, creating your own plugin and adding it to Developer Hub requires a different approach. This article focuses on just such a case.

For comparison, please refer to the article where I demonstrate how to prepare a Backstage instance for running on Kubernetes step-by-step. Today, for the sake of clarity, we will be working in a Developer Hub instance running on OpenShift using an operator. However, we can also easily install Developer Hub on vanilla Kubernetes using a Helm chart.

Source Code

Feel free to use my source code if you’d like to try it out yourself. To do that, clone my sample GitHub repository. We will also use another repository with a sample Backstage plugin. This time, I won’t create a plugin myself, but will use an existing one. The plugin provides a collection of scaffolder actions for interacting with Kubernetes from Backstage, including apply and delete. Once you clone both repositories, just follow my instructions.

Prerequisites

You must have an OpenShift cluster with Red Hat Developer Hub installed and configured with the Kubernetes plugin. I will briefly explain this in the next section without going into detail. You can find more information in the already mentioned article about Red Hat Developer Hub.

You must also have Node.js, NPM, and Yarn installed and configured on your laptop. They are used for compiling and building the plugin. The npm package @janus-idp/cli, used for developing and exporting Backstage plugins as dynamic plugins, requires Podman when working in image mode.

Motivation

Red Hat Developer Hub comes with a curated set of plugins preinstalled on its container image. It is easy to enable such a plugin just by changing the configuration available in Kubernetes ConfigMap. The situation becomes complicated when we attempt to install and configure a third-party plugin. In this case, I would like to extend Backstage with a set of actions that enable the creation and management of Kubernetes resources directly, rather than applying them via Argo CD. To do that, we must install the Backstage Scaffolder Actions for Kubernetes plugin. To use this plugin in Red Hat Developer Hub without rebuilding its image, you must export plugins as derived dynamic plugin packages. This is our goal.

Convert to a dynamic Backstage plugin

The backstage-k8s-scaffolder-actions is a backend plugin. It meets all the requirements for conversion to the dynamic plugin form. It has a valid package.json file in its root directory, containing all required metadata and dependencies. The plugin is compatible with the new Backstage backend system, which means it was created using createBackendPlugin() or createBackendModule(). Let’s clone its repository first:

$ git clone https://github.com/kirederik/backstage-k8s-scaffolder-actions.git
$ cd backstage-k8s-scaffolder-actions
ShellSession

Then we must install the @janus-idp/cli npm package with the following command:

yarn add @janus-idp/cli
ShellSession

After that, you should run both those commands inside the plugin directory:

$ yarn install
$ yarn build
ShellSession

If the commands were successful, you can proceed with the plugin conversion procedure. The plugin defines some shared dependencies that must be explicitly specified with the --shared-package flag.

Here’s the command used to convert our plugin to a dynamic form supported by Red Hat Developer Hub:

npx @janus-idp/cli@latest package export-dynamic-plugin \
  --shared-package '!@backstage/cli-common' \
  --shared-package '!@backstage/cli-node' \
  --shared-package '!@backstage/config-loader' \
  --shared-package '!@backstage/config' \
  --shared-package '!@backstage/errors' \
  --shared-package '!@backstage/types'
ShellSession

Package and publish Backstage dynamic plugins

After exporting a third-party plugin, you can package the derived package into one of the following supported formats:

  • Open Container Initiative (OCI) image (recommended)
  • TGZ file
  • JavaScript package

Since the OCI image option is recommended, we will proceed accordingly. However, first you must ensure that podman is running on your laptop and that you are logged in to your container registry.

podman login quay.io
ShellSession

Then you can run the following @janus-idp/cli command with npx. It must specify the target image repository address, including the image name and tag. Here, the target image address is quay.io/pminkows/backstage-k8s-scaffolder-actions:v0.5, tagged with the current version of the plugin.

npx @janus-idp/cli@latest package package-dynamic-plugins \
  --tag quay.io/pminkows/backstage-k8s-scaffolder-actions:v0.5
ShellSession

Here’s the command output. At the end, it provides instructions on how to install and enable the plugin within the Red Hat Developer Hub configuration. Copy that snippet for later use.

[Screenshot: output of the package-dynamic-plugins command]

The previous command packages the plugin and builds its image.

Finally, let’s push the image with our plugin to the target registry:

podman push quay.io/pminkows/backstage-k8s-scaffolder-actions:v0.5
ShellSession

Install and enable the Backstage dynamic plugins in Developer Hub

My instance of Developer Hub is running in the backstage namespace and is managed by the operator.

[Screenshot: Developer Hub resources in the backstage namespace]

Here’s the Backstage CR object responsible for creating a Developer Hub instance. The dynamicPluginsConfigMapName property specifies the name of the ConfigMap that stores the plugins’ configuration.

apiVersion: rhdh.redhat.com/v1alpha3
kind: Backstage
metadata:
  name: developer-hub
  namespace: backstage
spec:
  application:
    appConfig:
      configMaps:
        - name: app-config-rhdh
      mountPath: /opt/app-root/src
    dynamicPluginsConfigMapName: dynamic-plugins-rhdh
    extraEnvs:
      secrets:
        - name: app-secrets-rhdh
    extraFiles:
      mountPath: /opt/app-root/src
    replicas: 1
    route:
      enabled: true
  database:
    enableLocalDb: true
YAML

Then, we must modify the dynamic-plugins-rhdh ConfigMap to register our plugin in Red Hat Developer Hub. Paste the two lines generated earlier by the npx @janus-idp/cli@latest package package-dynamic-plugins command.

kind: ConfigMap
apiVersion: v1
metadata:
  name: dynamic-plugins-rhdh
  namespace: backstage
data:
  dynamic-plugins.yaml: |-
    plugins:
      # ... other plugins

      - package: oci://quay.io/pminkows/backstage-k8s-scaffolder-actions:v0.5!devangelista-backstage-scaffolder-kubernetes
        disabled: false
YAML
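The package reference in the entry above follows the pattern oci://&lt;image&gt;:&lt;tag&gt;!&lt;plugin-name&gt;, where the part after the exclamation mark selects the plugin packaged inside the image. A small shell sketch, using the address from this article, shows how the reference decomposes:

```shell
# A dynamic-plugin OCI reference: oci://<image-repository>:<tag>!<plugin-name>
ref='oci://quay.io/pminkows/backstage-k8s-scaffolder-actions:v0.5!devangelista-backstage-scaffolder-kubernetes'

image="${ref#oci://}"   # drop the scheme
plugin="${image##*!}"   # plugin name packaged inside the image
image="${image%%!*}"    # image address with its tag

echo "image:  $image"   # quay.io/pminkows/backstage-k8s-scaffolder-actions:v0.5
echo "plugin: $plugin"  # devangelista-backstage-scaffolder-kubernetes
```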

That’s all! After the change is applied to the ConfigMap, the operator restarts the Developer Hub pod. This can take some time, since all plugins must be loaded during pod startup.

$ oc get pod
NAME                                      READY   STATUS    RESTARTS   AGE
backstage-developer-hub-896c5f9d9-vvddb   1/1     Running   0          4m21s
backstage-psql-developer-hub-0            1/1     Running   0          8d
ShellSession

You can verify the logs with the oc logs command. Developer Hub prints a list of available actions provided by the installed plugins. You should see three actions delivered by the Backstage Scaffolder Actions for Kubernetes plugin, starting with the kube: prefix.

[Screenshot: Developer Hub logs listing the kube: scaffolder actions]

Prepare the Backstage template

Finally, we will test the new plugin by calling the kube:apply action from our Backstage template. The template uses kube:apply to create a Secret with a given name in the specified namespace. It is available in the backstage-templates repository.

apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
  description: Create a Secret in Kubernetes
  name: create-secret
  title: Create a Secret
spec:
  lifecycle: experimental
  owner: user
  type: example
  parameters:
    - properties:
        name:
          description: The namespace name
          title: Name
          type: string
          ui:autofocus: true
      required:
        - name
      title: Namespace Name
    - properties:
        secretName:
          description: The secret name
          title: Secret Name
          type: string
          ui:autofocus: true
      required:
        - secretName
      title: Secret Name
    - title: Cluster Name
      properties:
        cluster:
          type: string
          enum:
            - ocp
          ui:autocomplete:
            options:
              - ocp
  steps:
    - action: kube:apply
      id: k-apply
      name: Create a Resource
      input:
        namespaced: true
        clusterName: ${{ parameters.cluster }}
        manifest: |
          kind: Secret
          apiVersion: v1
          metadata:
            name: ${{ parameters.secretName }}
            namespace: ${{ parameters.name }}
          data:
            username: YWRtaW4=
YAML
https://github.com/piomin/backstage-templates/blob/master/templates.yaml
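Note that the data values in the Secret manifest above must be base64-encoded; YWRtaW4= is simply the encoded string admin. A quick shell check:

```shell
# Kubernetes stores Secret .data values base64-encoded
encoded=$(printf 'admin' | base64)
echo "$encoded"                        # YWRtaW4=

# decoding restores the original value
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"                        # admin
```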

You should import the repository with templates into your Developer Hub instance. The app-config-rhdh ConfigMap should contain the full address of the templates list file in the repository, as well as the Kubernetes cluster address and connection credentials.

catalog:
  rules:
    - allow: [Component, System, API, Resource, Location, Template]
  locations:
    - type: url
      target: https://github.com/piomin/backstage-templates/blob/master/templates.yaml
      rules:
        - allow: [Template, Location]

kubernetes:
  clusterLocatorMethods:
    - clusters:
      - authProvider: serviceAccount
        name: ocp
        serviceAccountToken: ${OPENSHIFT_TOKEN}
        skipTLSVerify: true
        url: https://api.${DOMAIN}:6443
      type: config
  customResources:
    - apiVersion: v1beta1
      group: tekton.dev
      plural: pipelineruns
    - apiVersion: v1beta1
      group: tekton.dev
      plural: taskruns
    - apiVersion: v1
      group: route.openshift.io
      plural: routes
  serviceLocatorMethod:
    type: multiTenant
YAML
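The ${OPENSHIFT_TOKEN} and ${DOMAIN} placeholders above are resolved by Developer Hub from environment variables at startup. The expansion behaves like ordinary shell substitution; an illustration with a hypothetical domain value:

```shell
# Hypothetical value standing in for the real cluster setting
export DOMAIN='piomin.ewyw.p1.openshiftapps.com'

# The same kind of expansion Developer Hub performs on ${DOMAIN} in the config
url="https://api.${DOMAIN}:6443"
echo "$url"   # https://api.piomin.ewyw.p1.openshiftapps.com:6443
```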

You can access the Developer Hub instance through the OpenShift Route:

$ oc get route
NAME                      HOST/PORT                                                                 PATH   SERVICES                  PORT           TERMINATION     WILDCARD
backstage-developer-hub   backstage-developer-hub-backstage.apps.piomin.ewyw.p1.openshiftapps.com   /      backstage-developer-hub   http-backend   edge/Redirect   None
ShellSession

Then find and use the following template available in the Developer Hub.

You will have to insert the Secret name and namespace, and choose a target OpenShift cluster. Then run the action.

You should see a similar screen. The Secret has been successfully created.

[Screenshot: the template run result in the Developer Hub UI]

Finally, you can verify that the Secret exists on the OpenShift cluster, for example with the oc get secret command in the target namespace.

Final Thoughts

Red Hat is steadily developing its Backstage-based product, adding new functionality in each version. The ability to easily create and install custom plugins in the Developer Hub appears to be a key element in building an attractive platform for developers. This article focused on demonstrating how to convert a standard Backstage plugin into the dynamic form supported by Red Hat Developer Hub.

IDP on OpenShift with Red Hat Developer Hub
Thu, 04 Jul 2024 — https://piotrminkowski.com/2024/07/04/idp-on-openshift-with-red-hat-developer-hub/

The post IDP on OpenShift with Red Hat Developer Hub appeared first on Piotr's TechBlog.

This article will teach you how to build an IDP (Internal Developer Platform) on an OpenShift cluster with the Red Hat Developer Hub solution. Red Hat Developer Hub is a developer portal built on top of the Backstage project. It simplifies the installation and configuration of Backstage in a Kubernetes-native environment through the operator and dynamic plugins. You can compare that process with the open-source Backstage installation on Kubernetes described in my previous article. If you need a quick intro to the Backstage platform, you can also read my article Getting Started with Backstage.

A platform team manages an Internal Developer Platform (IDP) to build golden paths and enable developer self-service in the organization. It may consist of many different tools and solutions. On the other hand, Internal Developer Portals serve as the graphical interface through which developers discover and access internal developer platform capabilities. In the context of OpenShift, Red Hat Developer Hub simplifies the adoption of several cluster services for developers (e.g. Kubernetes-native CI/CD tools). Today, you will learn how to integrate Developer Hub with OpenShift Pipelines (Tekton) and OpenShift GitOps (Argo CD). Let’s begin!

Source Code

If you would like to try it yourself, you can always take a look at my source code. Our sample GitHub repository contains software templates written for the Backstage Scaffolder. In this article, we will analyze a template dedicated to OpenShift, available in the templates/spring-boot-basic-on-openshift directory. After cloning this repository, you should just follow my instructions.

Here’s the structure of our repository. It is pretty similar to the template for Spring Boot on Kubernetes described in my previous article about Backstage. Besides the template, it also contains the Argo CD and Tekton YAML deployment manifests to apply on OpenShift.

.
├── README.md
├── backstage-templates.iml
├── skeletons
│   └── argocd
│       ├── argocd
│       │   └── app.yaml
│       └── manifests
│           ├── deployment.yaml
│           ├── pipeline.yaml
│           ├── service.yaml
│           ├── tasks.yaml
│           └── trigger.yaml
├── templates
│   └── spring-boot-basic-on-openshift
│       ├── skeleton
│       │   ├── README.md
│       │   ├── catalog-info.yaml
│       │   ├── devfile.yaml
│       │   ├── k8s
│       │   │   └── deployment.yaml
│       │   ├── pom.xml
│       │   ├── renovate.json
│       │   ├── skaffold.yaml
│       │   └── src
│       │       ├── main
│       │       │   ├── java
│       │       │   │   └── ${{values.javaPackage}}
│       │       │   │       ├── Application.java
│       │       │   │       ├── controller
│       │       │   │       │   └── ${{values.domainName}}Controller.java
│       │       │   │       └── domain
│       │       │   │           └── ${{values.domainName}}.java
│       │       │   └── resources
│       │       │       └── application.yml
│       │       └── test
│       │           ├── java
│       │           │   └── ${{values.javaPackage}}
│       │           │       └── ${{values.domainName}}ControllerTests.java
│       │           └── resources
│       │               └── k6
│       │                   └── load-tests-add.js
│       └── template.yaml
└── templates.yaml
ShellSession

Prerequisites

Before we start the exercise, we need to prepare our OpenShift cluster. We have to install three operators: OpenShift GitOps (Argo CD), OpenShift Pipelines (Tekton), and of course, Red Hat Developer Hub.

Once we install OpenShift GitOps, it automatically creates an instance of Argo CD in the openshift-gitops namespace. That instance is managed by the openshift-gitops ArgoCD custom resource.

We need to override some default configuration settings there. We will add a new backstage user with privileges for creating applications and projects, and for generating API keys. We also change the default TLS termination method for the Argo CD Route to reencrypt, which is required to integrate with the Backstage Argo CD plugin. Finally, we add the demo namespace as an additional namespace in which Argo CD applications may be placed.

apiVersion: argoproj.io/v1beta1
kind: ArgoCD
metadata:
  name: openshift-gitops
  namespace: openshift-gitops
spec:
  sourceNamespaces:
    - demo
  server:
    ...
    route:
      enabled: true
      tls:
        termination: reencrypt
  ...
  rbac:
    defaultPolicy: ''
    policy: |
      g, system:cluster-admins, role:admin
      g, cluster-admins, role:admin
      p, backstage, applications, *, */*, allow
      p, backstage, projects, *, *, allow
    scopes: '[groups]'
  extraConfig:
    accounts.backstage: 'apiKey, login'
YAML

In order to generate the apiKey for the backstage user, we need to sign in to Argo CD with the argocd CLI as the admin user. Then, we should run the following command for the backstage account and export the generated token as the ARGOCD_TOKEN env variable:

$ export ARGOCD_TOKEN=$(argocd account generate-token --account backstage)
ShellSession

Next, let’s obtain a long-lived API token for Kubernetes by creating a secret:

apiVersion: v1
kind: Secret
metadata:
  name: default-token
  namespace: backstage
  annotations:
    kubernetes.io/service-account.name: default
type: kubernetes.io/service-account-token
YAML

Then, we can copy and export it as the OPENSHIFT_TOKEN environment variable with the following command:

$ export OPENSHIFT_TOKEN=$(kubectl get secret default-token -o go-template='{{.data.token | base64decode}}')
ShellSession
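The base64decode in the go-template above is equivalent to piping the raw .data.token value through base64 -d. A local illustration with a dummy value (not a real token):

```shell
# Dummy stand-in for the base64-encoded .data.token field of the Secret
data_token=$(printf 'sha256~dummy-token' | base64)

# Equivalent of the go-template 'base64decode' used with kubectl above
decoded=$(printf '%s' "$data_token" | base64 -d)
echo "$decoded"   # sha256~dummy-token
```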

Let’s add the ClusterRole view to the Backstage default ServiceAccount:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: default-view-backstage
subjects:
- kind: ServiceAccount
  name: default
  namespace: backstage
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
YAML

Configure Red Hat Developer Hub on OpenShift

After installing the Red Hat Developer Hub operator on OpenShift, we can use the Backstage CRD to create and configure a new instance of the portal. Firstly, we will override some default settings using the app-config-rhdh ConfigMap (1). Then we will provide additional secrets, such as tokens for third-party tools, in the app-secrets-rhdh Secret (2). Finally, we will install and configure several useful plugins with the dynamic-plugins-rhdh ConfigMap (3). Here is the required configuration in the Backstage CR.

apiVersion: rhdh.redhat.com/v1alpha1
kind: Backstage
metadata:
  name: developer-hub
  namespace: backstage
spec:
  application:
    appConfig:
      configMaps:
        # (1)
        - name: app-config-rhdh
      mountPath: /opt/app-root/src
    # (3)
    dynamicPluginsConfigMapName: dynamic-plugins-rhdh
    extraEnvs:
      secrets:
        # (2)
        - name: app-secrets-rhdh
    extraFiles:
      mountPath: /opt/app-root/src
    replicas: 1
    route:
      enabled: true
  database:
    enableLocalDb: true
YAML

Override Default Configuration Settings

The instance of Backstage will be deployed in the backstage namespace. Since OpenShift exposes it as a Route, the address of the portal on my cluster is https://backstage-developer-hub-backstage.apps.piomin.eastus.aroapp.io (1). Firstly, we need to override that address in the app settings. Then we need to enable authentication through GitHub OAuth with a GitHub app registered for Red Hat Developer Hub (2). Next, we should set the proxy endpoint to integrate with Sonarqube through the HTTP Request Action plugin (3). Our instance of Backstage should also read templates from a particular URL location (4) and should be able to create repositories in GitHub (5).

kind: ConfigMap
apiVersion: v1
metadata:
  name: app-config-rhdh
  namespace: backstage
data:
  app-config-rhdh.yaml: |

    # (1)
    app:
     baseUrl: https://backstage-developer-hub-backstage.apps.piomin.eastus.aroapp.io

    backend:
      baseUrl: https://backstage-developer-hub-backstage.apps.piomin.eastus.aroapp.io

    # (2)
    auth:
      environment: development
      providers:
        github:
          development:
            clientId: ${GITHUB_CLIENT_ID}
            clientSecret: ${GITHUB_CLIENT_SECRET}

    # (3)
    proxy:
      endpoints:
        /sonarqube:
          target: ${SONARQUBE_URL}/api
          allowedMethods: ['GET', 'POST']
          auth: "${SONARQUBE_TOKEN}:"

    # (4)
    catalog:
      rules:
        - allow: [Component, System, API, Resource, Location, Template]
      locations:
        - type: url
          target: https://github.com/piomin/backstage-templates/blob/master/templates.yaml

    # (5)
    integrations:
      github:
        - host: github.com
          token: ${GITHUB_TOKEN}
          
    sonarqube:
      baseUrl: https://sonarcloud.io
      apiKey: ${SONARQUBE_TOKEN}
YAML

Integrate with GitHub

In order to use GitHub auth, we need to register a new app there. Go to “Settings > Developer Settings > New GitHub App” in your GitHub account. Then, put the address of your Developer Hub instance in the “Homepage URL” field and the callback address in the “Callback URL” field (base URL + /api/auth/github/handler/frame).
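For example, with the Route address of my Developer Hub instance as the base URL, the callback URL is composed as follows (the base URL is specific to my cluster):

```shell
# Base URL of the Developer Hub instance (specific to my cluster)
BASE_URL='https://backstage-developer-hub-backstage.apps.piomin.eastus.aroapp.io'

# GitHub OAuth callback = base URL + the fixed handler path
CALLBACK_URL="${BASE_URL}/api/auth/github/handler/frame"
echo "$CALLBACK_URL"
```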

Then, let’s edit our GitHub app to generate a new secret as shown below.

The client ID and secret should be saved as environment variables for future use. Note that we also need to generate a new personal access token (“Settings > Developer Settings > Personal Access Tokens”).

export GITHUB_CLIENT_ID=<YOUR_GITHUB_CLIENT_ID>
export GITHUB_CLIENT_SECRET=<YOUR_GITHUB_CLIENT_SECRET>
export GITHUB_TOKEN=<YOUR_GITHUB_TOKEN>
ShellSession

We already have a full set of required tokens and access keys, so we can create the app-secrets-rhdh Secret to store them on our OpenShift cluster.

$ oc create secret generic app-secrets-rhdh -n backstage \
  --from-literal=GITHUB_CLIENT_ID=${GITHUB_CLIENT_ID} \
  --from-literal=GITHUB_CLIENT_SECRET=${GITHUB_CLIENT_SECRET} \
  --from-literal=GITHUB_TOKEN=${GITHUB_TOKEN} \
  --from-literal=SONARQUBE_TOKEN=${SONARQUBE_TOKEN} \
  --from-literal=SONARQUBE_URL=https://sonarcloud.io \
  --from-literal=ARGOCD_TOKEN=${ARGOCD_TOKEN}
ShellSession

Install and Configure Plugins

Finally, we can proceed to the plugin installation. Do you remember how we did it with open-source Backstage on Kubernetes? I described it in my previous article. Red Hat Developer Hub drastically simplifies that process with the idea of dynamic plugins. This approach is based on the Janus IDP project. Developer Hub on OpenShift comes with ~60 preinstalled plugins that allow us to integrate various third-party tools, including Sonarqube, Argo CD, Tekton, Kubernetes, and GitHub. Some of them are enabled by default, while others are installed but disabled. We can verify this after signing in to the Backstage UI, in the “Administration” section:

Let’s take a look at the ConfigMap which contains the list of plugins to activate. It is pretty large, since we also provide configuration for the frontend plugins. Some plugins are optional. For our exercise, we need to activate at least the following plugins:

  • janus-idp-backstage-plugin-argocd – to view the status of Argo CD synchronization in the UI
  • janus-idp-backstage-plugin-tekton – to view the status of Tekton pipelines in the UI
  • backstage-plugin-kubernetes-backend-dynamic – to integrate with the Kubernetes cluster
  • backstage-plugin-kubernetes – to view the Kubernetes app pods in the UI
  • backstage-plugin-sonarqube – to view the status of the Sonarqube scan in the UI
  • roadiehq-backstage-plugin-argo-cd-backend-dynamic – to create the Argo CD Application from the template

kind: ConfigMap
apiVersion: v1
metadata:
  name: dynamic-plugins-rhdh
  namespace: backstage
data:
  dynamic-plugins.yaml: |
    includes:
      - dynamic-plugins.default.yaml
    plugins:
      - package: ./dynamic-plugins/dist/roadiehq-backstage-plugin-github-pull-requests
        disabled: true
        pluginConfig:
          dynamicPlugins:
            frontend:
              roadiehq.backstage-plugin-github-pull-requests:
                mountPoints:
                  - mountPoint: entity.page.overview/cards
                    importName: EntityGithubPullRequestsOverviewCard
                    config:
                      layout:
                        gridColumnEnd:
                          lg: "span 4"
                          md: "span 6"
                          xs: "span 12"
                      if:
                        allOf:
                          - isGithubPullRequestsAvailable
                  - mountPoint: entity.page.pull-requests/cards
                    importName: EntityGithubPullRequestsContent
                    config:
                      layout:
                        gridColumn: "1 / -1"
                      if:
                        allOf:
                          - isGithubPullRequestsAvailable
      - package: './dynamic-plugins/dist/backstage-plugin-catalog-backend-module-github-dynamic'
        disabled: false
        pluginConfig: {}
      - package: './dynamic-plugins/dist/janus-idp-backstage-plugin-argocd'
        disabled: false
        pluginConfig:
          dynamicPlugins:
            frontend:
              janus-idp.backstage-plugin-argocd:
                mountPoints:
                  - mountPoint: entity.page.overview/cards
                    importName: ArgocdDeploymentSummary
                    config:
                      layout:
                        gridColumnEnd:
                          lg: "span 8"
                          xs: "span 12"
                      if:
                        allOf:
                          - isArgocdConfigured
                  - mountPoint: entity.page.cd/cards
                    importName: ArgocdDeploymentLifecycle
                    config:
                      layout:
                        gridColumn: '1 / -1'
                      if:
                        allOf:
                          - isArgocdConfigured
      - package: './dynamic-plugins/dist/janus-idp-backstage-plugin-tekton'
        disabled: false
        pluginConfig:
          dynamicPlugins:
            frontend:
              janus-idp.backstage-plugin-tekton:
                mountPoints:
                  - mountPoint: entity.page.ci/cards
                    importName: TektonCI
                    config:
                      layout:
                        gridColumn: "1 / -1"
                      if:
                        allOf:
                          - isTektonCIAvailable
      - package: './dynamic-plugins/dist/janus-idp-backstage-plugin-topology'
        disabled: false
        pluginConfig:
          dynamicPlugins:
            frontend:
              janus-idp.backstage-plugin-topology:
                mountPoints:
                  - mountPoint: entity.page.topology/cards
                    importName: TopologyPage
                    config:
                      layout:
                        gridColumn: "1 / -1"
                        height: 75vh
                      if:
                        anyOf:
                          - hasAnnotation: backstage.io/kubernetes-id
                          - hasAnnotation: backstage.io/kubernetes-namespace
      - package: './dynamic-plugins/dist/janus-idp-backstage-scaffolder-backend-module-sonarqube-dynamic'
        disabled: false
        pluginConfig: {}
      - package: './dynamic-plugins/dist/backstage-plugin-kubernetes-backend-dynamic'
        disabled: false
        pluginConfig:
          kubernetes:
            customResources:
            - group: 'tekton.dev'
              apiVersion: 'v1beta1'
              plural: 'pipelines'
            - group: 'tekton.dev'
              apiVersion: 'v1beta1'
              plural: 'pipelineruns'
            - group: 'tekton.dev'
              apiVersion: 'v1beta1'
              plural: 'taskruns'
            - group: 'route.openshift.io'
              apiVersion: 'v1'
              plural: 'routes'
            serviceLocatorMethod:
              type: 'multiTenant'
            clusterLocatorMethods:
              - type: 'config'
                clusters:
                  - name: ocp
                    url: https://api.piomin.eastus.aroapp.io:6443
                    authProvider: 'serviceAccount'
                    skipTLSVerify: true
                    skipMetricsLookup: true
                    serviceAccountToken: ${OPENSHIFT_TOKEN}
      - package: './dynamic-plugins/dist/backstage-plugin-kubernetes'
        disabled: false
        pluginConfig:
          dynamicPlugins:
            frontend:
              backstage.plugin-kubernetes:
                mountPoints:
                  - mountPoint: entity.page.kubernetes/cards
                    importName: EntityKubernetesContent
                    config:
                      layout:
                        gridColumn: "1 / -1"
                      if:
                        anyOf:
                          - hasAnnotation: backstage.io/kubernetes-id
                          - hasAnnotation: backstage.io/kubernetes-namespace
      - package: './dynamic-plugins/dist/backstage-plugin-sonarqube'
        disabled: false
        pluginConfig:
          dynamicPlugins:
            frontend:
              backstage.plugin-sonarqube:
                mountPoints:
                  - mountPoint: entity.page.overview/cards
                    importName: EntitySonarQubeCard
                    config:
                      layout:
                        gridColumnEnd:
                          lg: "span 4"
                          md: "span 6"
                          xs: "span 12"
                      if:
                        allOf:
                          - isSonarQubeAvailable
      - package: './dynamic-plugins/dist/backstage-plugin-sonarqube-backend-dynamic'
        disabled: false
        pluginConfig: {}
      - package: './dynamic-plugins/dist/roadiehq-backstage-plugin-argo-cd'
        disabled: false
        pluginConfig:
          dynamicPlugins:
            frontend:
              roadiehq.backstage-plugin-argo-cd:
                mountPoints:
                  - mountPoint: entity.page.overview/cards
                    importName: EntityArgoCDOverviewCard
                    config:
                      layout:
                        gridColumnEnd:
                          lg: "span 8"
                          xs: "span 12"
                      if:
                        allOf:
                          - isArgocdAvailable
                  - mountPoint: entity.page.cd/cards
                    importName: EntityArgoCDHistoryCard
                    config:
                      layout:
                        gridColumn: "1 / -1"
                      if:
                        allOf:
                          - isArgocdAvailable
      - package: './dynamic-plugins/dist/roadiehq-scaffolder-backend-argocd-dynamic'
        disabled: false
        pluginConfig: {}
      - package: ./dynamic-plugins/dist/roadiehq-backstage-plugin-argo-cd-backend-dynamic
        disabled: false
        pluginConfig:
          argocd:
            appLocatorMethods:
              - type: 'config'
                instances:
                  - name: main
                    url: "https://openshift-gitops-server-openshift-gitops.apps.piomin.eastus.aroapp.io"
                    token: "${ARGOCD_TOKEN}"
YAML

With the whole configuration described above in place, we are ready to proceed with our Scaffolder template for the sample Spring Boot app.

Prepare Backstage Template for OpenShift

Our template consists of several steps. First, we generate the app source code and push it to the app repository. Then, we register the component in the Backstage catalog and create a configuration repository for Argo CD. That repository contains the app deployment manifests and the definitions of the Tekton pipeline and trigger. The trigger is exposed as a Route and can be called from the GitHub repository through a webhook. Finally, we create the project in SonarCloud, create the application in Argo CD, and register the webhook in the GitHub app repository. Here’s our Scaffolder template.

apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
  name: spring-boot-basic-on-openshift-template
  title: Create a Spring Boot app for OpenShift
  description: Create a Spring Boot app for OpenShift
  tags:
    - spring-boot
    - java
    - maven
    - tekton
    - renovate
    - sonarqube
    - openshift
    - argocd
spec:
  owner: piomin
  system: microservices
  type: service

  parameters:
    - title: Provide information about the new component
      required:
        - orgName
        - appName
        - domainName
        - repoBranchName
        - groupId
        - javaPackage
        - apiPath
        - namespace
        - description
        - registryUrl
        - clusterDomain
      properties:
        orgName:
          title: Organization name
          type: string
          default: piomin
        appName:
          title: App name
          type: string
          default: sample-spring-boot-app-openshift
        domainName:
          title: Name of the domain object
          type: string
          default: Person
        repoBranchName:
          title: Name of the branch in the Git repository
          type: string
          default: master
        groupId:
          title: Maven Group ID
          type: string
          default: pl.piomin.services
        javaPackage:
          title: Java package directory
          type: string
          default: pl/piomin/services
        apiPath:
          title: REST API path
          type: string
          default: /api/v1
        namespace:
          title: The target namespace on Kubernetes
          type: string
          default: demo
        description:
          title: Description
          type: string
          default: Spring Boot App Generated by Backstage
        registryUrl:
          title: Registry URL
          type: string
          default: image-registry.openshift-image-registry.svc:5000
        clusterDomain:
          title: OpenShift Cluster Domain
          type: string
          default: .apps.piomin.eastus.aroapp.io
  steps:
    - id: sourceCodeTemplate
      name: Generating the Source Code Component
      action: fetch:template
      input:
        url: ./skeleton
        values:
          orgName: ${{ parameters.orgName }}
          appName: ${{ parameters.appName }}
          domainName: ${{ parameters.domainName }}
          groupId: ${{ parameters.groupId }}
          javaPackage: ${{ parameters.javaPackage }}
          apiPath: ${{ parameters.apiPath }}
          namespace: ${{ parameters.namespace }}

    - id: publish
      name: Publishing to the Source Code Repository
      action: publish:github
      input:
        allowedHosts: ['github.com']
        description: ${{ parameters.description }}
        repoUrl: github.com?owner=${{ parameters.orgName }}&repo=${{ parameters.appName }}
        defaultBranch: ${{ parameters.repoBranchName }}
        repoVisibility: public

    - id: register
      name: Registering the Catalog Info Component
      action: catalog:register
      input:
        repoContentsUrl: ${{ steps.publish.output.repoContentsUrl }}
        catalogInfoPath: /catalog-info.yaml

    - id: configCodeTemplate
      name: Generating the Config Code Component
      action: fetch:template
      input:
        url: ../../skeletons/argocd
        values:
          orgName: ${{ parameters.orgName }}
          appName: ${{ parameters.appName }}
          registryUrl: ${{ parameters.registryUrl }}
          namespace: ${{ parameters.namespace }}
          repoBranchName: ${{ parameters.repoBranchName }}
        targetPath: ./gitops

    - id: publish-config
      name: Publishing to the Config Code Repository
      action: publish:github
      input:
        allowedHosts: ['github.com']
        description: ${{ parameters.description }}
        repoUrl: github.com?owner=${{ parameters.orgName }}&repo=${{ parameters.appName }}-config
        defaultBranch: ${{ parameters.repoBranchName }}
        sourcePath: ./gitops
        repoVisibility: public

    - id: create-sonar-project
      name: Create a new project on SonarCloud
      action: http:backstage:request
      input:
        method: 'POST'
        path: '/proxy/sonarqube/projects/create?name=${{ parameters.appName }}&organization=${{ parameters.orgName }}&project=${{ parameters.orgName }}_${{ parameters.appName }}'
        headers:
          content-type: 'application/json'

    - id: create-argocd-resources
      name: Create ArgoCD Resources
      action: argocd:create-resources
      input:
        appName: ${{ parameters.appName }}
        argoInstance: main
        namespace: ${{ parameters.namespace }}
        repoUrl: https://github.com/${{ parameters.orgName }}/${{ parameters.appName }}-config.git
        path: 'manifests'

    - id: create-webhook
      name: Create GitHub Webhook
      action: github:webhook
      input:
        repoUrl: github.com?repo=${{ parameters.appName }}&owner=${{ parameters.orgName }}
        webhookUrl: https://el-${{ parameters.appName }}-${{ parameters.namespace }}.${{ parameters.clusterDomain }}

  output:
    links:
      - title: Open the Source Code Repository
        url: ${{ steps.publish.output.remoteUrl }}
      - title: Open the Catalog Info Component
        icon: catalog
        entityRef: ${{ steps.register.output.entityRef }}
      - title: SonarQube project URL
        url: ${{ steps['create-sonar-project'].output.projectUrl }}
YAML

Define Templates for OpenShift Pipelines

Compared to the article about Backstage on Kubernetes, we use Tekton instead of CircleCI as the build tool. Let’s take a look at the definition of our pipeline. It consists of five tasks. In the final task, we use the OpenShift S2I mechanism to build the app image and push it to the internal container registry.

apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: ${{ values.appName }}
  labels:
    backstage.io/kubernetes-id: ${{ values.appName }}
spec:
  params:
    - description: branch
      name: git-revision
      type: string
      default: master
  tasks:
    - name: git-clone
      params:
        - name: url
          value: 'https://github.com/${{ values.orgName }}/${{ values.appName }}.git'
        - name: revision
          value: $(params.git-revision)
        - name: sslVerify
          value: 'false'
      taskRef:
        kind: ClusterTask
        name: git-clone
      workspaces:
        - name: output
          workspace: source-dir
    - name: maven
      params:
        - name: GOALS
          value:
            - test
        - name: PROXY_PROTOCOL
          value: http
        - name: CONTEXT_DIR
          value: .
      runAfter:
        - git-clone
      taskRef:
        kind: ClusterTask
        name: maven
      workspaces:
        - name: source
          workspace: source-dir
        - name: maven-settings
          workspace: maven-settings
    - name: sonarqube
      params:
        - name: SONAR_HOST_URL
          value: 'https://sonarcloud.io'
        - name: SONAR_PROJECT_KEY
          value: ${{ values.appName }}
      runAfter:
        - maven
      taskRef:
        kind: Task
        name: sonarqube-scanner
      workspaces:
        - name: source
          workspace: source-dir
        - name: sonar-settings
          workspace: sonar-settings
    - name: get-version
      params:
        - name: CONTEXT_DIR
          value: .
      runAfter:
        - sonarqube
      taskRef:
        kind: Task
        name: maven-get-project-version
      workspaces:
        - name: source
          workspace: source-dir
    - name: s2i-java
      params:
        - name: PATH_CONTEXT
          value: .
        - name: TLSVERIFY
          value: 'false'
        - name: MAVEN_CLEAR_REPO
          value: 'false'
        - name: IMAGE
          value: >-
            ${{ values.registryUrl }}/${{ values.namespace }}/${{ values.appName }}:$(tasks.get-version.results.version)
      runAfter:
        - get-version
      taskRef:
        kind: ClusterTask
        name: s2i-java
      workspaces:
        - name: source
          workspace: source-dir
  workspaces:
    - name: source-dir
    - name: maven-settings
    - name: sonar-settings
YAML

In order to run the pipeline after creating it, we need to apply the PipelineRun object.

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: ${{ values.appName }}-init
spec:
  params:
    - name: git-revision
      value: master
  pipelineRef:
    name: ${{ values.appName }}
  serviceAccountName: pipeline
  workspaces:
    - name: source-dir
      volumeClaimTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 1Gi
    - name: sonar-settings
      secret:
        secretName: sonarqube-secret-token
    - configMap:
        name: maven-settings
      name: maven-settings
YAML

In order to call the pipeline via the webhook from the app source repository, we also need to create the Tekton TriggerTemplate object. Once we push a change to the target repository, we trigger the run of the Tekton pipeline on the OpenShift cluster.

apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerTemplate
metadata:
  name: ${{ values.appName }}
spec:
  params:
    - default: ${{ values.repoBranchName }}
      description: The git revision
      name: git-revision
    - description: The git repository url
      name: git-repo-url
  resourcetemplates:
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        generateName: ${{ values.appName }}-run-
      spec:
        params:
          - name: git-revision
            value: $(tt.params.git-revision)
        pipelineRef:
          name: ${{ values.appName }}
        serviceAccountName: pipeline
        workspaces:
          - name: source-dir
            volumeClaimTemplate:
              spec:
                accessModes:
                  - ReadWriteOnce
                resources:
                  requests:
                    storage: 1Gi
          - name: sonar-settings
            secret:
              secretName: sonarqube-secret-token
          - configMap:
              name: maven-settings
            name: maven-settings
YAML
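
Note that the TriggerTemplate alone does not receive webhooks. A Tekton EventListener binds the incoming GitHub payload to the template, and the el-<name> Service it creates is exposed through an OpenShift Route, which matches the webhookUrl pattern (https://el-<app>-<namespace>.<clusterDomain>) used in the Scaffolder template. Those manifests are not listed above, but a minimal sketch could look like this (the TriggerBinding reference and the Route TLS settings are assumptions, not the exact files from the repository):

```yaml
apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: ${{ values.appName }}
spec:
  serviceAccountName: pipeline
  triggers:
    - name: github-push
      bindings:
        # assumed TriggerBinding mapping GitHub push payload fields
        # to the git-revision parameter of the TriggerTemplate
        - ref: ${{ values.appName }}
      template:
        ref: ${{ values.appName }}
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: el-${{ values.appName }}
spec:
  to:
    kind: Service
    # OpenShift Pipelines creates the el-<name> Service for the EventListener
    name: el-${{ values.appName }}
  tls:
    termination: edge
```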

Deploy the App on OpenShift

Here’s the template for the app Deployment object:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ${{ values.appName }}
  labels:
    app: ${{ values.appName }}
    app.kubernetes.io/name: spring-boot
    backstage.io/kubernetes-id: ${{ values.appName }}
spec:
  selector:
    matchLabels:
      app: ${{ values.appName }}
  template:
    metadata:
      labels:
        app: ${{ values.appName }}
        backstage.io/kubernetes-id: ${{ values.appName }}
    spec:
      containers:
        - name: ${{ values.appName }}
          image: ${{ values.registryUrl }}/${{ values.namespace }}/${{ values.appName }}:1.0
          ports:
            - containerPort: 8080
              name: http
          livenessProbe:
            httpGet:
              port: 8080
              path: /actuator/health/liveness
              scheme: HTTP
            timeoutSeconds: 1
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            httpGet:
              port: 8080
              path: /actuator/health/readiness
              scheme: HTTP
            timeoutSeconds: 1
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          resources:
            limits:
              memory: 1024Mi
YAML
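
A Deployment alone does not make the app reachable. The manifests in the configuration repository would typically also include a Service and an OpenShift Route. Here is a rough sketch following the same labeling convention (an assumption, not the exact manifests from the repository):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ${{ values.appName }}
  labels:
    backstage.io/kubernetes-id: ${{ values.appName }}
spec:
  selector:
    app: ${{ values.appName }}
  ports:
    - port: 8080
      targetPort: 8080
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: ${{ values.appName }}
spec:
  to:
    kind: Service
    name: ${{ values.appName }}
  port:
    targetPort: 8080
```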

Here’s the current version of the catalog-info.yaml file.

apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: ${{ values.appName }}
  title: ${{ values.appName }}
  annotations:
    janus-idp.io/tekton: ${{ values.appName }}
    tektonci/build-namespace: ${{ values.namespace }}
    github.com/project-slug: ${{ values.orgName }}/${{ values.appName }}
    sonarqube.org/project-key: ${{ values.orgName }}_${{ values.appName }}
    backstage.io/kubernetes-id: ${{ values.appName }}
    argocd/app-name: ${{ values.appName }}
  tags:
    - spring-boot
    - java
    - maven
    - tekton
    - argocd
    - renovate
    - sonarqube
spec:
  type: service
  owner: piomin
  lifecycle: experimental
YAML

Now, let’s create a new component in Red Hat Developer Hub using our template. In the first step, you should choose the “Create a Spring Boot App for OpenShift” template as shown below.

Then, provide all the parameters in the form. You will probably have to override the default organization name with your GitHub account name, as well as the address of your OpenShift cluster. Once you have made all the required changes, click the “Review” button, and then the “Create” button on the next screen. After that, Red Hat Developer Hub creates everything we need.

After confirmation, Developer Hub redirects to a page with the progress information. There are 8 action steps defined, and all of them should finish successfully. Then, we can click the “Open the Catalog Info Component” link.

developer-hub-openshift-create

Viewing Component in Red Hat Developer Hub UI

Our app’s overview tab contains general information about the component registered in Backstage, the status of the SonarQube scan, and the status of the Argo CD synchronization process. We can switch between several other available tabs.

developer-hub-openshift-overview

In the “CI” tab, we can see the history of the OpenShift Pipelines runs. We can switch to the logs of each pipeline step by clicking on it.

developer-hub-openshift-ci

If you are familiar with OpenShift, you will recognize this view as the topology view from the OpenShift Console developer perspective. It visualizes all the deployments in the particular namespace.

developer-hub-openshift-topology

In the “CD” tab, we can see the history of Argo CD synchronization operations.

developer-hub-openshift-cd

Final Thoughts

Red Hat Developer Hub simplifies the installation and configuration of Backstage in a Kubernetes-native environment. It introduces the idea of dynamic plugins, which can be enabled and customized purely through configuration files. You can compare this approach with my previous article about Backstage on Kubernetes.

The post IDP on OpenShift with Red Hat Developer Hub appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2024/07/04/idp-on-openshift-with-red-hat-developer-hub/feed/ 2 15316
Backstage on Kubernetes https://piotrminkowski.com/2024/06/28/backstage-on-kubernetes/ https://piotrminkowski.com/2024/06/28/backstage-on-kubernetes/#respond Fri, 28 Jun 2024 15:06:35 +0000 https://piotrminkowski.com/?p=15291 In this article, you will learn how to integrate Backstage with Kubernetes. We will run Backstage in two different ways. Firstly, it will run outside the cluster and connect with Kubernetes via the API. In the second scenario, we will deploy it directly on the cluster using the official Helm chart. Our instance of Backstage […]

The post Backstage on Kubernetes appeared first on Piotr's TechBlog.

]]>
In this article, you will learn how to integrate Backstage with Kubernetes. We will run Backstage in two different ways. Firstly, it will run outside the cluster and connect with Kubernetes via the API. In the second scenario, we will deploy it directly on the cluster using the official Helm chart. Our instance of Backstage will connect Argo CD and Prometheus deployed on Kubernetes, to visualize the status of Argo CD synchronization and basic metrics related to the app.

This exercise continues the work described in my previous article about Backstage. So, before you start, you should read that article to understand the whole concept. In many places, I will refer to things described and done there: how to configure and run Backstage, and how to build a basic template for a sample Spring Boot app. You should be familiar with all those basics to fully understand what happens in the current exercise.

Source Code

If you would like to try it by yourself, you may always take a look at my source code. Our sample GitHub repository contains software templates built with the Backstage templating engine, the Scaffolder. In this article, we will analyze a template dedicated to Kubernetes, available in the templates/spring-boot-basic-on-kubernetes directory. After cloning this repository, you should just follow my instructions.

Here’s the structure of our repository. Besides the template, it also contains the Argo CD skeleton with YAML deployment manifests to apply on Kubernetes.

.
├── skeletons
│   └── argocd
│       └── manifests
│           ├── deployment.yaml
│           └── service.yaml
├── templates
│   └── spring-boot-basic-on-kubernetes
│       ├── skeleton
│       │   ├── README.md
│       │   ├── catalog-info.yaml
│       │   ├── k8s
│       │   │   ├── deployment.yaml
│       │   │   └── kind-cluster-test.yaml
│       │   ├── pom.xml
│       │   ├── renovate.json
│       │   ├── skaffold.yaml
│       │   └── src
│       │       ├── main
│       │       │   ├── java
│       │       │   │   └── ${{values.javaPackage}}
│       │       │   │       ├── Application.java
│       │       │   │       ├── controller
│       │       │   │       │   └── ${{values.domainName}}Controller.java
│       │       │   │       └── domain
│       │       │   │           └── ${{values.domainName}}.java
│       │       │   └── resources
│       │       │       └── application.yml
│       │       └── test
│       │           ├── java
│       │           │   └── ${{values.javaPackage}}
│       │           │       └── ${{values.domainName}}ControllerTests.java
│       │           └── resources
│       │               └── k6
│       │                   └── load-tests-add.js
│       └── template.yaml
└── templates.yaml
ShellSession

There is also another Git repository related to this article. It contains the modified source code of Backstage with several plugins installed and configured. The process of extending Backstage with plugins is described in detail in this article, so you can start from scratch and apply my instructions step by step. But you can also clone the final version of the code committed in that repo and run it on your laptop.

Run and Prepare Kubernetes

Before we start with Backstage, we need to run and configure a Kubernetes cluster. It can be, for example, Minikube. Once you have a running cluster, you can obtain its control plane URL by executing the following command. As you can see, my Minikube instance is available at https://127.0.0.1:55782, so I will have to set that address in the Backstage configuration later.

$ kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:55782
...
$ export K8S_URL=https://127.0.0.1:55782
ShellSession

We need to install Prometheus and Argo CD on our Kubernetes. In order to install Prometheus, we will use the kube-prometheus-stack Helm chart. Firstly, we should add the Prometheus chart repository with the following command:

$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
ShellSession

Then, we can run the following command to install Prometheus in the monitoring namespace:

$ helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  --version 60.3.0 \
  -n monitoring --create-namespace
ShellSession

The same as with Prometheus, for Argo CD we need to add the chart repository first:

$ helm repo add argo https://argoproj.github.io/argo-helm
ShellSession

For Argo CD, we need to provide additional configuration inside the values.yaml file. We have to create a user for Backstage with privileges to call the HTTP API using apiKey authentication. This is required to automatically create an Argo CD Application from the Scaffolder template.

configs:
  cm:
    accounts.backstage: apiKey,login
  rbac:
    policy.csv: |
      p, backstage, applications, *, */*, allow
YAML

Let’s install Argo CD in the argocd namespace using the settings from the values.yaml file:

$ helm install argo-cd argo/argo-cd \
  --version 7.2.0 \
  -f values.yaml \
  -n argocd --create-namespace
ShellSession

That’s not all. We still need to generate the apiKey for the backstage user. First, let’s enable port forwarding for both the Argo CD and Prometheus services to access their APIs over localhost.

$ kubectl port-forward svc/argo-cd-argocd-server 8443:443 -n argocd
$ kubectl port-forward svc/kube-prometheus-stack-prometheus 9090 -n monitoring
ShellSession

In order to generate the apiKey for the backstage user, we first need to sign in with the argocd CLI as the admin user (e.g. argocd login localhost:8443 --insecure). Then, we run the following command for the backstage account and export the generated token as the ARGOCD_TOKEN environment variable:

$ argocd account generate-token --account backstage
$ export ARGOCD_TOKEN='argocd.token=<generated_token>'
ShellSession

Finally, let’s obtain a long-lived API token for Kubernetes by creating the following Secret:

apiVersion: v1
kind: Secret
metadata:
  name: default-token
  namespace: default
  annotations:
    kubernetes.io/service-account.name: default
type: kubernetes.io/service-account-token
YAML

Then, we can extract the token and export it as the K8S_TOKEN environment variable with the following command:

$ export K8S_TOKEN=$(kubectl get secret default-token -o go-template='{{.data.token | base64decode}}')
ShellSession

Just for testing purposes, we bind the cluster-admin role to the default ServiceAccount. Do not do this in a production cluster.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: default-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
YAML

Modify App Source Code Skeleton for Kubernetes

First of all, we will modify several things in the application source code skeleton. In order to build the container image, we include the jib-maven-plugin in the Maven pom.xml. This plugin will be activated under the jib Maven profile.

<profiles>
  <profile>
    <id>jib</id>
    <activation>
      <activeByDefault>false</activeByDefault>
    </activation>
    <build>
      <plugins>
        <plugin>
          <groupId>com.google.cloud.tools</groupId>
          <artifactId>jib-maven-plugin</artifactId>
          <version>3.4.3</version>
          <configuration>
            <from>
              <image>eclipse-temurin:21-jdk-ubi9-minimal</image>
            </from>
          </configuration>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>
XML

Our source code repository will also contain the Skaffold configuration file. With Skaffold, we can build an image and deploy the app to Kubernetes in a single step. The image address depends on the orgName and appName parameters in the Scaffolder template. During the image build, we skip the tests and activate the Maven jib profile. Note that Skaffold (the build-and-deploy tool) and the Backstage Scaffolder (the templating engine) are two different things.

apiVersion: skaffold/v4beta5
kind: Config
metadata:
  name: ${{ values.appName }}
build:
  artifacts:
    - image: ${{ values.orgName }}/${{ values.appName }}
      jib:
        args:
          - -Pjib
          - -DskipTests
manifests:
  rawYaml:
    - k8s/deployment.yaml
deploy:
  kubectl: {}
YAML

In order to deploy the app on Kubernetes, Skaffold looks for the k8s/deployment.yaml manifest. We will use this manifest only for development and automated testing. In “production”, we keep the YAML manifests in a separate Git repository and apply them through Argo CD. Once we push a change to the source repository, CircleCI will deploy the app to a temporary Kind cluster. Therefore, our Service is exposed as a NodePort on port 30000.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ${{ values.appName }}
spec:
  selector:
    matchLabels:
      app: ${{ values.appName }}
  template:
    metadata:
      annotations:
        prometheus.io/path: /actuator/prometheus
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
      labels:
        app: ${{ values.appName }}
    spec:
      containers:
        - name: ${{ values.appName }}
          image: ${{ values.orgName }}/${{ values.appName }}
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              port: 8080
              path: /actuator/health/readiness
              scheme: HTTP
            timeoutSeconds: 1
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          resources:
            limits:
              memory: 1024Mi
---
apiVersion: v1
kind: Service
metadata:
  name: ${{ values.appName }}
spec:
  type: NodePort
  selector:
    app: ${{ values.appName }}
  ports:
    - port: 8080
      nodePort: 30000
YAML

Let’s switch to the CircleCI configuration file. It also contains several changes related to Kubernetes. We need to include the image-push job, responsible for building and pushing the app image to the target registry using Jib. We also include the deploy-k8s job to perform a test deployment to a Kind cluster. In this job, we have to install the Skaffold and Kind tools on the CircleCI executor machine. Once the Kind cluster is up and ready, we deploy the app there by executing the skaffold run command.

version: 2.1

jobs:
  analyze:
    docker:
      - image: 'cimg/openjdk:21.0.2'
    steps:
      - checkout
      - run:
          name: Analyze on SonarCloud
          command: mvn verify sonar:sonar -DskipTests
  test:
    executor: machine_executor_amd64
    steps:
      - checkout
      - run:
          name: Install OpenJDK 21
          command: |
            java -version
            sudo apt-get update && sudo apt-get install -y openjdk-21-jdk
            sudo update-alternatives --set java /usr/lib/jvm/java-21-openjdk-amd64/bin/java
            sudo update-alternatives --set javac /usr/lib/jvm/java-21-openjdk-amd64/bin/javac
            java -version
            export JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64
      - run:
          name: Maven Tests
          command: mvn test
  deploy-k8s:
    executor: machine_executor_amd64
    steps:
      - checkout
      - run:
          name: Install Kubectl
          command: |
            curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
            chmod +x kubectl
            sudo mv ./kubectl /usr/local/bin/kubectl
      - run:
          name: Install Skaffold
          command: |
            curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64
            chmod +x skaffold
            sudo mv skaffold /usr/local/bin
      - run:
          name: Install Kind
          command: |
            [ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64
            chmod +x ./kind
            sudo mv ./kind /usr/local/bin/kind
      - run:
          name: Install OpenJDK 21
          command: |
            java -version
            sudo apt-get update && sudo apt-get install -y openjdk-21-jdk
            sudo update-alternatives --set java /usr/lib/jvm/java-21-openjdk-amd64/bin/java
            sudo update-alternatives --set javac /usr/lib/jvm/java-21-openjdk-amd64/bin/javac
            java -version
            export JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64
      - run:
          name: Create Kind Cluster
          command: |
            kind create cluster --name c1 --config k8s/kind-cluster-test.yaml
      - run:
          name: Deploy to K8s
          command: |
            export JAVA_HOME=/usr/lib/jvm/java-21-openjdk-amd64
            skaffold run
      - run:
          name: Delete Kind Cluster
          command: |
            kind delete cluster --name c1
  image-push:
    docker:
      - image: 'cimg/openjdk:21.0.2'
    steps:
      - checkout
      - run:
          name: Build and push image to DockerHub
          command: mvn compile jib:build -Pjib -Djib.to.image=${{ values.orgName }}/${{ values.appName }}:latest -Djib.to.auth.username=${DOCKER_LOGIN} -Djib.to.auth.password=${DOCKER_PASSWORD} -DskipTests

executors:
  machine_executor_amd64:
    machine:
      image: ubuntu-2204:2023.10.1
    environment:
      architecture: "amd64"
      platform: "linux/amd64"

workflows:
  maven_test:
    jobs:
      - test
      - analyze:
          context: SonarCloud
      - deploy-k8s:
          requires:
            - test
      - image-push:
          context: Docker
          requires:
            - deploy-k8s
YAML

Install Backstage Plugins for Kubernetes

In the previous article about Backstage, we learned how to install plugins for GitHub, CircleCI, and SonarQube integration. We will still use those plugins, but we will also extend our Backstage instance with some additional plugins dedicated mostly to the Kubernetes-native environment. We will install the following plugins: Kubernetes (backend + frontend), HTTP Request Action (backend), Argo CD (frontend), and Prometheus (frontend). Let’s begin with the Kubernetes plugin.

Install the Kubernetes Plugin

In the first step, we install the Kubernetes frontend plugin. It allows us to view the app pods running on Kubernetes in the Backstage UI. In order to install it, we need to execute the following yarn command:

$ yarn --cwd packages/app add @backstage/plugin-kubernetes
ShellSession

Then, we have to make some changes in the packages/app/src/components/catalog/EntityPage.tsx file. We should import the EntityKubernetesContent component, and then include it in the serviceEntityPage object as a new route on the frontend.

import { EntityKubernetesContent } from '@backstage/plugin-kubernetes';

const serviceEntityPage = (
  <EntityLayout>
    ...
    <EntityLayout.Route path="/kubernetes" title="Kubernetes">
      <EntityKubernetesContent refreshIntervalMs={30000} />
    </EntityLayout.Route>
    ...
  </EntityLayout>
);
TypeScript

We also need to install the Kubernetes backend plugin to make the frontend part work. Here’s the required yarn command:

$ yarn --cwd packages/backend add @backstage/plugin-kubernetes-backend
ShellSession

Then, we should register the plugin-kubernetes-backend module in the packages/backend/src/index.ts file.

import { createBackend } from '@backstage/backend-defaults';

const backend = createBackend();

backend.add(import('@backstage/plugin-app-backend/alpha'));
backend.add(import('@backstage/plugin-proxy-backend/alpha'));
backend.add(import('@backstage/plugin-scaffolder-backend/alpha'));
backend.add(import('@backstage/plugin-techdocs-backend/alpha'));
backend.add(import('@backstage/plugin-auth-backend'));
backend.add(import('@backstage/plugin-auth-backend-module-guest-provider'));
backend.add(import('@backstage/plugin-catalog-backend/alpha'));
backend.add(
  import('@backstage/plugin-catalog-backend-module-scaffolder-entity-model'),
);
backend.add(import('@backstage/plugin-permission-backend/alpha'));
backend.add(import('@backstage/plugin-permission-backend-module-allow-all-policy'));
backend.add(import('@backstage/plugin-search-backend/alpha'));
backend.add(import('@backstage/plugin-search-backend-module-catalog/alpha'));
backend.add(import('@backstage/plugin-search-backend-module-techdocs/alpha'));

backend.add(import('@backstage/plugin-scaffolder-backend-module-github'));
backend.add(import('@backstage-community/plugin-sonarqube-backend'));
backend.add(import('@backstage/plugin-kubernetes-backend/alpha'));

backend.start();
TypeScript

Install the Argo CD Plugin

We also integrate our instance of Backstage with Argo CD running on Kubernetes. Firstly, we should execute the following yarn command:

$ yarn --cwd packages/app add @roadiehq/backstage-plugin-argo-cd
ShellSession

Then, we need to update the EntityPage.tsx file. We will add the EntityArgoCDOverviewCard component inside the overviewContent object.

import {
  EntityArgoCDOverviewCard,
  isArgocdAvailable
} from '@roadiehq/backstage-plugin-argo-cd';

const overviewContent = (
  <Grid container spacing={3} alignItems="stretch">
  ...
    <EntitySwitch>
      <EntitySwitch.Case if={e => Boolean(isArgocdAvailable(e))}>
        <Grid item sm={4}>
          <EntityArgoCDOverviewCard />
        </Grid>
      </EntitySwitch.Case>
    </EntitySwitch>
  ...
  </Grid>
);
TSX

Install Prometheus Plugin

The steps for the Prometheus Plugin are pretty similar to those for the Argo CD Plugin. Firstly, we should execute the following yarn command:

$ yarn --cwd packages/app add @roadiehq/backstage-plugin-prometheus
ShellSession

Then, we need to update the EntityPage.tsx file. We will add the EntityPrometheusContent component inside the serviceEntityPage object.

import {
  EntityPrometheusContent,
} from '@roadiehq/backstage-plugin-prometheus';

const serviceEntityPage = (
  <EntityLayout>
    ...
    <EntityLayout.Route path="/kubernetes" title="Kubernetes">
      <EntityKubernetesContent refreshIntervalMs={30000} />
    </EntityLayout.Route>
    <EntityLayout.Route path="/prometheus" title="Prometheus">
      <EntityPrometheusContent />
    </EntityLayout.Route>
    ...
  </EntityLayout>
);
TSX

Install HTTP Request Action Plugin

This plugin is not related to Kubernetes. It allows us to integrate with third-party services through their HTTP APIs. As you probably remember, we have already integrated Sonarcloud and CircleCI with the Backstage UI. However, we didn’t create any projects there; we could only view the history of builds or scans for previously created projects in Sonarcloud or CircleCI. It’s time to change that in our template! Thanks to the HTTP Request Action plugin, we will create the Argo CD Application through the REST API. As always, we need to execute the yarn add command to install the backend plugin:

$ yarn --cwd packages/backend add @roadiehq/scaffolder-backend-module-http-request
ShellSession

Then, we will register it in the index.ts file:

import { createBackend } from '@backstage/backend-defaults';

const backend = createBackend();

backend.add(import('@backstage/plugin-app-backend/alpha'));
backend.add(import('@backstage/plugin-proxy-backend/alpha'));
backend.add(import('@backstage/plugin-scaffolder-backend/alpha'));
backend.add(import('@backstage/plugin-techdocs-backend/alpha'));
backend.add(import('@backstage/plugin-auth-backend'));
backend.add(import('@backstage/plugin-auth-backend-module-guest-provider'));
backend.add(import('@backstage/plugin-catalog-backend/alpha'));
backend.add(
  import('@backstage/plugin-catalog-backend-module-scaffolder-entity-model'),
);
backend.add(import('@backstage/plugin-permission-backend/alpha'));
backend.add(
  import('@backstage/plugin-permission-backend-module-allow-all-policy'),
);
backend.add(import('@backstage/plugin-search-backend/alpha'));
backend.add(import('@backstage/plugin-search-backend-module-catalog/alpha'));
backend.add(import('@backstage/plugin-search-backend-module-techdocs/alpha'));

backend.add(import('@backstage/plugin-scaffolder-backend-module-github'));
backend.add(import('@backstage-community/plugin-sonarqube-backend'));
backend.add(import('@backstage/plugin-kubernetes-backend/alpha'));
backend.add(import('@roadiehq/scaffolder-backend-module-http-request/new-backend'));

backend.start();
TypeScript

After that, we can extend the Scaffolder template used in the previous article with some additional steps.

Prepare Backstage Template for Kubernetes

Once we have all the pieces in place, we can modify the previous template for the standard Spring Boot app to adapt it to the Kubernetes requirements.

Create Scaffolder Template

First of all, we add a single input parameter that indicates the target namespace in Kubernetes for running our app (1). Then, we include some additional action steps. In the first of them, we generate the repository with the YAML configuration manifests for Argo CD (2). Then, we publish that repository on GitHub under the ${{parameters.appName}}-config name (3).

After that, we use the HTTP Request Action plugin to create a new project on Sonarcloud named after the ${{parameters.appName}} (4) and to automatically follow the new repository in CircleCI (5). Once CircleCI detects the repository created in the publish step, it automatically starts a build. Finally, we integrate with Argo CD through the API to create a new Application responsible for applying the app Deployment to Kubernetes (6). This Argo CD Application will access the previously published config repository with the -config suffix in the name (7) and apply the manifests inside the manifests directory.

apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
  name: spring-boot-basic-on-kubernetes-template
  title: Create a Spring Boot app for Kubernetes
  description: Create a Spring Boot app for Kubernetes
  tags:
    - spring-boot
    - java
    - maven
    - circleci
    - renovate
    - sonarqube
    - kubernetes
    - argocd
spec:
  owner: piomin
  system: microservices
  type: service

  parameters:
    - title: Provide information about the new component
      required:
        - orgName
        - appName
        - domainName
        - repoBranchName
        - groupId
        - javaPackage
        - apiPath
        - namespace
        - description
      properties:
        orgName:
          title: Organization name
          type: string
          default: piomin
        appName:
          title: App name
          type: string
          default: sample-spring-boot-app-k8s
        domainName:
          title: Name of the domain object
          type: string
          default: Person
        repoBranchName:
          title: Name of the branch in the Git repository
          type: string
          default: master
        groupId:
          title: Maven Group ID
          type: string
          default: pl.piomin.services
        javaPackage:
          title: Java package directory
          type: string
          default: pl/piomin/services
        apiPath:
          title: REST API path
          type: string
          default: /api/v1
        # (1)
        namespace:
          title: The target namespace on Kubernetes
          type: string
          default: demo
        description:
          title: Description
          type: string
          default: Spring Boot App Generated by Backstage
  steps:
    - id: sourceCodeTemplate
      name: Generating the Source Code Component
      action: fetch:template
      input:
        url: ./skeleton
        values:
          orgName: ${{ parameters.orgName }}
          appName: ${{ parameters.appName }}
          domainName: ${{ parameters.domainName }}
          groupId: ${{ parameters.groupId }}
          javaPackage: ${{ parameters.javaPackage }}
          apiPath: ${{ parameters.apiPath }}

    - id: publish
      name: Publishing to the Source Code Repository
      action: publish:github
      input:
        allowedHosts: ['github.com']
        description: ${{ parameters.description }}
        repoUrl: github.com?owner=${{ parameters.orgName }}&repo=${{ parameters.appName }}
        defaultBranch: ${{ parameters.repoBranchName }}
        repoVisibility: public

    - id: register
      name: Registering the Catalog Info Component
      action: catalog:register
      input:
        repoContentsUrl: ${{ steps.publish.output.repoContentsUrl }}
        catalogInfoPath: /catalog-info.yaml

    # (2)
    - id: configCodeTemplate
      name: Generating the Config Code Component
      action: fetch:template
      input:
        url: ../../skeletons/argocd
        values:
          orgName: ${{ parameters.orgName }}
          appName: ${{ parameters.appName }}
        targetPath: ./gitops

    # (3)
    - id: publish
      name: Publishing to the Config Code Repository
      action: publish:github
      input:
        allowedHosts: ['github.com']
        description: ${{ parameters.description }}
        repoUrl: github.com?owner=${{ parameters.orgName }}&repo=${{ parameters.appName }}-config
        defaultBranch: ${{ parameters.repoBranchName }}
        sourcePath: ./gitops
        repoVisibility: public

    # (4)
    - id: sonarqube
      name: Follow new project on Sonarcloud
      action: http:backstage:request
      input:
        method: 'POST'
        path: '/proxy/sonarqube/projects/create?name=${{ parameters.appName }}&organization=${{ parameters.orgName }}&project=${{ parameters.orgName }}_${{ parameters.appName }}'
        headers:
          content-type: 'application/json'

    # (5)
    - id: circleci
      name: Follow new project on CircleCI
      action: http:backstage:request
      input:
        method: 'POST'
        path: '/proxy/circleci/api/project/gh/${{ parameters.orgName }}/${{ parameters.appName }}/follow'
        headers:
          content-type: 'application/json'

    # (6)
    - id: argocd
      name: Create New Application in Argo CD
      action: http:backstage:request
      input:
        method: 'POST'
        path: '/proxy/argocd/api/applications'
        headers:
          content-type: 'application/json'
        body:
          metadata:
            name: ${{ parameters.appName }}
            namespace: argocd
          spec:
            project: default
            source:
              # (7)
              repoURL: https://github.com/${{ parameters.orgName }}/${{ parameters.appName }}-config.git
              targetRevision: master
              path: manifests
            destination:
              server: https://kubernetes.default.svc
              namespace: ${{ parameters.namespace }}
            syncPolicy:
              automated:
                prune: true
                selfHeal: true
              syncOptions:
                - CreateNamespace=true

  output:
    links:
      - title: Open the Source Code Repository
        url: ${{ steps.publish.output.remoteUrl }}
      - title: Open the Catalog Info Component
        icon: catalog
        entityRef: ${{ steps.register.output.entityRef }}
YAML

Create Catalog Component

Our catalog-info.yaml file should contain several additional annotations related to the plugins installed in the previous section. The argocd/app-name annotation indicates the name of the target Argo CD Application responsible for the deployment on Kubernetes. The backstage.io/kubernetes-id annotation contains the value of the label used to find the app pods on Kubernetes displayed in the Backstage UI. Finally, the prometheus.io/rule annotation contains a comma-separated list of Prometheus queries, each followed by a grouping label after the | character. We will create graphs displaying the app pod’s CPU and memory usage.

apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: ${{ values.appName }}
  title: ${{ values.appName }}
  annotations:
    circleci.com/project-slug: github/${{ values.orgName }}/${{ values.appName }}
    github.com/project-slug: ${{ values.orgName }}/${{ values.appName }}
    sonarqube.org/project-key: ${{ values.orgName }}_${{ values.appName }}
    backstage.io/kubernetes-id: ${{ values.appName }}
    argocd/app-name: ${{ values.appName }}
    prometheus.io/rule: container_memory_usage_bytes{pod=~"${{ values.appName }}-.*"}|pod,rate(container_cpu_usage_seconds_total{pod=~"${{ values.appName }}-.*"}[5m])|pod
  tags:
    - spring-boot
    - java
    - maven
    - circleci
    - renovate
    - sonarqube
spec:
  type: service
  owner: piotr.minkowski@gmail.com
  lifecycle: experimental
YAML
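The prometheus.io/rule value above packs several graphs into one annotation. Here is a rough sketch of how such a "query|label" list splits into pairs. This is an assumption about the Roadie plugin’s annotation convention, not its actual code, and the naive comma split would break on queries that themselves contain commas.

```python
# Naive illustration (an assumption, not the plugin's real parser) of how the
# prometheus.io/rule annotation decomposes into (PromQL query, grouping label)
# pairs separated by commas, with the label after the final "|".
rule = (
    'container_memory_usage_bytes{pod=~"my-app-.*"}|pod,'
    'rate(container_cpu_usage_seconds_total{pod=~"my-app-.*"}[5m])|pod'
)

def parse_rule(annotation):
    """Split the annotation into (query, label) pairs.

    Caveat: the simple comma split breaks if a query contains a comma,
    e.g. multiple label matchers inside {}.
    """
    pairs = []
    for entry in annotation.split(","):
        query, _, label = entry.rpartition("|")
        pairs.append((query, label))
    return pairs

for query, label in parse_rule(rule):
    print(label, "->", query)
```

Each pair becomes one graph in the Backstage UI, grouped by the given label.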

Provide Configuration Settings

We need to include several configuration settings inside the app-config.yaml file. It includes the proxy section, which should contain all the APIs required by the HTTP Request Action plugin and the frontend plugins. We should include proxy addresses for CircleCI (1), Sonarcloud (2), Argo CD (3), and Prometheus (4). After that, we include the address of our Scaffolder template (5). We also have to include the kubernetes section with the address of the Minikube cluster and the previously generated service account token (6).

app:
  title: Scaffolded Backstage App
  baseUrl: http://localhost:3000

organization:
  name: piomin

backend:
  baseUrl: http://localhost:7007
  listen:
    port: 7007
  csp:
    connect-src: ["'self'", 'http:', 'https:']
  cors:
    origin: http://localhost:3000
    methods: [GET, HEAD, PATCH, POST, PUT, DELETE]
    credentials: true
  database:
    client: better-sqlite3
    connection: ':memory:'

integrations:
  github:
    - host: github.com
      token: ${GITHUB_TOKEN}

proxy:
  # (1)
  '/circleci/api':
    target: https://circleci.com/api/v1.1
    headers:
      Circle-Token: ${CIRCLECI_TOKEN}
  # (2)
  '/sonarqube':
    target: https://sonarcloud.io/api
    allowedMethods: [ 'GET', 'POST' ]
    auth: "${SONARCLOUD_TOKEN}:"
  # (3)
  '/argocd/api':
    target: https://localhost:8443/api/v1/
    changeOrigin: true
    secure: false
    headers:
      Cookie:
        $env: ARGOCD_TOKEN
  # (4)
  '/prometheus/api':
    target: http://localhost:9090/api/v1/

auth:
  providers:
    guest: {}

catalog:
  import:
    entityFilename: catalog-info.yaml
    pullRequestBranchName: backstage-integration
  rules:
    - allow: [Component, System, API, Resource, Location]
  locations:
    - type: file
      target: ../../examples/entities.yaml

    - type: file
      target: ../../examples/template/template.yaml
      rules:
        - allow: [Template]
    
    # (5)
    - type: url
      target: https://github.com/piomin/backstage-templates/blob/master/templates/spring-boot-basic-on-kubernetes/template.yaml
      rules:
        - allow: [ Template ]

    - type: file
      target: ../../examples/org.yaml
      rules:
        - allow: [User, Group]


sonarqube:
  baseUrl: https://sonarcloud.io
  apiKey: ${SONARCLOUD_TOKEN}

# (6)
kubernetes:
  serviceLocatorMethod:
    type: 'multiTenant'
  clusterLocatorMethods:
    - type: 'config'
      clusters:
        - url: ${K8S_URL}
          name: minikube
          authProvider: 'serviceAccount'
          skipTLSVerify: false
          skipMetricsLookup: true
          serviceAccountToken: ${K8S_TOKEN}
          dashboardApp: standard
          caFile: '/Users/pminkows/.minikube/ca.crt'
YAML
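To see how the template’s http:backstage:request paths map onto these proxy entries, here is a conceptual sketch. It assumes standard Backstage proxy semantics, where the matched prefix under /proxy is replaced by the configured target URL; it is an illustration, not the proxy backend’s actual code.

```python
# Sketch: resolving an http:backstage:request path through the proxy
# entries from app-config.yaml (assumed prefix-replacement semantics).
PROXY = {
    "/circleci/api": "https://circleci.com/api/v1.1",
    "/sonarqube": "https://sonarcloud.io/api",
    "/argocd/api": "https://localhost:8443/api/v1",
}

def resolve(path):
    # The action path starts with "/proxy/"; strip that marker, match the
    # longest configured prefix, and splice in the target URL.
    assert path.startswith("/proxy/")
    rest = path[len("/proxy"):]
    for prefix in sorted(PROXY, key=len, reverse=True):
        if rest.startswith(prefix):
            return PROXY[prefix] + rest[len(prefix):]
    raise ValueError("no proxy entry for " + path)

print(resolve("/proxy/circleci/api/project/gh/piomin/demo-app/follow"))
# -> https://circleci.com/api/v1.1/project/gh/piomin/demo-app/follow
```

This is why the template steps can stay free of credentials: tokens are attached by the proxy configuration, not by the Scaffolder steps themselves.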

Build Backstage Image

Our source code repository with Backstage contains all the required plugins and the configuration. Now, we will build it using the yarn tool. Here’s a list of required commands to perform a build.

$ yarn clean
$ yarn install
$ yarn tsc
$ yarn build:backend 
ShellSession

The repository with Backstage already contains the Dockerfile. You can find it in the packages/backend directory.

FROM node:18-bookworm-slim

RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,target=/var/lib/apt,sharing=locked \
    apt-get update && \
    apt-get install -y --no-install-recommends python3 g++ build-essential && \
    yarn config set python /usr/bin/python3

RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,target=/var/lib/apt,sharing=locked \
    apt-get update && \
    apt-get install -y --no-install-recommends libsqlite3-dev

USER node

WORKDIR /app

ENV NODE_ENV production

COPY --chown=node:node yarn.lock package.json packages/backend/dist/skeleton.tar.gz ./
RUN tar xzf skeleton.tar.gz && rm skeleton.tar.gz

RUN --mount=type=cache,target=/home/node/.cache/yarn,sharing=locked,uid=1000,gid=1000 \
    yarn install --frozen-lockfile --production --network-timeout 300000

COPY --chown=node:node packages/backend/dist/bundle.tar.gz app-config*.yaml ./
RUN tar xzf bundle.tar.gz && rm bundle.tar.gz

CMD ["node", "packages/backend", "--config", "app-config.yaml"]
Dockerfile

In order to build the image using the Dockerfile from the packages/backend directory, we need to run the following command from the project root directory.

$ yarn build-image
ShellSession

If the command finishes without any errors, the build was successful.

The image is available locally as backstage:latest. We can run it on Docker with the following command:

$ docker run -it -p 7007:7007 \
  -e GITHUB_TOKEN=${GITHUB_TOKEN} \
  -e SONARCLOUD_TOKEN=${SONARCLOUD_TOKEN} \
  -e CIRCLECI_TOKEN=${CIRCLECI_TOKEN} \
  -e ARGOCD_TOKEN=${ARGOCD_TOKEN} \
  -e K8S_TOKEN=${K8S_TOKEN} \
  -e K8S_URL=${K8S_URL} \
  -e NODE_ENV=development \
  backstage:latest
ShellSession

However, our main goal today is to run it directly on Kubernetes. You can find our custom Backstage image in my Docker registry: piomin/backstage:latest.

Deploy Backstage on Kubernetes

We will use the official Helm chart for installing Backstage on Kubernetes. In the first step, let’s add the following chart repository:

$ helm repo add backstage https://backstage.github.io/charts
ShellSession

Here’s our values.yaml file for the Helm installation. We need to set all the required tokens as extra environment variables inside the Backstage pod. We also replace the default image used in the installation with our previously built custom image. To simplify the exercise, we disable the external database and use the internal SQLite instance. It is also possible to pass extra configuration files by defining them in a ConfigMap (my-app-config), without rebuilding the Docker image.

backstage:
  extraEnvVars:
    - name: NODE_ENV
      value: development
    - name: GITHUB_TOKEN
      value: ${GITHUB_TOKEN}
    - name: SONARCLOUD_TOKEN
      value: ${SONARCLOUD_TOKEN}
    - name: CIRCLECI_TOKEN
      value: ${CIRCLECI_TOKEN}
    - name: ARGOCD_TOKEN
      value: ${ARGOCD_TOKEN}
  image:
    registry: docker.io
    repository: piomin/backstage
  extraAppConfig:
    - filename: app-config.extra.yaml
      configMapRef: my-app-config
postgresql:
  enabled: false
YAML

We will change the addresses of the Kubernetes cluster, Argo CD, and Prometheus to their internal cluster locations by modifying the app-config.yaml file.

proxy:
  ...
  '/argocd/api':
    target: https://argo-cd-argocd-server.argocd.svc/api/v1/
    changeOrigin: true
    secure: false
    headers:
      Cookie:
        $env: ARGOCD_TOKEN
  '/prometheus/api':
    target: http://kube-prometheus-stack-prometheus.monitoring.svc:9090/api/v1/

catalog:
  locations:
    ...
    - type: url
      target: https://github.com/piomin/backstage-templates/blob/master/templates/spring-boot-basic-on-kubernetes/template.yaml
      rules:
        - allow: [ Template ]
            
kubernetes:
  serviceLocatorMethod:
    type: 'multiTenant'
  clusterLocatorMethods:
    - type: 'config'
      clusters:
        - url: https://kubernetes.default.svc
          name: minikube
          authProvider: 'serviceAccount'
          skipTLSVerify: false
          skipMetricsLookup: true
app-config-kubernetes.yaml

Then, we will create the backstage namespace and an extra ConfigMap that contains a new configuration for the Backstage instance running inside the Kubernetes cluster.

$ kubectl create ns backstage
$ kubectl create configmap my-app-config \
  --from-file=app-config.extra.yaml=app-config-kubernetes.yaml -n backstage
ShellSession

Finally, let’s install our custom instance of Backstage in the backstage namespace by executing the following command:

$ envsubst < values.yaml | helm install backstage backstage/backstage \
  --values - -n backstage
ShellSession
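The envsubst step simply substitutes ${VAR} references in values.yaml with the corresponding environment variables before piping the result to Helm. Conceptually, it works like this Python sketch (the token value here is hypothetical; the real command is GNU gettext’s envsubst):

```python
# What the "envsubst < values.yaml" step does, illustrated in Python:
# ${VAR} placeholders are replaced with environment variable values.
import os

values_yaml = """backstage:
  extraEnvVars:
    - name: GITHUB_TOKEN
      value: ${GITHUB_TOKEN}
"""

os.environ["GITHUB_TOKEN"] = "ghp_example"  # hypothetical token value
rendered = os.path.expandvars(values_yaml)
print(rendered)
```

Note that the rendered tokens end up in plain Kubernetes environment variables; for production setups, a Secret-based approach would be preferable.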

As a result, there is a running Backstage pod on Kubernetes:

$ kubectl get po -n backstage
NAME                         READY   STATUS    RESTARTS   AGE
backstage-7bfbc55647-8cj5d   1/1     Running   0          16m
ShellSession

Let’s enable port forwarding to access the Backstage UI at http://localhost:7007:

$ kubectl port-forward svc/backstage 7007 -n backstage
ShellSession

This time, we increase the privileges of the default ServiceAccount in the backstage namespace used by our instance of Backstage:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: default-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: backstage
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
YAML

Final Test

After accessing the Backstage UI, we can create a new Spring Boot app from the template. Choose the “Create a Spring Boot app for Kubernetes” template as shown below:

backstage-kubernetes-create

If you would like to try it by yourself, you need to change the organization name to your GitHub account name. Then click “Review” and “Create” on the next page.

There are two GitHub repositories created. The first one contains the sample app source code.

backstage-kubernetes-repo

The second one contains YAML manifests with Deployment for Argo CD.

The Argo CD Application is automatically created. We can verify the synchronization status in the Backstage UI.

backstage-kubernetes-argocd

Our application is running in the demo namespace. We can display a list of pods in the “KUBERNETES” tab.

backstage-kubernetes-pod

We can also verify the detailed status of each pod.

backstage-kubernetes-pod-status

Or take a look at the logs.

Final Thoughts

In this article, we learned how to install and integrate Backstage with Kubernetes-native services such as Argo CD and Prometheus. We built a customized Backstage image and then deployed it on Kubernetes using the Helm chart.

The post Backstage on Kubernetes appeared first on Piotr's TechBlog.

Getting Started with Backstage https://piotrminkowski.com/2024/06/13/getting-started-with-backstage/ Thu, 13 Jun 2024 13:47:07 +0000

This article will teach you how to use Backstage in your app development and create software templates to generate a typical Spring Boot app. Backstage is an open-source framework for building developer portals. It allows us to automate the creation of the infrastructure, CI/CD, and operational knowledge needed to run an application or product. It offers a centralized software catalog and unifies all infrastructure tooling, services, and documentation within a single and intuitive UI. With Backstage, we can create any new software component, such as a new microservice, just with a few clicks. Developers can choose between several standard templates. Platform engineers will create such templates to meet the organization’s best practices.

From the technical point of view, Backstage is a web application designed to run on Node.js. It is mostly written in TypeScript using the React framework. It has an extensible nature: each time we need to integrate Backstage with some third-party software, we have to install a dedicated plugin. Plugins are essentially individually packaged React components. Today, you will learn how to install and configure plugins that integrate with GitHub, CircleCI, and Sonarqube.

It is the first article about Backstage on my blog. You can expect more in the future. However, before proceeding with this article, it is worth reading the following post. It explains how I create my repositories on GitHub and what tools I’m using to check the quality of the code and be up-to-date.

Source Code

If you would like to try it by yourself, you may always take a look at my source code. This time, there are two sample Git repositories. The first of them contains software templates written for the Backstage Scaffolder. The second repository contains the source code of a simple Spring Boot app generated from the Backstage template. Once you clone both of these repos, you should just follow my further instructions.

Writing Software Templates in Backstage

We can create our own software templates using YAML notation or find existing examples on the web. Such software templates are very similar to definitions of Kubernetes objects. They have the apiVersion and kind (Template) fields, as well as the metadata and spec sections. Each template must define a list of input parameters and then a list of actions that the scaffolding service executes.

The Structure of the Repository with Software Templates

Let’s take a look at the structure of the repository containing our Spring Boot template. As you see, there is the template.yaml file with the software template YAML manifest and the skeleton directory with our app source code. Besides the Java files, there are the Maven pom.xml and the Renovate and CircleCI configuration manifests. The Scaffolder template input parameters determine the names of the Java classes and packages. In the Scaffolder template, we set the default base package name and a domain object name. The catalog-info.yaml file contains the definition of the object required to register the app in the software catalog. As you can see, we also parametrize the file names with the domain object name.

.
├── skeleton
│   ├── .circleci
│   │   └── config.yml
│   ├── README.md
│   ├── catalog-info.yaml
│   ├── pom.xml
│   ├── renovate.json
│   └── src
│       ├── main
│       │   ├── java
│       │   │   └── ${{values.javaPackage}}
│       │   │       ├── Application.java
│       │   │       ├── controller
│       │   │       │   └── ${{values.domainName}}Controller.java
│       │   │       └── domain
│       │   │           └── ${{values.domainName}}.java
│       │   └── resources
│       │       └── application.yml
│       └── test
│           └── java
│               └── ${{values.javaPackage}}
│                   └── ${{values.domainName}}ControllerTests.java
└── template.yaml

13 directories, 11 files
ShellSession
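The ${{values.*}} placeholders in the tree above are rendered by the Scaffolder’s fetch:template action in both file paths and file contents. Backstage uses Nunjucks templating under the hood; the regex-based version below is only a rough sketch of that substitution, not the real implementation.

```python
# Rough sketch of how fetch:template renders ${{ values.* }} placeholders
# in a skeleton file path (Backstage actually uses Nunjucks for this).
import re

values = {"javaPackage": "pl/piomin/services", "domainName": "Person"}

def render(template):
    # Replace each ${{ values.<key> }} occurrence with its value.
    return re.sub(
        r"\$\{\{\s*values\.(\w+)\s*\}\}",
        lambda m: values[m.group(1)],
        template,
    )

path = "src/main/java/${{values.javaPackage}}/domain/${{values.domainName}}.java"
print(render(path))  # src/main/java/pl/piomin/services/domain/Person.java
```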

Building Templates with Scaffolder

Now, let’s take a look at the most important element in our repository – the template manifest. It defines several input parameters with default values. We can set the name of our app (appName), choose a default branch name inside the Git repository (repoBranchName), the Maven group ID (groupId), the name of the default Java package (javaPackage), or the base REST API controller path (apiPath). All these parameters are then used during code generation. If you need more customization in the templates, you should add other parameters to the manifest.

The steps section in the manifest defines the actions required to create a new app in Backstage. We are doing three things here. In the first step, we generate the Spring Boot app source code by filling the templates inside the skeleton directory with the parameters defined in the Scaffolder manifest. Then, we publish the generated code in a newly created GitHub repository. The name of the repository is the same as the app name (the appName parameter). The owner of the GitHub repository is determined by the value of the orgName parameter. Finally, we register the new component in the Backstage catalog by calling the catalog:register action.

apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
  name: spring-boot-basic-template
  title: Create a Spring Boot app
  description: Create a Spring Boot app
  tags:
    - spring-boot
    - java
    - maven
    - circleci
    - renovate
    - sonarqube
spec:
  owner: piomin
  system: piomin
  type: service

  parameters:
    - title: Provide information about the new component
      required:
        - orgName
        - appName
        - domainName
        - repoBranchName
        - groupId
        - javaPackage
        - apiPath
        - description
      properties:
        orgName:
          title: Organization name
          type: string
          default: piomin
        appName:
          title: App name
          type: string
          default: sample-spring-boot-app
        domainName:
          title: Name of the domain object
          type: string
          default: Person
        repoBranchName:
          title: Name of the branch in the Git repository
          type: string
          default: master
        groupId:
          title: Maven Group ID
          type: string
          default: pl.piomin.services
        javaPackage:
          title: Java package directory
          type: string
          default: pl/piomin/services
        apiPath:
          title: REST API path
          type: string
          default: /api/v1
        description:
          title: Description
          type: string
          default: Sample Spring Boot App
          
  steps:
    - id: sourceCodeTemplate
      name: Generating the Source Code Component
      action: fetch:template
      input:
        url: ./skeleton
        values:
          orgName: ${{ parameters.orgName }}
          appName: ${{ parameters.appName }}
          domainName: ${{ parameters.domainName }}
          groupId: ${{ parameters.groupId }}
          javaPackage: ${{ parameters.javaPackage }}
          apiPath: ${{ parameters.apiPath }}

    - id: publish
      name: Publishing to the Source Code Repository
      action: publish:github
      input:
        allowedHosts: ['github.com']
        description: ${{ parameters.description }}
        repoUrl: github.com?owner=${{ parameters.orgName }}&repo=${{ parameters.appName }}
        defaultBranch: ${{ parameters.repoBranchName }}
        repoVisibility: public

    - id: register
      name: Registering the Catalog Info Component
      action: catalog:register
      input:
        repoContentsUrl: ${{ steps.publish.output.repoContentsUrl }}
        catalogInfoPath: /catalog-info.yaml

  output:
    links:
      - title: Open the Source Code Repository
        url: ${{ steps.publish.output.remoteUrl }}
      - title: Open the Catalog Info Component
        icon: catalog
        entityRef: ${{ steps.register.output.entityRef }}
YAML

Generating Spring Boot Source Code

Here’s the template of the Maven pom.xml. Our app uses the current latest version of the Spring Boot framework and Java 21 for compilation. The Maven groupId and artifactId are taken from the Scaffolder template parameters. The pom.xml file also contains the properties required to integrate with a specific project on SonarCloud (sonar.projectKey and sonar.organization).

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>${{ values.groupId }}</groupId>
    <artifactId>${{ values.appName }}</artifactId>
    <version>1.0-SNAPSHOT</version>

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>3.3.0</version>
    </parent>

    <properties>
        <sonar.projectKey>${{ values.orgName }}_${{ values.appName }}</sonar.projectKey>
        <sonar.organization>${{ values.orgName }}</sonar.organization>
        <sonar.host.url>https://sonarcloud.io</sonar.host.url>
        <java.version>21</java.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-actuator</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springdoc</groupId>
            <artifactId>springdoc-openapi-starter-webmvc-ui</artifactId>
            <version>2.5.0</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-devtools</artifactId>
            <optional>true</optional>
        </dependency>
        <dependency>
            <groupId>org.instancio</groupId>
            <artifactId>instancio-junit</artifactId>
            <version>4.7.0</version>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
                <executions>
                    <execution>
                        <goals>
                            <goal>build-info</goal>
                        </goals>
                        <configuration>
                            <additionalProperties>
                                <java.target>${java.version}</java.target>
                                <time>${maven.build.timestamp}</time>
                            </additionalProperties>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>pl.project13.maven</groupId>
                <artifactId>git-commit-id-plugin</artifactId>
                <configuration>
                    <failOnNoGitDirectory>false</failOnNoGitDirectory>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.jacoco</groupId>
                <artifactId>jacoco-maven-plugin</artifactId>
                <version>0.8.12</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>prepare-agent</goal>
                        </goals>
                    </execution>
                    <execution>
                        <id>report</id>
                        <phase>test</phase>
                        <goals>
                            <goal>report</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>

</project>
XML

There are several Java classes generated during bootstrap. Here’s the @RestController class template. It uses three parameters defined in the Scaffolder template: groupId, domainName, and apiPath. It imports the domain object class and exposes REST endpoints for CRUD operations. As you can see, the implementation is very simple: it just uses an in-memory Java List to store the domain objects. However, it perfectly illustrates the idea behind Scaffolder templates.

package ${{ values.groupId }}.controller;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.*;
import ${{ values.groupId }}.domain.${{ values.domainName }};

import java.util.ArrayList;
import java.util.List;

@RestController
@RequestMapping("${{ values.apiPath }}")
public class ${{ values.domainName }}Controller {

    private final Logger LOG = LoggerFactory.getLogger(${{ values.domainName }}Controller.class);
    private final List<${{ values.domainName }}> objs = new ArrayList<>();

    @GetMapping
    public List<${{ values.domainName }}> findAll() {
        return objs;
    }

    @GetMapping("/{id}")
    public ${{ values.domainName }} findById(@PathVariable("id") Long id) {
        ${{ values.domainName }} obj = objs.stream().filter(it -> it.getId().equals(id))
                .findFirst()
                .orElseThrow();
        LOG.info("Found: {}", obj.getId());
        return obj;
    }

    @PostMapping
    public ${{ values.domainName }} add(@RequestBody ${{ values.domainName }} obj) {
        obj.setId((long) (objs.size() + 1));
        LOG.info("Added: {}", obj);
        objs.add(obj);
        return obj;
    }

    @DeleteMapping("/{id}")
    public void delete(@PathVariable("id") Long id) {
        ${{ values.domainName }} obj = objs.stream().filter(it -> it.getId().equals(id)).findFirst().orElseThrow();
        objs.remove(obj);
        LOG.info("Removed: {}", id);
    }

    @PutMapping
    public void update(@RequestBody ${{ values.domainName }} obj) {
        ${{ values.domainName }} objTmp = objs.stream()
                .filter(it -> it.getId().equals(obj.getId()))
                .findFirst()
                .orElseThrow();
        objs.set(objs.indexOf(objTmp), obj);
        LOG.info("Updated: {}", obj.getId());
    }

}
Java
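The controller template above imports a generated domain class (${{ values.groupId }}.domain.${{ values.domainName }}) and relies on its getId()/setId() accessors. The skeleton file for that class is not shown here, but with the default parameters (domainName=Person, groupId=pl.piomin.services) it would render to a plain POJO along the following lines. This is a hypothetical sketch: the name field is purely illustrative, and the real skeleton in the repository may define different fields.

```java
// Hypothetical rendering of the domain class template for the default parameters.
// The real skeleton may differ; the controller only requires the id property
// together with its getter and setter.
public class Person {

    private Long id;
    private String name; // illustrative extra field, not required by the controller

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}
```

In the generated project, this file would live under src/main/java/pl/piomin/services/domain/Person.java.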

Then, we can generate a test class to verify the @RestController endpoints. The app starts on a random port during the JUnit tests. In the first test, we add a new object to the store. Then we verify that the GET /{id} endpoint works fine. Finally, we remove the object from the store by calling the DELETE /{id} endpoint.

package ${{ values.groupId }};

import org.instancio.Instancio;
import org.junit.jupiter.api.MethodOrderer;
import org.junit.jupiter.api.Order;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.TestMethodOrder;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.web.client.TestRestTemplate;
import ${{ values.groupId }}.domain.${{ values.domainName }};

import static org.junit.jupiter.api.Assertions.*;

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
public class ${{ values.domainName }}ControllerTests {

    private static final String API_PATH = "${{values.apiPath}}";

    @Autowired
    private TestRestTemplate restTemplate;

    @Test
    @Order(1)
    void add() {
        ${{ values.domainName }} obj = restTemplate.postForObject(API_PATH, Instancio.create(${{ values.domainName }}.class), ${{ values.domainName }}.class);
        assertNotNull(obj);
        assertEquals(1, obj.getId());
    }

    @Test
    @Order(2)
    void findAll() {
        ${{ values.domainName }}[] objs = restTemplate.getForObject(API_PATH, ${{ values.domainName }}[].class);
        assertTrue(objs.length > 0);
    }

    @Test
    @Order(2)
    void findById() {
        ${{ values.domainName }} obj = restTemplate.getForObject(API_PATH + "/{id}", ${{ values.domainName }}.class, 1L);
        assertNotNull(obj);
        assertEquals(1, obj.getId());
    }

    @Test
    @Order(3)
    void delete() {
        restTemplate.delete(API_PATH + "/{id}", 1L);
        ${{ values.domainName }} obj = restTemplate.getForObject(API_PATH + "/{id}", ${{ values.domainName }}.class, 1L);
        assertNull(obj.getId());
    }

}
Java

Integrate with CircleCI and Renovate

Once I create a new repository on GitHub, I want to integrate it automatically with CircleCI builds. I also want to update Maven dependency versions automatically with Renovate to keep the project up to date. Since my GitHub account is connected to the CircleCI account and the Renovate app is installed there, I just need to provide two configuration manifests inside the generated repository. Here’s the CircleCI config.yml file. It runs the Maven build with JUnit tests and performs the SonarQube scan on the sonarcloud.io portal.

version: 2.1

jobs:
  analyze:
    docker:
      - image: 'cimg/openjdk:21.0.2'
    steps:
      - checkout
      - run:
          name: Analyze on SonarCloud
          command: mvn verify sonar:sonar

executors:
  jdk:
    docker:
      - image: 'cimg/openjdk:21.0.2'

orbs:
  maven: circleci/maven@1.4.1

workflows:
  maven_test:
    jobs:
      - maven/test:
          executor: jdk
      - analyze:
          context: SonarCloud
YAML

Here’s the renovate.json manifest. Renovate will create a PR on GitHub each time it detects a new version of a Maven dependency.

{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": [
    "config:base",":dependencyDashboard"
  ],
  "packageRules": [
    {
      "matchUpdateTypes": ["minor", "patch", "pin", "digest"],
      "automerge": true
    }
  ],
  "prCreation": "not-pending"
}
JSON

Register a new Component in the Software Catalog

Once we generate the whole code and publish it as the GitHub repository, we need to register a new component in the Backstage catalog. In order to achieve this, our repository needs to contain the Component manifest as shown below. Once again, we need to fill it with the parameter values during the bootstrap phase. It contains a reference to the CircleCI and Sonarqube projects and a generated GitHub repository in the annotations section. Here’s the catalog-info.yaml template:

apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: ${{ values.appName }}
  title: ${{ values.appName }}
  annotations:
    circleci.com/project-slug: github/${{ values.orgName }}/${{ values.appName }}
    github.com/project-slug: ${{ values.orgName }}/${{ values.appName }}
    sonarqube.org/project-key: ${{ values.orgName }}_${{ values.appName }}
  tags:
    - spring-boot
    - java
    - maven
    - circleci
    - renovate
    - sonarqube
spec:
  type: service
  owner: piotr.minkowski@gmail.com
  lifecycle: experimental
YAML

Running Backstage

Once we have the whole template ready, we can proceed to run Backstage on our local machine. As I mentioned before, Backstage is a Node.js app, so we need to have several tools installed to be able to run it. By the way, the list of prerequisites is pretty long. You can find it under the following link. First of all, I had to downgrade Node.js from the latest version 22 to version 18. We also need to install yarn and npx. If you have the following versions of those tools, you shouldn’t have any problems running Backstage according to the further instructions.

$ node --version
v18.20.3

$ npx --version
10.7.0

$ yarn --version
1.22.22
ShellSession

Running a Standalone Server Locally

We are going to run Backstage locally in the development mode as a standalone server. In order to achieve this, we first need to run the following command. It will create a new directory with a Backstage app inside. 

$ npx @backstage/create-app@latest
ShellSession

This may take some time. But if you see a similar result, it means that your instance is ready. However, before we start it, we need to install some plugins and include some configuration settings. In this case, the name of our instance is backstage1.

Firstly, we should go to the backstage1 directory and take a look at the project structure. The most important elements for us are: the app-config.yaml file with configuration and the packages directory with the source code of the backend and frontend (app) modules.

├── README.md
├── app-config.local.yaml
├── app-config.production.yaml
├── app-config.yaml
├── backstage.json
├── catalog-info.yaml
├── dist-types
├── examples
├── lerna.json
├── node_modules
├── package.json
├── packages
│   ├── app
│   └── backend
├── playwright.config.ts
├── plugins
├── tsconfig.json
└── yarn.lock
ShellSession

The app is not configured according to our needs yet. However, we can run it with the following command just to try it out:

$ yarn dev
ShellSession

We can visit the UI available at http://localhost:3000:

Provide Configuration and Install Plugins

First, we will analyze the app-config.yaml file and add some configuration settings there. I will focus only on the aspects important to our exercise. The default configuration comes with the built-in integration with GitHub enabled. We just need to generate a personal access token in GitHub and provide it as the GITHUB_TOKEN environment variable (1). Then, we need to integrate with SonarQube, also through an access token (2). Our portal will also display a list of CircleCI builds. Therefore, we need to include the CircleCI token as well (3). Finally, we should include the URL address of our custom Scaffolder template in the catalog section (4). It is located in our sample GitHub repository: https://github.com/piomin/backstage-templates/blob/master/templates/spring-boot-basic/skeleton/catalog-info.yaml.

app:
  title: Scaffolded Backstage App
  baseUrl: http://localhost:3000

organization:
  name: piomin

backend:
  baseUrl: http://localhost:7007
  listen:
    port: 7007
  csp:
    connect-src: ["'self'", 'http:', 'https:']
  cors:
    origin: http://localhost:3000
    methods: [GET, HEAD, PATCH, POST, PUT, DELETE]
    credentials: true
  database:
    client: better-sqlite3
    connection: ':memory:'

# (1)
integrations:
  github:
    - host: github.com
      token: ${GITHUB_TOKEN}

# (2)
sonarqube:
  baseUrl: https://sonarcloud.io
  apiKey: ${SONARCLOUD_TOKEN}

# (3)
proxy:
  '/circleci/api':
    target: https://circleci.com/api/v1.1
    headers:
      Circle-Token: ${CIRCLECI_TOKEN}
      
auth:
  providers:
    guest: {}

catalog:
  import:
    entityFilename: catalog-info.yaml
    pullRequestBranchName: backstage-integration
  rules:
    - allow: [Component, System, API, Resource, Location]
  locations:
    - type: file
      target: ../../examples/entities.yaml
    - type: file
      target: ../../examples/template/template.yaml
      rules:
        - allow: [Template]
    # (4)
    - type: url
      target: https://github.com/piomin/backstage-templates/blob/master/templates/spring-boot-basic/template.yaml
      rules:
        - allow: [ Template ]
    - type: file
      target: ../../examples/org.yaml
      rules:
        - allow: [User, Group]
YAML

So, before starting Backstage we need to export both GitHub and Sonarqube tokens.

$ export GITHUB_TOKEN=<YOUR_GITHUB_TOKEN>
$ export SONARCLOUD_TOKEN=<YOUR_SONARCLOUD_TOKEN>
$ export CIRCLECI_TOKEN=<YOUR_CIRCLECI_TOKEN>
ShellSession

For those of you who didn’t generate Sonarcloud tokens before:

And similar operation for CicleCI:

Unfortunately, that is not all. Now, we need to install several required plugins. To be honest with you, plugin installation in Backstage is quite troublesome. Usually, we not only need to install such a plugin with yarn but also make some changes in the packages directory. Let’s begin!

Enable GitHub Integration

Although integration with GitHub is enabled by default in the configuration settings, we still need to install the plugin to be able to perform some actions related to repositories. Firstly, from your Backstage instance root directory, you need to execute the following command:

$ yarn --cwd packages/backend add @backstage/plugin-scaffolder-backend-module-github
ShellSession

Then, go to the packages/backend/src/index.ts file and add a single highlighted line there. It imports the @backstage/plugin-scaffolder-backend-module-github module into the backend.

import { createBackend } from '@backstage/backend-defaults';

const backend = createBackend();

backend.add(import('@backstage/plugin-app-backend/alpha'));
backend.add(import('@backstage/plugin-proxy-backend/alpha'));
backend.add(import('@backstage/plugin-scaffolder-backend/alpha'));
backend.add(import('@backstage/plugin-techdocs-backend/alpha'));
backend.add(import('@backstage/plugin-auth-backend'));
backend.add(import('@backstage/plugin-auth-backend-module-guest-provider'));
backend.add(import('@backstage/plugin-catalog-backend/alpha'));
backend.add(
  import('@backstage/plugin-catalog-backend-module-scaffolder-entity-model'),
);
backend.add(import('@backstage/plugin-permission-backend/alpha'));
backend.add(
  import('@backstage/plugin-permission-backend-module-allow-all-policy'),
);
backend.add(import('@backstage/plugin-search-backend/alpha'));
backend.add(import('@backstage/plugin-search-backend-module-catalog/alpha'));
backend.add(import('@backstage/plugin-search-backend-module-techdocs/alpha'));

backend.add(import('@backstage/plugin-scaffolder-backend-module-github'));
backend.add(import('@backstage-community/plugin-sonarqube-backend'));

backend.start();
TypeScript

After that, it will be possible to call the publish:github action defined in our template, which is responsible for creating a new GitHub repository with the Spring Boot app source code.

Enable Sonarqube Integration

In the next step, we need to install and configure the SonarQube plugin. This time, we need to install both the backend and frontend modules. Let’s begin with the frontend part. Firstly, we have to execute the following yarn command from the project root directory:

$ yarn --cwd packages/app add @backstage-community/plugin-sonarqube
ShellSession

Then, we need to edit the packages/app/src/components/catalog/EntityPage.tsx file to import the EntitySonarQubeCard object from the @backstage-community/plugin-sonarqube plugin. After that, we can include the EntitySonarQubeCard component on the frontend page. For example, it can be placed as a part of the overview content.

import { EntitySonarQubeCard } from '@backstage-community/plugin-sonarqube';

// ... other imports
// ... other contents

const overviewContent = (
  <Grid container spacing={3} alignItems="stretch">
    {entityWarningContent}
    <Grid item md={6}>
      <EntityAboutCard variant="gridItem" />
    </Grid>
    <Grid item md={6} xs={12}>
      <EntityCatalogGraphCard variant="gridItem" height={400} />
    </Grid>
    <Grid item md={6}>
      <EntitySonarQubeCard variant="gridItem" />
    </Grid>
    <Grid item md={4} xs={12}>
      <EntityLinksCard />
    </Grid>
    <Grid item md={8} xs={12}>
      <EntityHasSubcomponentsCard variant="gridItem" />
    </Grid>
  </Grid>
);
TypeScript

Then, we can proceed with the backend plugin. Once again, we are installing with the yarn command:

$ yarn --cwd packages/backend add @backstage-community/plugin-sonarqube-backend
ShellSession

Finally, the same as for the GitHub plugin, go to the packages/backend/src/index.ts file and add a single highlighted line there to import the @backstage-community/plugin-sonarqube-backend module into the backend.

import { createBackend } from '@backstage/backend-defaults';

const backend = createBackend();

backend.add(import('@backstage/plugin-app-backend/alpha'));
backend.add(import('@backstage/plugin-proxy-backend/alpha'));
backend.add(import('@backstage/plugin-scaffolder-backend/alpha'));
backend.add(import('@backstage/plugin-techdocs-backend/alpha'));
backend.add(import('@backstage/plugin-auth-backend'));
backend.add(import('@backstage/plugin-auth-backend-module-guest-provider'));
backend.add(import('@backstage/plugin-catalog-backend/alpha'));
backend.add(
  import('@backstage/plugin-catalog-backend-module-scaffolder-entity-model'),
);
backend.add(import('@backstage/plugin-permission-backend/alpha'));
backend.add(
  import('@backstage/plugin-permission-backend-module-allow-all-policy'),
);
backend.add(import('@backstage/plugin-search-backend/alpha'));
backend.add(import('@backstage/plugin-search-backend-module-catalog/alpha'));
backend.add(import('@backstage/plugin-search-backend-module-techdocs/alpha'));

backend.add(import('@backstage/plugin-scaffolder-backend-module-github'));
backend.add(import('@backstage-community/plugin-sonarqube-backend'));

backend.start();
TypeScript

Note that we previously added the sonarqube section with the access token to the app-config.yaml file and included the annotation with the SonarCloud project key in the Backstage Component manifest. Thanks to that, we don’t need to do anything more in this part.

Enable CircleCI Integration

In order to install the CircleCI plugin, we need to execute the following yarn command:

$ yarn add --cwd packages/app @circleci/backstage-plugin
ShellSession

Then, we have to edit the packages/app/src/components/catalog/EntityPage.tsx file. The same as before we need to include the import section and choose a place on the frontend page to display the content.

// ... other imports

import {
  EntityCircleCIContent,
  isCircleCIAvailable,
} from '@circleci/backstage-plugin';

// ... other contents

const cicdContent = (
  <EntitySwitch>
    <EntitySwitch.Case if={isCircleCIAvailable}>
      <EntityCircleCIContent />
    </EntitySwitch.Case>
    <EntitySwitch.Case>
      <EmptyState
        title="No CI/CD available for this entity"
        missing="info"
        description="You need to add an annotation to your component if you want to enable CI/CD for it. You can read more about annotations in Backstage by clicking the button below."
        action={
          <Button
            variant="contained"
            color="primary"
            href="https://backstage.io/docs/features/software-catalog/well-known-annotations"
          >
            Read more
          </Button>
        }
      />
    </EntitySwitch.Case>
  </EntitySwitch>
);
TypeScript

Final Run

That’s all we need to configure before running the Backstage instance. Once again, we need to start the instance with the yarn dev command. After running the app, we should go to the “Create…” section in the left menu pane. You should see our custom template under the name “Create a Spring Boot app”. Click the “CHOOSE” button to create a new component from that template.

backstage-templates

Then, we will see the form with several input parameters. I will just change the app name to sample-spring-boot-app-backstage and leave the default values everywhere else.

backstage-app-create

Then, let’s just click the “CREATE” button on the next page.

After that, Backstage will generate all the required things from our sample template.

backstage-process

We can go to the app page in the Backstage catalog. As you can see, it contains the “Code Quality” section with the latest SonarQube report for our newly generated app.

backstage-overview

We can also switch to the “CI/CD” tab to see the history of the app builds in CircleCI.

backstage-cicd

If you want to visit the example repository generated from the sample template discussed today, you can find it here.

Final Thoughts

Backstage is an example of a no-code IDP (Internal Developer Portal). An IDP is an important part of a relatively new trend in software development called “Platform Engineering”. In this article, I showed you how to create “Golden Path Templates” using the technology called Scaffolder. Then, you could see how to run Backstage on a local machine and how to create an app’s source code from the template. We installed some useful plugins to integrate our portal with GitHub, CircleCI, and SonarQube. Plugin installation may cause some problems, especially for people without experience in Node.js and React. Hope it helps!

The post Getting Started with Backstage appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2024/06/13/getting-started-with-backstage/feed/ 6 15266
Interesting Facts About Java Streams and Collections https://piotrminkowski.com/2024/04/25/interesting-facts-about-java-streams-and-collections/ https://piotrminkowski.com/2024/04/25/interesting-facts-about-java-streams-and-collections/#comments Thu, 25 Apr 2024 11:25:07 +0000 https://piotrminkowski.com/?p=15228 This article will show some interesting features of Java Streams and Collections you may not heard about. We will look at both the latest API enhancements as well as the older ones that have existed for years. That’s my private list of features I used recently or I just came across while reading articles about […]

The post Interesting Facts About Java Streams and Collections appeared first on Piotr's TechBlog.

]]>
This article will show some interesting features of Java Streams and Collections that you may not have heard about. We will look at both the latest API enhancements and the older ones that have existed for years. That’s my private list of features I used recently or just came across while reading articles about Java. If you are interested in Java, you can find some similar articles on my blog. In one of them, you can find a list of less-known but useful Java libraries.

Source Code

If you would like to try it out by yourself, you can always take a look at my source code. In order to do that, you need to clone my GitHub repository. Once you clone the repository, switch to the jdk22 branch. Then you should just follow my instructions.

Mutable or Immutable

The approach to collection immutability in Java can be annoying for some of you. How do you know if a Java Collection is mutable or immutable? Java does not provide dedicated interfaces and implementations for mutable and immutable collections like, for example, Kotlin does. Of course, you can switch to the Eclipse Collections library, which provides a clear differentiation between readable, mutable, and immutable types. However, if you are staying with the standard Java Collections, let’s analyze the situation using the example of the java.util.List interface. Some years ago, Java 16 introduced a new Stream.toList() method to convert between streams and collections. Probably, you are using it quite often 🙂 It is worth mentioning that this method returns an unmodifiable List and allows nulls.

var l = Stream.of(null, "Green", "Yellow").toList();
assertEquals(3, l.size());
assertThrows(UnsupportedOperationException.class, () -> l.add("Red"));
assertThrows(UnsupportedOperationException.class, () -> l.set(0, "Red"));
Java

So, the Stream.toList() method is not just a replacement for the older approach based on Collectors.toList(). In contrast, the older Collectors.toList() method returns a modifiable List and also allows nulls.

var l = Stream.of(null, "Green", "Yellow").collect(Collectors.toList());
l.add("Red");
assertEquals(4, l.size());
Java

Finally, let’s see how to achieve the next possible option here: an unmodifiable List that does not allow nulls. In order to achieve it, we have to use the Collectors.toUnmodifiableList() method as shown below.

assertThrows(NullPointerException.class, () ->
        Stream.of(null, "Green", "Yellow")
                .collect(Collectors.toUnmodifiableList()));
Java
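One more combination is worth knowing about. If we need a modifiable result but also want to control which concrete implementation we get, we can use Collectors.toCollection() with a supplier of the target collection. This sketch is my own illustration, not code from the article’s repository:

```java
// toCollection() returns exactly the (modifiable) implementation we supply,
// and ArrayList accepts null elements
var l = Stream.of(null, "Green", "Yellow")
        .collect(Collectors.toCollection(ArrayList::new));
l.add("Red"); // no UnsupportedOperationException this time
// l now holds 4 elements and still contains the null at index 0
```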

Grouping and Aggregations with Java Streams

The Java Stream API provides several useful methods that allow us to group and aggregate collections using different criteria. We can find those methods in the java.util.stream.Collectors class. Let’s create the Employee record for testing purposes.

public record Employee(String firstName, 
                       String lastName, 
                       String position, 
                       int salary) {}
Java

Now, let’s assume we have a stream of employees and we want to calculate the sum of salaries grouped by position. In order to achieve this, we should combine two methods from the Collectors class: groupingBy and summingInt. If you would like to calculate the average salary per position instead, you can just replace the summingInt method with the averagingInt method.

Stream<Employee> s1 = Stream.of(
    new Employee("AAA", "BBB", "developer", 10000),
    new Employee("AAB", "BBC", "architect", 15000),
    new Employee("AAC", "BBD", "developer", 13000),
    new Employee("AAD", "BBE", "tester", 7000),
    new Employee("AAE", "BBF", "tester", 9000)
);

var m = s1.collect(Collectors.groupingBy(Employee::position, 
   Collectors.summingInt(Employee::salary)));
assertEquals(3, m.size());
assertEquals(m.get("developer"), 23000);
assertEquals(m.get("architect"), 15000);
assertEquals(m.get("tester"), 16000);
Java
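As mentioned above, swapping summingInt for averagingInt gives us the average salary per position. One detail worth noting: averagingInt always produces Double values, even for integer inputs. Here’s a quick sketch of my own using the same employee data:

```java
// Employee record as defined earlier in the article
record Employee(String firstName, String lastName, String position, int salary) {}

Stream<Employee> s2 = Stream.of(
    new Employee("AAA", "BBB", "developer", 10000),
    new Employee("AAB", "BBC", "architect", 15000),
    new Employee("AAC", "BBD", "developer", 13000),
    new Employee("AAD", "BBE", "tester", 7000),
    new Employee("AAE", "BBF", "tester", 9000)
);

// the resulting map is Map<String, Double>
var avg = s2.collect(Collectors.groupingBy(Employee::position,
    Collectors.averagingInt(Employee::salary)));
// developer: (10000 + 13000) / 2 = 11500.0, tester: (7000 + 9000) / 2 = 8000.0
```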

For simpler grouping, we can use the partitioningBy method. It always returns a map with exactly two entries: one where the predicate is true and one where it is false. So, for the same stream as in the previous example, we can use partitioningBy to divide employees into those with salaries higher than 10000 and those with salaries lower than or equal to 10000.

var m = s1.collect(Collectors.partitioningBy(emp -> emp.salary() > 10000));
assertEquals(2, m.size());
assertEquals(m.get(true).size(), 2);
assertEquals(m.get(false).size(), 3);
Java
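Like groupingBy, the partitioningBy method also accepts an optional downstream collector, so we can aggregate each partition instead of just collecting its elements. For example, we can sum the salaries on each side of the threshold. This is a small sketch of my own extending the example above:

```java
// Employee record as defined earlier in the article
record Employee(String firstName, String lastName, String position, int salary) {}

Stream<Employee> s1 = Stream.of(
    new Employee("AAA", "BBB", "developer", 10000),
    new Employee("AAB", "BBC", "architect", 15000),
    new Employee("AAC", "BBD", "developer", 13000),
    new Employee("AAD", "BBE", "tester", 7000),
    new Employee("AAE", "BBF", "tester", 9000)
);

// sum the salaries within each partition instead of collecting the employees
var sums = s1.collect(Collectors.partitioningBy(emp -> emp.salary() > 10000,
    Collectors.summingInt(Employee::salary)));
// true partition: 15000 + 13000 = 28000, false partition: 10000 + 7000 + 9000 = 26000
```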

Let’s take a look at another example. This time, we will count the number of times each element occurs in the collection. Once again, we can use the groupingBy method, but this time in conjunction with the Collectors.counting() method.

var m = Stream.of(2, 3, 4, 2, 3, 5, 1, 3, 4, 4)
   .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));
assertEquals(5, m.size());
// Collectors.counting() returns Long values, hence the 3L literal
assertEquals(m.get(4), 3L);
Java

The Map.merge() Method

In the previous examples, we used methods provided by Java Streams to perform grouping and aggregations. Now, the question is whether we can do the same thing with the standard Java collections without converting them to streams. The answer is yes – we can easily do it with the Map.merge() method. It is probably the most versatile operation among all the Java key-value methods. The Map.merge() method either puts the given value under the given key (if absent) or combines the existing value with the given one using the provided remapping function. Let’s rewrite the previous examples to switch from Java streams to collections. Here’s the implementation for counting the number of times each element occurs in the collection.

var map = new HashMap<Integer, Integer>();
var nums = List.of(2, 3, 4, 2, 3, 5, 1, 3, 4, 4);
nums.forEach(num -> map.merge(num, 1, Integer::sum));
assertEquals(5, map.size());
assertEquals(map.get(4), 3);
Java

Then, we can implement the operation of calculating the sum of salaries grouped by position. So, we are grouping by emp.position() and calculating the total salary by summing the previous value with the value taken from the current Employee in the list. The results are the same as in the examples from the previous section.

var s1 = List.of(
   new Employee("AAA", "BBB", "developer", 10000),
   new Employee("AAB", "BBC", "architect", 15000),
   new Employee("AAC", "BBD", "developer", 13000),
   new Employee("AAD", "BBE", "tester", 7000),
   new Employee("AAE", "BBF", "tester", 9000)
);
var map = new HashMap<String, Integer>();
s1.forEach(emp -> map.merge(emp.position(), emp.salary(), Integer::sum));
assertEquals(3, map.size());
assertEquals(map.get("developer"), 23000);
Java

Use EnumSet for Java Enum

If you are storing enums inside Java collections, you should use EnumSet instead of, e.g., the more popular HashSet. The EnumSet and EnumMap collections are specialized versions of Set and Map built for enums. These abstractions guarantee lower memory consumption and much better performance. They also provide some methods dedicated to simplifying integration with Java enums. In order to compare processing time between EnumSet and a standard Set, we can prepare a simple test. In this test, I’m creating a subset of a Java enum inside an EnumSet and then checking if all the values exist in the target EnumSet.

var x = EnumSet.of(
    EmployeePosition.SRE,
    EmployeePosition.ARCHITECT,
    EmployeePosition.DEVELOPER);
long beg = System.nanoTime();
for (int i = 0; i < 100_000_000; i++) {
   var es = EnumSet.allOf(EmployeePosition.class);
   es.containsAll(x);
}
long end = System.nanoTime();
System.out.println(x.getClass() + ": " + (end - beg)/1e9);
Java

Here’s a similar test without EnumSet:

var x = Set.of(
    EmployeePosition.SRE,
    EmployeePosition.ARCHITECT,
    EmployeePosition.DEVELOPER);
long beg = System.nanoTime();
for (int i = 0; i < 100_000_000; i++) {
   var hs = Set.of(EmployeePosition.values());
   hs.containsAll(x);
}
long end = System.nanoTime();
System.out.println(x.getClass() + ": " + (end - beg)/1e9);
Java

The difference in time consumption between the two variants shown above is pretty significant.

class java.util.ImmutableCollections$SetN: 8.577672411
class java.util.RegularEnumSet: 0.184956851
ShellSession
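To illustrate EnumMap as well, here’s a small, self-contained sketch (the EmployeePosition enum is redeclared here for the example; the real one used in the benchmarks may have different constants) that counts occurrences of each position. EnumMap is backed by an array indexed by the enum ordinal, so lookups involve no hashing and iteration follows declaration order:

```java
import java.util.EnumMap;
import java.util.List;
import java.util.Map;

public class EnumMapExample {

    // hypothetical enum mirroring the one used in the benchmarks above
    enum EmployeePosition { DEVELOPER, ARCHITECT, TESTER, SRE }

    public static void main(String[] args) {
        var positions = List.of(EmployeePosition.DEVELOPER, EmployeePosition.TESTER,
                EmployeePosition.DEVELOPER, EmployeePosition.SRE);
        // EnumMap stores values in a compact array indexed by ordinal – no hashing
        Map<EmployeePosition, Integer> counts = new EnumMap<>(EmployeePosition.class);
        positions.forEach(p -> counts.merge(p, 1, Integer::sum));
        System.out.println(counts); // keys iterate in enum declaration order
    }
}
```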

Java 22 Stream Gatherers

The latest JDK 22 release introduces a new addition to Java streams called gatherers. Gatherers enrich the Java Stream API with capabilities for custom intermediate operations. Thanks to that we can transform data streams in ways that were previously complex or not directly supported by the existing API. First of all, this is a preview feature, so we need to explicitly enable it in the compiler configuration. Here’s the modification in the Maven pom.xml:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <version>3.13.0</version>
  <configuration>
    <release>22</release>
    <compilerArgs>
      <arg>--enable-preview</arg>
    </compilerArgs>
  </configuration>
</plugin>
XML

The aim of this article is not to provide a detailed explanation of Stream Gatherers, so if you are looking for an intro, it’s worth reading the following article. However, just to give you an example of how we can leverage gatherers, I will provide a simple implementation of a circuit breaker with the windowSliding() gatherer. To show what exactly happens inside, I’m logging the intermediate elements with the peek() method. Our implementation will open the circuit breaker if there are more than 10 errors in a specific period.

var errors = Stream.of(2, 0, 1, 3, 4, 2, 3, 0, 3, 1, 0, 0, 1)
                .gather(Gatherers.windowSliding(4))
                .peek(System.out::println)
                .map(window -> window.stream().mapToInt(Integer::intValue).sum() > 10)
                .toList();
System.out.println(errors);
System.out.println(errors);
Java

The size of our sliding window is 4. Therefore, the windowSliding() gatherer emits a list of 4 subsequent elements at each step. For each intermediate list, I’m summing the values and checking if the total number of errors is greater than 10. Here’s the output. As you can see, the circuit breaker is opened only for the [3, 4, 2, 3] fragment of the source stream.

[2, 0, 1, 3]
[0, 1, 3, 4]
[1, 3, 4, 2]
[3, 4, 2, 3]
[4, 2, 3, 0]
[2, 3, 0, 3]
[3, 0, 3, 1]
[0, 3, 1, 0]
[3, 1, 0, 0]
[1, 0, 0, 1]
[false, false, false, true, false, false, false, false, false, false]
ShellSession

Use the Stream.reduce() Method

Finally, the last feature in our article. The Stream.reduce() method is probably not very well-known or widely used, but it is very interesting. For example, we can use it to sum all the numbers in a Java List. The first parameter of the reduce method is the initial value, while the second is the accumulator function.

var listOfNumbers = List.of(1, 2, 3, 4, 5);
var sum = listOfNumbers.stream().reduce(0, Integer::sum);
assertEquals(15, sum);
Java
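Besides the two-argument form shown above, reduce() also has a three-argument variant taking an identity, an accumulator, and a combiner. It is needed when the result type differs from the element type, and the combiner merges partial results produced by parallel threads. A minimal sketch (the word list is made up for illustration):

```java
import java.util.List;

public class ReduceExample {

    public static void main(String[] args) {
        var words = List.of("java", "streams", "and", "collections");
        // accumulator folds a String into the running int total,
        // combiner merges partial totals computed by parallel threads
        int totalLength = words.parallelStream()
                .reduce(0, (acc, w) -> acc + w.length(), Integer::sum);
        System.out.println(totalLength); // 4 + 7 + 3 + 11 = 25
    }
}
```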

Final Thoughts

Of course, this is just a small list of interesting facts about Java streams and collections. If you have other favorite features, you can share them in the article comments.

The post Interesting Facts About Java Streams and Collections appeared first on Piotr's TechBlog.

]]>
Java Flight Recorder on Kubernetes https://piotrminkowski.com/2024/02/13/java-flight-recorder-on-kubernetes/ https://piotrminkowski.com/2024/02/13/java-flight-recorder-on-kubernetes/#respond Tue, 13 Feb 2024 07:44:13 +0000 https://piotrminkowski.com/?p=14957 In this article, you will learn how to continuously monitor apps on Kubernetes with Java Flight Recorder and Cryostat. Java Flight Recorder (JFR) is a tool for collecting diagnostic and profiling data generated by the Java app. It is designed for use even in heavily loaded production environments since it causes almost no performance overhead. […]

The post Java Flight Recorder on Kubernetes appeared first on Piotr's TechBlog.

]]>
In this article, you will learn how to continuously monitor apps on Kubernetes with Java Flight Recorder and Cryostat. Java Flight Recorder (JFR) is a tool for collecting diagnostic and profiling data generated by the Java app. It is designed for use even in heavily loaded production environments since it causes almost no performance overhead. We can say that Java Flight Recorder acts similarly to an airplane’s black box. Even if the JVM crashes, we can analyze the diagnostic data collected just before the failure. This fact makes JFR especially usable in an environment with many running apps – like Kubernetes.

Assuming that we are running many Java apps on Kubernetes, we should be interested in a tool that helps automatically gather data generated by Java Flight Recorder. Here comes Cryostat. It allows us to securely manage JFR recordings for containerized Java workloads. With the built-in discovery mechanism, it can detect all the apps that expose JFR data. Depending on the use case, we can store and analyze recordings directly on the Kubernetes cluster with the Cryostat Dashboard, or export recorded data to perform a more in-depth analysis.

If you are interested in more topics related to Java apps on Kubernetes, you can take a look at some other posts on my blog. The following article describes a list of best practices for running Java apps on Kubernetes. You can also read, e.g., how to resize the CPU limit to speed up Java startup on Kubernetes here.

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. Then you need to go to the callme-service directory. After that, you should just follow my instructions. Let’s begin.

Install Cryostat on Kubernetes

In the first step, we install Cryostat on Kubernetes using its operator. In order to use and manage operators on Kubernetes, we should have the Operator Lifecycle Manager (OLM) installed on the cluster. The operator-sdk binary provides a command to easily install and uninstall OLM:

$ operator-sdk olm install

Alternatively, you can use Helm chart for Cryostat installation on Kubernetes. Firstly, let’s add the following repository:
$ helm repo add openshift https://charts.openshift.io/

Then, install the chart with the following command:
$ helm install my-cryostat openshift/cryostat --version 0.4.0

Once OLM is running on our cluster, we can proceed to the Cryostat installation. We can find the required YAML manifest with the Subscription declaration in the Operator Hub. Let’s just apply the manifest to the target cluster with the following command:

$ kubectl create -f https://operatorhub.io/install/cryostat-operator.yaml

By default, this operator will be installed in the operators namespace and will be usable from all namespaces in the cluster. After installation, we can verify if the operator works fine by executing the following command:

$ kubectl get csv -n operators

In order to simplify the Cryostat installation process, we can use OpenShift. With OpenShift we don’t need to install OLM, since it is already there. We just need to find the “Red Hat build of Cryostat” operator in the Operator Hub and install it using OpenShift Console. By default, the operator is available in the openshift-operators namespace.

Then, let’s create a namespace dedicated to running Cryostat and our sample app. The name of the namespace is demo-jfr.

$ kubectl create ns demo-jfr

Cryostat recommends using cert-manager for traffic encryption. In our exercise, we disable that integration for simplification purposes. However, in a production environment, you should install cert-manager unless you use another solution for encrypting traffic. In order to run Cryostat in the selected namespace, we need to create the Cryostat object. The parameter spec.enableCertManager should be set to false.

apiVersion: operator.cryostat.io/v1beta1
kind: Cryostat
metadata:
  name: cryostat-sample
  namespace: demo-jfr
spec:
  enableCertManager: false
  eventTemplates: []
  minimal: false
  reportOptions:
    replicas: 0
  storageOptions:
    pvc:
      annotations: {}
      labels: {}
      spec: {}
  trustedCertSecrets: []

If everything goes fine, you should see the following pod in the demo-jfr namespace:

$ kubectl get po -n demo-jfr
NAME                               READY   STATUS    RESTARTS   AGE
cryostat-sample-5c57c9b8b8-smzx9   3/3     Running   0          60s

Here’s a list of Kubernetes Services. The Cryostat Dashboard is exposed by the cryostat-sample Service under the 8181 port.

$ kubectl get svc -n demo-jfr
NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
cryostat-sample           ClusterIP   172.31.56.83    <none>        8181/TCP,9091/TCP   70m
cryostat-sample-grafana   ClusterIP   172.31.155.26   <none>        3000/TCP            70m

We can access the Cryostat dashboard using the Kubernetes Ingress or OpenShift Route. Currently, there are no apps to monitor.

Create Sample Java App

We build a sample Java app using the Spring Boot framework. Our app exposes a single REST endpoint. As you can see, the endpoint implementation is very simple. The pingWithRandomDelay() method adds a random delay between 0 and 3 seconds and returns a string. However, there is one interesting thing inside that method. We are creating the ProcessingEvent object (1). Then, we call its begin method just before putting the thread to sleep (2). After the method is resumed, we call the commit method on the ProcessingEvent object (3). In this inconspicuous way, we are generating our first custom JFR event. This event aims to monitor the processing time of our method.

@RestController
@RequestMapping("/callme")
public class CallmeController {

   private static final Logger LOGGER = LoggerFactory.getLogger(CallmeController.class);

   private Random random = new Random();
   private AtomicInteger index = new AtomicInteger();

   // injected build metadata, referenced in the log statement below
   @Autowired
   private Optional<BuildProperties> buildProperties;

   @Value("${VERSION}")
   private String version;

   @GetMapping("/ping-with-random-delay")
   public String pingWithRandomDelay() throws InterruptedException {
      int r = random.nextInt(3000);
      int i = index.incrementAndGet();
      ProcessingEvent event = new ProcessingEvent(i); // (1)
      event.begin(); // (2)
      LOGGER.info("Ping with random delay: id={}, name={}, version={}, delay={}", i,
             buildProperties.isPresent() ? buildProperties.get().getName() : "callme-service", version, r);
      Thread.sleep(r);
      event.commit(); // (3)
      return "I'm callme-service " + version;
   }

}

Let’s switch to the ProcessingEvent implementation. Our custom event needs to extend the jdk.jfr.Event abstract class. It contains a single parameter, id. We can use some additional labels to improve the event presentation in JFR graphical tools. The event will be visible under the name set in the @Name annotation and under the category set in the @Category annotation. We also need to annotate the parameter with @Label to make it visible as part of the event.

@Name("ProcessingEvent")
@Category("Custom Events")
@Label("Processing Time")
public class ProcessingEvent extends Event {
    @Label("Event ID")
    private Integer id;

    public ProcessingEvent(Integer id) {
        this.id = id;
    }

    public Integer getId() {
        return id;
    }

    public void setId(Integer id) {
        this.id = id;
    }
}

Of course, our app will generate a lot of standard JFR events useful for profiling and monitoring. But we could also monitor our custom event.
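As a side note, the JFR event streaming API (jdk.jfr.consumer.RecordingStream, available since JDK 14) also lets an app consume its own events in-process, without exporting a recording. Here is a minimal, self-contained sketch (the event name demo.Processing and its field are illustrative, not the exact ones from callme-service):

```java
import jdk.jfr.Event;
import jdk.jfr.Name;
import jdk.jfr.consumer.RecordingStream;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class JfrStreamingExample {

    @Name("demo.Processing") // illustrative event name, not the article's
    static class ProcessingEvent extends Event {
        int id;
    }

    public static void main(String[] args) throws InterruptedException {
        var received = new CountDownLatch(1);
        try (var rs = new RecordingStream()) {
            rs.enable("demo.Processing");
            rs.onEvent("demo.Processing", e ->  {
                System.out.println("received id=" + e.getInt("id"));
                received.countDown();
            });
            rs.startAsync(); // consume events on a background thread
            var event = new ProcessingEvent();
            event.id = 42;
            event.begin();
            event.commit();
            // events are flushed periodically, so wait briefly for delivery
            received.await(10, TimeUnit.SECONDS);
        }
    }
}
```

This approach is useful for lightweight in-app monitoring, while tools like Cryostat remain the right choice for fleet-wide collection.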

Build App Image and Deploy on Kubernetes

Once we finish the implementation, we may build the container image of our Spring Boot app. Spring Boot comes with a feature for building container images based on the Cloud Native Buildpacks. In the Maven pom.xml you will find a dedicated profile under the build-image id. Once you activate such a profile, it will build the image using the Paketo builder-jammy-base image.

<profile>
  <id>build-image</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-maven-plugin</artifactId>
        <configuration>
          <image>
            <builder>paketobuildpacks/builder-jammy-base:latest</builder>
            <name>piomin/${project.artifactId}:${project.version}</name>
          </image>
        </configuration>
        <executions>
          <execution>
            <goals>
              <goal>build-image</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</profile>

Before running the build we should start Docker on the local machine. After that, we should execute the following Maven command:

$ mvn clean package -Pbuild-image -DskipTests

With the build-image profile activated, Spring Boot Maven Plugin builds the image of our app. You should have a similar result as shown below. In my case, the image tag is piomin/callme-service:1.2.1.

By default, Paketo Java Buildpacks use the BellSoft Liberica JDK. With the Paketo BellSoft Liberica Buildpack, we can easily enable Java Flight Recorder for the container using the BPL_JFR_ENABLED environment variable. In order to expose data for Cryostat, we also need to enable the JMX port. In theory, we could use the BPL_JMX_ENABLED and BPL_JMX_PORT environment variables for that. However, that option adds some extra configuration to the java command parameters that breaks the Cryostat discovery. This issue has already been described here. Therefore, we will use the JAVA_TOOL_OPTIONS environment variable to set the required JVM parameters directly on the running command.

Instead of exposing the JMX port for discovery, we can include the Cryostat agent in the app dependencies. In that case, we should set the address of the Cryostat API in the Kubernetes Deployment manifest. However, I prefer an approach that doesn’t require any changes on the app side.

Now, let’s get back to the Cryostat app discovery. Cryostat is able to automatically detect pods with a JMX port exposed. It requires a concrete configuration of the Kubernetes Service: we need to set the name of the port to jfr-jmx. In theory, we can expose JMX on any port we want, but for me anything other than 9091 caused discovery problems in Cryostat. In the Deployment definition, we have to set the BPL_JFR_ENABLED env to true, and the JAVA_TOOL_OPTIONS to -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.port=9091.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: callme-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: callme-service
  template:
    metadata:
      labels:
        app: callme-service
    spec:
      containers:
        - name: callme-service
          image: piomin/callme-service:1.2.1
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
            - containerPort: 9091
          env:
            - name: VERSION
              value: "v1"
            - name: BPL_JFR_ENABLED
              value: "true"
            - name: JAVA_TOOL_OPTIONS
              value: "-Dcom.sun.management.jmxremote.port=9091 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false"
---
apiVersion: v1
kind: Service
metadata:
  name: callme-service
  labels:
    app: callme-service
spec:
  type: ClusterIP
  ports:
  - port: 8080
    name: http
  - port: 9091
    name: jfr-jmx
  selector:
    app: callme-service

Let’s apply our deployment manifest to the demo-jfr namespace:

$ kubectl apply -f k8s/deployment-jfr.yaml -n demo-jfr

Here’s a list of pods of our callme-service app:

$ kubectl get po -n demo-jfr -l app=callme-service -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP            NODE
callme-service-6bc5745885-kvqfr   1/1     Running   0          31m   10.134.0.29   worker-cluster-lvsqq-1

Using Cryostat with JFR

View Default Dashboards

Cryostat automatically detects all the pods related to the Kubernetes Service that expose the JMX port. Once we switch to the Cryostat Dashboard, we will see the name of our pod in the “Target” dropdown. The default dashboard shows diagrams illustrating CPU load, heap memory usage, and a number of running Java threads.

java-flight-recorder-kubernetes-dashboard

Then, we can go to the “Recordings” section. It shows a list of active recordings made by Java Flight Recorder for our app running on Kubernetes. By default, Cryostat creates and starts a single recording for each detected target.

We can expand the selected record to see a detailed view. It provides a summarized panel divided into several different categories like heap, memory leak, or exceptions. It highlights warnings with a yellow color and problems with a red color.

java-flight-recorder-kubernetes-panel

We can display a detailed description of each case. We just need to click on the selected field with a problem name. The detailed description will appear in the context menu.

java-flight-recorder-kubernetes-description

Create and Use a Custom Event Template

We can create a custom recording strategy by defining a new event template. Firstly, we need to go to the “Events” section, and then to the “Event Templates” tab. There are three built-in templates. We can use each of them as a base for our custom template. After deciding which of them to choose we can download it to our laptop. The default file extension is *.jfc.

java-flight-recorder-kubernetes-event-templates

In order to edit the *.jfc files we need a special tool called JDK Mission Control. Each vendor provides such a tool for their distribution of JDK. In our case, it is BellSoft Liberica. Once we download and install Liberica Mission Control on the laptop we should go to Window -> Flight Recording Template Manager.

java-flight-recorder-kubernetes-mission-control

With the Flight Recording Template Manager, we can import and edit an exported event template. I chose higher monitoring levels for “Garbage Collection”, “Allocation Profiling”, “Compiler”, and “Thread Dump”.

java-flight-recorder-kubernetes-template-manager

Once a new template is ready, we should save it under the selected name. For me, it is the “Continuous Detailed” name. After that, we need to export the template to the file.

Then, we need to switch to the Cryostat Dashboard. We have to import the newly created template exported to the *.jfc file.

Once you import the template, you should see a new strategy in the “Event Templates” section.

We can create a recording based on our custom “Continuous_Detailed” template. After some time, Cryostat should gather data generated by the Java Flight Recorder for the app running on Kubernetes. However, this time we want to perform some advanced analysis using Liberica Mission Control rather than just the Cryostat Dashboard. Therefore, we will export the recording to a *.jfr file. Such a file may then be imported into the JDK Mission Control tool.

Use the JDK Mission Control Tool

Let’s open the exported *.jfr file with Liberica Mission Control. Once we do it, we can analyze all the important aspects related to the performance of our Java app. We can display a table with memory allocation per the object type.

We can display a list of running Java threads.

Finally, we go to the “Event Browser” section. In the “Custom Events” category we should find our custom event under the name determined by the @Label annotation on the ProcessingEvent class. We can see the history of all generated JFR events together with the duration, start time, and the name of the processing thread.

Final Thoughts

Cryostat helps you manage Java Flight Recorder on Kubernetes at scale. It provides a graphical dashboard that allows monitoring of all the Java workloads that expose JFR data over JMX. The important thing is that even after an app crash we can export the archived monitoring report and analyze it using advanced tools like JDK Mission Control.

The post Java Flight Recorder on Kubernetes appeared first on Piotr's TechBlog.

]]>
Which JDK to Choose on Kubernetes https://piotrminkowski.com/2023/02/17/which-jdk-to-choose-on-kubernetes/ https://piotrminkowski.com/2023/02/17/which-jdk-to-choose-on-kubernetes/#comments Fri, 17 Feb 2023 13:53:19 +0000 https://piotrminkowski.com/?p=14015 In this article, we will make a performance comparison between several most popular JDK implementations for the app running on Kubernetes. This post also answers some questions and concerns about my Twitter publication you see below. I compared Oracle JDK with Eclipse Temurin. The result was quite surprising for me, so I decided to tweet […]

The post Which JDK to Choose on Kubernetes appeared first on Piotr's TechBlog.

]]>
In this article, we will make a performance comparison between several most popular JDK implementations for the app running on Kubernetes. This post also answers some questions and concerns about my Twitter publication you see below. I compared Oracle JDK with Eclipse Temurin. The result was quite surprising for me, so I decided to tweet to get some opinions and feedback.

jdk-kubernetes-tweet

Unfortunately, those results were wrong. Or maybe I should say, they were not averaged well enough. After this publication, I also received interesting materials presented at the London Java Community. It compares the performance of the Payara application server running on various JDKs. Here’s the link to that presentation (~1h). The results shown there seem to confirm my results. Or at least they confirm the general rule – there are some performance differences between OpenJDK implementations. Let’s check it out.

This time I’ll do a very accurate comparison with several repeats to get reproducible results. I’ll test the following JVM implementations:

  • Adoptium Eclipse Temurin
  • Alibaba Dragonwell
  • Amazon Corretto
  • Azul Zulu
  • BellSoft Liberica
  • IBM Semeru OpenJ9
  • Oracle JDK
  • Microsoft OpenJDK

For all the tests I’ll use Paketo Java buildpack. We can easily switch between several JVM implementations with Paketo. I’ll test a simple Spring Boot 3 app that uses Spring Data to interact with the Mongo database. Let’s proceed to the details!

If you have already built images with a Dockerfile, it is possible that you were using the official OpenJDK base image from Docker Hub. However, the announcement on the image site currently says that it is officially deprecated and all users should find suitable replacements. In this article, we compare the most popular replacements, so I hope it may help you make a good choice 🙂

Testing Environment

Before we run tests it is important to have a provisioned environment. I’ll run all the tests locally. In order to build images, I’m going to use Paketo Buildpacks. Here are some details of my environment:

  1. Machine: MacBook Pro 32G RAM Intel 
  2. OS: macOS Ventura 13.1
  3. Kubernetes (v1.25.2) on Docker Desktop: 14G RAM + 4vCPU

We will use Java 17 for app compilation. In order to run load tests, I’m going to leverage the k6 tool. Our app is written in Spring Boot. It connects to the Mongo database running on the same instance of Kubernetes. Each time I’m testing a new JVM provider I’m removing the previous version of the app and database. Then I’m deploying the new, full configuration once again. We will measure the following parameters:

  1. App startup time (the best result and the average) – we will read it directly from the Spring Boot logs
  2. Throughput – with k6 we will simulate 5 and 10 virtual users. It will measure the number of processed requests
  3. The size of the image
  4. The RAM memory consumed by the pod during the load tests. Basically, we will execute the kubectl top pod command

We will also set the memory limit for the container to 1G. In our load tests, the app will insert data into the Mongo database. It exposes a REST endpoint invoked during the tests. To measure startup time as accurately as possible, I’ll restart the app several times.

Let’s take a look at the Deployment YAML manifest. It injects credentials to the Mongo database and sets the memory limit to 1G (as I already mentioned):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-spring-boot-on-kubernetes-deployment
spec:
  selector:
    matchLabels:
      app: sample-spring-boot-on-kubernetes
  template:
    metadata:
      labels:
        app: sample-spring-boot-on-kubernetes
    spec:
      containers:
      - name: sample-spring-boot-on-kubernetes
        image: piomin/sample-spring-boot-on-kubernetes
        ports:
        - containerPort: 8080
        env:
          - name: MONGO_DATABASE
            valueFrom:
              configMapKeyRef:
                name: mongodb
                key: database-name
          - name: MONGO_USERNAME
            valueFrom:
              secretKeyRef:
                name: mongodb
                key: database-user
          - name: MONGO_PASSWORD
            valueFrom:
              secretKeyRef:
                name: mongodb
                key: database-password
          - name: MONGO_URL
            value: mongodb
        readinessProbe:
          httpGet:
            port: 8080
            path: /readiness
            scheme: HTTP
          timeoutSeconds: 1
          periodSeconds: 10
          successThreshold: 1
          failureThreshold: 3
        resources:
          limits:
            memory: 1024Mi

Source Code and Images

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. You will also find all the images in my Docker Hub repository piomin/sample-spring-boot-on-kubernetes. Every single image is tagged with the vendor’s name.

Our Spring Boot app exposes several endpoints, but I’ll test the POST /persons endpoint for inserting data into Mongo. In the integration with Mongo, I’m using the Spring Data MongoDB project and its CRUD repository pattern.

// controller

@RestController
@RequestMapping("/persons")
public class PersonController {

   private PersonRepository repository;

   PersonController(PersonRepository repository) {
      this.repository = repository;
   }

   @PostMapping
   public Person add(@RequestBody Person person) {
      return repository.save(person);
   }

   // other endpoints implementation
}


// repository

public interface PersonRepository extends CrudRepository<Person, String> {

   Set<Person> findByFirstNameAndLastName(String firstName, 
                                          String lastName);
   Set<Person> findByAge(int age);
   Set<Person> findByAgeGreaterThan(int age);

}

The Size of the Image

The size of the image is the simplest metric to measure. If you would like to check what exactly is inside an image, you can use the dive tool. The difference in size between vendors results from the number of Java tools and binaries included inside. From my perspective, the smaller the size the better: I’d rather not ship anything inside the image except the stuff required to run my app successfully. But you may have a different case. Anyway, here’s the content of the app image for the Oracle JDK after executing the dive piomin/sample-spring-boot-on-kubernetes:oracle command. As you see, the JDK takes up most of the space.

jdk-kubernetes-dive

On the other hand, we can analyze the smallest image. I think it explains the differences in image size, since Zulu contains a JRE, not the whole JDK.

Here are the results, ordered from the smallest image to the biggest.

  • Azul Zulu: 271MB
  • IBM Semeru OpenJ9: 275MB
  • Eclipse Temurin: 286MB
  • BellSoft Liberica: 286MB
  • Oracle OpenJDK: 446MB
  • Alibaba Dragonwell: 459MB
  • Microsoft OpenJDK: 461MB
  • Amazon Corretto: 463MB

Let’s visualize our first results. I think it clearly shows which images contain a full JDK and which only a JRE.

jdk-kubernetes-memory

Startup Time

Honestly, it is not very easy to measure startup time, since the difference between the vendors is not large. Also, subsequent results for the same provider may differ a lot. For example, on the first try the app starts in 5.8s, and after a pod restart in 8.4s. My methodology was pretty simple: I restarted the app several times for each JDK provider to measure the average startup time and the fastest startup in the series. Then I repeated the same exercise to verify that the results are repeatable. The proportions between the first and second series of startup times for corresponding vendors were similar. In fact, the difference between the fastest and the slowest average startup time is not large. I got the best result for Eclipse Temurin (7.2s) and the worst for IBM Semeru OpenJ9 (9.05s).

Let’s see the full list of results. It shows the average startup time of the application from the fastest one.

  • Eclipse Temurin: 7.20s
  • Oracle OpenJDK: 7.22s
  • Amazon Corretto: 7.27s
  • BellSoft Liberica: 7.44s
  • Azul Zulu: 7.77s
  • Alibaba Dragonwell: 8.03s
  • Microsoft OpenJDK: 8.18s
  • IBM Semeru OpenJ9: 9.05s

Once again, here’s the graphical representation of our results. The differences between vendors are sometimes rather cosmetic. Maybe, if I repeated the same exercise from the beginning, the results would be quite different.

jdk-kubernetes-startup

As I mentioned before, I also measured the fastest attempt. This time the top 3 are Eclipse Temurin, Amazon Corretto, and BellSoft Liberica.

  • Eclipse Temurin: 5.6s
  • Amazon Corretto: 5.95s
  • BellSoft Liberica: 6.05s
  • Oracle OpenJDK: 6.1s
  • Azul Zulu: 6.2s
  • Alibaba Dragonwell: 6.45s
  • Microsoft OpenJDK: 6.9s
  • IBM Semeru OpenJ9: 7.85s

Memory

I’m measuring the memory usage of the app under heavy load, with a test simulating 10 users continuously sending requests. It gives a really large throughput at the level of the app, around 500 requests per second. The results are in line with expectations. Almost all the vendors have very similar memory usage except IBM Semeru, which uses the OpenJ9 JVM. In theory, OpenJ9 should also give us a better startup time. However, in my case, the significant difference is only in the memory footprint. For IBM Semeru the memory usage is around 135MB, while for the other vendors it varies in the range of 210-230MB.

  • IBM Semeru OpenJ9: 135M
  • Oracle OpenJDK: 211M
  • Azul Zulu: 215M
  • Alibaba Dragonwell: 216M
  • BellSoft Liberica: 219M
  • Microsoft OpenJDK: 219M
  • Amazon Corretto: 220M
  • Eclipse Temurin: 230M
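If you want to cross-check such numbers from inside the JVM (the figures above are container-level values), the standard MemoryMXBean exposes the current heap and non-heap usage. Here’s a stdlib-only sketch – it is not part of the original test setup:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class MemoryProbe {

    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();
        MemoryUsage nonHeap = memory.getNonHeapMemoryUsage();
        // Used heap plus non-heap is only a lower bound of the RSS
        // that Kubernetes metrics report for the container
        System.out.println("heap used: " + heap.getUsed() / (1024 * 1024) + "MB"
                + ", non-heap used: " + nonHeap.getUsed() / (1024 * 1024) + "MB");
    }
}
```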

Here’s the graphical visualization of our results:

Throughput

In order to generate high incoming traffic to the app, I used the k6 tool. It allows us to create tests in JavaScript. Here’s the implementation of our test. It calls the HTTP POST /persons endpoint with input data in JSON, and then verifies that the request has been successfully processed on the server side.

import http from 'k6/http';
import { check } from 'k6';

export default function () {

  const payload = JSON.stringify({
      firstName: 'aaa',
      lastName: 'bbb',
      age: 50,
      gender: 'MALE'
  });

  const params = {
    headers: {
      'Content-Type': 'application/json',
    },
  };

  const res = http.post(`http://localhost:8080/persons`, payload, params);

  check(res, {
    'is status 200': (r) => r.status === 200,
    'body size is > 0': (r) => r.body.length > 0,
  });
}

Here’s the k6 command for running our test. It is possible to define the duration and number of simultaneous virtual users. In the first step, I’m simulating 5 virtual users:

$ k6 run -d 90s -u 5 load-tests.js

Then, I’m running the tests for 10 virtual users twice per vendor.

$ k6 run -d 90s -u 10 load-tests.js

Here are the sample results printed after executing the k6 test:

I repeated the exercise for each JDK vendor. Here are the throughput results for 5 virtual users:

  • BellSoft Liberica: 451req/s
  • Amazon Corretto: 433req/s
  • IBM Semeru OpenJ9: 432req/s
  • Oracle OpenJDK: 420req/s
  • Microsoft OpenJDK: 418req/s
  • Azul Zulu: 414req/s
  • Eclipse Temurin: 407req/s
  • Alibaba Dragonwell: 405req/s

Here are the throughput results for 10 virtual users:

  • Eclipse Temurin: 580req/s
  • Azul Zulu: 567req/s
  • Microsoft OpenJDK: 561req/s
  • Oracle OpenJDK: 561req/s
  • IBM Semeru OpenJ9: 552req/s
  • Amazon Corretto: 552req/s
  • Alibaba Dragonwell: 551req/s
  • BellSoft Liberica: 540req/s

Final Thoughts

After repeating the load tests several times, I have to admit that there are no significant differences in performance between the JDK vendors. We were using the same JVM settings for testing (set by the Paketo Buildpack). Probably, the more tests I ran, the more similar the results between the different vendors would become. So, in summary, the results from my tweet have not been confirmed. Ok, so let’s get back to the question – which JDK to choose on Kubernetes?

It probably depends on where you are running your cluster. If, for example, it’s EKS on AWS, it’s worth using Amazon Corretto. However, if you are looking for the smallest image size, you should choose between Azul Zulu, IBM Semeru, BellSoft Liberica, and Adoptium Eclipse Temurin. Additionally, IBM Semeru will consume significantly less memory than the other distributions, since it is built on top of OpenJ9.

Don’t forget about best practices when deploying Java apps on Kubernetes. Here’s my article about it.

The post Which JDK to Choose on Kubernetes appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2023/02/17/which-jdk-to-choose-on-kubernetes/feed/ 18 14015
Useful & Unknown Java Libraries https://piotrminkowski.com/2023/01/30/useful-unknown-java-libraries/ https://piotrminkowski.com/2023/01/30/useful-unknown-java-libraries/#comments Mon, 30 Jan 2023 09:39:23 +0000 https://piotrminkowski.com/?p=13954 This article will teach you about some not famous but useful Java libraries. This is the second article in the “useful & unknown” series. The previous one described several attractive, but not well-known Java features. You can read more about it here. Today we will focus on Java libraries. Usually, we use several external libraries […]

The post Useful & Unknown Java Libraries appeared first on Piotr's TechBlog.

]]>
This article will teach you about some not famous but useful Java libraries. This is the second article in the “useful & unknown” series. The previous one described several attractive, but not well-known Java features. You can read more about it here.

Today we will focus on Java libraries. Usually, we use several external libraries in our projects – even if we do not include them directly. For example, Spring Boot comes with a defined set of dependencies included by starters. If we include e.g. spring-boot-starter-test, we get libraries like mockito, junit-jupiter, or hamcrest at the same time. Of course, these are libraries well known to the community.

In fact, there are a lot of different Java libraries. Usually, I don’t need many of them (or even any of them) when working with frameworks like Spring Boot or Quarkus. However, there are some very interesting libraries that may be useful everywhere. I’m writing about them because you might not have heard of any of them. I’m going to introduce 5 of my favorite “useful & unknown” Java libraries. Let’s begin!

Source Code

If you would like to try it by yourself, you may always take a look at my source code. To do that you need to clone my GitHub repository and then follow my instructions.

Instancio

First up is Instancio. How do you generate test data in your unit tests? Instancio will help us with that. It aims to reduce the time and lines of code spent on manual data setup in unit tests. It instantiates and populates objects with random data, making our tests more dynamic. We can generate random data with Instancio, but at the same time we can set custom data in a particular field.

Before we start with Instancio, let’s discuss our data model. Here’s the first class – Person:

public class Person {

   private Long id;
   private String name;
   private int age;
   private Gender gender;
   private Address address;

   // getters and setters ...

}

Our class contains three simple fields (id, name, age), a single enum Gender, and an instance of the Address class. Gender is just a simple enum containing MALE and FEMALE values. Here’s the implementation of the Address class:

public class Address {

   private String country;
   private String city;
   private String street;
   private int houseNumber;
   private int flatNumber;

   // getters and setters ...

}

Now, let’s create a test to check whether the Person service will successfully add and obtain objects from the store. We want to generate random data for all the fields except the id field, which is set by the service. Here’s our test:

@Test
void addAndGet() {
   Person person = Instancio.of(Person.class)
             .ignore(Select.field(Person::getId))
             .create();
   person = personService.addPerson(person);
   Assertions.assertNotNull(person.getId());
   person = personService.findById(person.getId());
   Assertions.assertNotNull(person);
   Assertions.assertNotNull(person.getAddress());
}

The values generated for my test run are visible below. As you see, the id field equals null. Other fields contain random values generated per the field type (String or int).

Person(id=null, name=ATDLCA, age=2619, gender=MALE, 
address=Address(country=FWOFRNT, city=AIRICCHGGG, street=ZZCIJDZ, houseNumber=5530, flatNumber=1671))

Let’s see how we can generate several objects with Instancio. Assuming we need 5 objects in the list for our test, we can do that in the following way. We will also set a constant value for the city fields inside the Address object. Then we would like to test the method for searching objects by the city name.

@Test
void addListAndGet() {
   final int numberOfObjects = 5;
   final String city = "Warsaw";
   List<Person> persons = Instancio.ofList(Person.class)
           .size(numberOfObjects)
           .set(Select.field(Address::getCity), city)
           .create();
   personService.addPersons(persons);
   persons = personService.findByCity(city);
   Assertions.assertEquals(numberOfObjects, persons.size());
}

Let’s take a look at the last example. The same as before, we are generating a list of objects – this time 100. We can easily specify the additional criteria for generated values. For example, I would like to set a value for the age field between 18 and 65.

@Test
void addGeneratorAndGet() {
   List<Person> persons = Instancio.ofList(Person.class)
            .size(100)
            .ignore(Select.field(Person::getId))
            .generate(Select.field(Person::getAge), 
                      gen -> gen.ints().range(18, 65))
            .create();
   personService.addPersons(persons);
   persons = personService.findAllGreaterThanAge(40);
   Assertions.assertTrue(persons.size() > 0);
}

That’s just a small subset of the customizations that Instancio offers for test data generation. You can read more about other options in their docs.

Datafaker

The next library we will discuss today is Datafaker. The purpose of this library is quite similar to the previous one. We need to generate random data. However, this time we need data that looks like real data. From my perspective, it is useful for demo presentations or examples running somewhere.

Datafaker creates fake data for your JVM programs within minutes, using a wide range of more than 100 data providers. This can be very helpful when generating test data to fill a database, generating data for a stress test, or anonymizing data from production services. Let’s include it in our dependencies.

<dependency>
  <groupId>net.datafaker</groupId>
  <artifactId>datafaker</artifactId>
  <version>1.7.0</version>
</dependency>

We will expand our sample model a little. Here’s a new class definition. The Contact class contains two fields: email and phoneNumber. We will validate both of these fields using the jakarta.validation module.

public class Contact {

   @Email
   private String email;
   @Pattern(regexp="\\d{2}-\\d{3}-\\d{2}-\\d{2}")
   private String phoneNumber;

   // getters and setters ...

}

Here’s a new version of our Person class that contains the Contact object instance:

public class Person {

   private Long id;
   private String name;
   private int age;
   private Gender gender;
   private Address address;
   @Valid
   private Contact contact;

   // getters and setters ...

}

Now, let’s generate fake data for the Person object. We can create localized data just by setting the Locale object in the Faker constructor (1). For me, it is Poland 🙂 There are a lot of providers for the standard values. To set email we need to use the Internet provider (2). There is a dedicated provider for generating phone numbers (3), addresses (4), and person names (5). You can see a full list of available providers here. After creating test data, we can run the test that adds a new Person verified on the server side.

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
public class PersonsControllerTests {

   @Autowired
   private TestRestTemplate restTemplate;

   @Test
   void add() {
      Faker faker = new Faker(Locale.of("pl")); // (1)
      Contact contact = new Contact();
      contact.setEmail(faker.internet().emailAddress()); // (2)
      contact.setPhoneNumber(faker.phoneNumber().cellPhone()); // (3)
      Address address = new Address();
      address.setCity(faker.address().city()); // (4)
      address.setCountry(faker.address().country());
      address.setStreet(faker.address().streetName());
      int number = Integer
         .parseInt(faker.address().streetAddressNumber());
      address.setHouseNumber(number);
      number = Integer.parseInt(faker.address().buildingNumber());
      address.setFlatNumber(number);
      Person person = new Person();
      person.setName(faker.name().fullName()); // (5)
      person.setContact(contact);
      person.setAddress(address);
      person.setGender(Gender.valueOf(
         faker.gender().binaryTypes().toUpperCase()));
      person.setAge(faker.number().numberBetween(18, 65));

      person = restTemplate
         .postForObject("/persons", person, Person.class);
      Assertions.assertNotNull(person);
      Assertions.assertNotNull(person.getId());
   }

}

Here’s the data generated during my test. I think you can find one inconsistency here (the country field) 😉

Person(id=null, name=Stefania Borkowski, age=51, gender=FEMALE, address=Address(country=Ekwador, city=Sępopol, street=al. Chudzik, houseNumber=882, flatNumber=318), contact=Contact{email='gilbert.augustyniak@gmail.com', phoneNumber='69-733-43-77'})

Sometimes you need to generate a more predictable random result. It’s possible to provide a seed value in the Faker constructor. When a seed is provided, the fake values will always be generated in a predictable way, which is handy when you need reproducible results across multiple runs. Here’s a new version of my Faker object declaration:

Faker faker = new Faker(Locale.of("pl"), new Random(0));
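The predictability comes from the underlying java.util.Random: two generators created with the same seed always produce the same sequence, and a seeded Faker simply draws from such a generator. Here’s a stdlib-only illustration of that property (not Datafaker code):

```java
import java.util.Random;

public class SeedDemo {

    public static void main(String[] args) {
        Random first = new Random(0);
        Random second = new Random(0);
        // Same seed -> identical sequences on every run
        for (int i = 0; i < 5; i++) {
            System.out.println(first.nextInt(100) + " == " + second.nextInt(100));
        }
    }
}
```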

JPA Streamer

Our next library is related to JPA queries. If you like to use Java streams and you are building apps that interact with databases through JPA or Hibernate, the JPA Streamer library may be an interesting choice. It is a library for expressing JPA/Hibernate/Spring queries using standard Java streams. JPA Streamer instantly gives Java developers a type-safe, expressive, and intuitive means of obtaining data in database applications. Moreover, you can easily integrate it with Spring Boot and Quarkus. First, let’s include JPA Streamer in our dependencies:

<dependency>
  <groupId>com.speedment.jpastreamer</groupId>
  <artifactId>jpastreamer-core</artifactId>
  <version>1.1.2</version>
</dependency>

If you want to integrate it with Spring Boot you need to add one additional dependency:

<dependency>
  <groupId>com.speedment.jpastreamer.integration.spring</groupId>
  <artifactId>spring-boot-jpastreamer-autoconfigure</artifactId>
  <version>1.1.2</version>
</dependency>

In order to test JPA Streamer, we need to create an example entities model.

@Entity
public class Employee {

   @Id
   @GeneratedValue(strategy = GenerationType.IDENTITY)
   private Integer id;
   private String name;
   private String position;
   private int salary;
   @ManyToOne(fetch = FetchType.LAZY)
   private Department department;
   @ManyToOne(fetch = FetchType.LAZY)
   private Organization organization;

   // getters and setters ...

}

There are also two other entities: Organization and Department. Here are their definitions:

@Entity
public class Department {
   @Id
   @GeneratedValue(strategy = GenerationType.IDENTITY)
   private Integer id;
   private String name;
   @OneToMany(mappedBy = "department")
   private Set<Employee> employees;
   @ManyToOne(fetch = FetchType.LAZY)
   private Organization organization;

   // getters and setters ...
}

@Entity
public class Organization {
   @Id
   @GeneratedValue(strategy = GenerationType.IDENTITY)
   private Integer id;
   private String name;
   @OneToMany(mappedBy = "organization")
   private Set<Department> departments;
   @OneToMany(mappedBy = "organization")
   private Set<Employee> employees;

   // getters and setters ...
}

Now, we can prepare some queries using the Java streams pattern. In the following fragment of code, we are searching for an entity by id and then joining two relations. By default, it is a LEFT JOIN, but we can customize that when calling the joining() method. We join Department and Organization, which are in a @ManyToOne relationship with the Employee entity. Then we filter the result, convert the object to a DTO, and pick the first result.

@GetMapping("/{id}")
public EmployeeWithDetailsDTO findById(@PathVariable("id") Integer id) {
   return streamer.stream(of(Employee.class)
           .joining(Employee$.department)
           .joining(Employee$.organization))
        .filter(Employee$.id.equal(id))
        .map(EmployeeWithDetailsDTO::new)
        .findFirst()
        .orElseThrow();
}

Of course, we can call many other Java stream methods. In the following fragment of code, we count the number of employees assigned to the particular department.

@GetMapping("/{id}/count-employees")
public long getNumberOfEmployees(@PathVariable("id") Integer id) {
   return streamer.stream(Department.class)
         .filter(Department$.id.equal(id))
         .map(Department::getEmployees)
         .mapToLong(Set::size)
         .sum();
}

If you are looking for a detailed explanation and more examples with JPA Streamer, you can read my article dedicated to that topic.

Blaze Persistence

Blaze Persistence is another library from the JPA and Hibernate area. It allows you to write complex queries with a consistent builder API – a rich Criteria API for JPA providers. That’s not all. You can also use the Entity-View module dedicated to DTO mapping. Of course, you can easily integrate it with Spring Boot or Quarkus. If you want to use all Blaze Persistence modules in your app, it is worth adding the dependencyManagement section in your Maven pom.xml:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>com.blazebit</groupId>
            <artifactId>blaze-persistence-bom</artifactId>
            <version>1.6.8</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>    
    </dependencies>
</dependencyManagement>

Personally, I’m using Blaze Persistence for DTO mapping. Thanks to the integration with Spring Boot we can replace Spring Data Projections with Blaze Persistence Entity-Views. It will be especially useful for more advanced mappings since Blaze Persistence offers more features and better performance for that. You can read a detailed comparison in the following article. If we want to integrate Blaze Persistence Entity-Views with Spring Data we should add the following dependencies:

<dependency>
  <groupId>com.blazebit</groupId>
  <artifactId>blaze-persistence-integration-spring-data-2.7</artifactId>
</dependency>
<dependency>
  <groupId>com.blazebit</groupId>
  <artifactId>blaze-persistence-integration-hibernate-5.6</artifactId>
  <scope>runtime</scope>
</dependency>
<dependency>
  <groupId>com.blazebit</groupId>
  <artifactId>blaze-persistence-entity-view-processor</artifactId>
</dependency>

Then, we need to create an interface with getters for the mapped fields. It should be annotated with @EntityView, which refers to the target entity class. In the following example, we are mapping two entity fields, firstName and lastName, to a single field inside the PersonView object. In order to map the entity’s primary key, we should use the @IdMapping annotation.

@EntityView(Person.class)
public interface PersonView {

   @IdMapping
   Integer getId();
   void setId(Integer id);

   @Mapping("CONCAT(firstName,' ',lastName)")
   String getName();
   void setName(String name);

}

We can still take advantage of the Spring Data repository pattern. Our repository interface needs to extend the EntityViewRepository interface.

@Transactional(readOnly = true)
public interface PersonViewRepository 
    extends EntityViewRepository<PersonView, Integer> {

    PersonView findByAgeGreaterThan(int age);

}

We also need to provide some additional configuration and enable Blaze Persistence in the main or the configuration class:

@SpringBootApplication
@EnableBlazeRepositories
@EnableEntityViews
public class PersonApplication {

   public static void main(String[] args) {
      SpringApplication.run(PersonApplication.class, args);
   }

}

Hoverfly

Finally, the last of the Java libraries on my list – Hoverfly. To be more precise, we will use the Java version of the Hoverfly library documented here. It is a lightweight service virtualization tool that allows you to stub or simulate HTTP(S) services. Hoverfly Java is a native language binding that gives you an expressive API for managing Hoverfly in Java. It gives you a Hoverfly class which abstracts away the binary and API calls, a DSL for creating simulations, and a JUnit integration for using it within unit tests.

Ok, there are some other, similar libraries… but somehow I really like Hoverfly 🙂 It is a simple, lightweight library that can perform tests in different modes like simulation, spying, capture, or diffing. You can use the Java DSL to build request-matcher-to-response mappings. Let’s include the latest version of Hoverfly in the Maven dependencies:

<dependency>
  <groupId>io.specto</groupId>
  <artifactId>hoverfly-java-junit5</artifactId>
  <version>0.14.3</version>
</dependency>

Let’s assume we have the following method in our Spring @RestController. Before returning a ping response for itself, it calls another service under the address http://callme-service:8080/callme/ping.

@GetMapping("/ping")
public String ping() {
   String response = restTemplate
     .getForObject("http://callme-service:8080/callme/ping", 
                   String.class);
   LOGGER.info("Calling: response={}", response);
   return "I'm caller-service " + version + ". Calling... " + response;
}

Now, we will create the test for our controller. In order to use Hoverfly to intercept outgoing traffic, we register HoverflyExtension (1). Then we may use the Hoverfly object to create a request matcher and simulate an HTTP response (2). The simulated response body is I'm callme-service v1.

@SpringBootTest(properties = {"VERSION = v2"}, 
    webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@ExtendWith(HoverflyExtension.class) // (1)
public class CallerCallmeTest {
    
   @Autowired
   TestRestTemplate restTemplate;

   @Test
   void callmeIntegration(Hoverfly hoverfly) {
      hoverfly.simulate(
            dsl(service("http://callme-service:8080")
               .get("/callme/ping")
               .willReturn(success().body("I'm callme-service v1.")))
      ); // (2)
      String response = restTemplate
         .getForObject("/caller/ping", String.class);
      assertEquals("I'm caller-service v2. Calling... I'm callme-service v1.", response);
   }
}

We can easily customize Hoverfly behavior with the @HoverflyConfig annotation. By default, Hoverfly works in proxy mode. Assuming we want it to act as a web server, we need to set the webServer property to true (1). After that, it will listen for requests on localhost and the port indicated by the proxyPort property. In the next step, we will also use Spring Cloud @LoadBalancerClient to configure a static list of target URLs instead of dynamic discovery (2). Finally, we can create a Hoverfly test. This time we are intercepting traffic to the web server listening on localhost:8080 (3).

@SpringBootTest(webEnvironment = 
   SpringBootTest.WebEnvironment.RANDOM_PORT)
@HoverflyCore(config = 
   @HoverflyConfig(logLevel = LogLevel.DEBUG, 
                    webServer = true, 
                    proxyPort = 8080)) // (1)
@ExtendWith(HoverflyExtension.class)
@LoadBalancerClient(name = "account-service", 
                    configuration = AccountServiceConf.class) // (2)
public class GatewayTests {

    @Autowired
    TestRestTemplate restTemplate;

    @Test
    public void findAccounts(Hoverfly hoverfly) {
        hoverfly.simulate(dsl(
            service("http://localhost:8080")
                .andDelay(200, TimeUnit.MILLISECONDS).forAll()
                .get(any())
                .willReturn(success("[{\"id\":\"1\",\"number\":\"1234567890\",\"balance\":5000}]", "application/json")))); // (3)

        ResponseEntity<String> response = restTemplate
                .getForEntity("/account/1", String.class);
        Assertions.assertEquals(200, response.getStatusCodeValue());
        Assertions.assertNotNull(response.getBody());
    }
}

Here’s the load balancer client configuration created just for test purposes.

class AccountServiceInstanceListSuppler implements 
    ServiceInstanceListSupplier {

    private final String serviceId;

    AccountServiceInstanceListSuppler(String serviceId) {
        this.serviceId = serviceId;
    }

    @Override
    public String getServiceId() {
        return serviceId;
    }

    @Override
    public Flux<List<ServiceInstance>> get() {
        return Flux.just(Arrays
                .asList(new DefaultServiceInstance(serviceId + "1", 
                        serviceId, 
                        "localhost", 8080, false)));
    }
}

Final Thoughts

As you have probably figured out, I used all these Java libraries with Spring Boot apps. Although Spring Boot comes with a defined set of external libraries, sometimes we may need some add-ons. The Java libraries I presented are usually created to solve a single, particular problem, e.g. test data generation. That’s totally fine from my perspective. I hope you will find at least one item from my list useful in your projects.

The post Useful & Unknown Java Libraries appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2023/01/30/useful-unknown-java-libraries/feed/ 6 13954
Native Java with GraalVM and Virtual Threads on Kubernetes https://piotrminkowski.com/2023/01/04/native-java-with-graalvm-and-virtual-threads-on-kubernetes/ https://piotrminkowski.com/2023/01/04/native-java-with-graalvm-and-virtual-threads-on-kubernetes/#comments Wed, 04 Jan 2023 12:23:21 +0000 https://piotrminkowski.com/?p=13847 In this article, you will learn how to use virtual threads, build a native image with GraalVM and run such the Java app on Kubernetes. Currently, the native compilation (GraalVM) and virtual threads (Project Loom) are probably the hottest topics in the Java world. They improve the general performance of your app including memory usage […]

The post Native Java with GraalVM and Virtual Threads on Kubernetes appeared first on Piotr's TechBlog.

]]>
In this article, you will learn how to use virtual threads, build a native image with GraalVM, and run such a Java app on Kubernetes. Currently, native compilation (GraalVM) and virtual threads (Project Loom) are probably the hottest topics in the Java world. They improve the general performance of your app, including memory usage and startup time. Since startup time and memory usage have always been a problem for Java, the expectations for native images and virtual threads are really big.

Of course, we usually consider such performance issues within the context of microservices or serverless apps. They should not consume many OS resources and should be easily auto-scalable. We can easily control resource usage on Kubernetes. If you are interested in Java virtual threads you can read my previous article about using them to create an HTTP server available here. For more details about Knative as serverless on Kubernetes, you can refer to the following article.

Introduction

Let’s start with the plan for our exercise today. In the first step, we will create a simple Java web app that uses virtual threads for processing incoming HTTP requests. Before we run the sample app we will install Knative on Kubernetes to quickly test autoscaling based on HTTP traffic. We will also install Prometheus on Kubernetes. This monitoring stack allows us to compare the performance of the app without/with GraalVM and virtual threads on Kubernetes. Then, we can proceed with the deployment. In order to easily build and run our native app on Kubernetes we will use Cloud Native Buildpacks. Finally, we will perform some load tests and compare metrics.

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. After that, you should follow my instructions.

Create Java App with Virtual Threads

In the first step, we will create a simple Java app that acts as an HTTP server and handles incoming requests. In order to do that, we can use the HttpServer object from the core Java API. Once we create the server, we can override the default thread executor with the setExecutor method. In the end, we will try to compare the app using standard threads with the same app using virtual threads. Therefore, we allow overriding the type of executor using an environment variable named THREAD_TYPE. If you want to enable virtual threads, you need to set its value to virtual. Here’s the main method of our app.

public class MainApp {

   public static void main(String[] args) throws IOException {
      HttpServer httpServer = HttpServer
         .create(new InetSocketAddress(8080), 0);

      httpServer.createContext("/example", 
         new SimpleCPUConsumeHandler());

      if ("virtual".equals(System.getenv("THREAD_TYPE"))) { // null-safe when the env is unset
         httpServer.setExecutor(
            Executors.newVirtualThreadPerTaskExecutor());
      } else {
         httpServer.setExecutor(Executors.newFixedThreadPool(200));
      }
      httpServer.start();
   }

}

In order to process incoming requests, the HTTP server uses a handler that implements the HttpHandler interface. In our case, the handler is implemented inside the SimpleCPUConsumeHandler class, as shown below. It consumes a lot of CPU, since it creates an instance of BigInteger using the constructor that generates a random probable prime and therefore performs a lot of computation under the hood. It also consumes some time, so we get a simulation of processing time in the same step. As a response, we just return the next number in the sequence with the Hello_ prefix.

public class SimpleCPUConsumeHandler implements HttpHandler {

   Logger LOG = Logger.getLogger("handler");
   AtomicLong i = new AtomicLong();
   final Integer cpus = Runtime.getRuntime().availableProcessors();

   @Override
   public void handle(HttpExchange exchange) throws IOException {
      new BigInteger(1000, 3, new Random());
      String response = "Hello_" + i.incrementAndGet();
      LOG.log(Level.INFO, "(CPU->{0}) {1}", 
         new Object[] {cpus, response});
      exchange.sendResponseHeaders(200, response.length());
      OutputStream os = exchange.getResponseBody();
      os.write(response.getBytes());
      os.close();
   }
}
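To be precise, the three-argument BigInteger constructor used above generates a random, probably prime number of the given bit length, and that primality search is what burns the CPU. You can observe the cost in isolation with plain stdlib code (timings will of course vary per machine):

```java
import java.math.BigInteger;
import java.util.Random;

public class PrimeCost {

    public static void main(String[] args) {
        long start = System.nanoTime();
        // The same call as in the handler: searches for a 1000-bit probable prime
        BigInteger prime = new BigInteger(1000, 3, new Random());
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("bitLength=" + prime.bitLength() + ", took " + elapsedMs + "ms");
    }
}
```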

In order to use virtual threads in Java 19 we need to enable preview mode during compilation. With Maven we need to enable preview features using maven-compiler-plugin as shown below.

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <version>3.10.1</version>
  <configuration>
    <release>19</release>
    <compilerArgs>
      --enable-preview
    </compilerArgs>
  </configuration>
</plugin>

Install Knative on Kubernetes

This and the next step are not required to run the native application on Kubernetes. We will use Knative to easily autoscale the app in reaction to the volume of incoming traffic. In the next section, I’ll describe how to run a monitoring stack on Kubernetes.

The simplest way to install Knative on Kubernetes is with the kubectl command. We just need the Knative Serving component without any additional features. The Knative CLI (kn) is not required. We will deploy the application from the YAML manifest using Skaffold.

First, let’s install the required custom resources with the following command:

$ kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.8.3/serving-crds.yaml

Then, we can install the core components of Knative Serving by running the command:

$ kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.8.3/serving-core.yaml

In order to access Knative services outside of the Kubernetes cluster we also need to install a networking layer. By default, Knative uses Kourier as an ingress. We can install the Kourier controller by running the following command.

$ kubectl apply -f https://github.com/knative/net-kourier/releases/download/knative-v1.8.1/kourier.yaml

Finally, let’s configure Knative Serving to use Kourier with the following command:

$ kubectl patch configmap/config-network \
  --namespace knative-serving \
  --type merge \
  --patch '{"data":{"ingress-class":"kourier.ingress.networking.knative.dev"}}'

If you don’t have an external domain configured, or you are running Knative on a local cluster, you need to configure DNS. Otherwise, you would have to pass a Host header to every curl command. Knative provides a Kubernetes Job that sets sslip.io as the default DNS suffix.

$ kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.8.3/serving-default-domain.yaml

The generated URL contains the name of the service, the namespace, and the address of your Kubernetes cluster. Since I’m running my service on the local Kubernetes cluster in the demo-sless namespace, my service is available at http://sample-java-concurrency.demo-sless.127.0.0.1.sslip.io.

But before we deploy the sample app on Knative, let’s do some other things.

Install Prometheus Stack on Kubernetes

As I mentioned before, we can also install a monitoring stack on Kubernetes.

The simplest way to install it is with the kube-prometheus-stack Helm chart. The package contains Prometheus and Grafana. It also includes all required rules and dashboards to visualize the basic metrics of your Kubernetes cluster. Firstly, let’s add the Helm repository containing our chart:

$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

Then we can install the kube-prometheus-stack Helm chart in the prometheus namespace with the following command:

$ helm install prometheus-stack prometheus-community/kube-prometheus-stack  \
    -n prometheus \
    --create-namespace

If everything goes fine, you should see a similar list of Kubernetes services:

$ kubectl get svc -n prometheus
NAME                                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
alertmanager-operated                       ClusterIP   None             <none>        9093/TCP,9094/TCP,9094/UDP   11s
prometheus-operated                         ClusterIP   None             <none>        9090/TCP                     10s
prometheus-stack-grafana                    ClusterIP   10.96.218.142    <none>        80/TCP                       23s
prometheus-stack-kube-prom-alertmanager     ClusterIP   10.105.10.183    <none>        9093/TCP                     23s
prometheus-stack-kube-prom-operator         ClusterIP   10.98.190.230    <none>        443/TCP                      23s
prometheus-stack-kube-prom-prometheus       ClusterIP   10.111.158.146   <none>        9090/TCP                     23s
prometheus-stack-kube-state-metrics         ClusterIP   10.100.111.196   <none>        8080/TCP                     23s
prometheus-stack-prometheus-node-exporter   ClusterIP   10.102.39.238    <none>        9100/TCP                     23s

We will analyze Grafana dashboards with memory and CPU statistics. We can enable port-forward to access it locally on the defined port, for example 9080:

$ kubectl port-forward svc/prometheus-stack-grafana 9080:80 -n prometheus

The default username for Grafana is admin and password prom-operator.

We will create two panels in a custom Grafana dashboard. The first of them will show the memory usage per pod in the demo-sless namespace.

sum(container_memory_working_set_bytes{namespace="demo-sless"} / (1024 * 1024)) by (pod)

The second of them will show the average CPU usage per pod in the demo-sless namespace. You can import both panels directly into Grafana from the k8s/grafana-dasboards.json file in the GitHub repo.

rate(container_cpu_usage_seconds_total{namespace="demo-sless"}[3m])

Build and Deploy a native Java Application

We have already created the sample app and then configured the Kubernetes environment. Now, we may proceed to the deployment phase. Our goal here is to simplify the process of building a native image and running it on Kubernetes as much as possible. Therefore, we will use Cloud Native Buildpacks and Skaffold. With Buildpacks we don’t need to have anything installed on our laptop besides Docker. Skaffold can be easily integrated with Buildpacks to automate the whole process of building and running the app on Kubernetes. You just need to install the skaffold CLI on your machine.

For building a native image of a Java application we may use Paketo Buildpacks. It provides a dedicated buildpack for GraalVM called Paketo GraalVM Buildpack. We should include it in the configuration using the paketo-buildpacks/graalvm name. Since Skaffold supports Buildpacks, we should set all the properties inside the skaffold.yaml file. We need to override some default settings with environment variables. First of all, we have to set the version of Java to 19 and enable preview features (virtual threads). The Kubernetes deployment manifest is available under the k8s/deployment.yaml path.

apiVersion: skaffold/v2beta29
kind: Config
metadata:
  name: sample-java-concurrency
build:
  artifacts:
  - image: piomin/sample-java-concurrency
    buildpacks:
      builder: paketobuildpacks/builder:base
      buildpacks:
        - paketo-buildpacks/graalvm
        - paketo-buildpacks/java-native-image
      env:
        - BP_NATIVE_IMAGE=true
        - BP_JVM_VERSION=19
        - BP_NATIVE_IMAGE_BUILD_ARGUMENTS=--enable-preview
  local:
    push: true
deploy:
  kubectl:
    manifests:
    - k8s/deployment.yaml

Knative simplifies not only autoscaling but also Kubernetes manifests. Here’s the manifest for our sample app, available in the k8s/deployment.yaml file. We only need to define a single Service object containing the details of the application container. We will change the autoscaling target from the default 200 concurrent requests to 80. This means that if a single instance of the app processes more than 80 requests simultaneously, Knative will create a new instance of the app (or a pod, to be more precise). In order to enable virtual threads for our app, we also need to set the THREAD_TYPE environment variable to virtual.

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: sample-java-concurrency
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/target: "80"
    spec:
      containers:
        - name: sample-java-concurrency
          image: piomin/sample-java-concurrency
          ports:
            - containerPort: 8080
          env:
            - name: THREAD_TYPE
              value: virtual
            - name: JAVA_TOOL_OPTIONS
              value: --enable-preview
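The article doesn’t show how the application consumes THREAD_TYPE, but the wiring could plausibly look like the sketch below. The chooseExecutor helper and the fallback pool size of 100 are my assumptions, not the repository’s actual code:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicBoolean;

public class ExecutorConfig {

    // Hypothetical helper: picks the executor based on the THREAD_TYPE
    // environment variable set in the Knative manifest above.
    static ExecutorService chooseExecutor(String threadType) {
        if ("virtual".equalsIgnoreCase(threadType)) {
            return Executors.newVirtualThreadPerTaskExecutor();
        }
        // Platform-thread fallback; the pool size here is an assumption.
        return Executors.newFixedThreadPool(100);
    }

    public static void main(String[] args) throws Exception {
        ExecutorService executor = chooseExecutor(System.getenv("THREAD_TYPE"));
        AtomicBoolean onVirtual = new AtomicBoolean();
        // Check whether a submitted task actually runs on a virtual thread.
        executor.submit(() -> onVirtual.set(Thread.currentThread().isVirtual())).get();
        executor.shutdown();
        System.out.println("running on virtual thread: " + onVirtual.get());
    }
}
```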

Assuming you already installed Skaffold, the only thing you need to do is to run the following command:

$ skaffold run -n demo-sless

Or you can just deploy a ready-made image from my registry on Docker Hub. In that case, however, you need to change the image tag in the deployment.yaml manifest to virtual-native.

Once you deploy the app, you can verify the list of Knative Services. The name of our target service is sample-java-concurrency. The address of the service is returned in the URL field.

$ kn service list -n demo-sless

Load Testing

We will run three testing scenarios today. In the first, we will test standard compilation with a standard thread pool of size 100. In the second, we will test standard compilation with virtual threads. The final test will check native compilation in conjunction with virtual threads. In all these scenarios, we will set the same autoscaling target of 80 concurrent requests. I’m using the k6 tool for load tests. Each test scenario consists of the same four steps, each taking 2 minutes. In the first step, we simulate 50 users.

$ k6 run -u 50 -d 120s k6-test.js

Then, we are simulating 100 users.

$ k6 run -u 100 -d 120s k6-test.js

Finally, we run the test for 200 users twice. So, in total, there are four tests with 50, 100, 200, and 200 users, which takes 8 minutes.

$ k6 run -u 200 -d 120s k6-test.js

Let’s verify the results. By the way, here is our k6 test script, written in JavaScript.

import http from 'k6/http';
import { check } from 'k6';

export default function () {
  const res = http.get(`http://sample-java-concurrency.demo-sless.127.0.0.1.sslip.io/example`);
  check(res, {
    'is status 200': (res) => res.status === 200,
    'body size is > 0': (r) => r.body.length > 0,
  });
}

Test for Standard Compilation and Threads

The diagram visible below shows memory usage at each phase of the test scenario. After simulating 200 users, Knative scales up the number of instances. Theoretically, it should have done that already during the 100-user test, but Knative measures incoming traffic at the level of the sidecar container inside the pod. The memory usage for the first instance is around ~900MB (it also includes the sidecar container usage).

graalvm-virtual-threads-kubernetes-memory

Here’s a similar view as before, but for CPU usage. The highest consumption, ~1.2 cores, occurred just before autoscaling kicked in. Then, depending on the number of instances, it ranged from ~0.4 to ~0.7 cores. As I mentioned before, we are using a time-consuming BigInteger constructor to simulate CPU usage under heavy load.

graalvm-virtual-threads-kubernetes-cpu

Here are the test results for 50 users. The application was able to process ~105k requests in 2 minutes. The highest processing time value was ~3 seconds.

graalvm-virtual-threads-kubernetes-load-test

Here are the test results for 100 users. The application was able to process ~130k requests in 2 minutes with an average response time of ~90ms.

graalvm-virtual-threads-kubernetes-heavy-load

Finally, we have the results for the 200-user test. The application was able to process ~135k requests in 2 minutes with an average response time of ~175ms. The failure rate was 0.02%.

Test for Standard Compilation and Virtual Threads

As in the previous section, here’s the diagram that shows memory usage at each phase of the test scenario. After simulating 100 users, Knative scales up the number of instances. Theoretically, it should run a third instance of the app for 200 users. The memory usage for the first instance is around ~850MB (it also includes the sidecar container usage).

graalvm-virtual-threads-kubernetes-memory-2

Here’s a similar view as before, but for CPU usage. The highest consumption, ~1.1 cores, occurred just before autoscaling kicked in. Then, depending on the number of instances, it ranged from ~0.3 to ~0.7 cores.

Here are the test results for 50 users. The application was able to process ~105k requests in 2 minutes. The highest processing time value was ~2.2 seconds.

Here are the test results for 100 users. The application was able to process ~115k requests in 2 minutes with an average response time of ~100ms.

Finally, we have the results for the 200-user test. The application was able to process ~135k requests in 2 minutes with an average response time of ~180ms. The failure rate was 0.02%.

Test for Native Compilation and Virtual Threads

As in the previous section, here’s the diagram that shows memory usage at each phase of the test scenario. After simulating 100 users, Knative scales up the number of instances. Theoretically, it should run a third instance of the app for 200 users (the third pod visible on the diagram was in fact in the Terminating phase for some time). The memory usage for the first instance is around ~50MB.

graalvm-virtual-threads-kubernetes-native-memory

Here’s a similar view as before, but for CPU usage. The highest consumption, ~1.3 cores, occurred just before autoscaling kicked in. Then, depending on the number of instances, it ranged from ~0.3 to ~0.9 cores.

Here are the test results for 50 users. The application was able to process ~75k requests in 2 minutes. The highest processing time value was ~2 seconds.

Here are the test results for 100 users. The application was able to process ~85k requests in 2 minutes with an average response time of ~140ms.

Finally, we have the results for the 200-user test. The application was able to process ~100k requests in 2 minutes with an average response time of ~240ms. Moreover, there were no failures during the second 200-user attempt.

Summary

In this article, I tried to compare the behavior of a Java app built with GraalVM native compilation and virtual threads on Kubernetes against the standard approach. Here are several conclusions from the tests described above:

  • There are no significant differences between standard and virtual threads when it comes to resource usage or request processing time. Resource usage is slightly lower for virtual threads, while processing time is slightly lower for standard threads. However, if our handler method took more time, this proportion would change in favor of virtual threads.
  • Autoscaling works noticeably better for virtual threads, although I’m not sure why 🙂 In any case, the number of instances was scaled up for 100 users with a target of 80 for virtual threads, but not for standard threads. Virtual threads also give us more flexibility when setting an autoscaling target: for standard threads, we have to choose a value lower than the thread pool size, while for virtual threads we can set any reasonable value.
  • Native compilation significantly reduces the app’s memory usage. For the native app, it was ~50MB instead of ~900MB. On the other hand, CPU consumption was slightly higher for the native app.
  • The native app processes requests more slowly than the standard app. Across all the tests, it handled about 30% fewer requests than the standard app.

The post Native Java with GraalVM and Virtual Threads on Kubernetes appeared first on Piotr's TechBlog.

Java HTTP Server and Virtual Threads https://piotrminkowski.com/2022/12/22/java-http-server-and-virtual-threads/ https://piotrminkowski.com/2022/12/22/java-http-server-and-virtual-threads/#comments Thu, 22 Dec 2022 08:43:53 +0000 https://piotrminkowski.com/?p=13822 In this article, you will learn how to create an HTTP server with Java and use virtual threads for handling incoming requests. We will compare this solution with an HTTP server that uses a standard thread pool. Our test will compare memory usage in both scenarios under a heavy load of around 200 concurrent requests. […]

In this article, you will learn how to create an HTTP server with Java and use virtual threads for handling incoming requests. We will compare this solution with an HTTP server that uses a standard thread pool. Our test will compare memory usage in both scenarios under a heavy load of around 200 concurrent requests.

If you like articles about Java, you can also read my post about unknown and useful Java features. This is not my first article about virtual threads: I have already written about Java 19 virtual threads and their support in the Quarkus framework in this article.

Source Code

If you would like to try it yourself, you can always take a look at my source code. To do that, you need to clone my GitHub repository. After that, follow my instructions.

Prerequisites

In order to do the exercise on your laptop you need to have JDK 19+ and Maven installed.

Enable Virtual Threads

Even if you have Java 19, that’s not all. Since virtual threads are still a preview feature in Java 19, we need to enable them during compilation. With Maven, we enable preview features using the maven-compiler-plugin as shown below.

<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <version>3.10.1</version>
      <configuration>
        <release>19</release>
        <compilerArgs>
          <arg>--enable-preview</arg>
        </compilerArgs>
      </configuration>
    </plugin>
  </plugins>
</build>

Create HTTP Server with Virtual Threads

We don’t need much to create an HTTP or even HTTPS server with Java. The JDK’s built-in HttpServer class allows us to achieve it very easily. Once we create the server, we can override the default thread executor with the setExecutor method. No matter which type of executor we choose, there is one requirement our server must fulfill: it needs to be able to handle 200 requests simultaneously. Therefore, for standard Java threads, we will create a pool with a maximum size of 200. For virtual threads, there is no point in creating a pool. They consume very few resources because, unlike platform threads, they are not tied directly to OS threads.
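To illustrate why no pool is needed, here’s a small standalone sketch of mine (the thread count is arbitrary) that spawns 10,000 virtual threads directly; creating 10,000 platform threads this way would be prohibitively expensive:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class ManyVirtualThreads {

    // Spawns `count` virtual threads and returns how many of them actually ran.
    static int spawn(int count) throws InterruptedException {
        AtomicInteger completed = new AtomicInteger();
        CountDownLatch latch = new CountDownLatch(count);
        for (int i = 0; i < count; i++) {
            Thread.ofVirtual().start(() -> {
                completed.incrementAndGet();
                latch.countDown();
            });
        }
        latch.await(); // wait until every virtual thread has finished
        return completed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(spawn(10_000)); // prints 10000
    }
}
```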

Let’s take a look at the fragment of code visible below. That’s our method for creating an HTTP server. It will listen on port 8080 (1) under the /example context path (2). The SimpleDelayedHandler object handles all incoming requests. Depending on the value of the withLock variable, it will simulate a delay without locking (false) or with a ReentrantLock (true). In order to simplify the exercise, we can switch between the standard (4) and virtual thread executors (3) using a single boolean parameter. After setting all the required parameters, we can start the server (5).

private static void runServer(boolean virtual, boolean withLock) 
      throws IOException {
   
   HttpServer httpServer = HttpServer
         .create(new InetSocketAddress(8080), 0); // (1)

   httpServer.createContext("/example", 
      new SimpleDelayedHandler(withLock)); // (2)
   
   if (virtual) {
      httpServer.setExecutor(
            Executors.newVirtualThreadPerTaskExecutor()
      ); // (3)
   } else {
      httpServer.setExecutor(
            Executors.newFixedThreadPool(200)
      ); // (4)
   }

   httpServer.start(); // (5)
}

Then, we need to call the runServer method from the main method. We will test four scenarios depending on the values of the two input arguments. We will discuss them in the next section.

public static void main(String[] args) throws IOException {
   runServer(true, false);
}

After running the server you can make a test call using the following command:

$ curl http://localhost:8080/example

Build Test Scenarios

As mentioned before, we will run four test scenarios. In the first two, we simply compare the performance of the HTTP server with a standard thread pool and with virtual threads. We simulate processing time with the Thread.sleep method. In the next two scenarios, we simulate the usage of a worker pool (1). For example, this can resemble using a JDBC connection pool in a REST app. There are 50 workers handling 200 requests (2). These workers also delay thread execution with the Thread.sleep method, but this time they acquire a lock at the beginning of execution and release it at the end.

Depending on the value of the withLock input argument, we will either use the worker pool (3) or just sleep the thread (4). In both cases, we finally return the response Ping_ followed by an incremented number (5), represented by an AtomicLong object. Here’s the implementation of our handler.

public class SimpleDelayedHandler implements HttpHandler {

   private final List<SimpleWork> workers = 
      new ArrayList<>(); // (1)
   private final int workersCount = 50;
   private final boolean withLock;
   AtomicLong id = new AtomicLong();

   public SimpleDelayedHandler(boolean withLock) {
      this.withLock = withLock;
      if (withLock) {
         for (int i = 0; i < workersCount; i++) { // (2)
            workers.add(new SimpleWork());
         }
      }
   }

   @Override
   public void handle(HttpExchange t) throws IOException {
      String response = null;
      if (withLock) {
         response = workers
            .get((int) (id.incrementAndGet() % workersCount))
            .doJob();
      } else {
         try {
            Thread.sleep(200);
         } catch (InterruptedException e) {
            throw new RuntimeException(e);
         }
         response = "Ping_" + id.incrementAndGet();
      }

      t.sendResponseHeaders(200, response.length());
      OutputStream os = t.getResponseBody();
      os.write(response.getBytes());
      os.close();
   }
}

Here’s the implementation of our worker. As you can see, it also sleeps the thread (this time for 100 milliseconds). However, during that time it holds a lock on the worker object. Since we have 50 worker objects in the pool, each with its own lock, only 50 threads may execute at the same time. The others will wait until a lock is released.

public class SimpleWork {

   AtomicLong id = new AtomicLong();
   ReentrantLock lock = new ReentrantLock();

   public String doJob() {
      String response = null;
      lock.lock();
      try {
         Thread.sleep(100);
         response = "Ping_" + id.incrementAndGet();
      } catch (InterruptedException e) {
         throw new RuntimeException();
      } finally {
         lock.unlock();
      }
      return response;
   }

}
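The locking behavior can be reproduced in isolation. In this sketch of mine (with the Thread.sleep removed so it finishes quickly), many virtual threads hammer a single worker, and the ReentrantLock serializes them so no increment is lost:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.locks.ReentrantLock;

public class LockedWorkerDemo {

    private long id = 0; // deliberately not atomic; the lock guards it
    private final ReentrantLock lock = new ReentrantLock();

    String doJob() {
        lock.lock();
        try {
            return "Ping_" + (++id); // only one thread at a time gets here
        } finally {
            lock.unlock();
        }
    }

    // Runs `threads` virtual threads against one worker and returns the final id.
    static long run(int threads) throws InterruptedException {
        LockedWorkerDemo worker = new LockedWorkerDemo();
        CountDownLatch latch = new CountDownLatch(threads);
        for (int i = 0; i < threads; i++) {
            Thread.ofVirtual().start(() -> {
                worker.doJob();
                latch.countDown();
            });
        }
        latch.await();
        return worker.id; // safely published by the latch
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run(1_000)); // prints 1000
    }
}
```

Note that ReentrantLock cooperates well with virtual threads: a virtual thread blocked on the lock releases its carrier thread, whereas a synchronized block could pin the carrier in the early JDK releases that shipped virtual threads.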

Load Test for Java Virtual vs Standard Threads

Let’s begin with the first scenario. We will test standard threads without the locking worker simulation.

public static void main(String[] args) throws IOException {
   runServer(false, false);
}

We can run some warm-up tests first. I’m using the siege tool for load testing. It lets us define the number of concurrent threads and the number of repetitions.

In the actual test, we will simulate 200 concurrent requests.

$  siege http://localhost:8080/example -c 200 -r 500

Let’s switch to the profiler view. Here you can see heap memory usage during the test. The usage is around 300 MB, while the reservation is more than 500 MB.

java-virtual-threads-memory-standard

Let’s take a look at the telemetry view. As you see there are ~200 running threads.

Now, we will run the same test for the HTTP server using virtual threads. Let’s restart the application with the following arguments:

public static void main(String[] args) throws IOException {
   runServer(true, false);
}

Let’s switch to the profiler view once again. Here you can see heap memory usage during the test. You can compare it to the previous results. Now the usage is around 180 MB, while the reservation is around 300 MB.

java-virtual-threads-memory-virtual

Here’s the telemetry view. There are just a few (~10) platform threads that “carry” the virtual threads.

Here’s a visualization of the thread pool from the beginning of the test. As you can see, there are just a few platform threads (CarrierThreads) and a lot of short-lived virtual threads.

java-virtual-threads-pool

Locks with Virtual Threads

Finally, let’s run the same checks, but this time with our worker object pool that uses a ReentrantLock to synchronize threads. First, we will start the app with the following arguments to test standard threads.

public static void main(String[] args) throws IOException {
   runServer(false, true);
}

In fact, for standard threads, the main difference shows up in the thread pool visualization. As you can see, many threads are now waiting for the lock to be released. Our worker pool became a bottleneck for the app.

java-virtual-threads-histogram

It doesn’t have any impact on RAM usage in comparison to the previous test for standard Java threads.

And finally the last scenario. Now, we will do the same check for virtual threads.

public static void main(String[] args) throws IOException {
   runServer(true, true);
}

Here are the results for memory usage.

In the thread pool visualization, we have just a few “carrier” threads. As you can see, they are not “locked”.

In the “Thread Monitor” view, there are a lot of virtual threads that wait for a moment until the lock is released.

java-virtual-threads-virtual-locks

Of course, you can clone my GitHub repo and run your own tests. I was using JProfiler for memory and thread visualization.

Final Thoughts

Java virtual threads are a long-awaited feature. Since they are still in preview status in Java 19, we need to wait for their wide adoption in the most popular Java libraries. Unfortunately, Java 19 is not an LTS release, and if you work for one of those companies that only use LTS versions, you will have to wait for Java 21, which should be released in September 2023. Nevertheless, virtual threads can reduce the effort of writing, maintaining, and observing high-throughput concurrent applications. We can use them as simply as standard Java threads. The aim of this article was to show you how to start with virtual threads to build your own solution, for example an HTTP server, so you can easily compare the performance of standard and virtual threads.

The post Java HTTP Server and Virtual Threads appeared first on Piotr's TechBlog.
