S2I Archives - Piotr's TechBlog (https://piotrminkowski.com/tag/s2i/) - Java, Spring, Kotlin, microservices, Kubernetes, containers

Running .NET Apps on OpenShift (https://piotrminkowski.com/2025/11/17/running-net-apps-on-openshift/) - Mon, 17 Nov 2025

This article will guide you on running a .NET application on OpenShift using the Source-to-Image (S2I) tool. While .NET is not my primary area of expertise, I have been working with it quite extensively lately. In this article, we will examine more complex application cases, which may initially present some challenges.

If you are interested in developing applications for OpenShift, you may also want to read my article on deploying Java applications using the odo tool.

Why Source-to-Image?

That’s probably the first question that comes to mind. Let’s start with a brief definition. Source-to-Image (S2I) is a framework and tool that takes an application’s source code as input and produces a ready-to-run container image as output. In other words, it provides a clean, repeatable, and developer-friendly way to build container images directly from source code – especially in OpenShift, where it’s a core built-in mechanism. With S2I, there is no need to write Dockerfiles, and you can trust that the resulting images will run seamlessly on OpenShift.

Source Code

Feel free to use my source code if you’d like to try it out yourself. To do that, clone my sample GitHub repository and then simply follow my instructions.

Prerequisite – OpenShift cluster

There are several ways in which you can run an OpenShift cluster. I’m using a cluster that runs in AWS. But you can run it locally using OpenShift Local. This article describes how to install it on your laptop. You can also take advantage of the 30-day free Developer Sandbox service. However, it is worth mentioning that its use requires creating an account with Red Hat. To provision an OpenShift cluster in the developer sandbox, go here. You can also download and install Podman Desktop, which will help you set up both OpenShift Local and connect to the Developer Sandbox. Generally speaking, there are many possibilities. I assume you simply have an OpenShift cluster at your disposal.

Create a .NET application

I have created a slightly more complex application in terms of its modules. It consists of two main projects and two projects with unit tests. The WebApi.Library project is simply a module to be included in the main application, which is WebApi.App. Below is the directory structure of our sample repository.

.
├── README.md
├── WebApi.sln
├── src
│   ├── WebApi.App
│   │   ├── Controllers
│   │   │   └── VersionController.cs
│   │   ├── Program.cs
│   │   ├── Startup.cs
│   │   ├── WebApi.App.csproj
│   │   └── appsettings.json
│   └── WebApi.Library
│       ├── VersionService.cs
│       └── WebApi.Library.csproj
└── tests
    ├── WebApi.App.Tests
    │   ├── VersionControllerTests.cs
    │   └── WebApi.App.Tests.csproj
    └── WebApi.Library.Tests
        ├── VersionServiceTests.cs
        └── WebApi.Library.Tests.csproj
Plaintext

Both the library and the application are elementary in nature. The library provides a single method in the VersionService class to return its version read from the .csproj file.

using System.Reflection;

namespace WebApi.Library;

public class VersionService
{
    private readonly Assembly _assembly;

    public VersionService(Assembly? assembly = null)
    {
        _assembly = assembly ?? Assembly.GetExecutingAssembly();
    }

    public string? GetVersion()
    {
        var informationalVersion = _assembly
            .GetCustomAttribute<AssemblyInformationalVersionAttribute>()?
            .InformationalVersion;

        return informationalVersion ?? _assembly.GetName().Version?.ToString();
    }
}
C#

The application includes the library and uses its VersionService class to read and return the library version from the GET /api/version endpoint. There is nothing more to it.

using Microsoft.AspNetCore.Mvc;
using WebApi.Library;

namespace WebApi.App.Controllers
{
    [ApiController]
    [Route("api/version")]
    public class VersionController : ControllerBase
    {
        private readonly VersionService _versionService;
        private readonly ILogger<VersionController> _logger;

        public VersionController(ILogger<VersionController> logger)
        {
            _versionService = new VersionService();
            _logger = logger;
        }

        [HttpGet]
        public IActionResult GetVersion()
        {
            _logger.LogInformation("GetVersion");
            var version = _versionService.GetVersion();
            return Ok(new { version });
        }
    }
}
C#

The application itself utilizes several other libraries, including those for generating Swagger API documentation, Prometheus metrics, and Kubernetes health checks.

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.OpenApi.Models;
using HealthChecks.UI.Client;
using Microsoft.AspNetCore.Diagnostics.HealthChecks;
using Prometheus;
using System;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Diagnostics.HealthChecks;

namespace WebApi.App
{
    public class Startup
    {
        private readonly IConfiguration _configuration;

        public Startup(IConfiguration configuration)
        {
            _configuration = configuration;
        }

        public void ConfigureServices(IServiceCollection services)
        {
            // Enhanced Health Checks
            services.AddHealthChecks()
                .AddCheck("memory", () =>
                    HealthCheckResult.Healthy("Memory usage is normal"),
                    tags: new[] { "live" });

            services.AddControllers();

            services.AddSwaggerGen(c =>
            {
                c.SwaggerDoc("v1", new OpenApiInfo {Title = "WebApi.App", Version = "v1"});
            });
        }

        public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
        {
            // Enable prometheus metrics
            app.UseMetricServer();
            app.UseHttpMetrics();

            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            }

            app.UseSwagger();
            app.UseSwaggerUI(c => c.SwaggerEndpoint("/swagger/v1/swagger.json", "WebApi.App v1"));

            // Kubernetes probes
            app.UseHealthChecks("/health/live", new HealthCheckOptions
            {
                Predicate = reg => reg.Tags.Contains("live"),
                ResponseWriter = UIResponseWriter.WriteHealthCheckUIResponse
            });

            app.UseHealthChecks("/health/ready", new HealthCheckOptions
            {
                Predicate = reg => reg.Tags.Contains("ready"),
                ResponseWriter = UIResponseWriter.WriteHealthCheckUIResponse
            });

            app.UseRouting();

            app.UseEndpoints(endpoints =>
            {
                endpoints.MapControllers();
                endpoints.MapMetrics();
            });

            using var scope = app.ApplicationServices.CreateScope();
        }
    }
}
C#

As you can see, the WebApi.Library project is included as an internal module, while other dependencies are simply added from the external NuGet repository.

<Project Sdk="Microsoft.NET.Sdk.Web">

  <ItemGroup>
    <ProjectReference Include="..\WebApi.Library\WebApi.Library.csproj" />
    <PackageReference Include="Swashbuckle.AspNetCore" Version="9.0.4" />
    <PackageReference Include="prometheus-net.AspNetCore" Version="8.2.1" />
    <PackageReference Include="AspNetCore.HealthChecks.NpgSql" Version="9.0.0" />
    <PackageReference Include="AspNetCore.HealthChecks.UI.Client" Version="9.0.0" />
    <PackageReference Include="Microsoft.Extensions.Diagnostics.HealthChecks.EntityFrameworkCore" Version="9.0.8" />
  </ItemGroup>

  <PropertyGroup>
    <TargetFramework>net9.0</TargetFramework>
    <ImplicitUsings>enable</ImplicitUsings>
    <Nullable>enable</Nullable>
    <Version>1.0.3</Version>
    <IsPackable>true</IsPackable>
    <GeneratePackageOnBuild>true</GeneratePackageOnBuild>
    <PackageId>WebApi.App</PackageId>
    <Authors>piomin</Authors>
    <Description>WebApi</Description>
  </PropertyGroup>

</Project>
XML

Using OpenShift Source-to-Image for .NET

S2I Locally with CLI

Before testing the mechanism on OpenShift, let’s try Source-to-Image locally. On macOS, you can install the s2i CLI using Homebrew:

brew install source-to-image
ShellSession

After installation, check its version:

$ s2i version
s2i v1.5.1
ShellSession

Then, go to the repository root directory. At this point, we need to parameterize our build because the repository contains several projects. Fortunately, S2I provides a parameter that allows us to easily set the main project in a multi-module structure. It must be set as an environment variable for the s2i command. The following command sets the DOTNET_STARTUP_PROJECT environment variable and uses registry.access.redhat.com/ubi8/dotnet-90:latest as the builder image.

s2i build . registry.access.redhat.com/ubi8/dotnet-90:latest webapi-app \
  -e DOTNET_STARTUP_PROJECT=src/WebApi.App
ShellSession

Of course, you must have Docker or Podman running on your laptop to use s2i. So, before using a builder image, pull it to your host.

podman pull registry.access.redhat.com/ubi8/dotnet-90:latest
ShellSession

Let’s take a look at the s2i build command output. As you can see, s2i restored and built both projects, but then created a runnable output only for the WebApi.App project.

net-openshift-s2i-cli

What about our unit tests? To execute tests during the build, we must also set the DOTNET_TEST_PROJECTS environment variable.

s2i build . registry.access.redhat.com/ubi8/dotnet-90:latest webapi-app \
  -e DOTNET_STARTUP_PROJECT=src/WebApi.App \
  -e DOTNET_TEST_PROJECTS=tests/WebApi.App.Tests
ShellSession

Here’s the command output:

net-openshift-cli-2

The webapi-app image is ready.

$ podman images webapi-app
REPOSITORY                    TAG         IMAGE ID      CREATED        SIZE
docker.io/library/webapi-app  latest      e9d94f983ac1  5 seconds ago  732 MB
ShellSession

We can run the image locally with Podman (or Docker), e.g. podman run -p 8080:8080 webapi-app (the UBI .NET images serve HTTP on port 8080 by default), and then call the GET /api/version endpoint to verify that it works.

S2I for .NET on OpenShift

Then, let’s switch to the OpenShift cluster. You need to log in to your cluster using the oc login command. After that, create a new project for testing purposes:

oc new-project dotnet
ShellSession

In OpenShift, a single command can handle everything necessary to build and deploy an application. We need to provide the address of the Git repository containing the source code, specify the branch name, and indicate the name of the builder image available in the cluster (here the dotnet image stream from the openshift namespace). Additionally, we should include the same environment variables as we did previously. Since the version of the source code we tested before is located in the dev branch, we must append it to the repository URL after the # sign.

oc new-app openshift/dotnet:latest~https://github.com/piomin/web-api-2.git#dev --name webapi-app \
  --build-env DOTNET_STARTUP_PROJECT=src/WebApi.App \
  --build-env DOTNET_TEST_PROJECTS=tests/WebApi.App.Tests
ShellSession
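As a side note, the single source argument passed to oc new-app packs three pieces of information together: the builder image, the Git repository, and the branch. Here is a plain-shell illustration of how the builder-image~repo#branch shorthand is composed (the variable names are mine, purely for explanation):

```shell
# oc new-app source shorthand: <builder-image>~<git-repo>#<branch>
BUILDER="openshift/dotnet:latest"              # builder image stream tag
REPO="https://github.com/piomin/web-api-2.git" # Git repository with the source code
BRANCH="dev"                                   # branch to build from
SOURCE="${BUILDER}~${REPO}#${BRANCH}"
echo "${SOURCE}"
```

The echoed value matches the source argument used in the command above.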

Here’s the oc new-app command output:

Then, let’s expose the application outside the cluster using an OpenShift Route.

oc expose service/webapi-app
ShellSession

Finally, we can verify the build and deployment status:

There is also a really nice command you can use here to see all the created resources at once. Try it yourself 🙂

oc get all
ShellSession

OpenShift Builds with Source-to-Image

Verify Build Status

Let’s verify what has happened after taking the steps from the previous section. Here’s the panel that summarizes the status of our application on the cluster. OpenShift automatically built the image from the .NET source code repository and then deployed it in the target namespace.

net-openshift-console

Here are the logs from the Pod with our application:

The build was performed entirely on the cluster. You can check the build logs by accessing the Build object. After the build finished, the image was pushed to the internal image registry in OpenShift.

net-openshift-build

Under the hood, a BuildConfig object was created. It will be the starting point for the next example we will consider.

apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  annotations:
    openshift.io/generated-by: OpenShiftNewApp
  labels:
    app: webapi-app
    app.kubernetes.io/component: webapi-app
    app.kubernetes.io/instance: webapi-app
  name: webapi-app
  namespace: dotnet
spec:
  output:
    to:
      kind: ImageStreamTag
      name: webapi-app:latest
  source:
    git:
      ref: dev
      uri: https://github.com/piomin/web-api-2.git
    type: Git
  strategy:
    sourceStrategy:
      env:
      - name: DOTNET_STARTUP_PROJECT
        value: src/WebApi.App
      - name: DOTNET_TEST_PROJECTS
        value: tests/WebApi.App.Tests
      from:
        kind: ImageStreamTag
        name: dotnet:latest
        namespace: openshift
    type: Source
YAML

OpenShift with .NET and Azure Artifacts Proxy

Now let’s switch to the master branch in our Git repository. In this branch, the WebApi.Library library is no longer referenced by a project path, but consumed as a regular package dependency from an external repository. However, this library has not been published to the public NuGet repository, but to an internal Azure Artifacts repository. Therefore, the build process must restore packages from a source pointing at the address of our repository – or rather, a feed in Azure Artifacts.

<Project Sdk="Microsoft.NET.Sdk.Web">

  <ItemGroup>
    <PackageReference Include="WebApi.Library" Version="1.0.3" />
    <PackageReference Include="Swashbuckle.AspNetCore" Version="9.0.4" />
    <PackageReference Include="prometheus-net.AspNetCore" Version="8.2.1" />
    <PackageReference Include="AspNetCore.HealthChecks.NpgSql" Version="9.0.0" />
    <PackageReference Include="AspNetCore.HealthChecks.UI.Client" Version="9.0.0" />
    <PackageReference Include="Microsoft.Extensions.Diagnostics.HealthChecks.EntityFrameworkCore" Version="9.0.8" />
  </ItemGroup>

  <PropertyGroup>
    <TargetFramework>net9.0</TargetFramework>
    <ImplicitUsings>enable</ImplicitUsings>
    <Nullable>enable</Nullable>
    <Version>1.0.3</Version>
    <IsPackable>true</IsPackable>
    <GeneratePackageOnBuild>true</GeneratePackageOnBuild>
    <PackageId>WebApi.App</PackageId>
    <Authors>piomin</Authors>
    <Description>WebApi</Description>
  </PropertyGroup>

</Project>
XML

This is how it looks in Azure Artifacts. The name of my feed is pminkows. To access the feed, I must be authenticated against Azure DevOps using a personal token. The full address of the NuGet registry exposed via my instance of Azure Artifacts is https://pkgs.dev.azure.com/pminkows/_packaging/pminkows/nuget/v3/index.json.
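The feed URL is not arbitrary: Azure Artifacts composes it from the organization name and the feed name. A small shell sketch of the pattern (the variable names are mine; the values come from this article):

```shell
ORGANIZATION="pminkows"  # Azure DevOps organization
FEED="pminkows"          # Azure Artifacts feed name
NUGET_URL="https://pkgs.dev.azure.com/${ORGANIZATION}/_packaging/${FEED}/nuget/v3/index.json"
echo "${NUGET_URL}"
```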

If you would like to build such an application locally against Azure Artifacts, create a nuget.config file with the configuration below, and place it in the $HOME/.nuget/NuGet directory.

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <clear />
    <add key="pminkows" value="https://pkgs.dev.azure.com/pminkows/_packaging/pminkows/nuget/v3/index.json" />
  </packageSources>
  <packageSourceCredentials>
    <pminkows>
      <add key="Username" value="pminkows" />
      <add key="ClearTextPassword" value="<MY_PERSONAL_TOKEN>" />
    </pminkows>
  </packageSourceCredentials>
</configuration>
nuget.config

Our goal is to run this type of build on OpenShift instead of locally. To achieve this, we need to create a Kubernetes Secret containing the nuget.config file.

oc create secret generic nuget-config --from-file=nuget.config
ShellSession

Then, we must update the BuildConfig object. The most important new element is spec.source.secrets: the Kubernetes Secret containing the nuget.config file must be mounted into the HOME directory of the .NET base image. We also switch the branch in the repository to master and increase the builder’s logging level to detailed (DOTNET_VERBOSITY=d).

apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  annotations:
    openshift.io/generated-by: OpenShiftNewApp
  labels:
    app: webapi-app
    app.kubernetes.io/component: webapi-app
    app.kubernetes.io/instance: webapi-app
  name: webapi-app
  namespace: dotnet
spec:
  output:
    to:
      kind: ImageStreamTag
      name: webapi-app:latest
  source:
    git:
      ref: master
      uri: https://github.com/piomin/web-api-2.git
    type: Git
    secrets:
      - secret:
          name: nuget-config
        destinationDir: /opt/app-root/src/
  strategy:
    sourceStrategy:
      env:
      - name: DOTNET_STARTUP_PROJECT
        value: src/WebApi.App
      - name: DOTNET_TEST_PROJECTS
        value: tests/WebApi.App.Tests
      - name: DOTNET_VERBOSITY
        value: d
      from:
        kind: ImageStreamTag
        name: dotnet:latest
        namespace: openshift
    type: Source
YAML

Next, we can run the build again, this time with the new parameters, using the command below. With the increased logging level, you can confirm that all dependencies are retrieved via the Azure Artifacts instance.

oc start-build webapi-app --follow
ShellSession

Conclusion

This article covers different scenarios for building and deploying .NET applications on OpenShift in developer mode. It demonstrates how to use various parameters to customize the image build according to the application’s needs. My goal was to show that deploying .NET applications on OpenShift is straightforward with the help of Source-to-Image.

The post Running .NET Apps on OpenShift appeared first on Piotr's TechBlog.

Java Development on OpenShift with odo (https://piotrminkowski.com/2021/02/05/java-development-on-openshift-with-odo/) - Fri, 05 Feb 2021

OpenShift Do (odo) is a CLI tool for running applications on OpenShift. Unlike the oc client, it is a tool aimed at developers. It automates all the steps required to deploy your application on OpenShift. Thanks to odo you can focus on the most important aspect – the code. To get started, you just need to download the latest release from GitHub and add it to your path.

Source Code

If you would like to try it yourself, you may always take a look at my source code. To do that, clone my GitHub repository. It contains the simple Spring Boot application we are going to deploy on OpenShift. Then just follow my instructions.

Before we begin with OpenShift odo

It is important to understand some key concepts related to odo before we begin. If you want to deploy a new application you should create a component. Each component can be run and deployed separately. There are two types of components: Devfile and S2I. The S2I component is based on the Source-To-Image process. Therefore, your application is built on the server-side with the S2I builder. With the Devfile component, you run the application with the mvn spring-boot:run command. Here’s the list of available components. We will use the java component.

openshift-odo-component-list

You can also check out the list of available services by executing the command odo catalog list services. A service is a piece of software that your component links to or depends on. However, I won’t focus on that feature, since the Service Catalog is currently deprecated in OpenShift. Instead, you can use Operators with odo in the same way as you would use templates.

The Spring Boot application

Let’s take a moment to discuss our sample Spring Boot application. It is a REST-based application that connects to MongoDB. We will use JDK 11 for compilation.

<groupId>pl.piomin.samples</groupId>
<artifactId>sample-spring-boot-on-kubernetes</artifactId>
<version>1.0-SNAPSHOT</version>

<properties>
    <java.version>11</java.version>
</properties>

<dependencies>
   <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-web</artifactId>
   </dependency>
   <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-data-mongodb</artifactId>
   </dependency>
   ...
</dependencies>

The MongoDB connection settings are set inside application.yml. We will use environment variables for getting credentials and database name. We will set these values in the local configuration using odo.

spring:
  application:
    name: sample-spring-boot-on-kubernetes
  data:
    mongodb:
      uri: mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@mongodb/${MONGO_DATABASE}
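At startup, Spring resolves the ${...} placeholders above from environment variables. Purely as an illustration, shell parameter expansion produces the same effective URI (the sample values below are the ones set later in this article with odo):

```shell
export MONGO_USERNAME="userJ5Q"
export MONGO_PASSWORD="UrfgtUKohNOFVqbQ"
export MONGO_DATABASE="sampledb"
# the URI the application ends up with after placeholder resolution
MONGO_URI="mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@mongodb/${MONGO_DATABASE}"
echo "${MONGO_URI}"
```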

Creating configuration with odo

Ok, let’s begin by creating a new component. We need to choose the java S2I type and then set a name for our component.

$ odo create java sample-spring-boot

After that, odo creates a config.yaml file inside the .odo directory in the project root. It’s pretty simple.

kind: LocalConfig
apiversion: odo.dev/v1alpha1
ComponentSettings:
  Type: java
  SourceLocation: ./
  SourceType: local
  Ports:
  - 8443/TCP
  - 8778/TCP
  - 8080/TCP
  Application: app
  Project: pminkows-workshop
  Name: sample-spring-boot

Then, we will add some additional configuration properties. First, we need to expose our application outside the cluster. The command below creates a Route to the Service on OpenShift on port 8080.

$ odo url create --port 8080

In the next step, we add the environment variables responsible for establishing the MongoDB connection to the odo configuration: MONGO_USERNAME, MONGO_PASSWORD, and MONGO_DATABASE. To keep things simple, I created a standalone instance of MongoDB on OpenShift using the template. Here are the values in the mongodb secret.

Let’s set the environment variables using the odo config command. Of course, all these settings still exist only in the local configuration.

$ odo config set --env MONGO_USERNAME=userJ5Q
$ odo config set --env MONGO_PASSWORD=UrfgtUKohNOFVqbQ
$ odo config set --env MONGO_DATABASE=sampledb 

Finally, we are ready to deploy our application on OpenShift. To do that, we need to execute the odo push command. If you would like to see the logs from the build, add the --show-log option.

openshift-odo-push

Instead of using environment variables to provide the MongoDB connection settings, you may take advantage of the odo link command. This feature connects an odo component to a service or another component. However, to use it you first need to install the Service Binding Operator on OpenShift. Then you can install a database like MongoDB with an Operator and link it to your application component.

Verify S2I build on OpenShift

After running odo push, let’s switch to the oc client. Our application is running on OpenShift, as shown below. The same goes for MongoDB.

We can also navigate to the OpenShift Management Console. The environment variables set in the odo configuration have been applied to the DeploymentConfig.

Now, let’s say we need to make some changes in our code. Fortunately, we can execute a command that watches for changes in the current component’s directory. After detecting a change, it immediately deploys the new version of the application on OpenShift.

$ odo watch
Waiting for something to change in /Users/pminkows/IdeaProjects/sample-spring-boot-on-kubernetes

Using OpenShift odo in Devfile mode

In contrast to the S2I approach, we may use the Devfile mode. To do that, we will choose the java-springboot component.

As a result, OpenShift odo creates devfile.yaml in the project root directory.

schemaVersion: 2.0.0
metadata:
  name: java-springboot
  version: 1.1.0
starterProjects:
  - name: springbootproject
    git:
      remotes:
        origin: "https://github.com/odo-devfiles/springboot-ex.git"
components:
  - name: tools
    container:
      image: quay.io/eclipse/che-java11-maven:nightly
      memoryLimit: 768Mi
      mountSources: true
      endpoints:
      - name: '8080-tcp'
        targetPort: 8080
      volumeMounts:
        - name: m2
          path: /home/user/.m2
  - name: m2
    volume:
      size: 3Gi
commands:
  - id: build
    exec:
      component: tools
      commandLine: "mvn clean -Dmaven.repo.local=/home/user/.m2/repository package -Dmaven.test.skip=true"
      group:
        kind: build
        isDefault: true
  - id: run
    exec:
      component: tools
      commandLine: "mvn -Dmaven.repo.local=/home/user/.m2/repository spring-boot:run"
      group:
        kind: run
        isDefault: true
  - id: debug
    exec:
      component: tools
      commandLine: "java -Xdebug -Xrunjdwp:server=y,transport=dt_socket,address=${DEBUG_PORT},suspend=n -jar target/*.jar"
      group:
        kind: debug
        isDefault: true

Of course, all subsequent steps are the same as for the S2I component.

Install OpenShift IntelliJ Plugin

If you prefer not to use command-line tools, you can install the IntelliJ OpenShift plugin instead. It uses odo under the hood for building and deploying. The good news is that the latest version of this plugin supports odo in version 2.0.3.

openshift-odo-intellij

Thanks to that plugin you can, for example, easily create a component with OpenShift odo.

Conclusion

With odo you can deploy your application on OpenShift in a few seconds. You can also continuously watch for changes in the code and immediately deploy a new version of the application. Moreover, you don’t need to create any Kubernetes YAML manifests.

OpenShift odo is a tool similar to Skaffold. If you would like to compare these two tools for running a Spring Boot application, you may read the following article about Skaffold.

The post Java Development on OpenShift with odo appeared first on Piotr's TechBlog.

Quick Guide to Microservices with Quarkus on Openshift (https://piotrminkowski.com/2020/08/18/quick-guide-to-microservices-with-quarkus-on-openshift/) - Tue, 18 Aug 2020

The post Quick Guide to Microservices with Quarkus on Openshift appeared first on Piotr's TechBlog.

In this article I will show you how to use the Quarkus OpenShift module. Quarkus is a framework for building Java applications in times of microservices and serverless architectures. If you compare it with other frameworks like Spring Boot / Spring Cloud or Micronaut, the first difference is native support for running on Kubernetes or Openshift platforms. It is built on top of well-known Java standards like CDI, JAX-RS, and Eclipse MicroProfile which also distinguishes it from Spring Boot or Micronaut.
Some other features that may convince you to use Quarkus are extremely fast boot time, minimal memory footprint optimized for running in containers, and lower time-to-first-request. Also, even though it is a relatively new framework, it has a lot of extensions including support for Hibernate, Kafka, RabbitMQ, OpenApi, Vert.x, and many more.
I’m going to guide you through building microservices with Quarkus and running them on OpenShift 4. We will cover the following topics:

  • Building REST-based application with input validation
  • Communication between microservices with RestClient
  • Exposing health checks (liveness, readiness)
  • Exposing OpenAPI/Swagger documentation
  • Running applications on the local machine with Quarkus Maven plugin
  • Testing with JUnit and RestAssured
  • Deploying and running Quarkus applications on OpenShift using source-2-image

Source code

The source code of application is available on GitHub: https://github.com/piomin/sample-quarkus-microservices.git.

If you are interested in more materials related to Quarkus framework you can read my previous articles about it. In the article Guide to Quarkus with Kotlin I’m showing how to build a simple REST-based application written in Kotlin. In the article Guide to Quarkus on Kubernetes, I’m showing how to deploy it using Quarkus built-in support for Kubernetes.

1. Dependencies required for Quarkus OpenShift

When creating a new application you may execute a single Maven command that uses the quarkus-maven-plugin. The list of dependencies should be declared in the -Dextensions parameter.


mvn io.quarkus:quarkus-maven-plugin:1.7.0.Final:create \
    -DprojectGroupId=pl.piomin.services \
    -DprojectArtifactId=employee-service \
    -DclassName="pl.piomin.services.employee.controller.EmployeeController" \
    -Dpath="/employees" \
    -Dextensions="resteasy-jackson, hibernate-validator"

Here’s the structure of our pom.xml:

<properties>
   <quarkus.version>1.7.0.Final</quarkus.version>
   <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
   <maven.compiler.source>11</maven.compiler.source>
   <maven.compiler.target>11</maven.compiler.target>
</properties>
<dependencyManagement>
   <dependencies>
      <dependency>
         <groupId>io.quarkus</groupId>
         <artifactId>quarkus-bom</artifactId>
         <version>${quarkus.version}</version>
         <type>pom</type>
         <scope>import</scope>
      </dependency>
   </dependencies>
</dependencyManagement>
<build>
   <plugins>
      <plugin>
         <groupId>io.quarkus</groupId>
         <artifactId>quarkus-maven-plugin</artifactId>
         <version>${quarkus.version}</version>
         <executions>
            <execution>
               <goals>
                  <goal>build</goal>
               </goals>
            </execution>
         </executions>
      </plugin>
   </plugins>
</build>

For building a simple REST-based application with input validation we don’t need to include many modules. As you have probably noticed, I declared just two extensions, which is equivalent to the following list of dependencies in the Maven pom.xml:

<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-resteasy-jackson</artifactId>
</dependency>
<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-hibernate-validator</artifactId>
</dependency>

2. Source code

What might be a bit surprising for Spring Boot or Micronaut users: there is no main, runnable class with a static main method. A resource/controller class is de facto the main class. Quarkus resource/controller classes and methods should be marked using annotations from the javax.ws.rs package. Here's the implementation of the REST controller inside employee-service:

@Path("/employees")
@Produces(MediaType.APPLICATION_JSON)
public class EmployeeController {

   private static final Logger LOGGER = LoggerFactory.getLogger(EmployeeController.class);
	
   @Inject
   EmployeeRepository repository;
	
   @POST
   public Employee add(@Valid Employee employee) {
      LOGGER.info("Employee add: {}", employee);
      return repository.add(employee);
   }
	
   @Path("/{id}")
   @GET
   public Employee findById(@PathParam("id") Long id) {
      LOGGER.info("Employee find: id={}", id);
      return repository.findById(id);
   }

   @GET
   public Set<Employee> findAll() {
      LOGGER.info("Employee find");
      return repository.findAll();
   }
	
   @Path("/department/{departmentId}")
   @GET
   public Set<Employee> findByDepartment(@PathParam("departmentId") Long departmentId) {
      LOGGER.info("Employee find: departmentId={}", departmentId);
      return repository.findByDepartment(departmentId);
   }
	
   @Path("/organization/{organizationId}")
   @GET
   public Set<Employee> findByOrganization(@PathParam("organizationId") Long organizationId) {
      LOGGER.info("Employee find: organizationId={}", organizationId);
      return repository.findByOrganization(organizationId);
   }
	
}

We use CDI for dependency injection and SLF4J for logging. The controller class uses an in-memory repository bean for storing and retrieving data. The repository bean is annotated with the CDI @ApplicationScoped annotation and injected into the controller:

@ApplicationScoped
public class EmployeeRepository {

   private Set<Employee> employees = new HashSet<>();

   public EmployeeRepository() {
      add(new Employee(1L, 1L, "John Smith", 30, "Developer"));
      add(new Employee(1L, 1L, "Paul Walker", 40, "Architect"));
   }

   public Employee add(Employee employee) {
      employee.setId((long) (employees.size()+1));
      employees.add(employee);
      return employee;
   }
   
   public Employee findById(Long id) {
      return employees.stream()
            .filter(a -> a.getId().equals(id))
            .findFirst()
            .orElse(null);
   }

   public Set<Employee> findAll() {
      return employees;
   }
   
   public Set<Employee> findByDepartment(Long departmentId) {
      return employees.stream()
            .filter(a -> a.getDepartmentId().equals(departmentId))
            .collect(Collectors.toSet());
   }
   
   public Set<Employee> findByOrganization(Long organizationId) {
      return employees.stream()
            .filter(a -> a.getOrganizationId().equals(organizationId))
            .collect(Collectors.toSet());
   }

}

And the last component is the domain class with validation constraints:

public class Employee {

   private Long id;
   @NotNull
   private Long organizationId;
   @NotNull
   private Long departmentId;
   @NotBlank
   private String name;
   @Min(1)
   @Max(100)
   private int age;
   @NotBlank
   private String position;
   
   // ... GETTERS AND SETTERS
   
}

3. Unit Testing

Unit testing with Quarkus is very simple. If you are testing a REST-based web application, you should include the following dependencies in your pom.xml:


<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-junit5</artifactId>
   <scope>test</scope>
</dependency>
<dependency>
   <groupId>io.rest-assured</groupId>
   <artifactId>rest-assured</artifactId>
   <scope>test</scope>
</dependency>

Let’s analyze the test class from organization-service (another of our microservices, alongside employee-service and department-service). A test class should be annotated with @QuarkusTest. We can inject other beans via the @Inject annotation. The rest is typical for JUnit and REST Assured – we are testing the API methods exposed by the controller. Because we are using an in-memory repository, we don’t have to mock anything except inter-service communication (we will discuss it later in this article). We have some positive scenarios for the GET and POST methods, and a single negative scenario that does not pass input validation (testInvalidAdd).

@QuarkusTest
public class OrganizationControllerTests {

   @Inject
   OrganizationRepository repository;

   @Test
   public void testFindAll() {
      given().when().get("/organizations")
	         .then()
			 .statusCode(200)
			 .body(notNullValue());
   }

   @Test
   public void testFindById() {
      Organization organization = new Organization("Test3", "Address3");
      organization = repository.add(organization);
      given().when().get("/organizations/{id}", organization.getId()).then().statusCode(200)
             .body("id", equalTo(organization.getId().intValue()))
             .body("name", equalTo(organization.getName()));
   }

   @Test
   public void testFindByIdWithDepartments() {
      given().when().get("/organizations/{id}/with-departments", 1L).then().statusCode(200)
             .body(notNullValue())
             .body("departments.size()", is(1));
   }

   @Test
   public void testAdd() {
      Organization organization = new Organization("Test5", "Address5");
      given().contentType("application/json").body(organization)
             .when().post("/organizations").then().statusCode(200)
             .body("id", notNullValue())
             .body("name", equalTo(organization.getName()));
   }

   @Test
   public void testInvalidAdd() {
      Organization organization = new Organization();
      given().contentType("application/json").body(organization)
	         .when()
			 .post("/organizations")
			 .then()
			 .statusCode(400);
   }

}

4. Inter-service communication

Since Quarkus is dedicated to running on Kubernetes, it does not provide any built-in support for third-party service discovery (for example through Consul or Netflix Eureka) or an HTTP client integrated with such discovery. However, it provides dedicated client support for REST communication. To use it, we first need to include the following dependency:


<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-rest-client</artifactId>
</dependency>

Quarkus provides a declarative REST client based on MicroProfile REST Client. You need to create an interface with the required methods and annotate it with @RegisterRestClient. The other annotations are pretty much the same as on the server side. The @RegisterRestClient annotation lets Quarkus know that this interface is meant to be available for CDI injection as a REST client.

@Singleton
@Path("/departments")
@RegisterRestClient
public interface DepartmentClient {

   @GET
   @Path("/organization/{organizationId}")
   @Produces(MediaType.APPLICATION_JSON)
   List<Department> findByOrganization(@PathParam("organizationId") Long organizationId);

   @GET
   @Path("/organization/{organizationId}/with-employees")
   @Produces(MediaType.APPLICATION_JSON)
   List<Department> findByOrganizationWithEmployees(@PathParam("organizationId") Long organizationId);
   
}

Now, let’s take a look at the controller class inside the organization-service. Together with @Inject, we need to use the @RestClient annotation to inject the REST client bean properly. After that, you can use the interface methods to call endpoints exposed by other services.

@Path("/organizations")
@Produces(MediaType.APPLICATION_JSON)
public class OrganizationController {

   private static final Logger LOGGER = LoggerFactory.getLogger(OrganizationController.class);
   
   @Inject
   OrganizationRepository repository;
   @Inject
   @RestClient
   DepartmentClient departmentClient;
   @Inject
   @RestClient
   EmployeeClient employeeClient;
   
   // ... OTHER FIND METHODS

   @Path("/{id}/with-departments")
   @GET
   public Organization findByIdWithDepartments(@PathParam("id") Long id) {
      LOGGER.info("Organization find: id={}", id);
      Organization organization = repository.findById(id);
      organization.setDepartments(departmentClient.findByOrganization(organization.getId()));
      return organization;
   }
   
   @Path("/{id}/with-departments-and-employees")
   @GET
   public Organization findByIdWithDepartmentsAndEmployees(@PathParam("id") Long id) {
      LOGGER.info("Organization find: id={}", id);
      Organization organization = repository.findById(id);
      organization.setDepartments(departmentClient.findByOrganizationWithEmployees(organization.getId()));
      return organization;
   }
   
   @Path("/{id}/with-employees")
   @GET
   public Organization findByIdWithEmployees(@PathParam("id") Long id) {
      LOGGER.info("Organization find: id={}", id);
      Organization organization = repository.findById(id);
      organization.setEmployees(employeeClient.findByOrganization(organization.getId()));
      return organization;
   }
   
}

The last missing thing required for communication is the addresses of the target services. We could provide them using the baseUri field of the @RegisterRestClient annotation. However, a better solution is to place them inside application.properties. The property name needs to contain the fully qualified name of the client interface and the suffix mp-rest/url. The address used for communication in development mode is different than in production mode, when the application is deployed on Kubernetes or OpenShift. That’s why I’m using the %dev prefix in the name of the property that sets the target URL.


%dev.pl.piomin.services.organization.client.DepartmentClient/mp-rest/url=http://localhost:8090
%dev.pl.piomin.services.organization.client.EmployeeClient/mp-rest/url=http://localhost:8080

I have already mentioned unit testing and inter-service communication in the previous sections. To test an API method that communicates with other applications we need to mock the REST client. Here’s a sample mock created for DepartmentClient. It should be visible only during tests, so we place it inside src/test/java. If we annotate it with @Mock and @RestClient, Quarkus automatically uses this bean by default instead of the declarative REST client defined inside src/main/java.

@Mock
@ApplicationScoped
@RestClient
public class MockDepartmentClient implements DepartmentClient {

    @Override
    public List<Department> findByOrganization(Long organizationId) {
        return Collections.singletonList(new Department("Test1"));
    }

    @Override
    public List<Department> findByOrganizationWithEmployees(Long organizationId) {
        return null;
    }

}

5. Monitoring and Documentation

We can easily expose health checks and API documentation with Quarkus. API documentation is built using OpenAPI/Swagger. Quarkus leverages libraries available within the SmallRye project. We should include the following dependencies in our pom.xml:


<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-smallrye-openapi</artifactId>
</dependency>
<dependency>
   <groupId>io.quarkus</groupId>
   <artifactId>quarkus-smallrye-health</artifactId>
</dependency>

We can define two types of health checks: readiness and liveness. They are available under the /health/ready and /health/live context paths. To expose them outside the application we need to define a bean that implements the MicroProfile HealthCheck interface. The readiness endpoint should be annotated with @Readiness, and the liveness endpoint with @Liveness.

@ApplicationScoped
@Readiness
public class ReadinessHealthcheck implements HealthCheck {

    @Override
    public HealthCheckResponse call() {
        return HealthCheckResponse.named("Employee Health Check").up().build();
    }

}

To enable Swagger documentation we don’t need to do anything more than add the dependency. Quarkus also provides a built-in UI for Swagger. By default it is enabled only in development mode, so if you want to use it in production you should add the line quarkus.swagger-ui.always-include=true to your application.properties file. Now, if you run the application employee-service locally in development mode by executing the Maven command mvn compile quarkus:dev, you can view the API specification under the URL http://localhost:8080/swagger-ui.

swagger

Here’s my log from application startup. It prints the listening port and the list of loaded extensions.

quarkus-startup

6. Running Quarkus Microservices on the Local Machine

Because we would like to run more than one application on the same machine, we need to override the default HTTP listening port. While employee-service still runs on the default 8080 port, the other microservices use different ports, as shown below.

department-service:
port-department

organization-service:
port-organization
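The override itself is a single Quarkus property in each service's application.properties. A minimal sketch: department-service uses port 8090 (consistent with the %dev REST client configuration shown earlier in this article), while the port for organization-service is an assumed value, since the original number appears only in the screenshot.

```properties
# department-service: src/main/resources/application.properties
quarkus.http.port=8090

# organization-service: src/main/resources/application.properties
# (8095 is an assumed value; any free port will do)
quarkus.http.port=8095
```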

Let’s test inter-service communication from the Swagger UI. I called the endpoint GET /organizations/{id}/with-departments, which in turn calls the endpoint GET /departments/organization/{organizationId} exposed by department-service. The result is visible below.

quarkus-communication

7. Running Quarkus on OpenShift

We have already finished the implementation of our sample microservices architecture and run it on the local machine. Now, we can proceed to the last step and deploy these applications on OpenShift or Minishift. There are several approaches to deploying a Quarkus application on OpenShift. Today I’ll show you how to leverage the S2I build mechanism for that.
We are going to use the Quarkus GraalVM Native S2I builder. It is available on quay.io as quarkus/ubi-quarkus-native-s2i. I’m using an OpenShift 4 cluster running on Azure, but you can try it as well on the local version of OpenShift 3 – Minishift. Before deploying our applications we need to start Minishift. According to the Quarkus documentation, a GraalVM-based native build consumes a lot of memory and CPU, so you should assign 6 GB of memory and 4 CPU cores to Minishift.


$ minishift start --vm-driver=virtualbox --memory=6G --cpus=4

Also, we need to modify the source code of our applications a little. As you probably remember, we used JDK 11 for running them locally; for the native build the compiler source and target are switched to 1.8. We also need to include a declaration of the native profile, as shown below:

<properties>
   <quarkus.version>1.7.0.Final</quarkus.version>
   <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
   <maven.compiler.source>1.8</maven.compiler.source>
   <maven.compiler.target>1.8</maven.compiler.target>
</properties>
...
<profiles>
   <profile>
      <id>native</id>
      <activation>
         <property>
            <name>native</name>
         </property>
      </activation>
      <build>
         <plugins>
            <plugin>
               <groupId>io.quarkus</groupId>
               <artifactId>quarkus-maven-plugin</artifactId>
               <version>${quarkus.version}</version>
               <executions>
                  <execution>
                     <goals>
                        <goal>native-image</goal>
                     </goals>
                     <configuration>
                        <enableHttpUrlHandler>true</enableHttpUrlHandler>
                     </configuration>
                  </execution>
               </executions>
            </plugin>
            <plugin>
               <artifactId>maven-failsafe-plugin</artifactId>
               <version>2.22.1</version>
               <executions>
                  <execution>
                     <goals>
                        <goal>integration-test</goal>
                        <goal>verify</goal>
                     </goals>
                     <configuration>
                        <systemProperties>
                           <native.image.path>${project.build.directory}/${project.build.finalName}-runner</native.image.path>
                        </systemProperties>
                     </configuration>
                  </execution>
               </executions>
            </plugin>
         </plugins>
      </build>
   </profile>
</profiles>

Two other changes should be performed inside the application.properties file. We don’t have to override the port number, since OpenShift dynamically assigns a virtual IP to every pod. Inter-service communication is realized via OpenShift discovery, so we just need to set the name of the service instead of localhost. It can be set in the default profile, since properties with the %dev prefix are used only in development mode.

quarkus.swagger-ui.always-include=true
pl.piomin.services.organization.client.DepartmentClient/mp-rest/url=http://department:8080
pl.piomin.services.organization.client.EmployeeClient/mp-rest/url=http://employee:8080

Finally, we can deploy our applications on OpenShift. To do that, execute the following commands using your oc client:

$ oc new-app quay.io/quarkus/ubi-quarkus-native-s2i:20.1.0-java11~https://github.com/piomin/sample-quarkus-microservices.git --context-dir=employee-service --name=employee
$ oc new-app quay.io/quarkus/ubi-quarkus-native-s2i:20.1.0-java11~https://github.com/piomin/sample-quarkus-microservices.git --context-dir=department-service --name=department
$ oc new-app quay.io/quarkus/ubi-quarkus-native-s2i:20.1.0-java11~https://github.com/piomin/sample-quarkus-microservices.git --context-dir=organization-service --name=organization

As you can see, the repository with the applications’ source code is available on my GitHub account at https://github.com/piomin/sample-quarkus-microservices.git. Because all the applications are stored within a single repository, we need to define the context-dir parameter for every single deployment.
One thing disappointed me a little: since we are using GraalVM for compilation, the memory consumption of the build is pretty large, and the whole build process takes around 10 minutes.

quarkus-microservices-openshift-build

Here’s the list of performed builds.

quarkus-microservices-openshift-build-list

Although a build process consumes much memory, the memory usage of Quarkus applications compiled using GraalVM is just amazing.

quarkus-microservices-openshift-pods

To execute some test calls, we need to expose our applications outside the OpenShift cluster.

$ oc expose svc employee
$ oc expose svc department
$ oc expose svc organization

In my OpenShift cluster they will be available under addresses like http://department-quarkus.apps.np9zir0r.westeurope.aroapp.io. You can open the Swagger UI by calling the /swagger-ui context path on every single application.

quarkus-microservices-openshift-routes

The post Quick Guide to Microservices with Quarkus on Openshift appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2020/08/18/quick-guide-to-microservices-with-quarkus-on-openshift/feed/ 2 7219
Running Java Microservices on OpenShift using Source-2-Image https://piotrminkowski.com/2019/01/08/running-java-microservices-on-openshift-using-source-2-image/ https://piotrminkowski.com/2019/01/08/running-java-microservices-on-openshift-using-source-2-image/#comments Tue, 08 Jan 2019 09:06:31 +0000 https://piotrminkowski.wordpress.com/?p=6944 One of the reasons you would prefer OpenShift instead of Kubernetes is the simplicity of running new applications. When working with plain Kubernetes you need to provide an already built image together with the set of descriptor templates used for deploying it. OpenShift introduces Source-2-Image feature used for building reproducible Docker images from application source […]

The post Running Java Microservices on OpenShift using Source-2-Image appeared first on Piotr's TechBlog.

]]>
One of the reasons you would prefer OpenShift over Kubernetes is the simplicity of running new applications. When working with plain Kubernetes, you need to provide an already built image together with a set of descriptor templates used for deploying it. OpenShift introduces the Source-2-Image feature, used for building reproducible Docker images from application source code. With S2I you don’t have to provide any Kubernetes YAML templates or build a Docker image by yourself; OpenShift will do it for you. Let’s see how it works. The best way to test it locally is via Minishift. But the first step is to prepare the sample applications’ source code.

1. Prepare application code

I have already described how to run your Java applications on Kubernetes in one of my previous articles, Quick Guide to Microservices with Kubernetes, Spring Boot 2.0 and Docker. We will use the same source code here, so you will be able to compare the two approaches. The source code is available on GitHub in the repository sample-spring-microservices-new. We will slightly modify the version used for Kubernetes by removing the Spring Cloud Kubernetes library and including some additional resources. The current version is available in the branch openshift.
Our sample system consists of three microservices which communicate with each other and use Mongo database backend. Here’s the diagram that illustrates our architecture.

s2i-1

Every microservice is a Spring Boot application, which uses Maven as a build tool. After including spring-boot-maven-plugin, it is able to generate a single fat JAR with all dependencies, which is required by the source-2-image builder.

<build>
   <plugins>
      <plugin>
         <groupId>org.springframework.boot</groupId>
         <artifactId>spring-boot-maven-plugin</artifactId>
      </plugin>
   </plugins>
</build>

Every application includes starters for Spring Web, Spring Actuator and Spring Data MongoDB for integration with the Mongo database. We will also include libraries for generating Swagger API documentation, and Spring Cloud OpenFeign for those applications that call REST endpoints exposed by other microservices.

<dependencies>
   <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-web</artifactId>
   </dependency>
   <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-actuator</artifactId>
   </dependency>
   <dependency>
      <groupId>io.springfox</groupId>
      <artifactId>springfox-swagger2</artifactId>
      <version>2.9.2</version>
   </dependency>
   <dependency>
      <groupId>io.springfox</groupId>
      <artifactId>springfox-swagger-ui</artifactId>
      <version>2.9.2</version>
   </dependency>
   <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-data-mongodb</artifactId>
   </dependency>
</dependencies>

Every Spring Boot application exposes REST API for simple CRUD operations on a given resource. The Spring Data repository bean is injected into the controller.

@RestController
@RequestMapping("/employee")
public class EmployeeController {

   private static final Logger LOGGER = LoggerFactory.getLogger(EmployeeController.class);
   
   @Autowired
   EmployeeRepository repository;
   
   @PostMapping("/")
   public Employee add(@RequestBody Employee employee) {
      LOGGER.info("Employee add: {}", employee);
      return repository.save(employee);
   }
   
   @GetMapping("/{id}")
   public Employee findById(@PathVariable("id") String id) {
      LOGGER.info("Employee find: id={}", id);
      return repository.findById(id).get();
   }
   
   @GetMapping("/")
   public Iterable<Employee> findAll() {
      LOGGER.info("Employee find");
      return repository.findAll();
   }
   
   @GetMapping("/department/{departmentId}")
   public List<Employee> findByDepartment(@PathVariable("departmentId") Long departmentId) {
      LOGGER.info("Employee find: departmentId={}", departmentId);
      return repository.findByDepartmentId(departmentId);
   }
   
   @GetMapping("/organization/{organizationId}")
   public List<Employee> findByOrganization(@PathVariable("organizationId") Long organizationId) {
      LOGGER.info("Employee find: organizationId={}", organizationId);
      return repository.findByOrganizationId(organizationId);
   }
   
}

The application expects environment variables pointing to the database name, user and password.

spring:
  application:
    name: employee
  data:
    mongodb:
      uri: mongodb://${MONGO_DATABASE_USER}:${MONGO_DATABASE_PASSWORD}@mongodb/${MONGO_DATABASE_NAME}

Inter-service communication is realized through OpenFeign – a declarative REST client. It is included in the department and organization microservices.

@FeignClient(name = "employee", url = "${microservices.employee.url}")
public interface EmployeeClient {

   @GetMapping("/employee/organization/{organizationId}")
   List<Employee> findByOrganization(@PathVariable("organizationId") String organizationId);
   
}

The address of the target service accessed by Feign client is set inside application.yml. The communication is realized via OpenShift/Kubernetes services. The name of each service is also injected through an environment variable.


spring:
  application:
    name: organization
  data:
    mongodb:
      uri: mongodb://${MONGO_DATABASE_USER}:${MONGO_DATABASE_PASSWORD}@mongodb/${MONGO_DATABASE_NAME}
microservices:
  employee:
    url: http://${EMPLOYEE_SERVICE}:8080
  department:
    url: http://${DEPARTMENT_SERVICE}:8080

2. Running Minishift

To run Minishift locally you just have to download it from the project site, copy minishift.exe (for Windows) to a directory on your PATH, and start it using the minishift start command. For more details you may refer to my previous article about OpenShift and Java applications, Quick guide to deploying Java apps on OpenShift. The version of Minishift used while writing this article is 1.29.0.
After starting Minishift we need to run some additional oc commands to enable source-2-image for Java apps. First, we grant the user admin the privileges needed to access the project openshift. In this project OpenShift stores all the built-in templates and image streams used, for example, as S2I builders. Let’s begin by enabling the admin-user addon.

$ minishift addons apply admin-user

Thanks to that addon we are able to log in to Minishift as a cluster admin. Now, we can grant the role cluster-admin to the user admin.

$ oc login -u system:admin
$ oc adm policy add-cluster-role-to-user cluster-admin admin
$ oc login -u admin -p admin

After that, you can log in to the web console using the credentials admin/admin, and you should be able to see the project openshift. That is not all: the image used for building runnable Java apps (openjdk18-openshift) is not available by default on Minishift. We can import it manually from the Red Hat registry using the oc import-image command, or just enable and apply the xpaas addon. I prefer the second option.

$ minishift addons apply xpaas
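If you prefer the manual import mentioned above instead of the xpaas addon, the oc import-image call may look like the following sketch (the registry path reflects the OpenShift 3.x era location of this image; verify it against your cluster's documentation):

```
$ oc import-image redhat-openjdk18-openshift:1.3 \
    --from=registry.access.redhat.com/redhat-openjdk-18/openjdk18-openshift:1.3 \
    --confirm -n openshift
```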

Now, you can go to the Minishift web console (for me available under the address https://192.168.99.100:8443), select the project openshift, and then navigate to Builds -> Images. You should see the image stream redhat-openjdk18-openshift on the list.

s2i-2

The newest version of that image is 1.3. Surprisingly, it is not the newest version on OpenShift Container Platform, where version 1.5 is available. However, the newest versions of the builder images have been moved to registry.redhat.io, which requires authentication.

3. Deploying Java app using S2I

We are finally able to deploy our app on Minishift with the S2I builder. The application source code is ready, and so is the Minishift instance. The first step is to deploy an instance of MongoDB. It is very easy with OpenShift, because a Mongo template is available in the built-in service catalog. We can provide our own configuration settings or leave the default values. What’s important for us, OpenShift generates secrets, by default available under the name mongodb.

s2i-3
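The same MongoDB deployment can also be performed from the CLI instead of the service catalog UI. A sketch using the mongodb-ephemeral template shipped with the stock OpenShift 3.x catalog (the template and parameter names come from that catalog; the credential values below are hypothetical):

```
$ oc new-app mongodb-ephemeral \
    -p MONGODB_USER=appuser \
    -p MONGODB_PASSWORD=secret123 \
    -p MONGODB_DATABASE=microservices
```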

The S2I builder image provided by OpenShift may be used through the image stream redhat-openjdk18-openshift. This image is intended for use with Maven-based Java standalone projects that are run via a main class, for example Spring Boot applications. If you do not provide any builder when creating a new app, the application type is auto-detected by OpenShift, and source code written in Java will be deployed on a WildFly server. The current version of the Java S2I builder image supports OpenJDK 1.8, Jolokia 1.3.5, and Maven 3.3.9-2.8.
Let’s create our first application on OpenShift. We begin with the microservice employee. Under normal circumstances each microservice would be located in a separate Git repository; in our sample all of them are placed in a single repository, so we provide the location of the current app by setting the --context-dir parameter. We also override the default branch to openshift, which has been created for the purposes of this article.

$ oc new-app redhat-openjdk18-openshift:1.3~https://github.com/piomin/sample-spring-microservices-new.git#openshift --name=employee --context-dir=employee-service

All our microservices are connected to the Mongo database, so we also have to inject connection settings and credentials into the application pod. It can be achieved by injecting mongodb secret to BuildConfig object.

$ oc set env bc/employee --from="secret/mongodb" --prefix=MONGO_

BuildConfig is one of the OpenShift objects created after running the command oc new-app. The command also creates a DeploymentConfig with the deployment definition, a Service, and an ImageStream with the newest Docker image of the application. After creating the application, a new build is run. First, it downloads the source code from the Git repository, then builds it using Maven, assembles the build results into a Docker image, and finally saves the image in the registry.
Now, we can create the next application: department. For simplification, all three microservices connect to the same database, which is not recommended under normal circumstances. In that case the only difference between the department and employee apps is the environment variable EMPLOYEE_SERVICE, set as a parameter on the oc new-app command.

$ oc new-app redhat-openjdk18-openshift:1.3~https://github.com/piomin/sample-spring-microservices-new.git#openshift --name=department --context-dir=department-service -e EMPLOYEE_SERVICE=employee 

The same as before we also inject mongodb secret into BuildConfig object.

$ oc set env bc/department --from="secret/mongodb" --prefix=MONGO_

A build starts just after creating a new application, but we can also start it manually by executing the following command.

$ oc start-build department

Finally, we are deploying the last microservice. Here are the appropriate commands.

$ oc new-app redhat-openjdk18-openshift:1.3~https://github.com/piomin/sample-spring-microservices-new.git#openshift --name=organization --context-dir=organization-service -e EMPLOYEE_SERVICE=employee -e DEPARTMENT_SERVICE=department
$ oc set env bc/organization --from="secret/mongodb" --prefix=MONGO_

4. Deep look into created OpenShift objects

The list of builds may be displayed in the web console under the section Builds -> Builds. As you can see in the picture below, there are three BuildConfig objects available, one for each application. The same list can be displayed using the command oc get bc.

s2i-4

You can take a look at the build history by selecting one of the elements from the list. You can also start a new build by clicking the Start Build button, as shown below.

s2i-5

We can always display the YAML configuration file with the BuildConfig definition, but it is also possible to perform a similar action using the web console. The following picture shows the list of environment variables injected from the mongodb secret into the BuildConfig object.

s2i-6.PNG

Every build generates a Docker image with the application and saves it in the Minishift internal registry, which is available under the address 172.30.1.1:5000. The list of available image streams is visible under the section Builds -> Images.

s2i-7

Every application is automatically exposed via services on ports 8080 (HTTP), 8443 (HTTPS) and 8778 (Jolokia). You can also expose these services outside Minishift by creating an OpenShift Route using the oc expose command.

s2i-8

5. Testing the sample system

To proceed with the tests, we should first expose our microservices outside Minishift. To do that, just run the following commands.

$ oc expose svc employee
$ oc expose svc department
$ oc expose svc organization
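Each oc expose command generates a Route object pointing at the corresponding service. A minimal sketch of what such a route might look like is shown below – the target port name 8080-tcp is an assumption, so check the generated object with oc get route employee -o yaml:

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: employee
spec:
  # route traffic to the employee service
  to:
    kind: Service
    name: employee
  port:
    targetPort: 8080-tcp
```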

After that, we can access the applications at the address http://${APP_NAME}-${PROJ_NAME}.${MINISHIFT_IP}.nip.io, as shown below.

s2i-9

Each microservice provides Swagger2 API documentation, available at the page swagger-ui.html. Thanks to that, we can easily test every endpoint exposed by the service.

s2i-10

It’s worth noting that every application makes use of three approaches to inject environment variables into the pod:

  1. It stores the version number in the source code repository, inside the file .s2i/environment. The S2I builder reads all properties defined in that file and sets them as environment variables for the builder pod, and then for the application pod. Our property is named VERSION; it is injected using Spring's @Value annotation and set in the Swagger API description (the code is visible below).
  2. I have already set the names of dependent services as environment variables while executing the oc new-app command for the department and organization apps.
  3. I have also injected the MongoDB secret into every BuildConfig object using the oc set env command.
@Value("${VERSION}")
String version;

public static void main(String[] args) {
   SpringApplication.run(DepartmentApplication.class, args);
}

@Bean
public Docket swaggerApi() {
   return new Docket(DocumentationType.SWAGGER_2)
      .select()
         .apis(RequestHandlerSelectors.basePackage("pl.piomin.services.department.controller"))
         .paths(PathSelectors.any())
      .build()
      .apiInfo(new ApiInfoBuilder().version(version).title("Department API").description("Documentation Department API v" + version).build());
}
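The .s2i/environment file mentioned in the first point is a simple list of key=value pairs. A minimal sketch, assuming the version is v1 (the actual value stored in the repository may differ):

```properties
# .s2i/environment – each entry becomes an environment variable
# in the builder pod and in the application pod
VERSION=v1
```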

Conclusion

In this article, I have shown that deploying your applications on OpenShift can be very simple. You don't have to create any YAML descriptor files or build Docker images yourself to run your app – it is built directly from your source code. You can compare this with deployment on Kubernetes, described in one of my previous articles: Quick Guide to Microservices with Kubernetes, Spring Boot 2.0 and Docker.

The post Running Java Microservices on OpenShift using Source-2-Image appeared first on Piotr's TechBlog.

]]>