Cloud Archives - Piotr's TechBlog (piotrminkowski.com): Java, Spring, Kotlin, microservices, Kubernetes, containers

Running .NET Apps on OpenShift (Mon, 17 Nov 2025)

The post Running .NET Apps on OpenShift appeared first on Piotr's TechBlog.

This article will guide you on running a .NET application on OpenShift using the Source-to-Image (S2I) tool. While .NET is not my primary area of expertise, I have been working with it quite extensively lately. In this article, we will examine more complex application cases, which may initially present some challenges.

If you are interested in developing applications for OpenShift, you may also want to read my article on deploying Java applications using the odo tool.

Why Source-to-Image?

That’s probably the first question that comes to mind. Let’s start with a brief definition. Source-to-Image (S2I) is a framework and tool that takes the application’s source code as input and produces a new, ready-to-run container image. In other words, it provides a clean, repeatable, and developer-friendly way to build container images directly from source code, especially in OpenShift, where it’s a core built-in mechanism. With S2I, there is no need to write Dockerfiles, and you can trust that the resulting images will run seamlessly on OpenShift.

Source Code

Feel free to use my source code if you’d like to try it out yourself. To do that, clone my sample GitHub repository and follow the instructions below.

Prerequisite – OpenShift cluster

There are several ways in which you can run an OpenShift cluster. I’m using a cluster that runs in AWS. But you can run it locally using OpenShift Local. This article describes how to install it on your laptop. You can also take advantage of the 30-day free Developer Sandbox service. However, it is worth mentioning that its use requires creating an account with Red Hat. To provision an OpenShift cluster in the developer sandbox, go here. You can also download and install Podman Desktop, which will help you set up both OpenShift Local and connect to the Developer Sandbox. Generally speaking, there are many possibilities. I assume you simply have an OpenShift cluster at your disposal.

Create a .NET application

I have created a slightly more complex application in terms of its modules. It consists of two main projects and two projects with unit tests. The WebApi.Library project is simply a module to be included in the main application, which is WebApi.App. Below is the directory structure of our sample repository.

.
├── README.md
├── WebApi.sln
├── src
│   ├── WebApi.App
│   │   ├── Controllers
│   │   │   └── VersionController.cs
│   │   ├── Program.cs
│   │   ├── Startup.cs
│   │   ├── WebApi.App.csproj
│   │   └── appsettings.json
│   └── WebApi.Library
│       ├── VersionService.cs
│       └── WebApi.Library.csproj
└── tests
    ├── WebApi.App.Tests
    │   ├── VersionControllerTests.cs
    │   └── WebApi.App.Tests.csproj
    └── WebApi.Library.Tests
        ├── VersionServiceTests.cs
        └── WebApi.Library.Tests.csproj
Plaintext

Both the library and the application are elementary in nature. The library provides a single method in the VersionService class to return its version read from the .csproj file.

using System.Reflection;

namespace WebApi.Library;

public class VersionService
{
    private readonly Assembly _assembly;

    public VersionService(Assembly? assembly = null)
    {
        _assembly = assembly ?? Assembly.GetExecutingAssembly();
    }

    public string? GetVersion()
    {
        var informationalVersion = _assembly
            .GetCustomAttribute<AssemblyInformationalVersionAttribute>()?
            .InformationalVersion;

        return informationalVersion ?? _assembly.GetName().Version?.ToString();
    }
}
C#

The application includes the library and uses its VersionService class to read and return the library version via the GET /api/version endpoint. It’s as simple as that.

using Microsoft.AspNetCore.Mvc;
using WebApi.Library;

namespace WebApi.App.Controllers
{
    [ApiController]
    [Route("api/version")]
    public class VersionController : ControllerBase
    {
        private readonly VersionService _versionService;
        private readonly ILogger<VersionController> _logger;

        public VersionController(ILogger<VersionController> logger)
        {
            _versionService = new VersionService();
            _logger = logger;
        }

        [HttpGet]
        public IActionResult GetVersion()
        {
            _logger.LogInformation("GetVersion");
            var version = _versionService.GetVersion();
            return Ok(new { version });
        }
    }
}
C#

The application itself utilizes several other libraries, including those for generating Swagger API documentation, Prometheus metrics, and Kubernetes health checks.

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.OpenApi.Models;
using HealthChecks.UI.Client;
using Microsoft.AspNetCore.Diagnostics.HealthChecks;
using Prometheus;
using System;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Diagnostics.HealthChecks;

namespace WebApi.App
{
    public class Startup
    {
        private readonly IConfiguration _configuration;

        public Startup(IConfiguration configuration)
        {
            _configuration = configuration;
        }

        public void ConfigureServices(IServiceCollection services)
        {
            // Enhanced Health Checks
            services.AddHealthChecks()
                .AddCheck("memory", () =>
                    HealthCheckResult.Healthy("Memory usage is normal"),
                    tags: new[] { "live" });

            services.AddControllers();

            services.AddSwaggerGen(c =>
            {
                c.SwaggerDoc("v1", new OpenApiInfo {Title = "WebApi.App", Version = "v1"});
            });
        }

        public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
        {
            // Enable prometheus metrics
            app.UseMetricServer();
            app.UseHttpMetrics();

            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            }

            app.UseSwagger();
            app.UseSwaggerUI(c => c.SwaggerEndpoint("/swagger/v1/swagger.json", "person-service v1"));

            // Kubernetes probes
            app.UseHealthChecks("/health/live", new HealthCheckOptions
            {
                Predicate = reg => reg.Tags.Contains("live"),
                ResponseWriter = UIResponseWriter.WriteHealthCheckUIResponse
            });

            app.UseHealthChecks("/health/ready", new HealthCheckOptions
            {
                Predicate = reg => reg.Tags.Contains("ready"),
                ResponseWriter = UIResponseWriter.WriteHealthCheckUIResponse
            });

            app.UseRouting();

            app.UseEndpoints(endpoints =>
            {
                endpoints.MapControllers();
                endpoints.MapMetrics();
            });

            using var scope = app.ApplicationServices.CreateScope();
        }
    }
}
C#

As you can see, the WebApi.Library project is included as an internal module, while other dependencies are simply added from the external NuGet repository.

<Project Sdk="Microsoft.NET.Sdk.Web">

  <ItemGroup>
    <ProjectReference Include="..\WebApi.Library\WebApi.Library.csproj" />
    <PackageReference Include="Swashbuckle.AspNetCore" Version="9.0.4" />
    <PackageReference Include="prometheus-net.AspNetCore" Version="8.2.1" />
    <PackageReference Include="AspNetCore.HealthChecks.NpgSql" Version="9.0.0" />
    <PackageReference Include="AspNetCore.HealthChecks.UI.Client" Version="9.0.0" />
    <PackageReference Include="Microsoft.Extensions.Diagnostics.HealthChecks.EntityFrameworkCore" Version="9.0.8" />
  </ItemGroup>

  <PropertyGroup>
    <TargetFramework>net9.0</TargetFramework>
    <ImplicitUsings>enable</ImplicitUsings>
    <Nullable>enable</Nullable>
    <Version>1.0.3</Version>
    <IsPackable>true</IsPackable>
    <GeneratePackageOnBuild>true</GeneratePackageOnBuild>
    <PackageId>WebApi.App</PackageId>
    <Authors>piomin</Authors>
    <Description>WebApi</Description>
  </PropertyGroup>

</Project>
XML

Using OpenShift Source-to-Image for .NET

S2I Locally with CLI

Before testing the mechanism on OpenShift, you can try Source-to-Image locally. On macOS, you can install the s2i CLI using Homebrew:

brew install source-to-image
ShellSession

After installation, check its version:

$ s2i version
s2i v1.5.1
ShellSession

Then, go to the repository root directory. At this point, we need to parameterize our build because the repository contains several projects. Fortunately, S2I provides a parameter that allows us to set the main project in a multi-module structure easily. It must be set as an environment variable for the s2i command. The following command sets the DOTNET_STARTUP_PROJECT environment variable and uses the registry.access.redhat.com/ubi8/dotnet-90:latest as a builder image.

s2i build . registry.access.redhat.com/ubi8/dotnet-90:latest webapi-app \
  -e DOTNET_STARTUP_PROJECT=src/WebApi.App
ShellSession

Of course, you must have Docker or Podman running on your laptop to use s2i. So, before using a builder image, pull it to your host.

podman pull registry.access.redhat.com/ubi8/dotnet-90:latest
ShellSession

Let’s take a look at the s2i build command output. As you can see, s2i restored and built two projects, but then created a runnable output for the WebApi.App project.

[Image: net-openshift-s2i-cli]

What about our unit tests? To execute tests during the build, we must also set the DOTNET_TEST_PROJECTS environment variable.

s2i build . registry.access.redhat.com/ubi8/dotnet-90:latest webapi-app \
  -e DOTNET_STARTUP_PROJECT=src/WebApi.App \
  -e DOTNET_TEST_PROJECTS=tests/WebApi.App.Tests
ShellSession

Here’s the command output:

[Image: net-openshift-cli-2]

The webapi-app image is ready.

$ podman images webapi-app
REPOSITORY                    TAG         IMAGE ID      CREATED        SIZE
docker.io/library/webapi-app  latest      e9d94f983ac1  5 seconds ago  732 MB
ShellSession

We can run the image locally with Podman (or Docker).
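For example, a minimal sketch, assuming the application listens on port 8080 (the default in the Red Hat UBI .NET images) and using the webapi-app image built above:

```shell
# Run the freshly built image in the background and map the container port to the host.
podman run --rm -d --name webapi-app -p 8080:8080 webapi-app

# Call the endpoint exposed by the application.
curl http://localhost:8080/api/version

# Stop the container when done (--rm removes it automatically).
podman stop webapi-app
```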

S2I for .NET on OpenShift

Then, let’s switch to the OpenShift cluster. You need to log in to your cluster using the oc login command. After that, create a new project for testing purposes:

oc new-project dotnet
ShellSession

In OpenShift, a single command can handle everything necessary to build and deploy an application. We need to provide the address of the Git repository containing the source code, specify the branch name, and indicate the name of the builder image located in the cluster’s openshift namespace. Additionally, we should include the same environment variables as before. Since the version of the source code we tested is located in the dev branch, we must append the branch name to the repository URL after the # character.

oc new-app openshift/dotnet:latest~https://github.com/piomin/web-api-2.git#dev --name webapi-app \
  --build-env DOTNET_STARTUP_PROJECT=src/WebApi.App \
  --build-env DOTNET_TEST_PROJECTS=tests/WebApi.App.Tests
ShellSession

Here’s the oc new-app command output:

Then, let’s expose the application outside the cluster using an OpenShift Route.

oc expose service/webapi-app
ShellSession

Finally, we can verify the build and deployment status:

There is also a really handy command you can use here. Try it yourself 🙂

oc get all
ShellSession
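Once the Route exists, you can also verify the deployment end to end from the command line. A sketch (the route hostname is cluster-specific, so we read it with jsonpath):

```shell
# Resolve the hostname assigned to the Route by OpenShift.
HOST=$(oc get route webapi-app -o jsonpath='{.spec.host}')

# Call the application through the Route (plain HTTP by default for oc expose).
curl "http://${HOST}/api/version"

# The Kubernetes liveness probe endpoint should respond as well.
curl "http://${HOST}/health/live"
```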

OpenShift Builds with Source-to-Image

Verify Build Status

Let’s verify what has happened after taking the steps from the previous section. Here’s the panel that summarizes the status of our application on the cluster. OpenShift automatically built the image from the .NET source code repository and then deployed it in the target namespace.

[Image: net-openshift-console]

Here are the logs from the Pod with our application:

The build was entirely performed on the cluster. You can verify the logs from the build by accessing the Build object. After building, the image was pushed to the internal image registry in OpenShift.

[Image: net-openshift-build]

Under the hood, a BuildConfig object was created. It will be the starting point for the next example.

apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  annotations:
    openshift.io/generated-by: OpenShiftNewApp
  labels:
    app: webapi-app
    app.kubernetes.io/component: webapi-app
    app.kubernetes.io/instance: webapi-app
  name: webapi-app
  namespace: dotnet
spec:
  output:
    to:
      kind: ImageStreamTag
      name: webapi-app:latest
  source:
    git:
      ref: dev
      uri: https://github.com/piomin/web-api-2.git
    type: Git
  strategy:
    sourceStrategy:
      env:
      - name: DOTNET_STARTUP_PROJECT
        value: src/WebApi.App
      - name: DOTNET_TEST_PROJECTS
        value: tests/WebApi.App.Tests
      from:
        kind: ImageStreamTag
        name: dotnet:latest
        namespace: openshift
    type: Source
YAML

OpenShift with .NET and Azure Artifacts Proxy

Now let’s switch to the master branch in our Git repository. In this branch, the WebApi.Library library is no longer included as a project reference, but as a package dependency from an external repository. However, this library has not been published in the public NuGet repository, but in an internal Azure Artifacts repository. Therefore, the build process must fetch packages through a source pointing to the address of our repository, or rather, a feed in Azure Artifacts.

<Project Sdk="Microsoft.NET.Sdk.Web">

  <ItemGroup>
    <PackageReference Include="WebApi.Library" Version="1.0.3" />
    <PackageReference Include="Swashbuckle.AspNetCore" Version="9.0.4" />
    <PackageReference Include="prometheus-net.AspNetCore" Version="8.2.1" />
    <PackageReference Include="AspNetCore.HealthChecks.NpgSql" Version="9.0.0" />
    <PackageReference Include="AspNetCore.HealthChecks.UI.Client" Version="9.0.0" />
    <PackageReference Include="Microsoft.Extensions.Diagnostics.HealthChecks.EntityFrameworkCore" Version="9.0.8" />
  </ItemGroup>

  <PropertyGroup>
    <TargetFramework>net9.0</TargetFramework>
    <ImplicitUsings>enable</ImplicitUsings>
    <Nullable>enable</Nullable>
    <Version>1.0.3</Version>
    <IsPackable>true</IsPackable>
    <GeneratePackageOnBuild>true</GeneratePackageOnBuild>
    <PackageId>WebApi.App</PackageId>
    <Authors>piomin</Authors>
    <Description>WebApi</Description>
  </PropertyGroup>

</Project>
XML

This is how it looks in Azure Artifacts. The name of my feed is pminkows. To access the feed, I must be authenticated against Azure DevOps using a personal token. The full address of the NuGet registry exposed via my instance of Azure Artifacts is https://pkgs.dev.azure.com/pminkows/_packaging/pminkows/nuget/v3/index.json.

If you would like to build such an application locally using Azure Artifacts, you should create a nuget.config file with the configuration below. Then place it in the $HOME/.nuget/NuGet directory.

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <clear />
    <add key="pminkows" value="https://pkgs.dev.azure.com/pminkows/_packaging/pminkows/nuget/v3/index.json" />
  </packageSources>
  <packageSourceCredentials>
    <pminkows>
      <add key="Username" value="pminkows" />
      <add key="ClearTextPassword" value="<MY_PERSONAL_TOKEN>" />
    </pminkows>
  </packageSourceCredentials>
</configuration>
nuget.config
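The steps above can be scripted. Here is a minimal sketch, assuming the personal access token is available in an AZURE_DEVOPS_PAT environment variable (a name chosen here for illustration):

```shell
# Create the directory NuGet reads its user-level config from.
mkdir -p "$HOME/.nuget/NuGet"

# Write the nuget.config pointing at the Azure Artifacts feed.
# The token is read from the environment instead of being hard-coded.
cat > "$HOME/.nuget/NuGet/nuget.config" <<EOF
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <clear />
    <add key="pminkows" value="https://pkgs.dev.azure.com/pminkows/_packaging/pminkows/nuget/v3/index.json" />
  </packageSources>
  <packageSourceCredentials>
    <pminkows>
      <add key="Username" value="pminkows" />
      <add key="ClearTextPassword" value="${AZURE_DEVOPS_PAT}" />
    </pminkows>
  </packageSourceCredentials>
</configuration>
EOF
```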

Our goal is to run this type of build on OpenShift instead of locally. To achieve this, we need to create a Kubernetes Secret containing the nuget.config file.

oc create secret generic nuget-config --from-file=nuget.config
ShellSession

Then, we must update the contents of the BuildConfig object. The most important change is the new spec.source.secrets section: the Kubernetes Secret containing the nuget.config file must be mounted in the HOME directory of the .NET builder image. We also change the Git branch to master and increase the builder’s logging level to detailed.

apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  annotations:
    openshift.io/generated-by: OpenShiftNewApp
  labels:
    app: webapi-app
    app.kubernetes.io/component: webapi-app
    app.kubernetes.io/instance: webapi-app
  name: webapi-app
  namespace: dotnet
spec:
  output:
    to:
      kind: ImageStreamTag
      name: webapi-app:latest
  source:
    git:
      ref: master
      uri: https://github.com/piomin/web-api-2.git
    type: Git
    secrets:
      - secret:
          name: nuget-config
        destinationDir: /opt/app-root/src/
  strategy:
    sourceStrategy:
      env:
      - name: DOTNET_STARTUP_PROJECT
        value: src/WebApi.App
      - name: DOTNET_TEST_PROJECTS
        value: tests/WebApi.App.Tests
      - name: DOTNET_VERBOSITY
        value: d
      from:
        kind: ImageStreamTag
        name: dotnet:latest
        namespace: openshift
    type: Source
YAML

Next, we can run the build again with the new parameters using the command below. With the increased logging level, you can confirm that all dependencies are retrieved via the Azure Artifacts feed.

oc start-build webapi-app --follow
ShellSession

Conclusion

This article covers different scenarios for building and deploying .NET applications on OpenShift. It demonstrates how to use various parameters to customize the image build according to the application’s needs. My goal was to show that deploying .NET applications on OpenShift is straightforward with the help of Source-to-Image.

Spring AI with Azure OpenAI (Tue, 25 Mar 2025)

This article will show you how to use Spring AI features like chat client memory, multimodality, tool calling, or embedding models with the Azure OpenAI service. Azure OpenAI is supported in almost all Spring AI use cases. Moreover, it goes beyond standard OpenAI capabilities, providing advanced AI-driven text generation and incorporating additional AI safety and responsible AI features. It also enables the integration of AI-focused resources, such as Vector Stores on Azure.

This is the eighth part of my series of articles about Spring Boot and AI. It is worth reading the previous posts in the series before proceeding with this one.

Source Code

Feel free to use my source code if you’d like to try it out yourself. To do that, clone my sample GitHub repository and follow the instructions below.

Enable and Configure Azure OpenAI

You need to begin the exercise by creating an instance of the Azure OpenAI service. The most crucial element here is the service’s name, since it is part of the exposed OpenAI endpoint. My service’s name is piomin-azure-openai.

[Image: spring-ai-azure-openai-create]

The Azure OpenAI service should be exposed without restrictions to allow easy access to the Spring AI app.

After creating the service, go to its main page in the Azure Portal. It provides information about API keys and an endpoint URL. Also, you have to deploy an Azure OpenAI model to start making API calls from your Spring AI app.

Copy the key and the endpoint URL and save them for later usage.

[Image: spring-ai-azure-openai-api-key]

You must create a new deployment with an AI model in the Azure AI Foundry portal. There are several available options. The Spring AI Azure OpenAI starter by default uses the gpt-4o model. If you choose another AI model, you will have to set its name in the spring.ai.azure.openai.chat.options.deployment-name Spring AI property. After selecting the preferred model, click the “Confirm” button.

[Image: spring-ai-azure-openai-deploy-model]

Finally, you can deploy the model on the Azure AI Foundry portal. Choose the most suitable deployment type for your needs.

Azure allows us to deploy multiple models. You can verify the list of model deployments in the Azure AI Foundry portal.

That’s all on the Azure Portal side. Now it’s time for the implementation part in the application source code.

Enable Azure OpenAI for Spring AI

Spring AI provides the Spring Boot starter for the Azure OpenAI Chat Client. You must add the following dependency to your Maven pom.xml file. Since the sample Spring Boot application is portable across various AI models, it includes the Azure OpenAI starter only if the azure-ai profile is active. Otherwise, it uses the spring-ai-openai-spring-boot-starter library.

<profile>
  <id>azure-ai</id>
  <dependencies>
    <dependency>
      <groupId>org.springframework.ai</groupId>
      <artifactId>spring-ai-azure-openai-spring-boot-starter</artifactId>
    </dependency>
  </dependencies>
</profile>
XML

It’s time to use the key you previously copied from the Azure OpenAI service page. Let’s export it as the AZURE_OPENAI_API_KEY environment variable.

export AZURE_OPENAI_API_KEY=<YOUR_AZURE_OPENAI_API_KEY>
ShellSession

Here are the application properties dedicated to the azure-ai Spring Boot profile. The previously exported AZURE_OPENAI_API_KEY environment variable is set as the spring.ai.azure.openai.api-key property. You must also set the OpenAI service endpoint. This address depends on your Azure OpenAI service name.

spring.ai.azure.openai.api-key = ${AZURE_OPENAI_API_KEY}
spring.ai.azure.openai.endpoint = https://piomin-azure-openai.openai.azure.com/
application-azure-ai.properties

To run the application and connect to your instance of the Azure OpenAI service, you must activate the azure-ai Maven profile and the Spring Boot profile under the same name. Here’s the required command:

mvn spring-boot:run -Pazure-ai -Dspring-boot.run.profiles=azure-ai
ShellSession

Test Spring AI Features with Azure OpenAI

I described several Spring AI features in the previous articles from this series. In each section, I will briefly mention the tested feature with a fragment of the sample source code. Please refer to my previous posts for more details about each feature and its sample implementation.

Chat Client with Memory and Structured Output

Here’s the @RestController containing endpoints we will use in these tests.

@RestController
@RequestMapping("/persons")
public class PersonController {

    private final ChatClient chatClient;

    public PersonController(ChatClient.Builder chatClientBuilder,
                            ChatMemory chatMemory) {
        this.chatClient = chatClientBuilder
                .defaultAdvisors(
                        new PromptChatMemoryAdvisor(chatMemory),
                        new SimpleLoggerAdvisor())
                .build();
    }

    @GetMapping
    List<Person> findAll() {
        PromptTemplate pt = new PromptTemplate("""
                Return a current list of 10 persons if exists or generate a new list with random values.
                Each object should contain an auto-incremented id field.
                The age value should be a random number between 18 and 99.
                Do not include any explanations or additional text.
                Return data in RFC8259 compliant JSON format.
                """);

        return this.chatClient.prompt(pt.create())
                .call()
                .entity(new ParameterizedTypeReference<>() {});
    }

    @GetMapping("/{id}")
    Person findById(@PathVariable String id) {
        PromptTemplate pt = new PromptTemplate("""
                Find and return the object with id {id} in a current list of persons.
                """);
        Prompt p = pt.create(Map.of("id", id));
        return this.chatClient.prompt(p)
                .call()
                .entity(Person.class);
    }
}
Java

First, you must call the endpoint that generates a list of ten persons from different countries. Then choose one person by ID to pick it up from the chat memory. Here are the results.

[Image: spring-ai-azure-openai-test-chat-model]
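Assuming the application runs locally on the default port 8080, the two calls can be made like this:

```shell
# Generate (or return) the list of ten persons kept in chat memory.
curl http://localhost:8080/persons

# Ask the model to find a single person by id in that remembered list.
curl http://localhost:8080/persons/1
```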

The interesting part happens in the background. Here’s a fragment of the advisor context added to the prompt by Spring AI.

Tool Calling

Here’s the @RestController containing endpoints we will use in these tests. There are two tools injected into the chat client: StockTools and WalletTools. These tools interact with a local H2 database to get a sample stock wallet structure and with the stock online API to load the latest share prices.

@RestController
@RequestMapping("/wallet")
public class WalletController {

    private final ChatClient chatClient;
    private final StockTools stockTools;
    private final WalletTools walletTools;

    public WalletController(ChatClient.Builder chatClientBuilder,
                            StockTools stockTools,
                            WalletTools walletTools) {
        this.chatClient = chatClientBuilder
                .defaultAdvisors(new SimpleLoggerAdvisor())
                .build();
        this.stockTools = stockTools;
        this.walletTools = walletTools;
    }

    @GetMapping("/with-tools")
    String calculateWalletValueWithTools() {
        PromptTemplate pt = new PromptTemplate("""
        What’s the current value in dollars of my wallet based on the latest stock daily prices ?
        """);

        return this.chatClient.prompt(pt.create())
                .tools(stockTools, walletTools)
                .call()
                .content();
    }

    @GetMapping("/highest-day/{days}")
    String calculateHighestWalletValue(@PathVariable int days) {
        PromptTemplate pt = new PromptTemplate("""
        On which day during last {days} days my wallet had the highest value in dollars based on the historical daily stock prices ?
        """);

        return this.chatClient.prompt(pt.create(Map.of("days", days)))
                .tools(stockTools, walletTools)
                .call()
                .content();
    }
}
Java

You must have your API key for the Twelvedata service to run these tests. Don’t forget to export it as the STOCK_API_KEY environment variable before running the app.

export STOCK_API_KEY=<YOUR_STOCK_API_KEY>
ShellSession

The GET /wallet/with-tools endpoint calculates the current stock wallet value in dollars.

[Image: spring-ai-azure-openai-test-tool-calling]

The GET /wallet/highest-day/{days} endpoint computes the wallet value over a given period in days and identifies the day with the highest value.
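As before, assuming the application listens locally on port 8080, the two endpoints can be exercised like this:

```shell
# Current wallet value, calculated by the model via the StockTools and WalletTools tools.
curl http://localhost:8080/wallet/with-tools

# The day with the highest wallet value over the last 7 days.
curl http://localhost:8080/wallet/highest-day/7
```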

Multimodality and Images

Here’s a part of the @RestController responsible for describing image content and generating a new image with a given item.

@RestController
@RequestMapping("/images")
public class ImageController {

    private final static Logger LOG = LoggerFactory.getLogger(ImageController.class);
    private final ObjectMapper mapper = new ObjectMapper();

    private final ChatClient chatClient;
    private ImageModel imageModel;
    // Keeps generated images so they can be referenced later in the controller.
    private final List<Media> dynamicImages = new ArrayList<>();

    public ImageController(ChatClient.Builder chatClientBuilder,
                           Optional<ImageModel> imageModel) {
        this.chatClient = chatClientBuilder
                .defaultAdvisors(new SimpleLoggerAdvisor())
                .build();
        imageModel.ifPresent(model -> this.imageModel = model);
    }
        
    @GetMapping("/describe/{image}")
    List<Item> describeImage(@PathVariable String image) {
        Media media = Media.builder()
                .id(image)
                .mimeType(MimeTypeUtils.IMAGE_PNG)
                .data(new ClassPathResource("images/" + image + ".png"))
                .build();
        UserMessage um = new UserMessage("""
        List all items you see on the image and define their category.
        Return items inside the JSON array in RFC8259 compliant JSON format.
        """, media);
        return this.chatClient.prompt(new Prompt(um))
                .call()
                .entity(new ParameterizedTypeReference<>() {});
    }
    
    @GetMapping(value = "/generate/{object}", produces = MediaType.IMAGE_PNG_VALUE)
    byte[] generate(@PathVariable String object) throws IOException, NotSupportedException {
        if (imageModel == null)
            throw new NotSupportedException("Image model is not supported");
        ImageResponse ir = imageModel.call(new ImagePrompt("Generate an image with " + object, ImageOptionsBuilder.builder()
                .height(1024)
                .width(1024)
                .N(1)
                .responseFormat("url")
                .build()));
        String url = ir.getResult().getOutput().getUrl();
        UrlResource resource = new UrlResource(url);
        LOG.info("Generated URL: {}", url);
        dynamicImages.add(Media.builder()
                .id(UUID.randomUUID().toString())
                .mimeType(MimeTypeUtils.IMAGE_PNG)
                .data(url)
                .build());
        return resource.getContentAsByteArray();
    }
    
}
Java

The GET /images/describe/{image} endpoint returns a structured list of items identified in a given image. It also categorizes each detected item. In this case, there are two available categories: fruits and vegetables.

[Image: spring-ai-azure-openai-test-multimodality]

By the way, here’s the image described above.

The image generation feature requires a dedicated model on Azure AI. The DALL-E 2 and DALL-E 3 models on Azure support a text-to-image feature.

[Image: spring-ai-azure-openai-dalle3]

The application must be aware of the model name. That’s why you must add a new property to your application properties with the following value.

spring.ai.azure.openai.image.options.deployment-name = dall-e-3
Plaintext

Then you must restart the application. After that, you can generate an image by calling the GET /images/generate/{object} endpoint. Here’s the result for the pineapple.
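The call itself can look as follows, assuming the application runs on localhost:8080; the endpoint produces image/png, so the response can be written straight to a file:

```shell
# Ask the model to generate an image of a pineapple and save the PNG locally.
curl -o pineapple.png http://localhost:8080/images/generate/pineapple
```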

Enable Azure CosmosDB Vector Store

Dependency

By default, the sample Spring Boot application uses the Pinecone vector store. However, Spring AI supports two vector store services available on Azure: Azure AI Search and CosmosDB. Let’s choose CosmosDB as the vector store. You must add the following dependency to your Maven pom.xml file:

<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-azure-cosmos-db-store-spring-boot-starter</artifactId>
</dependency>
XML

Configuration on Azure

Then, you must create an instance of CosmosDB in your Azure account. The name of my instance is piomin-ai-cosmos.

Once it is created, you will obtain its address and API key. To do that, go to the “Settings -> Keys” menu and save both values visible below.

spring-ai-azure-openai-cosmosdb

Then, you have to create a dedicated database and container for your application. To do that, go to the “Data Explorer” tab and provide names for the database and container ID. You must also set the partition key.

All previously provided values must be set in the application properties. Export your CosmosDB API key as the AZURE_VECTORSTORE_API_KEY environment variable.

spring.ai.vectorstore.cosmosdb.endpoint = https://piomin-ai-cosmos.documents.azure.com:443/
spring.ai.vectorstore.cosmosdb.key = ${AZURE_VECTORSTORE_API_KEY}
spring.ai.vectorstore.cosmosdb.databaseName = spring-ai
spring.ai.vectorstore.cosmosdb.containerName = spring-ai
spring.ai.vectorstore.cosmosdb.partitionKeyPath = /id
application-azure-ai.properties

Unfortunately, there are still some issues with the Azure CosmosDB support in the Spring AI M6 milestone version. They have been fixed in the SNAPSHOT version, so if you want to test it yourself, you will have to switch from milestones to snapshots.

<properties>
  <java.version>21</java.version>
  <spring-ai.version>1.0.0-SNAPSHOT</spring-ai.version>
</properties>
  
<repositories>
  <repository>
    <name>Central Portal Snapshots</name>
    <id>central-portal-snapshots</id>
    <url>https://central.sonatype.com/repository/maven-snapshots/</url>
    <releases>
      <enabled>false</enabled>
    </releases>
    <snapshots>
      <enabled>true</enabled>
    </snapshots>
  </repository>
  <repository>
    <id>spring-snapshots</id>
    <name>Spring Snapshots</name>
    <url>https://repo.spring.io/snapshot</url>
    <releases>
      <enabled>false</enabled>
    </releases>
    <snapshots>
      <enabled>true</enabled>
    </snapshots>
  </repository>
</repositories>
XML

Run and Test the Application

After those changes, you can start the application with the following command:

mvn spring-boot:run -Pazure-ai -Dspring-boot.run.profiles=azure-ai
ShellSession

Once the application is running, you can test the following @RestController that offers RAG functionality. The GET /stocks/load-data endpoint obtains stock prices of given companies and puts them in the vector store. The GET /stocks/v2/most-growth-trend uses the RetrievalAugmentationAdvisor instance to retrieve the most suitable data and include it in the user query.

@RestController
@RequestMapping("/stocks")
public class StockController {

    private final ObjectMapper mapper = new ObjectMapper();
    private final static Logger LOG = LoggerFactory.getLogger(StockController.class);
    private final ChatClient chatClient;
    private final RewriteQueryTransformer.Builder rqtBuilder;
    private final RestTemplate restTemplate;
    private final VectorStore store;

    @Value("${STOCK_API_KEY:none}")
    private String apiKey;

    public StockController(ChatClient.Builder chatClientBuilder,
                           VectorStore store,
                           RestTemplate restTemplate) {
        this.chatClient = chatClientBuilder
                .defaultAdvisors(new SimpleLoggerAdvisor())
                .build();
        this.rqtBuilder = RewriteQueryTransformer.builder()
                .chatClientBuilder(chatClientBuilder);
        this.store = store;
        this.restTemplate = restTemplate;
    }

    @GetMapping("/load-data")
    void load() throws JsonProcessingException {
        final List<String> companies = List.of("AAPL", "MSFT", "GOOG", "AMZN", "META", "NVDA");
        for (String company : companies) {
            StockData data = restTemplate.getForObject("https://api.twelvedata.com/time_series?symbol={0}&interval=1day&outputsize=10&apikey={1}",
                    StockData.class,
                    company,
                    apiKey);
            if (data != null && data.getValues() != null) {
                var list = data.getValues().stream().map(DailyStockData::getClose).toList();
                var doc = Document.builder()
                        .id(company)
                        .text(mapper.writeValueAsString(new Stock(company, list)))
                        .build();
                store.add(List.of(doc));
                LOG.info("Document added: {}", company);
            }
        }
    }

    @RequestMapping("/v2/most-growth-trend")
    String getBestTrendV2() {
        PromptTemplate pt = new PromptTemplate("""
                {query}.
                Which {target} is the most % growth?
                The 0 element in the prices table is the latest price, while the last element is the oldest price.
                """);

        Prompt p = pt.create(Map.of("query", "Find the most growth trends", "target", "share"));

        Advisor retrievalAugmentationAdvisor = RetrievalAugmentationAdvisor.builder()
                .documentRetriever(VectorStoreDocumentRetriever.builder()
                        .similarityThreshold(0.7)
                        .topK(3)
                        .vectorStore(store)
                        .build())
                .queryTransformers(rqtBuilder.promptTemplate(pt).build())
                .build();

        return this.chatClient.prompt(p)
                .advisors(retrievalAugmentationAdvisor)
                .call()
                .content();
    }

}
Java

Finally, you can call the following two endpoints.

$ curl http://localhost:8080/stocks/load-data
$ curl http://localhost:8080/stocks/v2/most-growth-trend
ShellSession
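The prompt instructs the model that element 0 of each price table is the latest price and the last element is the oldest. The percentage-growth comparison we expect the model to perform boils down to simple arithmetic, sketched below in plain Java (a hypothetical illustration, not part of the sample app):

```java
import java.util.Comparator;
import java.util.List;
import java.util.Map;

public class GrowthSketch {

    // Percent growth from the oldest price (last element) to the latest (element 0).
    static double percentGrowth(List<Double> prices) {
        double latest = prices.get(0);
        double oldest = prices.get(prices.size() - 1);
        return (latest - oldest) / oldest * 100.0;
    }

    // Pick the symbol whose price series shows the highest percent growth.
    static String mostGrowth(Map<String, List<Double>> series) {
        return series.entrySet().stream()
                .max(Comparator.comparingDouble(e -> percentGrowth(e.getValue())))
                .orElseThrow()
                .getKey();
    }

    public static void main(String[] args) {
        Map<String, List<Double>> series = Map.of(
                "AAPL", List.of(110.0, 105.0, 100.0),  // +10%
                "NVDA", List.of(150.0, 120.0, 100.0)); // +50%
        System.out.println(mostGrowth(series)); // NVDA
    }
}
```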

Final Thoughts

This exercise shows how to modify an existing Spring Boot AI application to integrate it with the Azure OpenAI service. It also provides a recipe for including Azure CosmosDB as a vector store for RAG scenarios and similarity searches.

The post Spring AI with Azure OpenAI appeared first on Piotr's TechBlog.

Getting Started with Azure Kubernetes Service https://piotrminkowski.com/2024/02/05/getting-started-with-azure-kubernetes-service/ https://piotrminkowski.com/2024/02/05/getting-started-with-azure-kubernetes-service/#respond Mon, 05 Feb 2024 11:12:47 +0000 https://piotrminkowski.com/?p=14887 In this article, you will learn how to create and manage a Kubernetes cluster on Azure and run your apps on it. We will focus on the Azure features that simplify Kubernetes adoption. We will discuss such topics as enabling monitoring based on Prometheus or exposing an app outside of the cluster using the Ingress […]

The post Getting Started with Azure Kubernetes Service appeared first on Piotr's TechBlog.

In this article, you will learn how to create and manage a Kubernetes cluster on Azure and run your apps on it. We will focus on the Azure features that simplify Kubernetes adoption. We will discuss such topics as enabling monitoring based on Prometheus or exposing an app outside of the cluster using the Ingress object and Azure mechanisms. To proceed with that article, you don’t need to have a deep knowledge of Kubernetes. However, you may find a lot of articles about Kubernetes and cloud-native development on my blog. For example, if you are developing Java apps and running them on Kubernetes you may read the following article about best practices.

On the other hand, if you are interested in Azure and looking for some other approaches for running Java apps there, you can also refer to some previous posts on my blog. I have already described how to use such services as Azure Spring Apps or Azure Function for Java. For example, in that article, you can read how to integrate Spring Boot with Azure services using the Spring Cloud Azure project. For more information about Azure Function and Spring Cloud refer to that article.

Source Code

This time we won’t work much with source code. However, if you would like to try it yourself, you can always take a look at my source code. In order to do that, you need to clone my GitHub repository. After that, you should follow my further instructions.

Create Cluster with Azure Kubernetes Service

After signing in to Azure Portal we can create a resource group for our cluster. The name of my resource group is aks. Then we need to find the Azure Kubernetes Service (AKS) in the marketplace. We are creating the instance of AKS in the aks resource group.

We will be redirected to the first page of the creation wizard. I will just enter the name of my cluster and leave all the other fields with the recommended values. The name of my cluster is piomin. The default cluster preset configuration is “Dev/Test”, which is enough for our exercise. However, if you choose e.g. the “Production Standard” preset, it will set 3 availability zones and change the pricing tier for your cluster. Let’s click the “Next” button to proceed to the next page.

azure-kubernetes-install-general

We also won’t change anything in the “Node pools” section. On the “Networking” page, we choose “Azure CNI” instead of “Kubenet” as the network configuration, and “Azure” instead of “Calico” as the network policy. In comparison to Kubenet, Azure CNI simplifies integration between Kubernetes and the Azure Application Gateway.

We will also make some changes in the Monitoring section. The main goal here is to enable the managed Prometheus service for our cluster. In order to do it, we need to create a new workspace in Azure Monitor. The name of my workspace is prometheus.

That’s all that we needed. Finally, we can create our first AKS cluster.

After a few minutes, our Kubernetes cluster is ready. We can display a list of resources created inside the aks group. As you can see, there are some resources related to Prometheus or Azure Monitor and a single Kubernetes service “piomin”. It is our Kubernetes cluster. We can click it to see the details.

azure-kubernetes-resources-aks

Of course, we can manage the cluster using Azure Portal. However, we can also easily switch to the kubectl CLI. Here’s the Kubernetes API server address for our cluster: piomin-xq30re6n.hcp.eastus.azmk8s.io.

Manage AKS with CLI

We can easily import the AKS cluster credentials into our local Kubeconfig file with the following az command (piomin is the name of the cluster, while aks is the name of the resource group):

$ az aks get-credentials -n piomin -g aks

Once you execute the command visible above, it will add a new Kube context or override the existing one.

After that, we can switch to the kubectl CLI. For example, we can display a list of Deployments across all the namespaces:

$ kubectl get deploy -A
NAMESPACE     NAME                 READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   ama-logs-rs          1/1     1            1           56m
kube-system   ama-metrics          1/1     1            1           52m
kube-system   ama-metrics-ksm      1/1     1            1           52m
kube-system   coredns              2/2     2            2           58m
kube-system   coredns-autoscaler   1/1     1            1           58m
kube-system   konnectivity-agent   2/2     2            2           58m
kube-system   metrics-server       2/2     2            2           58m

Deploy Sample Apps on the AKS Cluster

Once we can interact with the Kubernetes cluster on Azure through the kubectl CLI, we can run our first app there. In order to do it, first go to the callme-service directory. It contains a simple Spring Boot app that exposes REST endpoints. The Kubernetes manifests are located inside the k8s directory. Let’s take a look at the deployment YAML manifest. It contains the Kubernetes Deployment and Service objects.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: callme-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: callme-service
  template:
    metadata:
      labels:
        app: callme-service
    spec:
      containers:
        - name: callme-service
          image: piomin/callme-service
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          env:
            - name: VERSION
              value: "v1"
---
apiVersion: v1
kind: Service
metadata:
  name: callme-service
  labels:
    app: callme-service
spec:
  type: ClusterIP
  ports:
  - port: 8080
    name: http
  selector:
    app: callme-service

In order to simplify deployment on Kubernetes we can use Skaffold. It integrates with the kubectl CLI. We just need to execute the following command to build the app from the source code and run it on AKS:

$ cd callme-service
$ skaffold run

After that, we will deploy a second app on the cluster. Go to the caller-service directory. Here’s the YAML manifest with Kubernetes Service and Deployment.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: caller-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: caller-service
  template:
    metadata:
      name: caller-service
      labels:
        app: caller-service
    spec:
      containers:
      - name: caller-service
        image: piomin/caller-service
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
        env:
          - name: VERSION
            value: "v1"
---
apiVersion: v1
kind: Service
metadata:
  name: caller-service
  labels:
    app: caller-service
spec:
  type: ClusterIP
  ports:
    - port: 8080
      name: http
  selector:
    app: caller-service

The caller-service app invokes an endpoint exposed by the callme-service app. Here’s the implementation of Spring @RestController responsible for that:

@RestController
@RequestMapping("/caller")
public class CallerController {

   private static final Logger LOGGER = LoggerFactory
      .getLogger(CallerController.class);

   @Autowired
   Optional<BuildProperties> buildProperties;
   @Autowired
   RestTemplate restTemplate;
   @Value("${VERSION}")
   private String version;

   @GetMapping("/ping")
   public String ping() {
      LOGGER.info("Ping: name={}, version={}", 
          buildProperties.or(Optional::empty), version);
      String response = restTemplate.getForObject(
         "http://callme-service:8080/callme/ping", String.class);
      LOGGER.info("Calling: response={}", response);
      return "I'm caller-service " + version + ". Calling... " + response;
   }

}

Once again, let’s build the app on Kubernetes with the Skaffold CLI:

$ cd caller-service
$ skaffold run

Let’s switch to the Azure Portal. On the Azure Kubernetes Service page, go to the workloads section. As you can see, there are two Deployments: callme-service and caller-service.

We can switch to the pods view.

Monitoring with Managed Prometheus

In order to access Prometheus metrics for our AKS cluster, we need to go to the prometheus Azure Monitor workspace. In the first step, let’s take a look at the list of clusters assigned to that workspace.

Then, we can switch to the “Prometheus explorer” section. It allows us to provide the PromQL query to see a diagram illustrating the selected metric. You will find a full list of metrics collected for the AKS cluster in the following article. For example, we can visualize the RAM usage for both our apps running in the default namespace. In order to do that, we should use the node_namespace_pod_container:container_memory_working_set_bytes metric as shown below.

azure-kubernetes-prometheus

Exposing App Outside Azure Kubernetes

Install Azure Application Gateway on Kubernetes

In order to expose the service outside of the AKS, we need to create the Ingress object. However, we must have an ingress controller installed on the cluster to satisfy an Ingress. Since we are running the cluster on Azure, our natural choice is the AKS Application Gateway Ingress Controller that configures the Azure Application Gateway. We can install it through the Azure Portal. Go to your AKS cluster page and then switch to the “Networking” section. After that just select the “Enable ingress controller” checkbox. The new ingress-appgateway will be created and assigned to the AKS cluster.

azure-kubernetes-gateway

Once it is ready, you can display its details. The ingress-appgateway object exists in the same virtual network as the Azure Kubernetes Service. There is a dedicated resource group, in my case MC_aks_piomin_eastus. The gateway has a public IP address assigned. For me, it is 20.253.111.153, as shown below.

After installing the Azure Application Gateway addon on AKS, there is a new Deployment ingress-appgw-deployment responsible for integration between the cluster and Azure Application Gateway service. It is our ingress controller.

Create Kubernetes Ingress

There is also a default IngressClass object installed on the cluster. We can display a list of available ingress classes by executing the command visible below. Our IngressClass object is available under the azure-application-gateway name.

$ kubectl get ingressclass
NAME                        CONTROLLER                  PARAMETERS   AGE
azure-application-gateway   azure/application-gateway   <none>       18m

Let’s take a look at the Ingress manifest. It contains several standard fields inside the spec.rules.* section. It exposes the callme-service Kubernetes Service under the 8080 port. Our Ingress object needs to refer to the azure-application-gateway IngressClass. The Azure Application Gateway Ingress Controller (AGIC) will watch such an object. Once we apply the manifest, AGIC will automatically configure the Application Gateway instance. The Application Gateway contains some health checks to verify the status of the backend. Since the Spring Boot app exposes a liveness endpoint under the /actuator/health/liveness path and 8080 port, we need to override the default settings. In order to do it, we need to use the appgw.ingress.kubernetes.io/health-probe-path and appgw.ingress.kubernetes.io/health-probe-port annotations.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: callme-ingress
  namespace: default
  annotations:
    appgw.ingress.kubernetes.io/health-probe-hostname: localhost
    appgw.ingress.kubernetes.io/health-probe-path: /actuator/health/liveness
    appgw.ingress.kubernetes.io/health-probe-port: '8080'
spec:
  ingressClassName: azure-application-gateway
  rules:
    - http:
        paths:
          - path: /callme
            pathType: Prefix
            backend:
              service:
                name: callme-service
                port:
                  number: 8080
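The pathType: Prefix rule matches request paths on a per-segment basis, so /callme covers /callme and /callme/ping, but not /callme2/ping. A minimal Java sketch of that matching rule (hypothetical, for illustration):

```java
public class PrefixMatchSketch {

    // Element-wise prefix match on "/"-separated path segments,
    // mirroring how an Ingress pathType: Prefix rule is evaluated.
    static boolean prefixMatches(String prefix, String requestPath) {
        String[] p = prefix.split("/");
        String[] r = requestPath.split("/");
        if (r.length < p.length) {
            return false;
        }
        for (int i = 0; i < p.length; i++) {
            if (!p[i].equals(r[i])) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(prefixMatches("/callme", "/callme/ping"));  // true
        System.out.println(prefixMatches("/callme", "/callme2/ping")); // false
    }
}
```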

The Ingress for the caller-service is very similar. We just need to change the path and the name of the backend service.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: caller-ingress
  namespace: default
  annotations:
    appgw.ingress.kubernetes.io/health-probe-hostname: localhost
    appgw.ingress.kubernetes.io/health-probe-path: /actuator/health/liveness
    appgw.ingress.kubernetes.io/health-probe-port: '8080'
spec:
  ingressClassName: azure-application-gateway
  rules:
    - http:
        paths:
          - path: /caller
            pathType: Prefix
            backend:
              service:
                name: caller-service
                port:
                  number: 8080

Let’s take a look at the list of ingresses in the Azure Portal. They are available under the same address and port. There is just a difference in the target context path.

azure-kubernetes-ingress

We can test both services using the gateway IP address and the right context path. Each app exposes the GET /ping endpoint.

$ curl http://20.253.111.153/callme/ping
$ curl http://20.253.111.153/caller/ping

The Azure Application Gateway contains a list of backends. In the Kubernetes context, those backends are the IP addresses of the running pods. As you can see, both health checks respond with the HTTP 200 OK code.

azure-kubernetes-gateway-backend

What’s next

We have already created a Kubernetes cluster, run the apps there, and exposed them to an external client. Now, the question is: how can Azure help with other activities? Let’s say we want to install some additional software on the cluster. In order to do that, we need to go to the “Extensions + applications” section on the AKS cluster page. Then, we have to click the “Install an extension” button.

The link redirects us to the app marketplace. There are several different apps we can install in a simplified, graphical form. It could be a database, a message broker, or e.g. one of the Kubernetes-native tools like Argo CD.

azure-kubernetes-extensions

We just need to create a new instance of Argo CD and fill in some basic information. The installer is based on the Argo CD Helm chart provided by Bitnami.

azure-kubernetes-argocd

After a while, the instance of Argo CD is running on our cluster. We can display a list of installed extensions.

I installed Argo CD in the gitops namespace. Let’s verify a list of pods running in that namespace after successful installation:

$ kubectl get pod -n gitops
NAME                                             READY   STATUS    RESTARTS   AGE
gitops-argo-cd-app-controller-6d6848f46c-8n44j   1/1     Running   0          4m46s
gitops-argo-cd-repo-server-5f7cccd9d5-bc6ts      1/1     Running   0          4m46s
gitops-argo-cd-server-5c656c9998-fsgb5           1/1     Running   0          4m46s
gitops-redis-master-0                            1/1     Running   0          4m46s

And the last thing. As you remember, we exposed our apps outside the AKS cluster under the IP address. What about exposing them under the DNS name? Firstly, we need to have a DNS zone created on Azure. In this zone, we have to add a new record set containing the IP address of our application gateway. The name of the record set indicates the hostname of the gateway. In my case it is apps.cb57d.azure.redhatworkshops.io.

After that, we need to change the definition of the Ingress object. It should contain the host field inside the rules section, with the public DNS address of our gateway.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: caller-ingress
  namespace: default
  annotations:
    appgw.ingress.kubernetes.io/health-probe-hostname: localhost
    appgw.ingress.kubernetes.io/health-probe-path: /actuator/health/liveness
    appgw.ingress.kubernetes.io/health-probe-port: '8080'
spec:
  ingressClassName: azure-application-gateway
  rules:
    - host: apps.cb57d.azure.redhatworkshops.io
      http:
        paths:
          - path: /caller
            pathType: Prefix
            backend:
              service:
                name: caller-service
                port:
                  number: 8080

Final Thoughts

In this article, I focused on Azure features that simplify starting with the Kubernetes cluster. We covered such topics as cluster creation, monitoring, or exposing apps for external clients. Of course, these are not all the interesting features provided by Azure Kubernetes Service.


Serverless on Azure Function with Quarkus https://piotrminkowski.com/2024/01/31/serverless-on-azure-function-with-quarkus/ https://piotrminkowski.com/2024/01/31/serverless-on-azure-function-with-quarkus/#comments Wed, 31 Jan 2024 08:57:11 +0000 https://piotrminkowski.com/?p=14865 This article will teach you how to create and run serverless apps on Azure Function using the Quarkus Funqy extension. You can compare it to the Spring Boot and Spring Cloud support for Azure functions described in my previous article. There are also several other articles about Quarkus on my blog. If you are interested […]

The post Serverless on Azure Function with Quarkus appeared first on Piotr's TechBlog.

This article will teach you how to create and run serverless apps on Azure Function using the Quarkus Funqy extension. You can compare it to the Spring Boot and Spring Cloud support for Azure functions described in my previous article. There are also several other articles about Quarkus on my blog. If you are interested in the Kubernetes native solutions you can read more about serverless functions on OpenShift here.

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. The Quarkus app used in the article is located in the account-function directory. After you go to that directory you should just follow my further instructions.

Prerequisites

There are some prerequisites before you start the exercise. You need to install JDK17+ and Maven on your local machine. You also need to have an account on Azure and az CLI to interact with that account. Once you install the az CLI and log in to Azure you can execute the following command for verification:

$ az account show

If you would like to test Azure Functions locally, you need to install Azure Functions Core Tools. You can find detailed installation instructions in Microsoft Docs here. For macOS, there are three required commands to run:

$ brew tap azure/functions
$ brew install azure-functions-core-tools@4
$ brew link --overwrite azure-functions-core-tools@4

Create Resources on Azure

Before proceeding with the source code, we must create several required resources on the Azure cloud. In the first step, we will prepare a resource group for all the required objects. The name of the group is quarkus-serverless. The location depends on your preferences. For me it is eastus.

$ az group create -l eastus -n quarkus-serverless

In the next step, we need to create a storage account. The Azure Function service requires it, but we will also use that account during the local development with Azure Functions Core Tools.

$ az storage account create -n pminkowsserverless \
     -g quarkus-serverless \
     -l eastus \
     --sku Standard_LRS

In order to run serverless apps with the Quarkus Azure extension, we need to create an Azure Function App instance. Of course, we use the previously created resource group and storage account. The name of my Function App instance is pminkows-account-function. We can also set a default OS type (Linux), functions version (4), and runtime stack (Java) for each Function App.

$ az functionapp create -n pminkows-account-function \
     -c eastus \
     --os-type Linux \
     --functions-version 4 \
     -g quarkus-serverless \
     --runtime java \
     --runtime-version 17.0 \
     -s pminkowsserverless

Now, let’s switch to the Azure Portal. Then, find the quarkus-serverless resource group. You should have the same list of resources inside this group as shown below. It means that our environment is ready and we can proceed to the app implementation.

quarkus-azure-function-resources

Building Serverless Apps with Quarkus Funqy HTTP

In this article, we will consider the simplest option for building and running Quarkus apps on Azure Functions. Therefore, we include the Quarkus Funqy HTTP extension. It provides a simple way to expose services as HTTP endpoints, but shouldn’t be treated as a replacement for REST over HTTP. In case you need the full REST functionality you can use, e.g. the Quarkus RESTEasy module with Azure Function Java library. In order to deploy the app on the Azure Function service, we need to include the quarkus-azure-functions-http extension. Our function will also store data in the H2 in-memory database through the Panache module integration. Here’s a list of required Maven dependencies:

<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-funqy-http</artifactId>
</dependency>
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-azure-functions-http</artifactId>
</dependency>
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-hibernate-orm-panache</artifactId>
</dependency>
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-jdbc-h2</artifactId>
</dependency>

With the quarkus-azure-functions-http extension, we don’t need to include and configure any Maven plugin to deploy an app on Azure. That extension does the whole deployment work for us. By default, Quarkus uses the Azure CLI in the background to authenticate and deploy to Azure. We just need to provide several configuration properties with the quarkus.azure-functions prefix inside the Quarkus application.properties file. In the configuration section, we have to set the name of the Azure Function App instance (pminkows-account-function), the target resource group (quarkus-serverless), the region (eastus), and the service plan (EastUSLinuxDynamicPlan). We will also add several properties responsible for the database connection and set the root API context path (/api).

quarkus.azure-functions.app-name = pminkows-account-function
quarkus.azure-functions.app-service-plan-name = EastUSLinuxDynamicPlan
quarkus.azure-functions.resource-group = quarkus-serverless
quarkus.azure-functions.region = eastus
quarkus.azure-functions.runtime.java-version = 17

quarkus.datasource.db-kind = h2
quarkus.datasource.username = sa
quarkus.datasource.password = password
quarkus.datasource.jdbc.url = jdbc:h2:mem:testdb
quarkus.hibernate-orm.database.generation = drop-and-create

quarkus.http.root-path = /api

Here’s our @Entity class. We take advantage of the Quarkus Panache active record pattern.

@Entity
public class Account extends PanacheEntity {
    public String number;
    public int balance;
    public Long customerId;
}
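With the active record pattern, the entity itself carries its persistence operations, such as persist() or find(), instead of delegating to a separate repository. A framework-free, in-memory caricature of the idea (hypothetical, for illustration only):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;

public class ActiveRecordSketch {

    // In-memory stand-in for the database table.
    static final List<AccountRecord> TABLE = new ArrayList<>();

    // The entity carries its own persistence operations, like a PanacheEntity.
    record AccountRecord(String number, int balance, Long customerId) {

        void persist() {
            TABLE.add(this);
        }

        static Optional<AccountRecord> findByNumber(String number) {
            return TABLE.stream()
                    .filter(a -> a.number().equals(number))
                    .findFirst();
        }
    }

    public static void main(String[] args) {
        new AccountRecord("124", 1000, 1L).persist();
        System.out.println(AccountRecord.findByNumber("124").isPresent()); // true
    }
}
```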

Let’s take a look at the implementation of our Quarkus HTTP functions. By default, with the Quarkus Funqy extension, the URL path used to execute a function is the function name. We just need to annotate the target method with @Funq. In case we want to override the default path, we set the desired name as the annotation’s value. There are two methods. The addAccount method is responsible for adding new accounts and is exposed under the add-account path. On the other hand, the findByNumber method allows us to find an account by its number. We can access it under the by-number path. This approach allows us to deploy multiple Funqy functions on a single Azure Function.

public class AccountFunctionResource {

    @Inject
    Logger log;

    @Funq("add-account")
    @Transactional
    public Account addAccount(Account account) {
        log.infof("Add: %s", account);
        Account.persist(account);
        return account;
    }

    @Funq("by-number")
    public Account findByNumber(Account account) {
        log.infof("Find: %s", account.number);
        return Account
                .find("number", account.number)
                .singleResult();
    }
}

Running Azure Functions Locally with Quarkus

Before we deploy our functions on Azure, we can run and test them locally. I assume you have already installed the Azure Functions Core Tools as described in the “Prerequisites” section. Firstly, we need to build the app with the following Maven command:

$ mvn clean package

Then, we can take advantage of Quarkus Azure Extension and use the following Maven command to run the app in Azure Functions local environment:

$ mvn quarkus:run

Here’s the output after running the command visible above. As you can see, there is just a single Azure function QuarkusHttp, although we have two methods annotated with @Funq. Quarkus allows us to invoke multiple Funqy functions using a single, wildcarded route http://localhost:8081/api/{*path}.
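Conceptually, the wildcarded route works like a dispatch table: the {*path} suffix selects which Funqy function handles the request. A hypothetical sketch of that idea (the handlers below are placeholders, not the real Funqy runtime):

```java
import java.util.Map;
import java.util.function.UnaryOperator;

public class FunqyDispatchSketch {

    // Route table keyed by the {*path} suffix, mirroring the
    // @Funq names "add-account" and "by-number".
    static final Map<String, UnaryOperator<String>> ROUTES = Map.of(
            "add-account", body -> "persisted " + body,
            "by-number", body -> "found " + body);

    // Resolve and invoke the handler for a request to /api/{*path}.
    static String dispatch(String path, String body) {
        UnaryOperator<String> handler = ROUTES.get(path);
        return handler == null ? "404" : handler.apply(body);
    }

    public static void main(String[] args) {
        System.out.println(dispatch("add-account", "124")); // persisted 124
    }
}
```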

quarkus-azure-function-local

All the required Azure Function configuration files like host.json, local.settings.json and function.json are autogenerated by Quarkus during the build. You can find them in the target/azure-functions directory.

Here’s the auto-generated function.json with our Azure Function definition:

{
  "scriptFile" : "../account-function-1.0.jar",
  "entryPoint" : "io.quarkus.azure.functions.resteasy.runtime.Function.run",
  "bindings" : [ {
    "type" : "httpTrigger",
    "direction" : "in",
    "name" : "req",
    "route" : "{*path}",
    "methods" : [ "GET", "HEAD", "POST", "PUT", "OPTIONS" ],
    "dataType" : "binary",
    "authLevel" : "ANONYMOUS"
  }, {
    "type" : "http",
    "direction" : "out",
    "name" : "$return"
  } ]
}

Let’s call our local function. In the first step, we will add a new account by calling the addAccount function:

$ curl http://localhost:8081/api/add-account \
    -d "{\"number\":\"124\",\"customerId\":1, \"balance\":1000}" \
    -H "Content-Type: application/json"

Then, we can find the account by its number. For GET requests, the Funqy HTTP binding allows the use of query parameter mapping for function input parameters. The query parameter names are mapped to properties on the bean class.

$ curl "http://localhost:8081/api/by-number?number=124"
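Conceptually, the Funqy HTTP binding copies each query parameter onto the input bean property with the same name. Here is a dependency-free sketch of that idea — the Account stub and the bind method below are simplified illustrations, not the actual Funqy implementation:

```java
import java.util.Map;

public class QueryParamBinding {

    // Simplified stand-in for the Account bean from the article
    static class Account {
        String number;
    }

    // Copy each known query parameter onto the matching bean property,
    // mimicking what the Funqy HTTP binding does for GET requests
    static Account bind(Map<String, String> queryParams) {
        Account account = new Account();
        account.number = queryParams.get("number");
        return account;
    }

    public static void main(String[] args) {
        Account account = bind(Map.of("number", "124"));
        System.out.println(account.number); // prints 124
    }
}
```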

Deploy Quarkus Serverless on Azure Functions

Finally, we can deploy our sample Quarkus serverless app on Azure. As you probably remember, we already have all the required settings in the application.properties file. So now, we just need to run the following Maven command:

$ mvn quarkus:deploy

Here’s the output of the command. As you see, there is still one Azure Function with a wildcard in the path.

Let’s switch to the Azure Portal. Here’s a page with the pminkows-account-function details:

quarkus-azure-function-portal

We can call a similar query several times with different input data to test the service:

$ curl https://pminkows-account-function.azurewebsites.net/api/add-account \
    -d "{\"number\":\"127\",\"customerId\":4, \"balance\":1000}" \
    -H "Content-Type: application/json"

Here’s the invocation history visible in the Azure Monitor for our QuarkusHttp function.

Final Thoughts

In this article, I have shown you a simplified scenario of running a Quarkus serverless app on Azure Functions. You don’t need to know much about Azure Functions to run such a service, since Quarkus handles all the required configuration for you.

The post Serverless on Azure Function with Quarkus appeared first on Piotr's TechBlog.

Serverless on Azure with Spring Cloud Function https://piotrminkowski.com/2024/01/19/serverless-on-azure-with-spring-cloud-function/ https://piotrminkowski.com/2024/01/19/serverless-on-azure-with-spring-cloud-function/#respond Fri, 19 Jan 2024 09:25:49 +0000 https://piotrminkowski.com/?p=14829 This article will teach you how to create and run serverless apps on Azure using the Spring Cloud Function and Spring Cloud Azure projects. We will integrate with the Azure Functions and Azure Event Hubs services. It is not my first article about Azure and Spring Cloud. As a preparation for that exercise, it is […]

This article will teach you how to create and run serverless apps on Azure using the Spring Cloud Function and Spring Cloud Azure projects. We will integrate with the Azure Functions and Azure Event Hubs services.

It is not my first article about Azure and Spring Cloud. As preparation for this exercise, it is worth reading that article to familiarize yourself with some interesting features of Spring Cloud Azure. It describes an integration with the Azure Spring Apps, Cosmos DB, and App Configuration services. On the other hand, if you are interested in CI/CD for Spring Boot apps, you can refer to the following article about Azure DevOps and Terraform.

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. The Spring Boot apps used in the article are located in the serverless directory. After you go to that directory you should just follow my further instructions.

Architecture

In this exercise, we will prepare two sample Spring Boot apps (aka functions): account-function and customer-function. Then, we will deploy them on the Azure Functions service. Our apps do not communicate with each other directly, but through the Azure Event Hubs service. Event Hubs is a cloud-native data streaming service compatible with the Apache Kafka API. After adding a new customer, the customer-function sends an event to Azure Event Hubs using the Spring Cloud Stream binder. The account-function app receives the event through the Azure Event Hubs trigger. Also, the customer-function is exposed to external clients through the Azure HTTP trigger. Here’s the diagram of our architecture:

azure-serverless-spring-cloud-arch

Prerequisites

There are some prerequisites before you start the exercise. You need to install JDK 17+ and Maven on your local machine. You also need an Azure account and the az CLI to interact with it. Once you install the az CLI and log in to Azure, you can execute the following command for verification:

$ az account show

If you would like to test Azure Functions locally, you need to install Azure Functions Core Tools. You can find detailed installation instructions in Microsoft Docs here. For macOS, there are three required commands to run:

$ brew tap azure/functions
$ brew install azure-functions-core-tools@4
$ brew link --overwrite azure-functions-core-tools@4

Create Resources on Azure

Before we proceed with the source code, we need to create several required resources on the Azure cloud. In the first step, we will prepare a resource group for all required objects. The name of the group is spring-cloud-serverless. The location depends on your preferences. For me it is eastus.

$ az group create -l eastus -n spring-cloud-serverless

In the next step, we need to create a storage account. The Azure Function service requires it, but we will also use that account during the local development with Azure Functions Core Tools.

$ az storage account create -n pminkowsserverless \
     -g spring-cloud-serverless \
     -l eastus \
     --sku Standard_LRS

In order to run serverless apps on Azure with, e.g., Spring Cloud Function, we need to create the Azure Function App instances. Of course, we use the previously created resource group and storage account. The names of my Function App instances are pminkows-account-function and pminkows-customer-function. We can also set a default OS type (Linux), functions version (4), and runtime stack (Java) for each Function App.

$ az functionapp create -n pminkows-customer-function \
     -c eastus \
     --os-type Linux \
     --functions-version 4 \
     -g spring-cloud-serverless \
     --runtime java \
     --runtime-version 17.0 \
     -s pminkowsserverless

$ az functionapp create -n pminkows-account-function \
     -c eastus \
     --os-type Linux \
     --functions-version 4 \
     -g spring-cloud-serverless \
     --runtime java \
     --runtime-version 17.0 \
     -s pminkowsserverless

Then, we have to create the Azure Event Hubs namespace. The name of my namespace is spring-cloud-serverless. As before, I choose the East US location and the spring-cloud-serverless resource group. We can also set the pricing tier (Standard) and the upper limit of throughput units when the AutoInflate option is enabled.

$ az eventhubs namespace create -n spring-cloud-serverless \
     -g spring-cloud-serverless \
     --location eastus \
     --sku Standard \
     --maximum-throughput-units 1 \
     --enable-auto-inflate true

Finally, we have to create topics on Event Hubs. Of course, they have to be assigned to the previously created spring-cloud-serverless Event Hubs namespace. The names of our topics are accounts and customers. The number of partitions is irrelevant in this exercise.

$ az eventhubs eventhub create -n accounts \
     -g spring-cloud-serverless \
     --namespace-name spring-cloud-serverless \
     --cleanup-policy Delete \
     --partition-count 3

$ az eventhubs eventhub create -n customers \
     -g spring-cloud-serverless \
     --namespace-name spring-cloud-serverless \
     --cleanup-policy Delete \
     --partition-count 3

Now, let’s switch to the Azure Portal. Find the spring-cloud-serverless resource group. You should have the same list of resources inside this group as shown below. It means that our environment is ready and we can proceed to the source code.

App Dependencies

Firstly, we need to declare the dependencyManagement section inside the Maven pom.xml for three projects used in the app implementation: Spring Boot, Spring Cloud, and Spring Cloud Azure.

<properties>
  <java.version>17</java.version>
  <spring-boot.version>3.1.4</spring-boot.version>
  <spring-cloud-azure.version>5.7.0</spring-cloud-azure.version>
  <spring-cloud.version>2022.0.4</spring-cloud.version>
  <maven.compiler.release>${java.version}</maven.compiler.release>
  <maven.compiler.source>${java.version}</maven.compiler.source>
  <maven.compiler.target>${java.version}</maven.compiler.target>
</properties>

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-dependencies</artifactId>
      <version>${spring-cloud.version}</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
    <dependency>
      <groupId>com.azure.spring</groupId>
      <artifactId>spring-cloud-azure-dependencies</artifactId>
      <version>${spring-cloud-azure.version}</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-dependencies</artifactId>
      <version>${spring-boot.version}</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>

<build>
  <plugins>
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
      <executions>
        <execution>
          <goals>
            <goal>repackage</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>

Here’s the list of required Maven dependencies. We need to include the spring-cloud-function-context library to enable Spring Cloud Functions. In order to integrate with the Azure Functions service, we need to include the spring-cloud-function-adapter-azure extension. Our apps also send messages to the Azure Event Hubs service through the dedicated Spring Cloud Stream binder.

<dependencies>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-function-adapter-azure</artifactId>
  </dependency>
  <dependency>
    <groupId>com.azure.spring</groupId>
    <artifactId>spring-cloud-azure-stream-binder-eventhubs</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-function-context</artifactId>
  </dependency>
</dependencies>

Create Azure Serverless Apps with Spring Cloud

Expose an Azure Function as an HTTP Endpoint

Let’s begin with the customer-function. To simplify the app, we will use an in-memory H2 database for storing data. Each time a new customer is added, it is persisted in the in-memory database using Spring Data JPA. Here’s our entity class:

@Entity
public class Customer {
   @Id
   @GeneratedValue(strategy = GenerationType.IDENTITY)
   private Long id;
   private String name;
   private int age;
   private String status;

   // GETTERS AND SETTERS
}

Here’s the Spring Data repository interface for the Customer entity:

public interface CustomerRepository extends ListCrudRepository<Customer, Long> {
}
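Spring Data generates the ListCrudRepository implementation for us. Conceptually, the save and findById operations we rely on later behave like this in-memory sketch — an illustration only, since Spring Data actually issues SQL against H2, and the Customer stub here is simplified:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Optional;

public class InMemoryCustomerRepository {

    // Simplified stand-in for the article's Customer entity
    static class Customer {
        Long id;
        String name;
    }

    private final Map<Long, Customer> store = new LinkedHashMap<>();
    private long sequence = 0;

    // Mimics ListCrudRepository.save(): assigns an identifier on first
    // save, as @GeneratedValue(strategy = IDENTITY) would in the database
    Customer save(Customer customer) {
        if (customer.id == null) {
            customer.id = ++sequence;
        }
        store.put(customer.id, customer);
        return customer;
    }

    Optional<Customer> findById(Long id) {
        return Optional.ofNullable(store.get(id));
    }

    public static void main(String[] args) {
        InMemoryCustomerRepository repository = new InMemoryCustomerRepository();
        Customer customer = new Customer();
        customer.name = "Test";
        System.out.println(repository.save(customer).id); // prints 1
    }
}
```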

Let’s proceed to the Spring Cloud Function implementation. The CustomerInternalFunctions bean defines two functions. The addCustomer function (1) persists a new customer in the database and sends an event about it to the Azure Event Hubs topic. It uses the Spring Cloud Stream StreamBridge bean for interacting with Event Hubs. It also returns the persisted Customer entity as a response. The second function, changeStatus (2), doesn’t return any data; it just needs to react to the incoming event and change the status of the particular customer. That event is sent by the account-function app after generating an account for a newly created customer. The important thing to note here is that these are just Spring Cloud functions. For now, they have nothing to do with Azure Functions.

@Service
public class CustomerInternalFunctions {

   private static final Logger LOG = LoggerFactory
      .getLogger(CustomerInternalFunctions.class);

   private StreamBridge streamBridge;
   private CustomerRepository repository;

   public CustomerInternalFunctions(StreamBridge streamBridge, 
                                    CustomerRepository repository) {
      this.streamBridge = streamBridge;
      this.repository = repository;
   }

   // (1)
   @Bean
   public Function<Customer, Customer> addCustomer() {
      return c -> {
         Customer newCustomer = repository.save(c);
         streamBridge.send("customers-out-0", newCustomer);
         LOG.info("New customer added: {}", c);
         return newCustomer;
      };
   }

   // (2)
   @Bean
   public Consumer<Account> changeStatus() {
      return account -> {
         Customer customer = repository.findById(account.getCustomerId())
                .orElseThrow();
         customer.setStatus(Customer.CUSTOMER_STATUS_ACC_ACTIVE);
         repository.save(customer);
         LOG.info("Customer activated: id={}", customer.getId());
      };
   }

}

We also need to provide several configuration properties in the Spring Boot application.properties file. We should set the Azure Event Hubs connection string and the name of the target topic. Since our app uses Spring Cloud Stream only for sending events, we should also turn off the autodiscovery of functional beans as messaging bindings.

spring.cloud.azure.eventhubs.connection-string = ${EVENT_HUBS_CONNECTION_STRING}
spring.cloud.stream.bindings.customers-out-0.destination = customers
spring.cloud.stream.function.autodetect = false

In order to expose the functions on Azure, we will take advantage of the Spring Cloud Function Azure adapter. It adds several Azure libraries to our Maven dependencies. It can invoke Spring Cloud functions directly or through the lookup approach with the FunctionCatalog bean (1). The method has to be annotated with @FunctionName, which defines the name of the function in Azure (2). In order to expose that function over HTTP, we need to define the Azure @HttpTrigger (3). The trigger exposes the function as a POST endpoint and doesn’t require any authorization. Our method receives the request through the HttpRequestMessage object. Then, it invokes the Spring Cloud function addCustomer by name using the FunctionCatalog bean (4).

// (1)
@Autowired
private FunctionCatalog functionCatalog;

@FunctionName("add-customer") // (2)
public Customer addCustomerFunc(
        // (3)
        @HttpTrigger(name = "req",
                     methods = { HttpMethod.POST },
                     authLevel = AuthorizationLevel.ANONYMOUS)
        HttpRequestMessage<Optional<Customer>> request,
        ExecutionContext context) {
   Customer c = request.getBody().orElseThrow();
   context.getLogger().info("Request: " + c);
   // (4)
   Function<Customer, Customer> function = functionCatalog
      .lookup("addCustomer");
   return function.apply(c);
}
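The FunctionCatalog lookup used above is essentially a name-to-function registry. A minimal JDK-only approximation of the pattern — the registry contents and function body here are illustrative assumptions, not Spring’s implementation:

```java
import java.util.Map;
import java.util.function.Function;

public class FunctionCatalogSketch {

    // Functions registered under their bean names, as Spring Cloud Function does
    static final Map<String, Function<String, String>> CATALOG = Map.of(
            "addCustomer", name -> "persisted:" + name
    );

    // Resolve a function by name, as functionCatalog.lookup(...) does
    @SuppressWarnings("unchecked")
    static <I, O> Function<I, O> lookup(String name) {
        return (Function<I, O>) CATALOG.get(name);
    }

    public static void main(String[] args) {
        // The Azure trigger method resolves the target function by name at runtime
        Function<String, String> addCustomer = lookup("addCustomer");
        System.out.println(addCustomer.apply("Test")); // prints persisted:Test
    }
}
```

This decoupling is why the same Spring Cloud function can be reached from an HTTP trigger, an Event Hubs trigger, or a plain local test without changing its code.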

Integrate a Function with the Azure Event Hubs Trigger

Let’s switch to the account-function app directory inside our Git repository. Like customer-function, it uses an in-memory H2 database, this time for storing customer accounts. Here’s our Account entity class:

@Entity
public class Account {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    private String number;
    private Long customerId;
    private int balance;

   // GETTERS AND SETTERS ...
}

Inside the account-function app, there is a single Spring Cloud Function addAccount. It generates a new 16-digit account number for each new customer. Then, it saves the account in the database and sends the event to the Azure Event Hubs with Spring Cloud Stream StreamBridge bean.

@Service
public class AccountInternalFunctions {

   private static final Logger LOG = LoggerFactory
      .getLogger(AccountInternalFunctions.class);

   private StreamBridge streamBridge;
   private AccountRepository repository;

   public AccountInternalFunctions(StreamBridge streamBridge, 
                                   AccountRepository repository) {
      this.streamBridge = streamBridge;
      this.repository = repository;
   }

   @Bean
   public Function<Customer, Account> addAccount() {
      return customer -> {
         String n = RandomStringUtils.random(16, false, true);
         Account a = new Account(n, customer.getId(), 0);
         a = repository.save(a);
         streamBridge.send("accounts-out-0", a);
         LOG.info("New account added: {}", a);
         return a;
      };
   }

}
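RandomStringUtils.random(16, false, true) draws sixteen random numeric characters. If you prefer to avoid the Apache Commons Lang dependency, a JDK-only equivalent could look like this — a sketch, not the code used in the repository:

```java
import java.security.SecureRandom;

public class AccountNumberGenerator {

    private static final SecureRandom RANDOM = new SecureRandom();

    // JDK-only equivalent of RandomStringUtils.random(16, false, true):
    // a string of sixteen random decimal digits
    static String randomAccountNumber() {
        StringBuilder sb = new StringBuilder(16);
        for (int i = 0; i < 16; i++) {
            sb.append(RANDOM.nextInt(10));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(randomAccountNumber());
    }
}
```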

As with customer-function, we need to provide some configuration settings for Spring Cloud Stream. However, instead of the customers topic, this time we are sending events to the accounts topic.

spring.cloud.azure.eventhubs.connection-string = ${EVENT_HUBS_CONNECTION_STRING}
spring.cloud.stream.bindings.accounts-out-0.destination = accounts
spring.cloud.stream.function.autodetect = false

The account-function app is not exposed as an HTTP endpoint. We want to trigger the function in reaction to an event delivered to the Azure Event Hubs topic. The name of our Azure function is new-customer (1). It receives the Customer event from the customers topic thanks to the @EventHubTrigger annotation (2). This annotation also defines a property name that contains the connection string of the Azure Event Hubs namespace (EVENT_HUBS_CONNECTION_STRING). I’ll show you later how to set such a property for our function in Azure. Once the new-customer function is triggered, it invokes the Spring Cloud function addAccount (3).

@Autowired
private FunctionCatalog functionCatalog;

// (1)
@FunctionName("new-customer")
public void newAccountEventFunc(
        // (2)
        @EventHubTrigger(eventHubName = "customers",
                         name = "newAccountTrigger",
                         connection = "EVENT_HUBS_CONNECTION_STRING",
                         cardinality = Cardinality.ONE)
        Customer event,
        ExecutionContext context) {
   context.getLogger().info("Event: " + event);
   // (3)
   Function<Customer, Account> function = functionCatalog
      .lookup("addAccount");
   function.apply(event);
}

At the end of this section, let’s switch back once again to the customer-function app. It is triggered by the event sent by the new-customer function to the accounts topic. Therefore, we use the @EventHubTrigger annotation once again.

@FunctionName("activate-customer")
public void activateCustomerEventFunc(
          @EventHubTrigger(eventHubName = "accounts",
                 name = "changeStatusTrigger",
                 connection = "EVENT_HUBS_CONNECTION_STRING",
                 cardinality = Cardinality.ONE)
          Account event,
          ExecutionContext context) {
   context.getLogger().info("Event: " + event);
   Consumer<Account> consumer = functionCatalog.lookup("changeStatus");
   consumer.accept(event);
}

All our functions are ready. Now, we can proceed to the deployment phase.

Running Azure Functions Locally with Maven

Before we deploy our functions on Azure, we can run and test them locally. I assume you have already installed the Azure Functions Core Tools as described in the “Prerequisites” section. Our functions still need to connect to some cloud services, such as Azure Event Hubs and the storage account. The Azure Event Hubs connection string should be set as the EVENT_HUBS_CONNECTION_STRING app property or environment variable. In order to find the Event Hubs connection string, we should switch to the Azure Portal and find the spring-cloud-serverless namespace. Then, we need to go to the “Shared access policies” menu item and click the “RootManagedSharedAccessKey” policy. The connection string value is available in the “Connection string-primary key” field.

We also need to obtain the connection credentials to the Azure storage account. In your storage account, you should find the “Access keys” section and copy the value from the “Connection string” field.

After that, we can create the local.settings.json file in each app’s root directory. Alternatively, we can just set the environment variables AzureWebJobsStorage and EVENT_HUBS_CONNECTION_STRING.

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": <YOUR_ACCOUNT_STORAGE_CONNECTION_STRING>,
    "FUNCTIONS_WORKER_RUNTIME": "java",
    "EVENT_HUBS_CONNECTION_STRING": <YOUR_EVENT_HUBS_CONNECTION_STRING>
  }
}
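If you prefer the environment-variable route mentioned above, the same two values can be exported before starting the local runtime. The placeholder values below are just that — replace them with your own connection strings:

```shell
# Replace both placeholder values with your own connection strings
export AzureWebJobsStorage="<YOUR_ACCOUNT_STORAGE_CONNECTION_STRING>"
export EVENT_HUBS_CONNECTION_STRING="<YOUR_EVENT_HUBS_CONNECTION_STRING>"
```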

Once you place your credentials in the local.settings.json file, you can build the app with Maven.

$ mvn clean package

After that, you can use the Maven plugin included in the spring-cloud-function-adapter-azure module. In order to run the function, you need to execute the following command:

$ mvn azure-functions:run 

Here’s the command output for the customer-function app. As you see, it contains two Azure functions: add-customer (HTTP) and activate-customer (Event Hub Trigger). You can test the function by invoking the http://localhost:7071/api/add-customer URL.

Here’s the command output for the account-function app. As you see, it contains a single function new-customer activated through the Event Hub trigger.

Deploy Spring Cloud Serverless on Azure Functions

Let’s take a look at the pminkows-customer-function Azure Function App before we deploy our first app there. The http://pminkows-customer-function.azurewebsites.net base URL will precede all the HTTP endpoint URLs for our functions. We should remember the name of the automatically generated service plan (EastUSLinuxDynamicPlan).

Before deploying our Spring Boot app we should create the host.json file with the following content:

{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[4.*, 5.0.0)"
  }
}

Then, we should add the azure-functions-maven-plugin Maven plugin to our pom.xml. In the configuration section, we have to set the name of the Azure Function App instance (pminkows-customer-function), the target resource group (spring-cloud-serverless), the region (eastus), the service plan (EastUSLinuxDynamicPlan), and the location of the host.json file. We also need to set the connection string to the Azure Event Hubs inside the EVENT_HUBS_CONNECTION_STRING app property. So before running the build, you should export the value of that environment variable.

<plugin>
  <groupId>com.microsoft.azure</groupId>
  <artifactId>azure-functions-maven-plugin</artifactId>
  <version>1.30.0</version>
  <configuration>
    <appName>pminkows-customer-function</appName>
    <resourceGroup>spring-cloud-serverless</resourceGroup>
    <region>eastus</region>
    <appServicePlanName>EastUSLinuxDynamicPlan</appServicePlanName>
    <hostJson>${project.basedir}/src/main/resources/host.json</hostJson>
    <runtime>
      <os>linux</os>
      <javaVersion>17</javaVersion>
    </runtime>
    <appSettings>
      <property>
        <name>FUNCTIONS_EXTENSION_VERSION</name>
        <value>~4</value>
      </property>
      <property>
        <name>EVENT_HUBS_CONNECTION_STRING</name>
        <value>${EVENT_HUBS_CONNECTION_STRING}</value>
      </property>
    </appSettings>
  </configuration>
  <executions>
    <execution>
      <id>package-functions</id>
      <goals>
        <goal>package</goal>
      </goals>
    </execution>
  </executions>
</plugin>

Let’s begin with the customer-function app. Before deploying the app, we need to build it first with the mvn clean package command. Once you do that, you can deploy your first function to Azure with the following command:

$ mvn azure-functions:deploy

Here’s my output after running that command. As you see, the function add-customer is exposed under the http://pminkows-customer-function.azurewebsites.net/api/add-customer URL.

azure-serverless-spring-cloud-deploy-mvn

We can come back to the pminkows-customer-function Azure Function App in the portal. As expected, two functions are running there. The activate-customer function is triggered by Azure Event Hubs.

azure-serverless-spring-cloud-functions

Let’s switch to the account-function directory. We will deploy it to the pminkows-account-function Function App. Once again, we need to build the app with the mvn clean package command, and then deploy it using the mvn azure-functions:deploy command. Here’s the output. There are no HTTP triggers defined, just a single function triggered by Azure Event Hubs.

Here are the details of the pminkows-account-function Function App in the Azure Portal.

Invoke Azure Functions

Finally, we can test our functions by calling the following endpoint using, e.g., curl. Let’s repeat a similar command several times with different data:

$ curl https://pminkows-customer-function.azurewebsites.net/api/add-customer \
    -d "{\"name\":\"Test\",\"age\":33}" \
    -H "Content-Type: application/json"

After that, we should switch to the Azure Portal. Go to the pminkows-customer-function details and click the link “Invocation and more” on the add-customer function.

You will be redirected to the Azure Monitor statistics for that function. Azure Monitor displays a list of invocations with statuses.

azure-serverless-spring-cloud-functions-invocations

We can click one of the records from the invocation history to see the details.

As you probably remember, the add-customer function sends messages to Azure Event Hubs. On the other hand, we can also verify how the new-customer function in the pminkows-account-function consumes and handles those events.

azure-serverless-spring-cloud-functions-logs

Final Thoughts

This article gives you a comprehensive guide on how to build and run Spring Cloud serverless apps on Azure Functions. It explains the concept of the triggers in Azure Functions and shows the integration with Azure Event Hubs. Finally, it shows how to run such functions locally and then monitor them on Azure after deployment.

The post Serverless on Azure with Spring Cloud Function appeared first on Piotr's TechBlog.

OpenShift Multicluster with Advanced Cluster Management for Kubernetes and Submariner https://piotrminkowski.com/2024/01/15/openshift-multicluster-with-advanced-cluster-management-for-kubernetes-and-submariner/ https://piotrminkowski.com/2024/01/15/openshift-multicluster-with-advanced-cluster-management-for-kubernetes-and-submariner/#respond Mon, 15 Jan 2024 08:55:03 +0000 https://piotrminkowski.com/?p=14792 This article will teach you how to connect multiple Openshift clusters with Submariner and Advanced Cluster Management for Kubernetes. Submariner allows us to configure direct networking between pods and services in different Kubernetes clusters, either on-premises or in the cloud. It operates at the L3 layer. It establishes a secure tunnel between two clusters and […]

This article will teach you how to connect multiple OpenShift clusters with Submariner and Advanced Cluster Management for Kubernetes. Submariner allows us to configure direct networking between pods and services in different Kubernetes clusters, either on-premises or in the cloud. It operates at the L3 layer, establishing a secure tunnel between two clusters and providing service discovery. I have already described how to install and manage it on Kubernetes, mostly with the subctl CLI, in the following article.

Today we will focus on the integration between Submariner and OpenShift through Advanced Cluster Management for Kubernetes (ACM). ACM is a tool dedicated to OpenShift. It allows you to control clusters and applications from a single console, with built-in security policies. You can find several articles about it on my blog. For example, the following one describes how to use ACM together with Argo CD in the GitOps approach.

Source Code

This time we won’t work much with source code. However, if you would like to try it yourself, you can always take a look at my source code. In order to do that, you need to clone my GitHub repository. After that, you should follow my further instructions.

Architecture

Our architecture consists of three OpenShift clusters: a single hub cluster and two managed clusters. The hub cluster is responsible for creating new managed clusters and establishing a secure connection between them using Submariner. So, in the initial state, there is just a hub cluster with the Advanced Cluster Management for Kubernetes (ACM) operator installed on it. With ACM, we will create two new OpenShift clusters on the target infrastructure (AWS) and install Submariner on them. Finally, we are going to deploy two sample Spring Boot apps. The callme-service app exposes a single GET /callme/ping endpoint and runs on ocp2. We will expose it through Submariner to the ocp1 cluster. On the ocp1 cluster, there is a second app, caller-service, that invokes the endpoint exposed by the callme-service app. Here’s the diagram of our architecture.

openshift-submariner-arch

Install Advanced Cluster Management on OpenShift

In the first step, we must install the Advanced Cluster Management for Kubernetes (ACM) on OpenShift using an operator. The default installation namespace is open-cluster-management. We won’t change it.

Once we install the operator, we need to initialize ACM by creating the MultiClusterHub object. Once again, we will use the open-cluster-management namespace for that. Here’s the object declaration. We don’t need to specify any more advanced settings.

apiVersion: operator.open-cluster-management.io/v1
kind: MultiClusterHub
metadata:
  name: multiclusterhub
  namespace: open-cluster-management
spec: {}

We can do the same thing graphically in the OpenShift Dashboard. Just click the “Create MultiClusterHub” button and then accept the action on the next page. The installation will probably take some time to complete, since there are several pods to start.

openshift-submariner-acm

Once the installation is completed, you will see the new menu item at the top of the dashboard allowing you to switch to the “All Clusters” view. Let’s do it. After that, we can proceed to the next step.

Create OpenShift Clusters with ACM

Advanced Cluster Management for Kubernetes allows us to import existing clusters or create new ones on the target infrastructure. In this exercise, you will see how to leverage a cloud provider account for that. Let’s just click the “Connect your cloud provider” tile on the welcome screen.

Provide Cloud Credentials

I’m using my existing AWS account. ACM will ask us to provide the appropriate credentials for it. In the first form, we should provide the name and namespace of our secret with credentials and a default base DNS domain.

openshift-submariner-cluster-create

Then, the ACM wizard will redirect us to the next steps. We have to provide the AWS access key ID and secret, the OpenShift pull secret, and also the SSH private/public keys. Of course, we can create the required Kubernetes Secret without the wizard, just by applying a similar YAML manifest:

apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: aws
  namespace: open-cluster-management
  labels:
    cluster.open-cluster-management.io/type: aws
    cluster.open-cluster-management.io/credentials: ""
stringData:
  aws_access_key_id: AKIAXBLSZLXZJWT3KFPM
  aws_secret_access_key: "********************"
  baseDomain: sandbox2746.opentlc.com
  pullSecret: "********************"
  ssh-privatekey: "********************"
  ssh-publickey: "********************"
  httpProxy: ""
  httpsProxy: ""
  noProxy: ""
  additionalTrustBundle: ""

Provision the Cluster

After that, we can prepare the ACM cluster set. The cluster set feature allows us to group OpenShift clusters, and it is a required prerequisite for the Submariner installation. Here’s the ManagedClusterSet object. The name is arbitrary; we can set it, e.g., to submariner.

apiVersion: cluster.open-cluster-management.io/v1beta2
kind: ManagedClusterSet
metadata:
  name: submariner
spec: {}

Finally, we can create two OpenShift clusters on AWS from the ACM dashboard. Go to the Infrastructure -> Clusters -> Cluster list and click the “Create cluster” button. Then, let’s choose the “Amazon Web Services” tile with already created credentials.

In the “Cluster Details” form we should set the name (ocp1 and then ocp2 for the second cluster) and version of the OpenShift cluster (the “Release image” field). We should also assign it to the submariner cluster set.

Let’s take a look at the “Networking” form. Intentionally, we won’t change anything here. We will set the same IP address ranges for both the ocp1 and ocp2 clusters. By default, Submariner requires non-overlapping Pod and Service CIDRs between the interconnected clusters, which prevents routing conflicts. We are going to break that rule, resulting in conflicting internal IP addresses between the ocp1 and ocp2 clusters. We will see how Submariner helps to resolve such an issue.

It will take around 30-40 minutes to create both clusters. ACM will connect directly to our AWS account and create all the required resources there. As a result, our environment is ready. Let’s take a look at how it looks from the ACM dashboard perspective:

openshift-submariner-clusters

There is a single management (hub) cluster and two managed clusters. Both managed clusters are assigned to the submariner cluster set. If you have the same result as me, you can proceed to the next step.

Enable Submariner for OpenShift clusters with ACM

Install in the Target Managed Cluster Set

Submariner is available on OpenShift in the form of an add-on to ACM. As I mentioned before, it requires ACM ManagedClusterSet objects for grouping the clusters that should be connected. In order to enable Submariner for a specific cluster set, we need to open its details and switch to the “Submariner add-ons” tab. Then, we need to click the “Install Submariner add-ons” button. In the installation form, we have to choose the target clusters and enable the “Globalnet” feature to resolve the issue related to overlapping Pod and Service CIDRs. The default value of the “Globalnet” CIDR is 242.0.0.0/8. If that is fine for us, we can leave the text field empty and proceed to the next step.

openshift-submariner-install

In the next form, we configure the Submariner installation in each OpenShift cluster. We don’t have to change any value there. ACM will create an additional node in each OpenShift cluster using the c5d.large VM type and use that node for installing Multus CNI. Multus is a CNI plugin for Kubernetes that enables attaching multiple network interfaces to pods. It is responsible for enabling the Submariner “Globalnet” feature, which assigns each cluster a subnet from the virtual global private network, configured as the new cluster parameter GlobalCIDR. We will run a single instance of the Submariner gateway and leave the default libreswan cable driver.
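To make the “Globalnet” behavior concrete, here is a minimal shell sketch of the allocation scheme — not Submariner’s actual allocator, just an illustration: each cluster joining the broker receives its own non-overlapping subnet (a /16 by default) carved out of the 242.0.0.0/8 pool, and that subnet becomes the cluster’s GlobalCIDR. The cluster order below only mirrors what we observe in this exercise:

```shell
# Illustrative sketch of Globalnet subnet allocation (assumed /16 per cluster).
GLOBALNET_POOL="242.0.0.0/8"
index=0
for cluster in ocp2 ocp1; do
  cidr="242.${index}.0.0/16"
  echo "${cluster}: globalCIDR=${cidr} (allocated from ${GLOBALNET_POOL})"
  index=$((index + 1))
done
```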

Of course, we can also provide that configuration as YAML manifests. With that approach, we need to create the ManagedClusterAddOn and SubmarinerConfig objects on both ocp1 and ocp2 clusters through the ACM engine. The Submariner Broker object has to be created on the hub cluster.

apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
  name: submariner
  namespace: ocp2
spec:
  installNamespace: submariner-operator
---
apiVersion: submarineraddon.open-cluster-management.io/v1alpha1
kind: SubmarinerConfig
metadata:
  name: submariner
  namespace: ocp2
spec:
  gatewayConfig:
    gateways: 1
    aws:
      instanceType: c5d.large
  IPSecNATTPort: 4500
  airGappedDeployment: false
  NATTEnable: true
  cableDriver: libreswan
  globalCIDR: ""
  credentialsSecret:
    name: ocp2-aws-creds
---
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
  name: submariner
  namespace: ocp1
spec:
  installNamespace: submariner-operator
---
apiVersion: submarineraddon.open-cluster-management.io/v1alpha1
kind: SubmarinerConfig
metadata:
  name: submariner
  namespace: ocp1
spec:
  gatewayConfig:
    gateways: 1
    aws:
      instanceType: c5d.large
  IPSecNATTPort: 4500
  airGappedDeployment: false
  NATTEnable: true
  cableDriver: libreswan
  globalCIDR: ""
  credentialsSecret:
    name: ocp1-aws-creds
---
apiVersion: submariner.io/v1alpha1
kind: Broker
metadata:
  name: submariner-broker
  namespace: submariner-broker
  labels:
    cluster.open-cluster-management.io/backup: submariner
spec:
  globalnetEnabled: true
  globalnetCIDRRange: 242.0.0.0/8

Verify the Status of Submariner Network

After installing the Submariner Add-on in the target cluster set, you should see the same statuses for both ocp1 and ocp2 clusters.

openshift-submariner-status

Assuming that you are logged in to all the clusters with the oc CLI, we can check the detailed status of the Submariner network with the subctl CLI. In order to do that, we should execute the following command:

$ subctl show all

It examines all the clusters one after the other and prints all key Submariner components installed there. Let’s begin with the command output for the hub cluster. As you see, it runs the Submariner Broker component in the submariner-broker namespace:

Here’s the output for the ocp1 managed cluster. The global CIDR for that cluster is 242.1.0.0/16. This IP range will be used for exposing services to other clusters inside the same Submariner network.

On the other hand, here’s the output for the ocp2 managed cluster. The global CIDR for that cluster is 242.0.0.0/16. The connection between ocp1 and ocp2 clusters is established. Therefore we can proceed to the last step in our exercise. Let’s run the sample apps on our OpenShift clusters!

Export App to the Remote Cluster

Since we already installed Submariner on both OpenShift clusters we can deploy our sample applications. Let’s begin with caller-service. We will run it in the demo-apps namespace. Make sure you are in the ocp1 Kube context. Here’s the YAML manifest with the Deployment and Service definitions for our app:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: caller-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: caller-service
  template:
    metadata:
      name: caller-service
      labels:
        app: caller-service
    spec:
      containers:
      - name: caller-service
        image: piomin/caller-service
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
        env:
          - name: VERSION
            value: "v1"
---
apiVersion: v1
kind: Service
metadata:
  name: caller-service
  labels:
    app: caller-service
spec:
  type: ClusterIP
  ports:
    - port: 8080
      name: http
  selector:
    app: caller-service

Then go to the caller-service directory and deploy the application using Skaffold as shown below. We can also expose the service outside the cluster using the OpenShift Route object:

$ cd caller-service
$ oc project demo-apps
$ skaffold run
$ oc expose svc/caller-service

Let’s switch to the callme-service app. Make sure you are in the ocp2 Kube context. Here’s the YAML manifest with the Deployment and Service definitions for our second app:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: callme-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: callme-service
  template:
    metadata:
      labels:
        app: callme-service
    spec:
      containers:
        - name: callme-service
          image: piomin/callme-service
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          env:
            - name: VERSION
              value: "v1"
---
apiVersion: v1
kind: Service
metadata:
  name: callme-service
  labels:
    app: callme-service
spec:
  type: ClusterIP
  ports:
  - port: 8080
    name: http
  selector:
    app: callme-service

Once again, we can deploy the app on OpenShift using Skaffold.

$ cd callme-service
$ oc project demo-apps
$ skaffold run

This time, instead of exposing the service outside of the cluster, we will export it to the Submariner network. Thanks to that, the caller-service app will be able to call it directly through the IPsec tunnel established between the clusters. We can do it using the following subctl CLI command:

$ subctl export service callme-service

That command creates a ServiceExport object, a CRD provided by the Submariner operator. We can apply the following YAML definition as well:

apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: callme-service
  namespace: demo-apps

We can verify if everything turned out okay by checking out the ServiceExport object status:

Submariner creates an additional Kubernetes Service with an IP address from the “Globalnet” CIDR pool to avoid overlapping service IPs.

Then, let’s switch to the ocp1 cluster. After exporting the Service from the ocp2 cluster, Submariner automatically creates the ServiceImport object on the connected clusters.

apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceImport
metadata:
  name: callme-service
  namespace: demo-apps
spec:
  ports:
    - name: http
      port: 8080
      protocol: TCP
  type: ClusterSetIP
status:
  clusters:
    - cluster: ocp2

Submariner exposes services on the clusterset.local domain. So, our service is now available under the URL callme-service.demo-apps.svc.clusterset.local. We can verify it by executing the following curl command inside the caller-service container. As you can see, it uses the external IP address allocated by Submariner within the “Globalnet” subnet.
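The exported hostname follows the &lt;service&gt;.&lt;namespace&gt;.svc.clusterset.local convention from the Kubernetes Multi-Cluster Services (MCS) API. Here is a small sketch that assembles the cross-cluster URL used in this exercise; the commented oc exec call at the end is illustrative and assumes a running caller-service pod in the current context:

```shell
# Compose the cross-cluster DNS name for the exported service:
# <service>.<namespace>.svc.clusterset.local
SERVICE="callme-service"
NAMESPACE="demo-apps"
URL="http://${SERVICE}.${NAMESPACE}.svc.clusterset.local:8080/callme/ping"
echo "$URL"

# Illustrative only -- call the exported service from inside a caller-service pod:
# oc exec deploy/caller-service -- curl -s "$URL"
```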

Here’s the implementation of the @RestController responsible for handling requests coming to the caller-service app. As you can see, it uses the Spring RestTemplate client to call the remote service using the callme-service.demo-apps.svc.clusterset.local URL provided by Submariner.

@RestController
@RequestMapping("/caller")
public class CallerController {

   private static final Logger LOGGER = LoggerFactory
      .getLogger(CallerController.class);

   @Autowired
   Optional<BuildProperties> buildProperties;
   @Autowired
   RestTemplate restTemplate;
   @Value("${VERSION}")
   private String version;

   @GetMapping("/ping")
   public String ping() {
      LOGGER.info("Ping: name={}, version={}", buildProperties.or(Optional::empty), version);
      String response = restTemplate
         .getForObject("http://callme-service.demo-apps.svc.clusterset.local:8080/callme/ping", String.class);
      LOGGER.info("Calling: response={}", response);
      return "I'm caller-service " + version + ". Calling... " + response;
   }
}

Let’s just make a final test using the OpenShift caller-service Route and the GET /caller/ping endpoint. As you can see, it calls the callme-service app successfully through the Submariner tunnel.

openshift-submariner-test

Final Thoughts

In this article, we analyzed a scenario where we interconnect two OpenShift clusters with overlapping CIDRs. I also showed you how to leverage the ACM dashboard to simplify the installation and configuration of Submariner on the managed clusters. It is worth mentioning that there are also other ways to interconnect multiple OpenShift clusters. For example, we can use Red Hat Service Interconnect, based on the open-source project Skupper. In order to read more about it, you can refer to the following article on my blog.

The post OpenShift Multicluster with Advanced Cluster Management for Kubernetes and Submariner appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2024/01/15/openshift-multicluster-with-advanced-cluster-management-for-kubernetes-and-submariner/feed/ 0 14792
Azure DevOps and Terraform for Spring Boot https://piotrminkowski.com/2024/01/03/azure-devops-and-terraform-for-spring-boot/ https://piotrminkowski.com/2024/01/03/azure-devops-and-terraform-for-spring-boot/#respond Wed, 03 Jan 2024 13:50:27 +0000 https://piotrminkowski.com/?p=14759 This article will teach you how to automate your Spring Boot app deployment with Azure DevOps and Terraform. In the previous article in this series, we created a simple Spring Boot RESTful app. Then we integrated it with the popular Azure services like Cosmos DB or App Configuration using the Spring Cloud Azure project. We […]

The post Azure DevOps and Terraform for Spring Boot appeared first on Piotr's TechBlog.

]]>
This article will teach you how to automate your Spring Boot app deployment with Azure DevOps and Terraform. In the previous article in this series, we created a simple Spring Boot RESTful app. Then we integrated it with the popular Azure services like Cosmos DB or App Configuration using the Spring Cloud Azure project. We also leveraged the Azure Spring Apps service to deploy, run, and manage our app on the Azure cloud. All the required steps have been performed with the az CLI and Azure Portal.

Today, we are going to design the CI/CD process for building and deploying the app created in the previous article on Azure. To automatically configure the required services, like Azure Spring Apps and Cosmos DB, we will use Terraform. We will use Azure DevOps and Azure Pipelines to build and deploy the app.

Preparation

For the purpose of that exercise, we need to provision an account on Azure and another one on the Azure DevOps platform. Once you install the az CLI and log in to Azure you can execute the following command for verification:

$ az account show

In the next step, you should set up the account on the Azure DevOps platform. In order to do it, go to the following site and click the “Start free” button or “Sign in” if you already have an account there. After that, you should see the Azure DevOps main page. Before we start, we need to create the organization and project. I’m using the “pminkows” name in both cases.

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that, you need to clone my GitHub repository. The Spring Boot app used in the article is located in the microservices/account-service directory. You will also find the Terraform manifest inside the microservices/terraform directory and the azure-pipelines.yml file in the repository root directory. After you go to that directory, you should just follow my further instructions.

Create Azure Resources with Terraform

Terraform is a great tool for defining resources according to the “Infrastructure as Code” approach. The official Terraform provider allows configuring infrastructure on Azure through the Azure Resource Manager APIs. In order to use it, we need to include the azurerm provider in the Terraform manifest. We will put all the required objects into the spring-apps resource group (defined as the spring-group resource in Terraform).

terraform {
  required_version = ">= 1.0"
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">=3.3.0"
    }
  }
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "spring-group" {
  location = "eastus"
  name     = "spring-apps"
}

Configure the Azure Cosmos DB Service

In the first step, we are going to configure the Cosmos DB instance required by our sample Spring Boot app. It requires a new database account (1). The name of my account is sample-pminkows-cosmosdb. It is placed inside our spring-apps resource group. We also need to define a default consistency level (consistency_policy) and a replication policy (geo_location). Once we create a database account, we can create a database instance (2). The name of our database is sampledb. Of course, it has to be placed in the previously created sample-pminkows-cosmosdb Cosmos DB account. Finally, we need to create a container inside our database (3). The name of the container should be the same as the value of the containerName field declared in the model class. We also have to set the partition key path. It corresponds to the name of the field inside the model class annotated with @PartitionKey.

# (1)
resource "azurerm_cosmosdb_account" "sample-db-account" {
  name                = "sample-pminkows-cosmosdb"
  location            = azurerm_resource_group.spring-group.location
  resource_group_name = azurerm_resource_group.spring-group.name
  offer_type          = "Standard"

  consistency_policy {
    consistency_level = "Session"
  }

  geo_location {
    failover_priority = 0
    location          = "eastus"
  }
}

# (2)
resource "azurerm_cosmosdb_sql_database" "sample-db" {
  name                = "sampledb"
  resource_group_name = azurerm_cosmosdb_account.sample-db-account.resource_group_name
  account_name        = azurerm_cosmosdb_account.sample-db-account.name
}

# (3)
resource "azurerm_cosmosdb_sql_container" "sample-db-container" {
  name                  = "accounts"
  resource_group_name   = azurerm_cosmosdb_account.sample-db-account.resource_group_name
  account_name          = azurerm_cosmosdb_account.sample-db-account.name
  database_name         = azurerm_cosmosdb_sql_database.sample-db.name
  partition_key_paths   = ["/customerId"]
  partition_key_version = 1
  throughput            = 400
}

Just for the record, here’s a model Java class inside our sample Spring Boot app:

@Container(containerName = "accounts")
public class Account {
   @Id
   @GeneratedValue
   private String id;
   private String number;
   @PartitionKey
   private String customerId;

   // GETTERS AND SETTERS ...
}

Install the Azure App Configuration Service

In the next step, we need to enable the Azure App Configuration service and put some properties into the store (1). The name of the instance is sample-spring-cloud-config. Our sample Spring Boot app uses the Spring Cloud Azure project to interact with the App Configuration service. In order to take advantage of that integration, we need to give proper names to all the configuration keys. They should be prefixed with /application/ and contain the name of a property automatically recognized by Spring Cloud. In our case, these are the properties used for establishing a connection with the Cosmos DB instance. We need to define three Spring Cloud properties: spring.cloud.azure.cosmos.key (2), spring.cloud.azure.cosmos.database (3), and spring.cloud.azure.cosmos.endpoint (4). We can retrieve the value of the Cosmos DB instance’s primary key or endpoint URL from the previously created azurerm_cosmosdb_account resource.

# (1)
resource "azurerm_app_configuration" "sample-config" {
  name                = "sample-spring-cloud-config"
  resource_group_name = azurerm_resource_group.spring-group.name
  location            = azurerm_resource_group.spring-group.location
}

# (2)
resource "azurerm_app_configuration_key" "cosmosdb-key" {
  configuration_store_id = azurerm_app_configuration.sample-config.id
  key                    = "/application/spring.cloud.azure.cosmos.key"
  value                  = azurerm_cosmosdb_account.sample-db-account.primary_key
}

# (3)
resource "azurerm_app_configuration_key" "cosmosdb-database" {
  configuration_store_id = azurerm_app_configuration.sample-config.id
  key                    = "/application/spring.cloud.azure.cosmos.database"
  value                  = "sampledb"
}

# (4)
resource "azurerm_app_configuration_key" "cosmosdb-endpoint" {
  configuration_store_id = azurerm_app_configuration.sample-config.id
  key                    = "/application/spring.cloud.azure.cosmos.endpoint"
  value                  = azurerm_cosmosdb_account.sample-db-account.endpoint
}
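The key-naming convention used above can be sketched as follows — every property that Spring Cloud Azure should pick up lands under the /application prefix (a minimal illustration; label and key-filter settings are left at their defaults):

```shell
# Build the App Configuration key names consumed by Spring Cloud Azure.
PREFIX="/application"
KEYS=$(for prop in spring.cloud.azure.cosmos.key \
                   spring.cloud.azure.cosmos.database \
                   spring.cloud.azure.cosmos.endpoint; do
  echo "${PREFIX}/${prop}"
done)
echo "$KEYS"
```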

Note: the App Configuration service requires some additional permissions. First of all, you may need to register the provider with the ‘az provider register --namespace Microsoft.AppConfiguration’ command. Also, the Terraform script in the sample Git repository assigns the additional ‘App Configuration Data Owner’ role to the client.

Create the Azure Spring Apps Instance

In the last step, we need to configure the Azure Spring Apps service instance used for running Spring Boot apps. The name of our instance is sample-spring-cloud-apps (1). We will enable tracing for the Azure Spring Apps instance with the Application Insights service (2). After that, we will create a single app inside sample-spring-cloud-apps with the account-service name (3). This app requires a basic configuration containing the amount of requested resources, the Java runtime version, and some environment variables, including the address of the Azure App Configuration service instance. All those things should be set inside the deployment object represented by the azurerm_spring_cloud_java_deployment resource (4).

resource "azurerm_application_insights" "spring-insights" {
  name                = "spring-insights"
  location            = azurerm_resource_group.spring-group.location
  resource_group_name = azurerm_resource_group.spring-group.name
  application_type    = "web"
}

# (1)
resource "azurerm_spring_cloud_service" "spring-cloud-apps" {
  name                = "sample-spring-cloud-apps"
  location            = azurerm_resource_group.spring-group.location
  resource_group_name = azurerm_resource_group.spring-group.name
  sku_name            = "S0"

  # (2)
  trace {
    connection_string = azurerm_application_insights.spring-insights.connection_string
    sample_rate       = 10.0
  }

  tags = {
    Env = "Staging"
  }
}

# (3)
resource "azurerm_spring_cloud_app" "account-service" {
  name                = "account-service"
  resource_group_name = azurerm_resource_group.spring-group.name
  service_name        = azurerm_spring_cloud_service.spring-cloud-apps.name

  identity {
    type = "SystemAssigned"
  }
}

# (4)
resource "azurerm_spring_cloud_java_deployment" "slot-staging" {
  name                = "dep1"
  spring_cloud_app_id = azurerm_spring_cloud_app.account-service.id
  instance_count      = 1
  jvm_options         = "-XX:+PrintGC"
  runtime_version     = "Java_17"

  quota {
    cpu    = "500m"
    memory = "1Gi"
  }

  environment_variables = {
    "Env" : "Staging",
    "APP_CONFIGURATION_CONNECTION_STRING": azurerm_app_configuration.sample-config.primary_read_key[0].connection_string
  }
}

resource "azurerm_spring_cloud_active_deployment" "dep-staging" {
  spring_cloud_app_id = azurerm_spring_cloud_app.account-service.id
  deployment_name     = azurerm_spring_cloud_java_deployment.slot-staging.name
}

Apply the Terraform Manifest to Azure

Our Terraform configuration is ready. Finally, we can apply it to the target Azure account. Go to the microservices/terraform directory and then run the following commands:

$ terraform init
$ terraform apply -auto-approve

It can take several minutes until the command finishes. In the end, you should have a similar result. Terraform created 15 resources on Azure successfully.

We can switch to the Azure Portal for a moment. Let’s take a look at a list of resources inside our spring-apps resource group. As you see, all the required resources including Cosmos DB, App Configuration, and Azure Spring Apps are ready.

Build And Deploy the App with Azure Pipelines

After preparing the required infrastructure on Azure, we may proceed to the creation of a CI/CD pipeline for the app. Assuming you have already logged in to the Azure DevOps portal, you should find the “Pipelines” item on the left-side menu. Once you expand it, you should see several options.

Create Environment

Let’s start with the “Environments”. We will prepare just a single staging environment as shown below. We don’t need to choose any resources now (the “None” option).

azure-devops-environment

Thanks to environments, we can add approval checks to our pipelines. In order to do that, you should go to your environment details and switch to the “Approvals and checks” tab. There are several available options. Let’s choose a simple approval, which requires someone to manually approve running a particular stage of the pipeline.

azure-devops-approval-check

After clicking the “Next” button, you will be redirected to the next page containing a list of approvers. We can set a single person responsible for it or a whole group. For me, it doesn’t matter since I have only one user in the project. After defining a list of approvers click the “Create” button.

Define the Azure Pipeline

Now, let’s switch to the “Pipelines” view. We can create a pipeline manually with the GUI editor or just provide the azure-pipelines.yml file in the repository root directory. Note that the GUI editor also creates and commits the YAML manifest with the pipeline definition to the Git repository.

Let’s analyze our pipeline step by step. It is triggered by commits to the master branch of the Git repository (1). We choose a standard agent pool (2). Our pipeline consists of two stages: Build_Test and Deploy_Stage (3). In the Build_Test stage, we build the app with Maven (4) and publish the JAR file to an Azure Artifacts feed (5). Thanks to that, we will be able to use that artifact in the next stage.

The next stage, Deploy_Stage (6), waits until the previous stage finishes successfully (7). However, it won’t continue until we review and approve the pipeline. In order to enforce that, the job must refer to the previously defined staging environment (8), which contains the approval check. Once we approve the pipeline, it proceeds to the step responsible for downloading the artifact from the Azure Artifacts feed (9). After that, it starts the deployment process (10). We need to use the AzureSpringCloud task, which is responsible for deploying to the Azure Spring Apps service.

The deployment task requires several inputs. We need to set the Azure subscription ID (11), the ID of the Azure Spring Apps instance (12), the name of the app inside the Azure Spring Apps (13), and the name of a target deployment slot (14). Finally, we are setting the path to the JAR file downloaded in the previous step of the whole job (15). The pipeline reads the values of the Azure subscription ID and Azure Spring Apps instance ID from the input variables: subscription and serviceName.

# (1)
trigger:
- master

# (2)
pool:
  vmImage: ubuntu-latest

# (3)
stages:
- stage: Build_Test
  jobs:
  - job: Maven_Package
    steps:
    - task: MavenAuthenticate@0
      inputs:
        artifactsFeeds: 'pminkows'
        mavenServiceConnections: 'pminkows'
      displayName: 'Maven Authenticate'
    # (4)
    - task: Maven@3
      inputs:
        mavenPomFile: 'microservices/account-service/pom.xml'
        mavenOptions: '-Xmx3072m'
        javaHomeOption: 'JDKVersion'
        jdkVersionOption: '1.17'
        jdkArchitectureOption: 'x64'
        publishJUnitResults: true
        testResultsFiles: '**/surefire-reports/TEST-*.xml'
        goals: 'deploy'
        mavenAuthenticateFeed: true # (5)
      displayName: 'Build'

# (6)
- stage: Deploy_Stage
  dependsOn: Build_Test
  condition: succeeded() # (7)
  jobs:
    - deployment: Deployment_Staging
      environment:
        name: staging # (8) 
      strategy:
        runOnce:
          deploy:
            steps:
            # (9)
            - task: DownloadPackage@1
              inputs:
                packageType: 'maven'
                feed: 'pminkows'
                view: 'Local'
                definition: 'pl.piomin:account-service'
                version: '1.0'
                downloadPath: '$(System.ArtifactsDirectory)'
            - script: 'ls -la $(System.ArtifactsDirectory)' 
            # (10)
            - task: AzureSpringCloud@0
              inputs:
                azureSubscription: $(subscription) # (11)
                Action: 'Deploy'
                AzureSpringCloud: $(serviceName) # (12)
                AppName: 'account-service' # (13)
                DeploymentName: dep1 # (14)
                Package: '$(System.ArtifactsDirectory)/account-service-1.0.jar' # (15)

Run the Azure Pipeline

Let’s import our pipeline into the Azure DevOps platform. Azure DevOps provides a simple wizard for that. We need to choose the Git repository containing the pipeline definition.

After selecting the repository we will see the review page. We can change the definition of our pipeline taken from the azure-pipelines.yml file. If there is no need for any changes, we may add some variables or run (and save) the pipeline.

azure-devops-pipeline-yaml

However, before running the pipeline we should define the required variables. The serviceName variable needs to contain the fully qualified ID of the Azure Spring Apps resource, e.g. /subscriptions/d4cde383-3611-4557-b2b1-b64b50378c9d/resourceGroups/spring-apps/providers/Microsoft.AppPlatform/Spring/sample-spring-cloud-apps.
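The ID follows a fixed pattern, so you can assemble it yourself from the subscription ID, resource group, and Azure Spring Apps instance name (the values below are placeholders), or query it with the az CLI — note that, depending on the CLI version, the relevant command is az spring show or the older az spring-cloud show:

```shell
# Assemble the fully qualified Azure Spring Apps resource ID for serviceName.
# Placeholder values -- substitute your own subscription ID.
SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP="spring-apps"
SPRING_APPS_NAME="sample-spring-cloud-apps"

SERVICE_ID="/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${RESOURCE_GROUP}/providers/Microsoft.AppPlatform/Spring/${SPRING_APPS_NAME}"
echo "$SERVICE_ID"

# Illustrative alternative (requires az CLI and an active login):
# az spring show -n "$SPRING_APPS_NAME" -g "$RESOURCE_GROUP" --query id -o tsv
```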

azure-devops-pipeline-variables

We also need to create the Azure Artifact Feed. The pipeline uses it to cache and store artifacts during the Maven build. We should go to the “Artifacts” section. Then click the “Create Feed” button. The name of my feed is pminkows.

Once we run the pipeline, it will publish the app artifact to the target feed. The artifact’s name is determined by its Maven group ID and artifact ID. The current version number is 1.0.
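To make that naming explicit, here is a small sketch tying the Maven coordinates to the values used by the pipeline — the definition input of the DownloadPackage task and the JAR file name passed to the deployment task:

```shell
# Maven coordinates of the published artifact.
GROUP_ID="pl.piomin"
ARTIFACT_ID="account-service"
VERSION="1.0"

# Matches 'definition' in the DownloadPackage task and 'Package' in AzureSpringCloud.
DEFINITION="${GROUP_ID}:${ARTIFACT_ID}"
JAR_NAME="${ARTIFACT_ID}-${VERSION}.jar"
echo "definition: ${DEFINITION}"
echo "package:    ${JAR_NAME}"
```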

Let’s run the pipeline. It is starting from the build phase.

azure-devops-pipeline-run

After finishing the build phase successfully, it proceeds to the deployment phase. However, the pipeline requires us to perform a review and approve the movement to the next step.

We need to click the Deploy_Stage tile. After that, you should see a similar approval screen as shown below. You can approve or reject the changes.

After approval, the pipeline starts the deployment phase. After around one minute it should deploy our app into the target Azure Spring Apps instance. Here’s the successfully finished run of the pipeline.

We can switch to the Azure Portal once again. Go to the sample-spring-cloud-apps Azure Spring Apps instance, then choose “Apps” and “account-service”. Finally, go to the “Deployments” section and choose the dep1. It is the deployment slot used by our pipeline. As you see, our app is running in the staging environment.

azure-devops-spring-apps

Note: before running the pipeline you should set ‘dep1’ as the default staging deployment (option ‘Set as staging’).

Final Thoughts

This article shows the holistic approach to app deployment on Azure. We can use Terraform to define all the resources and services required by the app. After that, we can define the CI/CD pipeline with Azure DevOps. As a result, we have a fully automated way of managing all the aspects related to our Spring Boot app running in the Azure cloud.

The post Azure DevOps and Terraform for Spring Boot appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2024/01/03/azure-devops-and-terraform-for-spring-boot/feed/ 0 14759
Getting Started with Spring Cloud Azure https://piotrminkowski.com/2023/12/07/getting-started-with-spring-cloud-azure/ https://piotrminkowski.com/2023/12/07/getting-started-with-spring-cloud-azure/#respond Thu, 07 Dec 2023 14:19:12 +0000 https://piotrminkowski.com/?p=14725 This article will teach you how to use Spring Cloud to simplify integration between Spring Boot apps and Azure services. We will also see how to leverage the Azure Spring Apps service to deploy, run, and manage our app on Azure. Our sample Spring Boot app stores data in the Azure Cosmos DB service and […]

This article will teach you how to use Spring Cloud to simplify integration between Spring Boot apps and Azure services. We will also see how to leverage the Azure Spring Apps service to deploy, run, and manage our app on Azure. Our sample Spring Boot app stores data in the Azure Cosmos DB service and exposes some REST endpoints under a public URL. We can run it locally and connect to the remote services, or deploy it in the cloud and connect to those services internally within the same virtual network.

If you need an introduction to Spring Cloud, read my article about microservices with Spring Boot 3 and Spring Cloud, available here. It is also worth at least taking a look at the Spring Cloud Azure docs for a basic understanding of the main concepts.

Architecture

Our architecture is pretty simple. As I mentioned before, we have a single Spring Boot app (account-service in the diagram) that runs on Azure and connects to Cosmos DB. It exposes some REST endpoints for adding, deleting, or searching accounts backed by Cosmos DB. It also stores the whole required configuration (like Cosmos DB address and credentials) in the Azure App Configuration service. The app is managed by the Azure Spring Apps service. Here’s the diagram illustrating our architecture.

spring-cloud-azure-arch

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. The Spring Boot app used in the article is located in the microservices/account-service directory. After you go to that directory you should just follow my further instructions.

There are some prerequisites before you start the exercise. You need to install JDK 17+ and Maven on your local machine. You also need an Azure account and the az CLI to interact with it. In order to deploy the app on Azure, we will use the azure-spring-apps-maven-plugin, which requires the az CLI.

Dependencies

Firstly, let’s take a look at the list of required Maven dependencies. Of course, we need to add the Spring Boot Web starter to enable REST support through the Spring MVC module. In order to integrate with Cosmos DB, we will use the Spring Data repositories. Spring Cloud Azure provides a dedicated starter spring-cloud-azure-starter-data-cosmos for it. The spring-cloud-azure-starter-actuator module is optional. It will enable a health indicator for Cosmos DB in the /actuator/health endpoint. After that, we will include the starter providing integration with the Azure App Configuration service. Finally, we can add the Springdoc OpenAPI project responsible for generating REST API documentation.

<dependencies>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
  </dependency>
  <dependency>
    <groupId>com.azure.spring</groupId>
    <artifactId>spring-cloud-azure-starter-actuator</artifactId>
  </dependency>
  <dependency>
    <groupId>com.azure.spring</groupId>
    <artifactId>spring-cloud-azure-starter-data-cosmos</artifactId>
  </dependency>
  <dependency>
    <groupId>com.azure.spring</groupId>
    <artifactId>spring-cloud-azure-starter-appconfiguration-config</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springdoc</groupId>
    <artifactId>springdoc-openapi-starter-webmvc-ui</artifactId>
    <version>2.2.0</version>
  </dependency>
</dependencies>

Spring Cloud with Azure Cosmos DB

After including the Spring Data module with Cosmos DB support we may define the model class. The Account class contains three String fields: id (primary key), number, and customerId (partition key). The partition key is responsible for dividing data into distinct subsets called logical partitions. The model must be annotated with @Container. The containerName parameter inside the annotation corresponds to the name of the Cosmos DB container created in Azure.

@Container(containerName = "accounts")
public class Account {
   @Id
   @GeneratedValue
   private String id;
   private String number;
   @PartitionKey
   private String customerId;

   // GETTERS AND SETTERS ...
}

Now, let’s prepare our environment in Azure. After logging in with the az login CLI command we create the resource group for our exercise. The name of the group is sample-spring-cloud. The location depends on your preferences. For me it is eastus.

$ az group create -l eastus -n sample-spring-cloud

Then, we are going to create a new Azure Cosmos DB database account. The name of my account is sample-pminkows-cosmosdb. It is placed inside our sample-spring-cloud resource group. I’ll leave the default values in all other parameters. But you can consider overriding some parameters to decrease the instance cost. For example, we can set the Local backup redundancy type using the --backup-redundancy parameter.

$ az cosmosdb create -n sample-pminkows-cosmosdb -g sample-spring-cloud
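For example, here is how the same command could look with the backup redundancy overridden (a sketch: verify the supported values of the --backup-redundancy parameter, such as Local, against your az CLI version):

```shell
$ az cosmosdb create -n sample-pminkows-cosmosdb -g sample-spring-cloud \
    --backup-redundancy Local
```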

Once we enable a database account, we can create a database instance. The name of our database is sampledb. Of course, it has to be placed in the previously created sample-pminkows-cosmosdb Cosmos DB account.

$ az cosmosdb sql database create \
    -a sample-pminkows-cosmosdb \
    -n sampledb \
    -g sample-spring-cloud

Finally, we need to create a container inside our database. The name of the container should be the same as the value of the containerName field declared in the model class. We also have to set the partition key path. As you probably remember, we are using the customerId field in the Account class for that.

$ az cosmosdb sql container create \
    -a sample-pminkows-cosmosdb \
    -g sample-spring-cloud \
    -n accounts \
    -d sampledb \
    -p /customerId

Everything is ready on the Azure side. Let’s get back to the source code for a moment. In order to interact with the database, we will create a Spring Data repository interface. It has to extend the CosmosRepository interface provided by Spring Cloud Azure. Our interface defines one additional method for searching by the customerId field.

public interface AccountRepository extends CosmosRepository<Account, String> {
   List<Account> findByCustomerId(String customerId);
}
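Conceptually, the derived findByCustomerId query simply filters stored documents by the customerId (partition key) field. Here is a plain-Java sketch of that behavior (not part of the article’s code; Spring Data generates the equivalent Cosmos DB query from the method name at runtime):

```java
import java.util.List;

// Conceptual illustration of what the derived findByCustomerId query does:
// filter the stored accounts by the customerId (partition key) field.
public class DerivedQuerySketch {

    record Account(String id, String number, String customerId) {}

    static List<Account> findByCustomerId(List<Account> all, String customerId) {
        return all.stream()
                .filter(a -> a.customerId().equals(customerId))
                .toList();
    }

    public static void main(String[] args) {
        List<Account> accounts = List.of(
                new Account("1", "1234567890", "1"),
                new Account("2", "1234567891", "2"),
                new Account("3", "1234567892", "1"));
        System.out.println(findByCustomerId(accounts, "1").size()); // prints 2
    }
}
```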

Finally, we can create @RestController with the endpoints implementation. It injects and uses the AccountRepository bean.

@RestController
@RequestMapping("/accounts")
public class AccountController {

   private final static Logger LOG = LoggerFactory
      .getLogger(AccountController.class);
   private final AccountRepository repository;

   public AccountController(AccountRepository repository) {
      this.repository = repository;
   }

   @PostMapping
   public Account add(@RequestBody Account account) {
      LOG.info("add: {}", account.getNumber());
      return repository.save(account);
   }

   @GetMapping("/{id}")
   public Account findById(@PathVariable String id) {
      LOG.info("findById: {}", id);
      return repository.findById(id).orElseThrow();
   }

   @GetMapping
   public List<Account> findAll() {
      List<Account> accounts = new ArrayList<>();
      repository.findAll().forEach(accounts::add);
      return accounts;
   }

   @GetMapping("/customer/{customerId}")
   public List<Account> findByCustomerId(@PathVariable String customerId) {
      LOG.info("findByCustomerId: {}", customerId);
      return repository.findByCustomerId(customerId);
   }
}

Azure App Configuration with Spring Cloud

Once we finish the app implementation, we can run it and connect with Cosmos DB. Of course, we need to set the connection URL and credentials. Let’s switch to the Azure Portal. We need to find the “Azure Cosmos DB” service in the main menu. Then click your database account. You will see the address of the endpoint as shown below. You should also see the previously created container in the “Containers” section.

In order to obtain the connection key, we need to go to the “Data Explorer” item in the left-side menu. Then choose the “Connect” tile. You will find the key in the target window.

spring-cloud-azure-cosmosdb

We could easily set all the required connection parameters using the spring.cloud.azure.cosmos.* properties. However, I would like to store all the configuration settings on Azure. Spring Cloud comes with built-in support for Azure App Configuration service. We have already included the required Spring Cloud starter. So now, we need to enable the Azure App Configuration service and put our properties into the store. Here’s the command for creating an App Configuration under the sample-spring-cloud-config name:

$ az appconfig create \
    -g sample-spring-cloud \
    -n sample-spring-cloud-config \
    -l eastus \
    --sku Standard

Once we create the App Configuration store, we can put our configuration settings there in the key/value form. By default, Spring Cloud Azure loads configuration entries whose keys start with the /application/ prefix. We need to add three Spring Cloud properties: spring.cloud.azure.cosmos.key, spring.cloud.azure.cosmos.database, and spring.cloud.azure.cosmos.endpoint.

$ az appconfig kv set \
    -n sample-spring-cloud-config \
    --key /application/spring.cloud.azure.cosmos.key \
    --value <YOUR_PRIMARY_KEY>

$ az appconfig kv set \
    -n sample-spring-cloud-config \
    --key /application/spring.cloud.azure.cosmos.database \
    --value sampledb

$ az appconfig kv set \
    -n sample-spring-cloud-config \
    --key /application/spring.cloud.azure.cosmos.endpoint \
    --value <YOUR_ENDPOINT_URI>

Let’s switch to the Azure Portal to check the configuration settings. We need to find the “App Configuration” service in the main dashboard. Then go to the sample-spring-cloud-config details and choose the “Configuration explorer” menu item. You should have all your application properties prefixed by the /application/. I also overrode some Spring Actuator settings to enable health check details and additional management endpoints.

spring-cloud-azure-app-configuration

That’s all. Now, we are ready to run our app. We just need to connect it to the Azure App Configuration service. In order to do that, we need to obtain its connection endpoint and credentials. You can go to the “Access keys” menu item in the “Settings” section. Then you should copy the value from the “Connection string” field as shown below. Alternatively, you can obtain the same information by executing the following CLI command: az appconfig credential list --name sample-spring-cloud-config.

Let’s save the value inside the APP_CONFIGURATION_CONNECTION_STRING environment variable. After that, we just need to create the Spring bootstrap.properties file in the src/main/resources directory containing the spring.cloud.azure.appconfiguration.stores[0].connection-string property.

spring.cloud.azure.appconfiguration.stores[0].connection-string=${APP_CONFIGURATION_CONNECTION_STRING}

Running Spring Boot App Locally

Finally, we can run our sample Spring Boot app. For now, we will just run it locally. As a result, it will connect to the Azure App Configuration and Cosmos DB deployed on the cloud. We can execute the following Maven command to start the app:

$ mvn clean spring-boot:run

Once you start the app you should see that it loads property sources from the Azure store:

If everything works fine, your app loads settings from Azure App Configuration and connects to the Cosmos DB instance:

spring-cloud-azure-logs

Once the app is running, you can access it on local port 8080. The Swagger UI is available under the /swagger-ui.html path:

spring-cloud-azure-swagger

We can add some data using, for example, the curl command shown below:

$ curl -X 'POST' 'http://localhost:8080/accounts' \
    -H 'Content-Type: application/json' \
    -d '{"number": "1234567893","customerId": "1"}'
{"id":"5301e9dd-0556-40b7-9ea3-96975492f00c","number":"1234567893","customerId":"1"}

Then, we can e.g. find accounts owned by a particular customer:

$ curl http://localhost:8080/accounts/customer/1

We can also delete an existing account by calling the DELETE /accounts/{id} endpoint. In that case, I received the HTTP 404 Not Found error. Interesting, right?

Let’s see what happened. If you take a look at the implementation of AccountController you will find the method for the DELETE endpoint, right? In the meantime, I added one method annotated with @FeatureGate. This annotation is provided by Spring Cloud Azure. The following fragment of code shows the usage of feature management with Azure App Configuration. In fact, I’m using the “Feature Gate” functionality, which allows us to call the endpoint only if a feature is enabled on the Azure side. The name of our feature is delete-account.

@DeleteMapping("/{id}")
@FeatureGate(feature = "delete-account")
public void deleteById(@PathVariable String id) {
   repository.deleteById(id);
}
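Conceptually, a feature gate is just a boolean flag checked before the handler runs. Here is a minimal, framework-free sketch of that idea (the FeatureGateSketch class and its FLAGS map are hypothetical stand-ins for values fetched from Azure App Configuration; in the real app the @FeatureGate annotation performs this check for us):

```java
import java.util.Map;

// A minimal sketch of the feature-gate idea: the handler runs only when
// the corresponding flag is enabled. FLAGS stands in for values fetched
// from Azure App Configuration.
public class FeatureGateSketch {

    // Flag state as it would be read from the configuration store
    static final Map<String, Boolean> FLAGS = Map.of("delete-account", false);

    static String deleteById(String id) {
        if (!FLAGS.getOrDefault("delete-account", false)) {
            // A disabled gate makes the endpoint behave as if it didn't exist
            return "404 Not Found";
        }
        return "deleted " + id;
    }

    public static void main(String[] args) {
        System.out.println(deleteById("123")); // prints "404 Not Found"
    }
}
```

This also explains the HTTP 404 observed above: with the flag disabled, the gated endpoint is simply not exposed.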

Now, the only thing we need to do is to add a new feature to the sample-spring-cloud-config App Configuration.

$ az appconfig feature set -n sample-spring-cloud-config --feature delete-account

Let’s switch to the Azure Portal. You should go to the “Feature manager” menu item in the “Operations” section. As you see, by default the feature flag is disabled. It means the feature is not active and the endpoint is disabled.

spring-cloud-azure-feature

You can enable the feature by clicking the checkbox and then restarting the app. After that, the DELETE endpoint should be available.

Deploy Spring Cloud App on Azure

We can deploy our sample app to Azure in several different ways. I’ll choose the service dedicated especially to Spring Boot – Azure Spring Apps.

The installation from Azure Portal is pretty straightforward. I won’t get into the details. The name of our instance (cluster) is sample-spring-cloud-apps. We don’t need to know anything more to be able to deploy our app there.

Azure provides several Maven plugins for deploying apps. For Azure Spring Apps we should use azure-spring-apps-maven-plugin. We need to set the Azure Spring Apps instance in the clusterName parameter. The name of our app is account-service. We should also choose SKU and set the Azure subscription ID (loaded from the SUBSCRIPTION environment variable). In the deployment section, we need to define the required resources (RAM and CPU), number of running instances, Java version, and a single environment variable containing the connection string to the Azure App Configuration instance.

<plugin>
  <groupId>com.microsoft.azure</groupId>
  <artifactId>azure-spring-apps-maven-plugin</artifactId>
  <version>1.19.0</version>
  <configuration>
    <subscriptionId>${env.SUBSCRIPTION}</subscriptionId>
    <resourceGroup>sample-spring-cloud</resourceGroup>
    <clusterName>sample-spring-cloud-apps</clusterName>
    <sku>Consumption</sku>
    <appName>account-service</appName>
    <isPublic>true</isPublic>
    <deployment>
      <cpu>0.5</cpu>
      <memoryInGB>1</memoryInGB>
      <instanceCount>1</instanceCount>
      <runtimeVersion>Java 17</runtimeVersion>
      <environment>
        <APP_CONFIGURATION_CONNECTION_STRING>
          ${env.APP_CONFIGURATION_CONNECTION_STRING}
        </APP_CONFIGURATION_CONNECTION_STRING>
      </environment>
      <resources>
        <resource>
          <directory>target/</directory>
          <includes>
            <include>*.jar</include>
          </includes>
        </resource>
      </resources>
    </deployment>
  </configuration>
</plugin>

Then we need to build the app and deploy it on Azure Spring Apps with the following command:

$ mvn clean package azure-spring-apps:deploy

You should have a similar result as shown below:

Does the name of the instance sound familiar? 🙂 Under the hood, it’s Kubernetes. The Azure Spring Apps service uses Azure Container Apps for running containers, and Azure Container Apps itself is hosted on a Kubernetes cluster. But these are the details. What is important here is that our app has already been deployed on Azure.

spring-cloud-azure-spring-apps

We can display the account-service app details. The app is exposed under the public URL. We just need to copy the link.

Let’s take a look at the configuration section. As you see it contains the connection string to the App Configuration endpoint.

We can display the Swagger UI and perform some test calls.

Final Thoughts

That’s all in this article, but I’m planning to create several others about Spring Boot and Azure soon! Azure seems to be a friendly platform for the Spring Boot developer 🙂 I showed you how to easily integrate your Spring Boot app with the most popular Azure services like Cosmos DB. We also covered such topics as configuration management and feature flags (gates) with the App Configuration service. Finally, we deployed the app on… Kubernetes through the Azure Spring Apps service 🙂

The post Getting Started with Spring Cloud Azure appeared first on Piotr's TechBlog.

Handle Traffic Bursts with Ephemeral OpenShift Clusters https://piotrminkowski.com/2023/10/06/handle-traffic-bursts-with-ephemeral-openshift-clusters/ Fri, 06 Oct 2023 18:11:03 +0000

The post Handle Traffic Bursts with Ephemeral OpenShift Clusters appeared first on Piotr's TechBlog.

This article will teach you how to handle temporary traffic bursts with ephemeral OpenShift clusters provisioned in the public cloud. Such a solution should work in a fully automated way. Once we deal with unexpected or sudden peaks in network traffic volume, we must forward part of that traffic to another cluster. Such a cluster is called “ephemeral” since it works just for a specified period, until the unexpected situation ends. Of course, we should be able to use the ephemeral OpenShift cluster as soon as possible after the event occurs. On the other hand, we don’t want to pay for it when it is not needed.

In this article, I’ll show how you can achieve all the described things with the GitOps (Argo CD) approach and several tools around OpenShift/Kubernetes like Kyverno or Red Hat Service Interconnect (open-source Skupper project). We will also use Advanced Cluster Management for Kubernetes (ACM) to create and handle “ephemeral” OpenShift clusters. If you need an introduction to the GitOps approach in a multicluster OpenShift environment read the following article. It is also to familiarize with the idea behind multicluster communication through the Skupper project. In order to do that you can read the article about multicluster load balancing with Skupper on my blog.

Source Code

If you would like to try it by yourself, you can always take a look at my source code. In order to do that, you need to clone my GitHub repository. It contains several YAML manifests that allow us to manage OpenShift clusters in a GitOps way. For that exercise, we will use the manifests under the clusterpool directory. There are two subdirectories there: hub and managed. The manifests inside the hub directory should be applied to the management cluster, while the manifests inside the managed directory to the managed cluster. In our traffic bursts scenario, a single OpenShift acts as a hub and managed cluster, and it creates another managed (ephemeral) cluster.

Prerequisites

In order to start the exercise, we need a running OpenShift cluster that acts as the management cluster. It will create and configure the ephemeral cluster on AWS used to handle traffic volume peaks. In the first step, we need to install two operators on the management cluster: “OpenShift GitOps” and “Advanced Cluster Management for Kubernetes”.

traffic-bursts-openshift-operators

After that, we have to create the MultiClusterHub object, which runs and configures ACM:

kind: MultiClusterHub
apiVersion: operator.open-cluster-management.io/v1
metadata:
  name: multiclusterhub
  namespace: open-cluster-management
spec: {}

We also need to install Kyverno. Since there is no official operator for it, we have to leverage the Helm chart. Firstly, let’s add the following Helm repository:

$ helm repo add kyverno https://kyverno.github.io/kyverno/

Then, we can install the latest version of Kyverno in the kyverno namespace using the following command:

$ helm install my-kyverno kyverno/kyverno -n kyverno --create-namespace

By the way, the OpenShift Console provides built-in support for Helm. In order to use it, you need to switch to the Developer perspective. Then, click the Helm menu and choose the Create -> Repository option. Once you do that, you will be able to create a new Helm release of Kyverno.

Using OpenShift Cluster Pool

With ACM we can create a pool of OpenShift clusters. That pool contains running or hibernated clusters. While a running cluster is just ready to work, a hibernated cluster needs to be resumed by ACM. We define the pool size and the number of running clusters inside that pool. Once we create the ClusterPool object, ACM starts to provision new clusters on AWS. In our case, the pool size is 1, but the number of running clusters is 0. The object declaration also contains everything required to create a new cluster, like the installation template (the aws-install-config Secret) or the AWS account credentials reference (the aws-aws-creds Secret). Each cluster within that pool is automatically assigned to the interconnect ManagedClusterSet. The cluster set approach allows us to group multiple OpenShift clusters.

apiVersion: hive.openshift.io/v1
kind: ClusterPool
metadata:
  name: aws
  namespace: aws
  labels:
    cloud: AWS
    cluster.open-cluster-management.io/clusterset: interconnect
    region: us-east-1
    vendor: OpenShift
spec:
  baseDomain: sandbox449.opentlc.com
  imageSetRef:
    name: img4.12.36-multi-appsub
  installConfigSecretTemplateRef:
    name: aws-install-config
  platform:
    aws:
      credentialsSecretRef:
        name: aws-aws-creds
      region: us-east-1
  pullSecretRef:
    name: aws-pull-secret
  size: 1

So, as a result, there is only one cluster in the pool. ACM keeps that cluster in the hibernated state, which means that all the VMs with master and worker nodes are stopped. In order to resume the hibernated cluster, we need to create a ClusterClaim object that refers to the ClusterPool. It is similar to clicking the “Claim cluster” link visible below. However, we don’t want to create that object directly, but rather as a reaction to a Kubernetes event.
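For reference, the claim that will later be generated for us is roughly equivalent to applying this manifest by hand:

```yaml
apiVersion: hive.openshift.io/v1
kind: ClusterClaim
metadata:
  name: aws
  namespace: aws
spec:
  # Reference to the pool defined above; claiming resumes a hibernated cluster
  clusterPoolName: aws
```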

traffic-bursts-openshift-cluster-pool

Before we proceed, let’s just take a look at a list of virtual machines on AWS related to our cluster. As you see they are not running.

Claim Cluster From the Pool on Scaling Event

Now, the question is: what kind of event should result in getting a cluster from the pool? A single app could rely on a scaling event. So, once the number of deployment pods exceeds the assumed threshold, we will resume a hibernated cluster and run the app there. With Kyverno we can react to such scaling events by creating a ClusterPolicy object. As you can see, our policy monitors the Deployment/scale resource. The assumed maximum number of pods allowed for our app on the main cluster is 4. We need to put that value in the preconditions together with the Deployment name. Once all the conditions are met, we generate a new Kubernetes resource. That resource is the ClusterClaim, which refers to the ClusterPool we created in the previous section. It will result in getting a hibernated cluster from the pool and resuming it.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: aws
spec:
  background: true
  generateExisting: true
  rules:
    - generate:
        apiVersion: hive.openshift.io/v1
        data:
          spec:
            clusterPoolName: aws
        kind: ClusterClaim
        name: aws
        namespace: aws
        synchronize: true
      match:
        any:
          - resources:
              kinds:
                - Deployment/scale
      preconditions:
        all:
          - key: '{{request.object.spec.replicas}}'
            operator: Equals
            value: 4
          - key: '{{request.object.metadata.name}}'
            operator: Equals
            value: sample-kotlin-spring
  validationFailureAction: Audit

Kyverno requires additional permission to create the ClusterClaim object. We can easily achieve this by creating a properly annotated ClusterRole:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kyverno:create-claim
  labels:
    app.kubernetes.io/component: background-controller
    app.kubernetes.io/instance: kyverno
    app.kubernetes.io/part-of: kyverno
rules:
  - verbs:
      - create
      - patch
      - update
      - delete
    apiGroups:
      - hive.openshift.io
    resources:
      - clusterclaims

Once the cluster is ready, we are going to assign it to the interconnect group represented by the ManagedClusterSet object. This group of clusters is managed by our instance of Argo CD from the openshift-gitops namespace. In order to achieve that, we need to apply the following objects to the management OpenShift cluster:

apiVersion: cluster.open-cluster-management.io/v1beta2
kind: ManagedClusterSetBinding
metadata:
  name: interconnect
  namespace: openshift-gitops
spec:
  clusterSet: interconnect
---
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: interconnect
  namespace: openshift-gitops
spec:
  predicates:
    - requiredClusterSelector:
        labelSelector:
          matchExpressions:
            - key: vendor
              operator: In
              values:
                - OpenShift
---
apiVersion: apps.open-cluster-management.io/v1beta1
kind: GitOpsCluster
metadata:
  name: argo-acm-importer
  namespace: openshift-gitops
spec:
  argoServer:
    argoNamespace: openshift-gitops
    cluster: openshift-gitops
  placementRef:
    apiVersion: cluster.open-cluster-management.io/v1beta1
    kind: Placement
    name: interconnect
    namespace: openshift-gitops

After applying the manifests visible above, you should see that the openshift-gitops Argo CD instance is managing the interconnect cluster group.

Automatically Sync Configuration for a New Cluster with Argo CD

In Argo CD we can define the ApplicationSet with the “Cluster Decision Resource Generator” (1). You can read more details about that type of generator in the docs. It will create an Argo CD Application for each OpenShift cluster in the interconnect group (2). Then, the newly created Argo CD Application will automatically apply the manifests responsible for creating our sample Deployment. Of course, those manifests are available in the same repository inside the clusterpool/managed directory (3).

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: cluster-init
  namespace: openshift-gitops
spec:
  generators:
    - clusterDecisionResource: # (1)
        configMapRef: acm-placement
        labelSelector:
          matchLabels:
            cluster.open-cluster-management.io/placement: interconnect # (2)
        requeueAfterSeconds: 180
  template:
    metadata:
      name: 'cluster-init-{{name}}'
    spec:
      ignoreDifferences:
        - group: apps
          kind: Deployment
          jsonPointers:
            - /spec/replicas
      destination:
        server: '{{server}}'
        namespace: interconnect
      project: default
      source:
        path: clusterpool/managed # (3)
        repoURL: 'https://github.com/piomin/openshift-cluster-config.git'
        targetRevision: master
      syncPolicy:
        automated:
          selfHeal: true
        syncOptions:
          - CreateNamespace=true

Here’s the YAML manifest that contains the Deployment object and the Openshift Route definition. Pay attention to the three skupper.io/* annotations. We will let Skupper generate the Kubernetes Service to load balance between all running pods of our app. Finally, it will allow us to load balance between the pods spread across two Openshift clusters.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/instance: sample-kotlin-spring
  annotations:
    skupper.io/address: sample-kotlin-spring
    skupper.io/port: '8080'
    skupper.io/proxy: http
  name: sample-kotlin-spring
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sample-kotlin-spring
  template:
    metadata:
      labels:
        app: sample-kotlin-spring
    spec:
      containers:
        - image: 'quay.io/pminkows/sample-kotlin-spring:1.4.39'
          name: sample-kotlin-spring
          ports:
            - containerPort: 8080
          resources:
            limits:
              cpu: 1000m
              memory: 1024Mi
            requests:
              cpu: 100m
              memory: 128Mi
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  labels:
    app: sample-kotlin-spring
    app.kubernetes.io/component: sample-kotlin-spring
    app.kubernetes.io/instance: sample-spring-kotlin
  name: sample-kotlin-spring
spec:
  port:
    targetPort: port8080
  to:
    kind: Service
    name: sample-kotlin-spring
    weight: 100
  wildcardPolicy: None

Let’s check out how it works. I won’t simulate traffic bursts on OpenShift. However, you can easily imagine that our app is autoscaled with HPA (Horizontal Pod Autoscaler) and therefore is able to react to the traffic volume peak. I will just manually scale up the app to 4 pods:

Now, let’s switch to the All Clusters view. As you see Kyverno sent a cluster claim to the aws ClusterPool. The claim stays in the Pending status until the cluster won’t be resumed. In the meantime, ACM creates a new cluster to fill up the pool.

traffic-bursts-openshift-cluster-claim

Once the cluster is ready you will see it in the Clusters view.

ACM automatically adds a cluster from the aws pool to the interconnect group (ManagedClusterSet). Therefore, Argo CD sees the new cluster and adds it as a managed cluster.

Finally, Argo CD generates the Application for a new cluster to automatically install all required Kubernetes objects.

traffic-bursts-openshift-argocd

Using Red Hat Service Interconnect

In order to enable Skupper for our apps, we first need to install the Red Hat Service Interconnect operator. We can also do that in the GitOps way. We need to define the Subscription object as shown below (1). The operator has to be installed on both the hub and managed clusters. Once we install the operator, we need to enable Skupper in a particular namespace. In order to do that, we define a ConfigMap named skupper-site there (2). Those manifests are also applied by the Argo CD Application described in the previous section.

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: skupper-operator
  namespace: openshift-operators
  annotations:
    argocd.argoproj.io/sync-wave: "2"
spec:
  channel: alpha
  installPlanApproval: Automatic
  name: skupper-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: skupper-site

Here’s the result of synchronization for the managed cluster.

We can switch to the OpenShift Console of the new cluster. The Red Hat Service Interconnect operator is ready.

We are now at the final phase of our exercise. Both our clusters are running. We have already installed our sample app and the Skupper operator on both of them. Now, we need to link the apps running on different clusters into a single Skupper network. In order to do that, we need to let Skupper generate a connection token. Here’s the Secret object responsible for that. It doesn’t contain any data, just the skupper.io/type label with the connection-token-request value. Argo CD has already applied it to the management cluster in the interconnect namespace.

apiVersion: v1
kind: Secret
metadata:
  labels:
    skupper.io/type: connection-token-request
  name: token-req
  namespace: interconnect

As a result, Skupper fills the Secret object with certificates and a private key. It also overrides the value of the skupper.io/type label.
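The filled-in Secret then looks roughly like this – a sketch only, since the exact data keys and label value may differ between Skupper versions, and the certificate data is abbreviated:

```yaml
apiVersion: v1
kind: Secret
metadata:
  labels:
    skupper.io/type: connection-token   # overridden by Skupper
  name: token-req
  namespace: interconnect
data:
  ca.crt: <base64-encoded CA certificate>
  tls.crt: <base64-encoded client certificate>
  tls.key: <base64-encoded private key>
```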

So, now our goal is to copy that Secret to the managed cluster. We won't do that directly in the GitOps way, since the object was dynamically generated on OpenShift. However, we may use the SelectorSyncSet object provided by ACM, which can copy Secrets between the hub and managed clusters.

apiVersion: hive.openshift.io/v1
kind: SelectorSyncSet
metadata:
  name: skupper-token-sync
spec:
  clusterDeploymentSelector:
    matchLabels:
      cluster.open-cluster-management.io/clusterset: interconnect
  secretMappings:
    - sourceRef:
        name: token-req
        namespace: interconnect
      targetRef:
        name: token-req
        namespace: interconnect

Once the token is copied into the managed cluster, it connects to the Skupper network existing on the main cluster. We can verify that everything works fine with the skupper CLI. The following command prints all the pods in the Skupper network. As you can see, we have 4 pods on the main (local) cluster and 2 pods on the managed (linked) cluster.

[Image: traffic-bursts-openshift-skupper]

Let’s display the route of our service:

$ oc get route sample-kotlin-spring

Now, we can run a final test. Here's the siege command for my route and cluster domain. It will send 10k requests via the Route. After running it, you can check the logs to verify that traffic reaches all six pods spread across our two clusters.

$ siege -r 1000 -c 10  http://sample-kotlin-spring-interconnect.apps.jaipxwuhcp.eastus.aroapp.io/persons
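The 10k figure follows from how siege counts requests: the total is the number of repetitions (-r) multiplied by the number of concurrent users (-c). A quick sanity check:

```shell
# siege's total request count is repetitions (-r) times concurrent
# users (-c), so -r 1000 -c 10 produces the 10k requests mentioned above
reps=1000
conc=10
total=$((reps * conc))
echo "total requests: $total"
```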

Final Thoughts

Handling traffic bursts is one of the more interesting scenarios for a hybrid-cloud environment with OpenShift. With the approach described in this article, we can dynamically provision clusters and redirect traffic from on-prem to the cloud in a fully automated, GitOps-based way. The features and tools around OpenShift allow us to cut cloud costs and speed up cluster startup. This reduces system downtime in case of failures or unexpected situations.

The post Handle Traffic Bursts with Ephemeral OpenShift Clusters appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2023/10/06/handle-traffic-bursts-with-ephemeral-openshift-clusters/feed/ 2 14560
Manage OpenShift with Terraform https://piotrminkowski.com/2023/09/29/manage-openshift-with-terraform/ https://piotrminkowski.com/2023/09/29/manage-openshift-with-terraform/#respond Fri, 29 Sep 2023 12:30:37 +0000 https://piotrminkowski.com/?p=14531 This article will teach you how to create and manage OpenShift clusters with Terraform. For the purpose of this exercise, we will run OpenShift on Azure using the managed service called ARO (Azure Red Hat OpenShift). Cluster creation is the first part of the exercise. After that, we are going to install several operators on […]

The post Manage OpenShift with Terraform appeared first on Piotr's TechBlog.

]]>
This article will teach you how to create and manage OpenShift clusters with Terraform. For the purpose of this exercise, we will run OpenShift on Azure using the managed service called ARO (Azure Red Hat OpenShift). Cluster creation is the first part of the exercise. After that, we are going to install several operators on OpenShift, along with some apps that use the features provided by those operators. Of course, our main goal is to do all the required steps with a single Terraform command.

Let me clarify some things before we begin. In this article, I'm not promoting or recommending Terraform as the best tool for managing OpenShift or Kubernetes clusters at scale. Usually, I prefer the GitOps approach for that. If you are interested in how to leverage tools like ACM (Advanced Cluster Management for Kubernetes) and Argo CD for managing multiple clusters with the GitOps approach, read that article. It describes the idea of continuous cluster management. From my perspective, Terraform fits better for one-time actions, for example, creating and configuring OpenShift for a demo or PoC and then removing it. We can also use Terraform to install Argo CD and then delegate all the next steps there.

Anyway, let’s focus on our scenario. We will widely use those two Terraform providers: Azure and Kubernetes. So, it is worth at least taking a look at the documentation to familiarize yourself with the basics.

Prerequisites

Of course, you don’t have to perform that exercise on Azure with ARO. If you already have OpenShift running you can skip the part related to the cluster creation and just run the Terraform script responsible for installing operators and apps. For the whole exercise, you need to install:

  1. Azure CLI (instructions) – once installed, log in to your Azure account and create a subscription. To check that everything works, run the following command: az account show
  2. Terraform CLI (instructions) – once you install the Terraform CLI, you can verify it with the following command: terraform version

Source Code

If you would like to try it yourself, you can always take a look at my source code. In order to do that, clone my GitHub repository. Then you should follow my instructions 🙂

Terraform Providers

The Terraform scripts for cluster creation are available inside the aro directory, while the scripts for cluster configuration are inside the servicemesh directory. Here's the structure of our repository:

Firstly, let’s take a look at the list of Terraform providers used in our exercise. In general, we need providers to interact with Azure and OpenShift through the Kubernetes API. In most cases, the official Hashicorp Azure Provider for Azure Resource Manager will be enough (1). However, in a few cases, we will have to interact directly with Azure REST API (for example to create an OpenShift cluster object) through the azapi provider (2). The Hashicorp Random Provider will be used to generate a random domain name for our cluster (3). The rest of the providers allow us to interact with OpenShift. Once again, the official Hashicorp Kubernetes Provider is valid in most cases (4). We will also use the kubectl provider (5) and Helm for installing the Postgres database (6) used by the sample apps.

terraform {
  required_version = ">= 1.0"
  required_providers {
    // (1)
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">=3.3.0"
    }
    // (2)
    azapi = {
      source  = "Azure/azapi"
      version = ">=1.0.0"
    }
    // (3)
    random = {
      source = "hashicorp/random"
      version = "3.5.1"
    }
    local = {
      source = "hashicorp/local"
      version = "2.4.0"
    }
  }
}

provider "azurerm" {
  features {}
}

provider "azapi" {
}

provider "random" {}
provider "local" {}

Here’s the list of providers used in the Re Hat Service Mesh installation:

terraform {
  required_version = ">= 1.0"
  required_providers {
    // (4)
    kubernetes = {
      source = "hashicorp/kubernetes"
      version = "2.23.0"
    }
    // (5)
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = ">= 1.13.0"
    }
    // (6)
    helm = {
      source = "hashicorp/helm"
      version = "2.11.0"
    }
  }
}

provider "kubernetes" {
  config_path = "aro/kubeconfig"
  config_context = var.cluster-context
}

provider "kubectl" {
  config_path = "aro/kubeconfig"
  config_context = var.cluster-context
}

provider "helm" {
  kubernetes {
    config_path = "aro/kubeconfig"
    config_context = var.cluster-context
  }
}

In order to install providers, we need to run the following command (you don’t have to do it now):

$ terraform init

Create Azure Red Hat OpenShift Cluster with Terraform

Unfortunately, there is no dedicated, official Terraform provider for creating OpenShift clusters on Azure ARO. There are some discussions about such a feature (you can find them here), but no final result yet. Maybe it will change in the future. However, creating an ARO cluster is not that complicated, since we may use the existing providers listed in the previous section. You can find an interesting guide in the Microsoft docs here. It was also a starting point for my work. I improved several things there, for example, avoiding the az CLI in the scripts and keeping the full configuration in Terraform HCL.

Let’s analyze our Terraform manifest step by step. Here’s a list of the most important elements we need to place in the HCL file:

  1. We have to read some configuration data from the Azure client
  2. I have an existing resource group with the openenv prefix, but you can put any name you want there. That's our main resource group
  3. ARO requires a resource group different from the main resource group
  4. We need to create a virtual network for OpenShift. There is a dedicated subnet for master nodes and another one for worker nodes. All the parameters visible there are required. You can change the IP address ranges as long as the master and worker subnets don't conflict
  5. ARO requires a dedicated service principal to create a cluster. Let's create the Azure application, and then the service principal with a password. The password is auto-generated by Azure.
  6. The newly created service principal requires some privileges. Let's assign the "User Access Administrator" and network "Contributor" roles. Then, we need to find the service principal created by Azure under the "Azure Red Hat OpenShift RP" name and also assign the network "Contributor" role there.
  7. All the required objects have now been created. Since there is no dedicated resource type for the ARO cluster, we need to leverage the azapi provider to define the cluster resource.
  8. The definition of the OpenShift cluster is available inside the body section. All the fields you see there are required to successfully create the cluster.

// (1)
data "azurerm_client_config" "current" {}
data "azuread_client_config" "current" {}

// (2)
data "azurerm_resource_group" "my_group" {
  name = "openenv-${var.guid}"
}

resource "random_string" "random" {
  length           = 10
  numeric          = false
  special          = false
  upper            = false
}

// (3)
locals {
  resource_group_id = "/subscriptions/${data.azurerm_client_config.current.subscription_id}/resourceGroups/aro-${random_string.random.result}-${data.azurerm_resource_group.my_group.location}"
  domain            = random_string.random.result
}

// (4)
resource "azurerm_virtual_network" "virtual_network" {
  name                = "aro-vnet-${var.guid}"
  address_space       = ["10.0.0.0/22"]
  location            = data.azurerm_resource_group.my_group.location
  resource_group_name = data.azurerm_resource_group.my_group.name
}
resource "azurerm_subnet" "master_subnet" {
  name                 = "master_subnet"
  resource_group_name  = data.azurerm_resource_group.my_group.name
  virtual_network_name = azurerm_virtual_network.virtual_network.name
  address_prefixes     = ["10.0.0.0/23"]
  service_endpoints    = ["Microsoft.ContainerRegistry"]
  private_link_service_network_policies_enabled  = false
  depends_on = [azurerm_virtual_network.virtual_network]
}
resource "azurerm_subnet" "worker_subnet" {
  name                 = "worker_subnet"
  resource_group_name  = data.azurerm_resource_group.my_group.name
  virtual_network_name = azurerm_virtual_network.virtual_network.name
  address_prefixes     = ["10.0.2.0/23"]
  service_endpoints    = ["Microsoft.ContainerRegistry"]
  depends_on = [azurerm_virtual_network.virtual_network]
}

// (5)
resource "azuread_application" "aro_app" {
  display_name = "aro_app"
  owners       = [data.azuread_client_config.current.object_id]
}
resource "azuread_service_principal" "aro_app" {
  application_id               = azuread_application.aro_app.application_id
  app_role_assignment_required = false
  owners                       = [data.azuread_client_config.current.object_id]
}
resource "azuread_service_principal_password" "aro_app" {
  service_principal_id = azuread_service_principal.aro_app.object_id
}

// (6)
resource "azurerm_role_assignment" "aro_cluster_service_principal_uaa" {
  scope                = data.azurerm_resource_group.my_group.id
  role_definition_name = "User Access Administrator"
  principal_id         = azuread_service_principal.aro_app.id
  skip_service_principal_aad_check = true
}
resource "azurerm_role_assignment" "aro_cluster_service_principal_network_contributor_pre" {
  scope                = data.azurerm_resource_group.my_group.id
  role_definition_name = "Contributor"
  principal_id         = azuread_service_principal.aro_app.id
  skip_service_principal_aad_check = true
}
resource "azurerm_role_assignment" "aro_cluster_service_principal_network_contributor" {
  scope                = azurerm_virtual_network.virtual_network.id
  role_definition_name = "Contributor"
  principal_id         = azuread_service_principal.aro_app.id
  skip_service_principal_aad_check = true
}
data "azuread_service_principal" "aro_app" {
  display_name = "Azure Red Hat OpenShift RP"
  depends_on = [azuread_service_principal.aro_app]
}
resource "azurerm_role_assignment" "aro_resource_provider_service_principal_network_contributor" {
  scope                = azurerm_virtual_network.virtual_network.id
  role_definition_name = "Contributor"
  principal_id         = data.azuread_service_principal.aro_app.id
  skip_service_principal_aad_check = true
}

// (7)
resource "azapi_resource" "aro_cluster" {
  name      = "aro-cluster-${var.guid}"
  parent_id = data.azurerm_resource_group.my_group.id
  type      = "Microsoft.RedHatOpenShift/openShiftClusters@2023-07-01-preview"
  location  = data.azurerm_resource_group.my_group.location
  timeouts {
    create = "75m"
  }
  // (8)
  body = jsonencode({
    properties = {
      clusterProfile = {
        resourceGroupId      = local.resource_group_id
        pullSecret           = file("~/Downloads/pull-secret-latest.txt")
        domain               = local.domain
        fipsValidatedModules = "Disabled"
        version              = "4.12.25"
      }
      networkProfile = {
        podCidr              = "10.128.0.0/14"
        serviceCidr          = "172.30.0.0/16"
      }
      servicePrincipalProfile = {
        clientId             = azuread_service_principal.aro_app.application_id
        clientSecret         = azuread_service_principal_password.aro_app.value
      }
      masterProfile = {
        vmSize               = "Standard_D8s_v3"
        subnetId             = azurerm_subnet.master_subnet.id
        encryptionAtHost     = "Disabled"
      }
      workerProfiles = [
        {
          name               = "worker"
          vmSize             = "Standard_D8s_v3"
          diskSizeGB         = 128
          subnetId           = azurerm_subnet.worker_subnet.id
          count              = 3
          encryptionAtHost   = "Disabled"
        }
      ]
      apiserverProfile = {
        visibility           = "Public"
      }
      ingressProfiles = [
        {
          name               = "default"
          visibility         = "Public"
        }
      ]
    }
  })
  depends_on = [
    azurerm_subnet.worker_subnet,
    azurerm_subnet.master_subnet,
    azuread_service_principal_password.aro_app,
    azurerm_role_assignment.aro_resource_provider_service_principal_network_contributor
  ]
}

output "domain" {
  value = local.domain
}

Save Kubeconfig

Once we have successfully created the OpenShift cluster, we need to obtain and save the kubeconfig file. It will allow Terraform to interact with the cluster through the master API. In order to get the kubeconfig content, we need to call the Azure listAdminCredentials REST endpoint. It is the same as calling the az aro get-admin-kubeconfig command with the CLI. The endpoint returns JSON with base64-encoded content. After decoding it from JSON and Base64, we save the content in the kubeconfig file in the current directory.

resource "azapi_resource_action" "test" {
  type        = "Microsoft.RedHatOpenShift/openShiftClusters@2023-07-01-preview"
  resource_id = "/subscriptions/${data.azurerm_client_config.current.subscription_id}/resourceGroups/openenv-${var.guid}/providers/Microsoft.RedHatOpenShift/openShiftClusters/aro-cluster-${var.guid}"
  action      = "listAdminCredentials"
  method      = "POST"
  response_export_values = ["*"]
}

output "kubeconfig" {
  value = base64decode(jsondecode(azapi_resource_action.test.output).kubeconfig)
}

resource "local_file" "kubeconfig" {
  content  =  base64decode(jsondecode(azapi_resource_action.test.output).kubeconfig)
  filename = "kubeconfig"
  depends_on = [azapi_resource_action.test]
}
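The base64decode(jsondecode(...)) chain above can be sketched in plain shell with a toy payload – the real response contains a full admin kubeconfig, and the JSON is parsed here with simple parameter expansion only for illustration:

```shell
# Simulate the listAdminCredentials response: JSON wrapping a
# base64-encoded kubeconfig (toy content, not a real credential).
plain='apiVersion: v1'
encoded=$(printf '%s' "$plain" | base64)
response="{\"kubeconfig\":\"$encoded\"}"

# Extract the field and decode it, mirroring Terraform's
# base64decode(jsondecode(...).kubeconfig) expression:
field=${response#*\"kubeconfig\":\"}
field=${field%\"*}
printf '%s' "$field" | base64 -d > kubeconfig
cat kubeconfig
```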

Install OpenShift Operators with Terraform

Finally, we can interact with the existing OpenShift cluster via the kubeconfig file. In the first step, we will deploy some operators. In OpenShift, operators are the preferred way of installing more advanced apps (for example, those consisting of several Deployments). Red Hat provides a set of supported operators that allow us to extend OpenShift's functionality. It can be, for example, a service mesh, a clustered database, or a message broker.

Let’s imagine we want to install a service mesh on OpenShift. There are some dedicated operators for that. The OpenShift Service Mesh operator is built on top of the open-source project Istio. We will also install the OpenShift Distributed Tracing (Jaeger) and Kiali operators. In order to do that we need to define the Subscription CRD object. Also, if we install an operator in a different namespace than openshift-operators we have to create the OperatorGroup CRD object. Here’s the Terraform HCL script that installs our operators.

// (1)
resource "kubernetes_namespace" "openshift-distributed-tracing" {
  metadata {
    name = "openshift-distributed-tracing"
  }
}
resource "kubernetes_manifest" "tracing-group" {
  manifest = {
    "apiVersion" = "operators.coreos.com/v1"
    "kind"       = "OperatorGroup"
    "metadata"   = {
      "name"      = "openshift-distributed-tracing"
      "namespace" = "openshift-distributed-tracing"
    }
    "spec" = {
      "upgradeStrategy" = "Default"
    }
  }
}
resource "kubernetes_manifest" "tracing" {
  manifest = {
    "apiVersion" = "operators.coreos.com/v1alpha1"
    "kind"       = "Subscription"
    "metadata" = {
      "name"      = "jaeger-product"
      "namespace" = "openshift-distributed-tracing"
    }
    "spec" = {
      "channel"             = "stable"
      "installPlanApproval" = "Automatic"
      "name"                = "jaeger-product"
      "source"              = "redhat-operators"
      "sourceNamespace"     = "openshift-marketplace"
    }
  }
}

// (2)
resource "kubernetes_manifest" "kiali" {
  manifest = {
    "apiVersion" = "operators.coreos.com/v1alpha1"
    "kind"       = "Subscription"
    "metadata" = {
      "name"      = "kiali-ossm"
      "namespace" = "openshift-operators"
    }
    "spec" = {
      "channel"             = "stable"
      "installPlanApproval" = "Automatic"
      "name"                = "kiali-ossm"
      "source"              = "redhat-operators"
      "sourceNamespace"     = "openshift-marketplace"
    }
  }
}

// (3)
resource "kubernetes_manifest" "ossm" {
  manifest = {
    "apiVersion" = "operators.coreos.com/v1alpha1"
    "kind"       = "Subscription"
    "metadata"   = {
      "name"      = "servicemeshoperator"
      "namespace" = "openshift-operators"
    }
    "spec" = {
      "channel"             = "stable"
      "installPlanApproval" = "Automatic"
      "name"                = "servicemeshoperator"
      "source"              = "redhat-operators"
      "sourceNamespace"     = "openshift-marketplace"
    }
  }
}

// (4)
resource "kubernetes_manifest" "ossmconsole" {
  manifest = {
    "apiVersion" = "operators.coreos.com/v1alpha1"
    "kind"       = "Subscription"
    "metadata"   = {
      "name"      = "ossmconsole"
      "namespace" = "openshift-operators"
    }
    "spec" = {
      "channel"             = "candidate"
      "installPlanApproval" = "Automatic"
      "name"                = "ossmconsole"
      "source"              = "community-operators"
      "sourceNamespace"     = "openshift-marketplace"
    }
  }
}

After installing the operators, we may proceed to the service mesh configuration. Since the operators need some time to become ready, we add a delay using the time_sleep resource (1). We need to use the CRD objects installed by the operators. The Kubernetes Terraform provider won't be a perfect choice for that, since it verifies the existence of an object before applying the whole script. Therefore, we will switch to the kubectl provider, which just applies the object without any initial verification. We need to create an Istio control plane using the ServiceMeshControlPlane object (2). As you see, it also enables distributed tracing with Jaeger and a dashboard with Kiali. Once the control plane is ready, we may proceed to the next steps. We will create all the objects responsible for the Istio configuration, including VirtualService, DestinationRule, and Gateway (3).

resource "kubernetes_namespace" "istio" {
  metadata {
    name = "istio"
  }
}

// (1)
resource "time_sleep" "wait_120_seconds" {
  depends_on = [kubernetes_manifest.ossm]

  create_duration = "120s"
}

// (2)
resource "kubectl_manifest" "basic" {
  depends_on = [time_sleep.wait_120_seconds, kubernetes_namespace.istio]
  yaml_body = <<YAML
kind: ServiceMeshControlPlane
apiVersion: maistra.io/v2
metadata:
  name: basic
  namespace: istio
spec:
  version: v2.4
  tracing:
    type: Jaeger
    sampling: 10000
  policy:
    type: Istiod
  telemetry:
    type: Istiod
  addons:
    jaeger:
      install:
        storage:
          type: Memory
    prometheus:
      enabled: true
    kiali:
      enabled: true
    grafana:
      enabled: true
YAML
}

resource "kubectl_manifest" "console" {
  depends_on = [time_sleep.wait_120_seconds, kubernetes_namespace.istio]
  yaml_body = <<YAML
kind: OSSMConsole
apiVersion: kiali.io/v1alpha1
metadata:
  name: ossmconsole
  namespace: istio
spec:
  kiali:
    serviceName: ''
    serviceNamespace: ''
    servicePort: 0
    url: ''
YAML
}

resource "time_sleep" "wait_60_seconds_2" {
  depends_on = [kubectl_manifest.basic]

  create_duration = "60s"
}

// (3)
resource "kubectl_manifest" "access" {
  depends_on = [time_sleep.wait_120_seconds, kubernetes_namespace.istio, kubernetes_namespace.demo-apps]
  yaml_body = <<YAML
apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll
metadata:
  name: default
  namespace: istio
spec:
  members:
    - demo-apps
YAML
}

resource "kubectl_manifest" "gateway" {
  depends_on = [time_sleep.wait_60_seconds_2, kubernetes_namespace.demo-apps]
  yaml_body = <<YAML
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: microservices-gateway
  namespace: demo-apps
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - quarkus-insurance-app.apps.${var.domain}
        - quarkus-person-app.apps.${var.domain}
YAML
}

resource "kubectl_manifest" "quarkus-insurance-app-vs" {
  depends_on = [time_sleep.wait_60_seconds_2, kubernetes_namespace.demo-apps]
  yaml_body = <<YAML
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: quarkus-insurance-app-vs
  namespace: demo-apps
spec:
  hosts:
    - quarkus-insurance-app.apps.${var.domain}
  gateways:
    - microservices-gateway
  http:
    - match:
        - uri:
            prefix: "/insurance"
      rewrite:
        uri: " "
      route:
        - destination:
            host: quarkus-insurance-app
          weight: 100
YAML
}

resource "kubectl_manifest" "quarkus-person-app-dr" {
  depends_on = [time_sleep.wait_60_seconds_2, kubernetes_namespace.demo-apps]
  yaml_body  = <<YAML
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: quarkus-person-app-dr
  namespace: demo-apps
spec:
  host: quarkus-person-app
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
YAML
}

resource "kubectl_manifest" "quarkus-person-app-vs-via-gw" {
  depends_on = [time_sleep.wait_60_seconds_2, kubernetes_namespace.demo-apps]
  yaml_body  = <<YAML
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: quarkus-person-app-vs-via-gw
  namespace: demo-apps
spec:
  hosts:
    - quarkus-person-app.apps.${var.domain}
  gateways:
    - microservices-gateway
  http:
    - match:
      - uri:
          prefix: "/person"
      rewrite:
        uri: " "
      route:
        - destination:
            host: quarkus-person-app
            subset: v1
          weight: 100
        - destination:
            host: quarkus-person-app
            subset: v2
          weight: 0
YAML
}

resource "kubectl_manifest" "quarkus-person-app-vs" {
  depends_on = [time_sleep.wait_60_seconds_2, kubernetes_namespace.demo-apps]
  yaml_body  = <<YAML
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: quarkus-person-app-vs
  namespace: demo-apps
spec:
  hosts:
    - quarkus-person-app
  http:
    - route:
        - destination:
            host: quarkus-person-app
            subset: v1
          weight: 100
        - destination:
            host: quarkus-person-app
            subset: v2
          weight: 0
YAML
}

Finally, we will run our sample Quarkus apps, which communicate through the Istio mesh and connect to the Postgres database. The script is quite large. All the apps run in the demo-apps namespace (1). They connect to the Postgres databases installed with the Terraform Helm provider from the Bitnami chart (2). Finally, we create the Deployments for two apps: person-service and insurance-service (3). There are two versions per microservice. Don't focus on the features of the apps. They are here just to show the subsequent layers of the installation process. We start with the operators and CRDs, then move to the Istio configuration, and finally install our custom apps.

// (1)
resource "kubernetes_namespace" "demo-apps" {
  metadata {
    name = "demo-apps"
  }
}

resource "kubernetes_secret" "person-db-secret" {
  depends_on = [kubernetes_namespace.demo-apps]
  metadata {
    name      = "person-db"
    namespace = "demo-apps"
  }
  data = {
    postgres-password = "123456"
    password          = "123456"
    database-user     = "person-db"
    database-name     = "person-db"
  }
}

resource "kubernetes_secret" "insurance-db-secret" {
  depends_on = [kubernetes_namespace.demo-apps]
  metadata {
    name      = "insurance-db"
    namespace = "demo-apps"
  }
  data = {
    postgres-password = "123456"
    password          = "123456"
    database-user     = "insurance-db"
    database-name     = "insurance-db"
  }
}

// (2)
resource "helm_release" "person-db" {
  depends_on = [kubernetes_namespace.demo-apps]
  chart            = "postgresql"
  name             = "person-db"
  namespace        = "demo-apps"
  repository       = "https://charts.bitnami.com/bitnami"

  values = [
    file("manifests/person-db-values.yaml")
  ]
}
resource "helm_release" "insurance-db" {
  depends_on = [kubernetes_namespace.demo-apps]
  chart            = "postgresql"
  name             = "insurance-db"
  namespace        = "demo-apps"
  repository       = "https://charts.bitnami.com/bitnami"

  values = [
    file("manifests/insurance-db-values.yaml")
  ]
}

// (3)
resource "kubernetes_deployment" "quarkus-insurance-app" {
  depends_on = [helm_release.insurance-db, time_sleep.wait_60_seconds_2]
  metadata {
    name      = "quarkus-insurance-app"
    namespace = "demo-apps"
    annotations = {
      "sidecar.istio.io/inject": "true"
    }
  }
  spec {
    selector {
      match_labels = {
        app = "quarkus-insurance-app"
        version = "v1"
      }
    }
    template {
      metadata {
        labels = {
          app = "quarkus-insurance-app"
          version = "v1"
        }
        annotations = {
          "sidecar.istio.io/inject": "true"
        }
      }
      spec {
        container {
          name = "quarkus-insurance-app"
          image = "piomin/quarkus-insurance-app:v1"
          port {
            container_port = 8080
          }
          env {
            name = "POSTGRES_USER"
            value_from {
              secret_key_ref {
                key = "database-user"
                name = "insurance-db"
              }
            }
          }
          env {
            name = "POSTGRES_PASSWORD"
            value_from {
              secret_key_ref {
                key = "password"
                name = "insurance-db"
              }
            }
          }
          env {
            name = "POSTGRES_DB"
            value_from {
              secret_key_ref {
                key = "database-name"
                name = "insurance-db"
              }
            }
          }
        }
      }
    }
  }
}

resource "kubernetes_service" "quarkus-insurance-app" {
  depends_on = [helm_release.insurance-db, time_sleep.wait_60_seconds_2]
  metadata {
    name = "quarkus-insurance-app"
    namespace = "demo-apps"
    labels = {
      app = "quarkus-insurance-app"
    }
  }
  spec {
    type = "ClusterIP"
    selector = {
      app = "quarkus-insurance-app"
    }
    port {
      port = 8080
      name = "http"
    }
  }
}

resource "kubernetes_deployment" "quarkus-person-app-v1" {
  depends_on = [helm_release.person-db, time_sleep.wait_60_seconds_2]
  metadata {
    name      = "quarkus-person-app-v1"
    namespace = "demo-apps"
    annotations = {
      "sidecar.istio.io/inject": "true"
    }
  }
  spec {
    selector {
      match_labels = {
        app = "quarkus-person-app"
        version = "v1"
      }
    }
    template {
      metadata {
        labels = {
          app = "quarkus-person-app"
          version = "v1"
        }
        annotations = {
          "sidecar.istio.io/inject": "true"
        }
      }
      spec {
        container {
          name = "quarkus-person-app"
          image = "piomin/quarkus-person-app:v1"
          port {
            container_port = 8080
          }
          env {
            name = "POSTGRES_USER"
            value_from {
              secret_key_ref {
                key = "database-user"
                name = "person-db"
              }
            }
          }
          env {
            name = "POSTGRES_PASSWORD"
            value_from {
              secret_key_ref {
                key = "password"
                name = "person-db"
              }
            }
          }
          env {
            name = "POSTGRES_DB"
            value_from {
              secret_key_ref {
                key = "database-name"
                name = "person-db"
              }
            }
          }
        }
      }
    }
  }
}

resource "kubernetes_deployment" "quarkus-person-app-v2" {
  depends_on = [helm_release.person-db, time_sleep.wait_60_seconds_2]
  metadata {
    name      = "quarkus-person-app-v2"
    namespace = "demo-apps"
    annotations = {
      "sidecar.istio.io/inject": "true"
    }
  }
  spec {
    selector {
      match_labels = {
        app = "quarkus-person-app"
        version = "v2"
      }
    }
    template {
      metadata {
        labels = {
          app = "quarkus-person-app"
          version = "v2"
        }
        annotations = {
          "sidecar.istio.io/inject": "true"
        }
      }
      spec {
        container {
          name = "quarkus-person-app"
          image = "piomin/quarkus-person-app:v2"
          port {
            container_port = 8080
          }
          env {
            name = "POSTGRES_USER"
            value_from {
              secret_key_ref {
                key = "database-user"
                name = "person-db"
              }
            }
          }
          env {
            name = "POSTGRES_PASSWORD"
            value_from {
              secret_key_ref {
                key = "password"
                name = "person-db"
              }
            }
          }
          env {
            name = "POSTGRES_DB"
            value_from {
              secret_key_ref {
                key = "database-name"
                name = "person-db"
              }
            }
          }
        }
      }
    }
  }
}

resource "kubernetes_service" "quarkus-person-app" {
  depends_on = [helm_release.person-db, time_sleep.wait_60_seconds_2]
  metadata {
    name = "quarkus-person-app"
    namespace = "demo-apps"
    labels = {
      app = "quarkus-person-app"
    }
  }
  spec {
    type = "ClusterIP"
    selector = {
      app = "quarkus-person-app"
    }
    port {
      port = 8080
      name = "http"
    }
  }
}
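Note that the Service selects Pods only by the app label, so it load-balances across both the v1 and v2 Deployments. With the service mesh in place, you can control that split explicitly. Here is a hedged sketch of how that could look in the same Terraform configuration — the kubernetes_manifest resources and the 90/10 weights below are my assumption for illustration, not part of the article's repository:

```hcl
# Hypothetical sketch: weighted routing between the v1 and v2 subsets
# using Istio CRDs managed through the kubernetes provider.
resource "kubernetes_manifest" "quarkus-person-app-dr" {
  manifest = {
    apiVersion = "networking.istio.io/v1beta1"
    kind       = "DestinationRule"
    metadata = {
      name      = "quarkus-person-app"
      namespace = "demo-apps"
    }
    spec = {
      host = "quarkus-person-app"
      subsets = [
        { name = "v1", labels = { version = "v1" } },
        { name = "v2", labels = { version = "v2" } }
      ]
    }
  }
}

resource "kubernetes_manifest" "quarkus-person-app-vs" {
  manifest = {
    apiVersion = "networking.istio.io/v1beta1"
    kind       = "VirtualService"
    metadata = {
      name      = "quarkus-person-app"
      namespace = "demo-apps"
    }
    spec = {
      hosts = ["quarkus-person-app"]
      http = [{
        route = [
          # Send 90% of the traffic to v1 and 10% to v2 (illustrative weights)
          { destination = { host = "quarkus-person-app", subset = "v1" }, weight = 90 },
          { destination = { host = "quarkus-person-app", subset = "v2" }, weight = 10 }
        ]
      }]
    }
  }
}
```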

Applying Terraform Scripts

Finally, we can apply the whole Terraform configuration described in this article. Here's the aro-with-servicemesh.sh script responsible for running the required Terraform commands. It is placed in the repository root directory. In the first step, we enter the aro directory and apply the configuration responsible for creating the OpenShift cluster. The domain name is generated automatically by Terraform, so we export it using the terraform output command. After that, we apply the configuration with the operators and the Istio setup. To make the whole process automatic, we pass the location of the kubeconfig file and the generated domain name as variables.

#!/bin/bash

# Step 1: create the ARO cluster
cd aro
terraform init
terraform apply -auto-approve
# Build the full apps domain from the generated cluster domain
domain="apps.$(terraform output -raw domain).eastus.aroapp.io"

# Step 2: install the operators and the Istio configuration on the cluster
cd ../servicemesh
terraform init
terraform apply -auto-approve -var kubeconfig=../aro/kubeconfig -var domain="$domain"
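The only non-obvious line is the domain assembly. Assuming that terraform output -raw domain returns the random cluster suffix (for example p2pvg, as in the cluster used later in this article), the resulting string is built like this:

```shell
# Illustrative only: "p2pvg" stands in for the random suffix returned by
# `terraform output -raw domain` for a real cluster.
domain_id="p2pvg"
domain="apps.${domain_id}.eastus.aroapp.io"
echo "$domain"   # prints apps.p2pvg.eastus.aroapp.io
```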

Let’s run the aro-with-service-mesh.sh script. Once you will do it you should have a similar output as visible below. In the beginning, Terraform creates several objects required by the ARO cluster like a virtual network or service principal. Once those resources are ready, it starts the main part – ARO installation.

Let’s switch to Azure Portal. As you see the installation is in progress. There are several other newly created resources. Of course, there is also the resource representing the OpenShift cluster.

[Image: openshift-terraform-azure-portal]

Now, arm yourself with patience. You can easily go get a coffee…

You can verify the progress, for example, by displaying the list of virtual machines. If you see all 3 master and 3 worker VMs running, it means we are slowly approaching the end.

[Image: openshift-terraform-virtual-machines]

It may take even more than 40 minutes. That's why I overrode the default timeout for the azapi resource to 75 minutes. Once the cluster is ready, Terraform connects to the OpenShift instance to install the operators there. In the meantime, we can switch to the Azure Portal and see the details of the ARO cluster. Among other things, it displays the OpenShift Console URL. Let's log in to the console.
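The timeout override mentioned above can be expressed with the standard timeouts block supported by the azapi provider. The snippet below is a sketch — the resource name is illustrative and the remaining attributes are omitted; only the timeouts block is the point:

```hcl
# Sketch: extend the create timeout of the ARO azapi resource to 75 minutes,
# since cluster provisioning can exceed the provider's default.
resource "azapi_resource" "aro_cluster" {
  type = "Microsoft.RedHatOpenShift/openShiftClusters@2023-04-01"
  # ... name, parent_id, and body omitted for brevity ...

  timeouts {
    create = "75m"
  }
}
```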

In order to obtain the admin password, we need to run the following command (with my cluster and resource group names):

$ az aro list-credentials -n aro-cluster-p2pvg -g openenv-p2pvg
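If you also want to log in from the command line, the same credentials can be fed straight into oc. The commands below are a hedged sketch that requires the az and oc binaries and an authenticated Azure session; the cluster and resource group names match the example above:

```shell
# Fetch the kubeadmin password and the API server URL, then log in with oc.
KUBEADMIN_PASSWD=$(az aro list-credentials -n aro-cluster-p2pvg -g openenv-p2pvg \
  --query kubeadminPassword -o tsv)
API_URL=$(az aro show -n aro-cluster-p2pvg -g openenv-p2pvg \
  --query apiserverProfile.url -o tsv)
oc login "$API_URL" -u kubeadmin -p "$KUBEADMIN_PASSWD"
```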

Here’s our OpenShift console:

Let’s back to the installation process. The first part has been just finished. Now, the script executes terraform commands in the servicemesh directory. As you see, it installed our operators.

Let’s check out how it looks in the OpenShift Console. Go to the Operators -> Installed Operators menu item.

[Image: openshift-terraform-operators]

Of course, the installation continues in the background. After installing the operators, Terraform creates the Istio control plane using a CRD object.

Let’s switch to the OpenShift Console once again. Go to the istio project. In the list of installed operators find Red Hat OpenShift Service Mesh and then go to the Istio Service Mesh Control Plane tab. You should see the basic object. As you see all 9 required components, including Istio, Kiali, and Jaeger instances, are successfully installed.

[Image: openshift-terraform-istio]

And finally, the last part of our exercise. The installation is finished. Terraform has applied the Deployments with our Postgres databases and the Quarkus apps.

To see the list of apps, we can go to the Topology view in the Developer perspective. All the pods are running. As you can see, a link to the Kiali console is also available there. Let's click it.

[Image: openshift-terraform-apps]

In the Kiali dashboard, we can see a detailed view of our service mesh. For example, there is a diagram visualizing the traffic between the services.

Final Thoughts

If you use Terraform to manage your cloud infrastructure, this article is for you. Have you ever doubted whether it is possible to easily create and configure an OpenShift cluster with Terraform? This article should dispel those doubts. You can also easily create your own ARO cluster just by cloning this repository and running a single script against your cloud account. Enjoy 🙂

The post Manage OpenShift with Terraform appeared first on Piotr's TechBlog.
