Istio Spring Boot Library Released — Piotr's TechBlog (https://piotrminkowski.com/2026/01/06/istio-spring-boot-library-released/), Tue, 06 Jan 2026

This article explains how to use my Spring Boot Istio library to generate and create Istio resources on a Kubernetes cluster during application startup. The library is primarily intended for development purposes. It aims to make it easier for developers to quickly and easily launch their applications within the Istio mesh. Of course, you can also use this library in production. However, its purpose is to generate resources from annotations in Java application code automatically.

You can also find many other articles on my blog about Istio. For example, this article is about Quarkus and tracing with Istio.

Source Code

Feel free to use my source code if you’d like to try it out yourself. To do that, you must clone my sample GitHub repository. Two sample applications for this exercise are available in the spring-boot-istio directory.

Prerequisites

We will start with the most straightforward Istio installation on a local Kubernetes cluster. This could be Minikube, which you can run with the following command. You can set slightly lower resource limits than I did.

minikube start --memory='8gb' --cpus='6'
ShellSession

To complete the exercise below, you need to install istioctl in addition to kubectl. Here you will find the available distributions for the latest versions of kubectl and istioctl. I install them on my laptop using Homebrew.

$ brew install kubectl
$ brew install istioctl
ShellSession

To install Istio with default parameters, run the following command:

istioctl install
ShellSession

After a moment, Istio should be running in the istio-system namespace.

It is also worth installing Kiali to verify the Istio resources we have created. Kiali is an observability and management tool for Istio that provides a web-based dashboard for service mesh monitoring. It visualizes service-to-service traffic, validates Istio configuration, and integrates with tools like Prometheus, Grafana, and Jaeger.

kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.28/samples/addons/kiali.yaml
ShellSession

Once Kiali is successfully installed on Kubernetes, we can expose its web dashboard locally with the following istioctl command:

istioctl dashboard kiali
ShellSession

To test access to the Istio mesh from outside the cluster, you need to expose the ingress gateway. To do this, run the minikube tunnel command.

Use Spring Boot Istio Library

To test our library’s functionality, we will create a simple Spring Boot application that exposes a single REST endpoint.

@RestController
@RequestMapping("/callme")
public class CallmeController {

    private static final Logger LOGGER = LoggerFactory
        .getLogger(CallmeController.class);

    @Autowired
    Optional<BuildProperties> buildProperties;
    @Value("${VERSION}")
    private String version;

    @GetMapping("/ping")
    public String ping() {
        LOGGER.info("Ping: name={}, version={}", buildProperties.isPresent() ?
            buildProperties.get().getName() : "callme-service", version);
        return "I'm callme-service " + version;
    }
    
}
Java

Then, in addition to the standard Spring Web starter, add the istio-spring-boot-starter dependency.

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
  <groupId>com.github.piomin</groupId>
  <artifactId>istio-spring-boot-starter</artifactId>
  <version>1.2.1</version>
</dependency>
XML

Finally, we must add the @EnableIstio annotation to our application’s main class. We can also enable Istio Gateway to expose the REST endpoint outside the cluster. An Istio Gateway is a component that controls how external traffic enters or leaves a service mesh.

@SpringBootApplication
@EnableIstio(enableGateway = true)
public class CallmeApplication {

	public static void main(String[] args) {
		SpringApplication.run(CallmeApplication.class, args);
	}
	
}
Java
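For context, the Gateway created with `enableGateway = true` would follow the standard Istio Gateway schema. Below is a hypothetical sketch of such a resource, based on the resource names used later in this article; the selector, port, and host values are assumptions, not output copied from the library:

```yaml
# Hypothetical sketch of a Gateway like the one the library creates.
# The selector and port are Istio defaults assumed here, not library output.
apiVersion: networking.istio.io/v1
kind: Gateway
metadata:
  name: callme-service-with-starter
spec:
  selector:
    istio: ingressgateway   # bind to the default Istio ingress gateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - callme-service-with-starter.ext   # deployment name + default .ext suffix
```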

Let’s deploy our application on the Kubernetes cluster. To do this, we must first create a role with the necessary permissions to manage Istio resources in the cluster. The role must be assigned to the ServiceAccount used by the application.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: callme-service-with-starter
rules:
  - apiGroups: ["networking.istio.io"]
    resources: ["virtualservices", "destinationrules", "gateways"]
    verbs: ["create", "get", "list", "watch", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: callme-service-with-starter
subjects:
  - kind: ServiceAccount
    name: callme-service-with-starter
    namespace: spring
roleRef:
  kind: Role
  name: callme-service-with-starter
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: callme-service-with-starter
YAML

Here are the Deployment and Service resources.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: callme-service-with-starter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: callme-service-with-starter
  template:
    metadata:
      labels:
        app: callme-service-with-starter
    spec:
      containers:
        - name: callme-service-with-starter
          image: piomin/callme-service-with-starter
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          env:
            - name: VERSION
              value: "v1"
      serviceAccountName: callme-service-with-starter
---
apiVersion: v1
kind: Service
metadata:
  name: callme-service-with-starter
  labels:
    app: callme-service-with-starter
spec:
  type: ClusterIP
  ports:
  - port: 8080
    name: http
  selector:
    app: callme-service-with-starter
YAML

The application repository is configured to run with Skaffold. Of course, you can also apply the YAML manifests to the cluster with kubectl apply. To do this, simply navigate to the callme-service-with-starter/k8s directory and apply the deployment.yaml file. As part of the exercise, we will run our applications in the spring namespace.

skaffold dev -n spring
ShellSession

The sample Spring Boot application creates two Istio objects at startup: a VirtualService and a Gateway. We can verify them in the Kiali dashboard.

(Screenshot: the Kiali dashboard showing the generated VirtualService and Gateway)

The default host name generated for the gateway includes the deployment name and the .ext suffix. We can change the suffix name using the domain field in the @EnableIstio annotation. Assuming you run the minikube tunnel command, you can call the service using the Host header with the hostname in the following way:

$ curl http://localhost/callme/ping -H "Host:callme-service-with-starter.ext"
I'm callme-service v1
ShellSession
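If the domain field behaves as described above, overriding the default .ext suffix might look like the following sketch (the "my-domain" value is hypothetical, chosen only for illustration):

```java
// Hypothetical sketch: "my-domain" is an assumed value, not from the article.
@SpringBootApplication
@EnableIstio(enableGateway = true, domain = "my-domain")
public class CallmeApplication {

	public static void main(String[] args) {
		SpringApplication.run(CallmeApplication.class, args);
	}

}
```

With such a setting, the gateway host would presumably become callme-service-with-starter.my-domain instead of callme-service-with-starter.ext.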

Additional Capabilities with Spring Boot Istio

The library’s behavior can be customized by modifying the @EnableIstio annotation. For example, you can enable the fault injection mechanism using the fault field. Both abort and delay are possible. You can make this change without redeploying the app. The library updates the existing VirtualService.

@SpringBootApplication
@EnableIstio(enableGateway = true, fault = @Fault(percentage = 50))
public class CallmeApplication {

	public static void main(String[] args) {
		SpringApplication.run(CallmeApplication.class, args);
	}
	
}
Java

Now, you can call the same endpoint through the gateway. There is a 50% chance that you will receive this response.

(Screenshot: curl output showing the fault-injected error response)
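Under the hood, fault injection of this kind corresponds to a fault section inside the VirtualService's HTTP route. Here's a sketch of what such a generated route might look like; the abort HTTP status code is an assumption, since the library's actual default is not shown in this article:

```yaml
# Sketch of an Istio fault-injection route, not copied library output.
http:
  - fault:
      abort:
        percentage:
          value: 50        # matches @Fault(percentage = 50)
        httpStatus: 500    # assumed status code; the library's default may differ
    route:
      - destination:
          host: callme-service-with-starter
```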

Now let’s analyze a slightly more complex scenario. Let’s assume that we are running two different versions of the same application on Kubernetes and use Istio to manage the versioning. Traffic is forwarded to a particular version based on the X-Version header in the incoming request. If the header value is v1, the request is sent to the application Pod labeled version=v1; the same applies to v2. Here’s the annotation for the v1 application’s main class.

@SpringBootApplication
@EnableIstio(enableGateway = true, version = "v1",
	matches = { 
	  @Match(type = MatchType.HEADERS, key = "X-Version", value = "v1") })
public class CallmeApplication {

	public static void main(String[] args) {
		SpringApplication.run(CallmeApplication.class, args);
	}
	
}
Java

The Deployment manifest for the callme-service-with-starter-v1 should define two labels: app and version.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: callme-service-with-starter-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: callme-service-with-starter
      version: v1
  template:
    metadata:
      labels:
        app: callme-service-with-starter
        version: v1
    spec:
      containers:
        - name: callme-service-with-starter
          image: piomin/callme-service-with-starter
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          env:
            - name: VERSION
              value: "v1"
      serviceAccountName: callme-service-with-starter
YAML

Unlike the skaffold dev command, the skaffold run command simply launches the application on the cluster and terminates. Let’s first release version v1, and then move on to version v2.

skaffold run -n spring
ShellSession

Then, we can deploy the v2 version of our sample Spring Boot application. In this exercise, it is just an “artificial” version, since we deploy the same source code, but with different labels and environment variables injected in the Deployment manifest.

@SpringBootApplication
@EnableIstio(enableGateway = true, version = "v2",
	matches = { 
	  @Match(type = MatchType.HEADERS, key = "X-Version", value = "v2") })
public class CallmeApplication {

	public static void main(String[] args) {
		SpringApplication.run(CallmeApplication.class, args);
	}
	
}
Java

Here’s the callme-service-with-starter-v2 Deployment manifest.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: callme-service-with-starter-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: callme-service-with-starter
      version: v2
  template:
    metadata:
      labels:
        app: callme-service-with-starter
        version: v2
    spec:
      containers:
        - name: callme-service-with-starter
          image: piomin/callme-service-with-starter
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          env:
            - name: VERSION
              value: "v2"
      serviceAccountName: callme-service-with-starter
YAML

The Service, on the other hand, remains unchanged. It still refers to Pods labeled with app=callme-service-with-starter. However, this time it covers both application versions, v1 and v2.

apiVersion: v1
kind: Service
metadata:
  name: callme-service-with-starter
  labels:
    app: callme-service-with-starter
spec:
  type: ClusterIP
  ports:
  - port: 8080
    name: http
  selector:
    app: callme-service-with-starter
YAML

Version v2 should be run the same way as before, using the skaffold run command. Three Istio objects are generated during application startup: Gateway, VirtualService, and DestinationRule.

(Screenshot: Kiali graph showing traffic routed to the two application versions)

The generated DestinationRule contains two subsets, one for each of the v1 and v2 versions.

(Screenshot: the generated DestinationRule subsets in Kiali)
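Based on the version labels defined in the two Deployment manifests, the generated DestinationRule would look roughly like this (a sketch reconstructed from the article's label scheme; the metadata name is assumed, not copied output):

```yaml
# Sketch of the generated DestinationRule; metadata.name is assumed.
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: callme-service-with-starter
spec:
  host: callme-service-with-starter
  subsets:
    - name: v1
      labels:
        version: v1   # selects Pods from the v1 Deployment
    - name: v2
      labels:
        version: v2   # selects Pods from the v2 Deployment
```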

The automatically generated VirtualService looks as follows.

kind: VirtualService
apiVersion: networking.istio.io/v1
metadata:
  name: callme-service-with-starter-route
spec:
  hosts:
  - callme-service-with-starter
  - callme-service-with-starter.ext
  gateways:
  - callme-service-with-starter
  http:
  - match:
    - headers:
        X-Version:
          prefix: v1
    route:
    - destination:
        host: callme-service-with-starter
        subset: v1
    timeout: 6s
    retries:
      attempts: 3
      perTryTimeout: 2s
      retryOn: 5xx
  - match:
    - headers:
        X-Version:
          prefix: v2
    route:
    - destination:
        host: callme-service-with-starter
        subset: v2
      weight: 100
    timeout: 6s
    retries:
      attempts: 3
      perTryTimeout: 2s
      retryOn: 5xx
YAML

To test the versioning mechanism generated using the Spring Boot Istio library, set the X-Version header for each call.

$ curl http://localhost/callme/ping -H "Host:callme-service-with-starter.ext" -H "X-Version:v1"
I'm callme-service v1

$ curl http://localhost/callme/ping -H "Host:callme-service-with-starter.ext" -H "X-Version:v2"
I'm callme-service v2
ShellSession

Conclusion

I am still working on this library, and new features will be added in the near future. I hope it will be helpful for those of you who want to get started with Istio without getting into the details of its configuration.

Manage OpenShift with Terraform — Piotr's TechBlog (https://piotrminkowski.com/2023/09/29/manage-openshift-with-terraform/), Fri, 29 Sep 2023
This article will teach you how to create and manage OpenShift clusters with Terraform. For the purpose of this exercise, we will run OpenShift on Azure using the managed service called ARO (Azure Red Hat OpenShift). Cluster creation is the first part of the exercise. After that, we are going to install several operators on OpenShift and some apps that use features provided by those operators. Of course, our main goal is to do all the required steps in the single Terraform command.

Let me clarify some things before we begin. In this article, I’m not promoting or recommending Terraform as the best tool for managing OpenShift or Kubernetes clusters at scale. Usually, I prefer the GitOps approach for that. If you are interested in how to leverage tools like ACM (Advanced Cluster Management for Kubernetes) and Argo CD for managing multiple clusters with the GitOps approach, read that article, which describes the idea of continuous cluster management. From my perspective, Terraform fits better for one-time actions, such as creating and configuring OpenShift for a demo or PoC and then removing it. We can also use Terraform to install Argo CD and then delegate all the next steps there.

Anyway, let’s focus on our scenario. We will make extensive use of two Terraform providers: Azure and Kubernetes. So it is worth at least taking a look at their documentation to familiarize yourself with the basics.

Prerequisites

Of course, you don’t have to perform this exercise on Azure with ARO. If you already have OpenShift running, you can skip the part related to cluster creation and just run the Terraform script responsible for installing the operators and apps. For the whole exercise, you need to install:

  1. Azure CLI (instructions) – once installed, log in to your Azure account and create a subscription. To check that everything works, run the following command: az account show
  2. Terraform CLI (instructions) – once installed, you can verify it with the following command: terraform version

Source Code

If you would like to try it by yourself, you can always take a look at my source code. In order to do that, you need to clone my GitHub repository. Then you should follow my instructions 🙂

Terraform Providers

The Terraform scripts for cluster creation are available inside the aro directory, while the cluster configuration script is inside the servicemesh directory.

First, let’s take a look at the list of Terraform providers used in our exercise. In general, we need providers to interact with Azure and with OpenShift through the Kubernetes API. In most cases, the official HashiCorp Azure provider for Azure Resource Manager will be enough (1). However, in a few cases we will have to interact directly with the Azure REST API (for example, to create an OpenShift cluster object) through the azapi provider (2). The HashiCorp Random provider will be used to generate a random domain name for our cluster (3). The rest of the providers allow us to interact with OpenShift. Once again, the official HashiCorp Kubernetes provider suffices in most cases (4). We will also use the kubectl provider (5) and Helm for installing the Postgres database (6) used by the sample apps.

terraform {
  required_version = ">= 1.0"
  required_providers {
    // (1)
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">=3.3.0"
    }
    // (2)
    azapi = {
      source  = "Azure/azapi"
      version = ">=1.0.0"
    }
    // (3)
    random = {
      source = "hashicorp/random"
      version = "3.5.1"
    }
    local = {
      source = "hashicorp/local"
      version = "2.4.0"
    }
  }
}

provider "azurerm" {
  features {}
}

provider "azapi" {
}

provider "random" {}
provider "local" {}

Here’s the list of providers used in the Red Hat Service Mesh installation:

terraform {
  required_version = ">= 1.0"
  required_providers {
    // (4)
    kubernetes = {
      source = "hashicorp/kubernetes"
      version = "2.23.0"
    }
    // (5)
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = ">= 1.13.0"
    }
    // (6)
    helm = {
      source = "hashicorp/helm"
      version = "2.11.0"
    }
  }
}

provider "kubernetes" {
  config_path = "aro/kubeconfig"
  config_context = var.cluster-context
}

provider "kubectl" {
  config_path = "aro/kubeconfig"
  config_context = var.cluster-context
}

provider "helm" {
  kubernetes {
    config_path = "aro/kubeconfig"
    config_context = var.cluster-context
  }
}

In order to install providers, we need to run the following command (you don’t have to do it now):

$ terraform init

Create Azure Red Hat OpenShift Cluster with Terraform

Unfortunately, there is no dedicated, official Terraform provider for creating OpenShift clusters on Azure with ARO. There are some discussions about such a feature (you can find them here), but no final result so far. Maybe it will change in the future. However, creating an ARO cluster is not that complicated, since we can use the existing providers listed in the previous section. You can find an interesting guide in the Microsoft docs here; it was also the starting point for my work. I improved several things there, for example avoiding the az CLI in the scripts so that the full configuration stays in Terraform HCL.

Let’s analyze our Terraform manifest step by step. Here’s a list of the most important elements we need to place in the HCL file:

  1. We have to read some configuration data from the Azure client
  2. I have an existing resource group with the openenv prefix, but you can use any name you want. That’s our main resource group
  3. ARO requires a resource group separate from the main resource group
  4. We need to create a virtual network for OpenShift, with a dedicated subnet for the master nodes and another one for the worker nodes. All the parameters visible there are required. You can change the IP address ranges as long as the master and worker subnets don’t overlap
  5. ARO requires a dedicated service principal to create a cluster. Let’s create the Azure application, and then the service principal with a password. The password is auto-generated by Azure.
  6. The newly created service principal requires some privileges. Let’s assign it the “User Access Administrator” and network “Contributor” roles. Then, we need to find the service principal created by Azure under the “Azure Red Hat OpenShift RP” name and also assign it the network “Contributor” role.
  7. All the required objects have now been created. There is no dedicated resource type for the ARO cluster, so in order to define the cluster resource we need to leverage the azapi provider.
  8. The definition of the OpenShift cluster is available inside the body section. All the fields you see there are required to successfully create the cluster.
// (1)
data "azurerm_client_config" "current" {}
data "azuread_client_config" "current" {}

// (2)
data "azurerm_resource_group" "my_group" {
  name = "openenv-${var.guid}"
}

resource "random_string" "random" {
  length           = 10
  numeric          = false
  special          = false
  upper            = false
}

// (3)
locals {
  resource_group_id = "/subscriptions/${data.azurerm_client_config.current.subscription_id}/resourceGroups/aro-${random_string.random.result}-${data.azurerm_resource_group.my_group.location}"
  domain            = random_string.random.result
}

// (4)
resource "azurerm_virtual_network" "virtual_network" {
  name                = "aro-vnet-${var.guid}"
  address_space       = ["10.0.0.0/22"]
  location            = data.azurerm_resource_group.my_group.location
  resource_group_name = data.azurerm_resource_group.my_group.name
}
resource "azurerm_subnet" "master_subnet" {
  name                 = "master_subnet"
  resource_group_name  = data.azurerm_resource_group.my_group.name
  virtual_network_name = azurerm_virtual_network.virtual_network.name
  address_prefixes     = ["10.0.0.0/23"]
  service_endpoints    = ["Microsoft.ContainerRegistry"]
  private_link_service_network_policies_enabled  = false
  depends_on = [azurerm_virtual_network.virtual_network]
}
resource "azurerm_subnet" "worker_subnet" {
  name                 = "worker_subnet"
  resource_group_name  = data.azurerm_resource_group.my_group.name
  virtual_network_name = azurerm_virtual_network.virtual_network.name
  address_prefixes     = ["10.0.2.0/23"]
  service_endpoints    = ["Microsoft.ContainerRegistry"]
  depends_on = [azurerm_virtual_network.virtual_network]
}

// (5)
resource "azuread_application" "aro_app" {
  display_name = "aro_app"
  owners       = [data.azuread_client_config.current.object_id]
}
resource "azuread_service_principal" "aro_app" {
  application_id               = azuread_application.aro_app.application_id
  app_role_assignment_required = false
  owners                       = [data.azuread_client_config.current.object_id]
}
resource "azuread_service_principal_password" "aro_app" {
  service_principal_id = azuread_service_principal.aro_app.object_id
}

// (6)
resource "azurerm_role_assignment" "aro_cluster_service_principal_uaa" {
  scope                = data.azurerm_resource_group.my_group.id
  role_definition_name = "User Access Administrator"
  principal_id         = azuread_service_principal.aro_app.id
  skip_service_principal_aad_check = true
}
resource "azurerm_role_assignment" "aro_cluster_service_principal_network_contributor_pre" {
  scope                = data.azurerm_resource_group.my_group.id
  role_definition_name = "Contributor"
  principal_id         = azuread_service_principal.aro_app.id
  skip_service_principal_aad_check = true
}
resource "azurerm_role_assignment" "aro_cluster_service_principal_network_contributor" {
  scope                = azurerm_virtual_network.virtual_network.id
  role_definition_name = "Contributor"
  principal_id         = azuread_service_principal.aro_app.id
  skip_service_principal_aad_check = true
}
data "azuread_service_principal" "aro_app" {
  display_name = "Azure Red Hat OpenShift RP"
  depends_on = [azuread_service_principal.aro_app]
}
resource "azurerm_role_assignment" "aro_resource_provider_service_principal_network_contributor" {
  scope                = azurerm_virtual_network.virtual_network.id
  role_definition_name = "Contributor"
  principal_id         = data.azuread_service_principal.aro_app.id
  skip_service_principal_aad_check = true
}

// (7)
resource "azapi_resource" "aro_cluster" {
  name      = "aro-cluster-${var.guid}"
  parent_id = data.azurerm_resource_group.my_group.id
  type      = "Microsoft.RedHatOpenShift/openShiftClusters@2023-07-01-preview"
  location  = data.azurerm_resource_group.my_group.location
  timeouts {
    create = "75m"
  }
  // (8)
  body = jsonencode({
    properties = {
      clusterProfile = {
        resourceGroupId      = local.resource_group_id
        pullSecret           = file("~/Downloads/pull-secret-latest.txt")
        domain               = local.domain
        fipsValidatedModules = "Disabled"
        version              = "4.12.25"
      }
      networkProfile = {
        podCidr              = "10.128.0.0/14"
        serviceCidr          = "172.30.0.0/16"
      }
      servicePrincipalProfile = {
        clientId             = azuread_service_principal.aro_app.application_id
        clientSecret         = azuread_service_principal_password.aro_app.value
      }
      masterProfile = {
        vmSize               = "Standard_D8s_v3"
        subnetId             = azurerm_subnet.master_subnet.id
        encryptionAtHost     = "Disabled"
      }
      workerProfiles = [
        {
          name               = "worker"
          vmSize             = "Standard_D8s_v3"
          diskSizeGB         = 128
          subnetId           = azurerm_subnet.worker_subnet.id
          count              = 3
          encryptionAtHost   = "Disabled"
        }
      ]
      apiserverProfile = {
        visibility           = "Public"
      }
      ingressProfiles = [
        {
          name               = "default"
          visibility         = "Public"
        }
      ]
    }
  })
  depends_on = [
    azurerm_subnet.worker_subnet,
    azurerm_subnet.master_subnet,
    azuread_service_principal_password.aro_app,
    azurerm_role_assignment.aro_resource_provider_service_principal_network_contributor
  ]
}

output "domain" {
  value = local.domain
}

Save Kubeconfig

Once we have successfully created the OpenShift cluster, we need to obtain and save the kubeconfig file. It will allow Terraform to interact with the cluster through the master API. To get the kubeconfig content, we need to call the Azure listAdminCredentials REST endpoint, which is equivalent to running the az aro get-admin-kubeconfig CLI command. It returns JSON with base64-encoded content. After decoding the JSON and Base64, we save the content to a kubeconfig file in the current directory.

resource "azapi_resource_action" "test" {
  type        = "Microsoft.RedHatOpenShift/openShiftClusters@2023-07-01-preview"
  resource_id = "/subscriptions/${data.azurerm_client_config.current.subscription_id}/resourceGroups/openenv-${var.guid}/providers/Microsoft.RedHatOpenShift/openShiftClusters/aro-cluster-${var.guid}"
  action      = "listAdminCredentials"
  method      = "POST"
  response_export_values = ["*"]
}

output "kubeconfig" {
  value = base64decode(jsondecode(azapi_resource_action.test.output).kubeconfig)
}

resource "local_file" "kubeconfig" {
  content  =  base64decode(jsondecode(azapi_resource_action.test.output).kubeconfig)
  filename = "kubeconfig"
  depends_on = [azapi_resource_action.test]
}

Install OpenShift Operators with Terraform

Finally, we can interact with the existing OpenShift cluster via the kubeconfig file. In the first step, we will deploy some operators. In OpenShift, operators are the preferred way of installing more advanced apps (for example, those consisting of several Deployments). Red Hat provides a set of supported operators that extend OpenShift’s functionality. These can include, for example, a service mesh, a clustered database, or a message broker.

Let’s imagine we want to install a service mesh on OpenShift. There are dedicated operators for that. The OpenShift Service Mesh operator is built on top of the open-source Istio project. We will also install the OpenShift Distributed Tracing (Jaeger) and Kiali operators. To do that, we need to define a Subscription CRD object. Also, if we install an operator in a namespace other than openshift-operators, we have to create an OperatorGroup CRD object. Here’s the Terraform HCL script that installs our operators.

// (1)
resource "kubernetes_namespace" "openshift-distributed-tracing" {
  metadata {
    name = "openshift-distributed-tracing"
  }
}
resource "kubernetes_manifest" "tracing-group" {
  manifest = {
    "apiVersion" = "operators.coreos.com/v1"
    "kind"       = "OperatorGroup"
    "metadata"   = {
      "name"      = "openshift-distributed-tracing"
      "namespace" = "openshift-distributed-tracing"
    }
    "spec" = {
      "upgradeStrategy" = "Default"
    }
  }
}
resource "kubernetes_manifest" "tracing" {
  manifest = {
    "apiVersion" = "operators.coreos.com/v1alpha1"
    "kind"       = "Subscription"
    "metadata" = {
      "name"      = "jaeger-product"
      "namespace" = "openshift-distributed-tracing"
    }
    "spec" = {
      "channel"             = "stable"
      "installPlanApproval" = "Automatic"
      "name"                = "jaeger-product"
      "source"              = "redhat-operators"
      "sourceNamespace"     = "openshift-marketplace"
    }
  }
}

// (2)
resource "kubernetes_manifest" "kiali" {
  manifest = {
    "apiVersion" = "operators.coreos.com/v1alpha1"
    "kind"       = "Subscription"
    "metadata" = {
      "name"      = "kiali-ossm"
      "namespace" = "openshift-operators"
    }
    "spec" = {
      "channel"             = "stable"
      "installPlanApproval" = "Automatic"
      "name"                = "kiali-ossm"
      "source"              = "redhat-operators"
      "sourceNamespace"     = "openshift-marketplace"
    }
  }
}

// (3)
resource "kubernetes_manifest" "ossm" {
  manifest = {
    "apiVersion" = "operators.coreos.com/v1alpha1"
    "kind"       = "Subscription"
    "metadata"   = {
      "name"      = "servicemeshoperator"
      "namespace" = "openshift-operators"
    }
    "spec" = {
      "channel"             = "stable"
      "installPlanApproval" = "Automatic"
      "name"                = "servicemeshoperator"
      "source"              = "redhat-operators"
      "sourceNamespace"     = "openshift-marketplace"
    }
  }
}

// (4)
resource "kubernetes_manifest" "ossmconsole" {
  manifest = {
    "apiVersion" = "operators.coreos.com/v1alpha1"
    "kind"       = "Subscription"
    "metadata"   = {
      "name"      = "ossmconsole"
      "namespace" = "openshift-operators"
    }
    "spec" = {
      "channel"             = "candidate"
      "installPlanApproval" = "Automatic"
      "name"                = "ossmconsole"
      "source"              = "community-operators"
      "sourceNamespace"     = "openshift-marketplace"
    }
  }
}

After installing the operators, we may proceed to the service mesh configuration (1). We need to use the CRD objects installed by the operators. The Kubernetes Terraform provider isn’t a perfect choice for that, since it verifies the existence of an object’s type before applying the whole script. Therefore, we will switch to the kubectl provider, which simply applies the object without any initial verification. We need to create an Istio control plane using the ServiceMeshControlPlane object (2). As you can see, it also enables distributed tracing with Jaeger and a dashboard with Kiali. Once the control plane is ready, we may proceed to the next steps. We will create all the objects responsible for the Istio configuration, including VirtualService, DestinationRule, and Gateway (3).

resource "kubernetes_namespace" "istio" {
  metadata {
    name = "istio"
  }
}

// (1)
resource "time_sleep" "wait_120_seconds" {
  depends_on = [kubernetes_manifest.ossm]

  create_duration = "120s"
}

// (2)
resource "kubectl_manifest" "basic" {
  depends_on = [time_sleep.wait_120_seconds, kubernetes_namespace.istio]
  yaml_body = <<YAML
kind: ServiceMeshControlPlane
apiVersion: maistra.io/v2
metadata:
  name: basic
  namespace: istio
spec:
  version: v2.4
  tracing:
    type: Jaeger
    sampling: 10000
  policy:
    type: Istiod
  telemetry:
    type: Istiod
  addons:
    jaeger:
      install:
        storage:
          type: Memory
    prometheus:
      enabled: true
    kiali:
      enabled: true
    grafana:
      enabled: true
YAML
}

resource "kubectl_manifest" "console" {
  depends_on = [time_sleep.wait_120_seconds, kubernetes_namespace.istio]
  yaml_body = <<YAML
kind: OSSMConsole
apiVersion: kiali.io/v1alpha1
metadata:
  name: ossmconsole
  namespace: istio
spec:
  kiali:
    serviceName: ''
    serviceNamespace: ''
    servicePort: 0
    url: ''
YAML
}

resource "time_sleep" "wait_60_seconds_2" {
  depends_on = [kubectl_manifest.basic]

  create_duration = "60s"
}

// (3)
resource "kubectl_manifest" "access" {
  depends_on = [time_sleep.wait_120_seconds, kubernetes_namespace.istio, kubernetes_namespace.demo-apps]
  yaml_body = <<YAML
apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll
metadata:
  name: default
  namespace: istio
spec:
  members:
    - demo-apps
YAML
}

resource "kubectl_manifest" "gateway" {
  depends_on = [time_sleep.wait_60_seconds_2, kubernetes_namespace.demo-apps]
  yaml_body = <<YAML
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: microservices-gateway
  namespace: demo-apps
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - quarkus-insurance-app.apps.${var.domain}
        - quarkus-person-app.apps.${var.domain}
YAML
}

resource "kubectl_manifest" "quarkus-insurance-app-vs" {
  depends_on = [time_sleep.wait_60_seconds_2, kubernetes_namespace.demo-apps]
  yaml_body = <<YAML
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: quarkus-insurance-app-vs
  namespace: demo-apps
spec:
  hosts:
    - quarkus-insurance-app.apps.${var.domain}
  gateways:
    - microservices-gateway
  http:
    - match:
        - uri:
            prefix: "/insurance"
      rewrite:
        uri: " "
      route:
        - destination:
            host: quarkus-insurance-app
          weight: 100
YAML
}

resource "kubectl_manifest" "quarkus-person-app-dr" {
  depends_on = [time_sleep.wait_60_seconds_2, kubernetes_namespace.demo-apps]
  yaml_body  = <<YAML
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: quarkus-person-app-dr
  namespace: demo-apps
spec:
  host: quarkus-person-app
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
YAML
}

resource "kubectl_manifest" "quarkus-person-app-vs-via-gw" {
  depends_on = [time_sleep.wait_60_seconds_2, kubernetes_namespace.demo-apps]
  yaml_body  = <<YAML
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: quarkus-person-app-vs-via-gw
  namespace: demo-apps
spec:
  hosts:
    - quarkus-person-app.apps.${var.domain}
  gateways:
    - microservices-gateway
  http:
    - match:
      - uri:
          prefix: "/person"
      rewrite:
        uri: " "
      route:
        - destination:
            host: quarkus-person-app
            subset: v1
          weight: 100
        - destination:
            host: quarkus-person-app
            subset: v2
          weight: 0
YAML
}

resource "kubectl_manifest" "quarkus-person-app-vs" {
  depends_on = [time_sleep.wait_60_seconds_2, kubernetes_namespace.demo-apps]
  yaml_body  = <<YAML
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: quarkus-person-app-vs
  namespace: demo-apps
spec:
  hosts:
    - quarkus-person-app
  http:
    - route:
        - destination:
            host: quarkus-person-app
            subset: v1
          weight: 100
        - destination:
            host: quarkus-person-app
            subset: v2
          weight: 0
YAML
}
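The gateway-bound VirtualServices above match the "/insurance" and "/person" URI prefixes and rewrite them away before the request reaches the app. The effect of that prefix stripping can be sketched in plain shell (the sample path is made up; this only illustrates the idea, not Envoy's exact rewrite semantics):

```shell
# A hypothetical incoming path on the gateway
path="/insurance/123"

# Istio matches the "/insurance" prefix and rewrites it away,
# so the app receives only the remainder of the path:
rewritten="${path#/insurance}"
echo "$rewritten"   # → /123
```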

Finally, we will run our sample Quarkus apps, which communicate through the Istio mesh and connect to Postgres databases. The script is quite large. All the apps run in the demo-apps namespace (1). They connect to Postgres databases installed with the Terraform Helm provider from the Bitnami chart (2). Finally, we create the Deployments for two apps: person-service and insurance-service (3), with two versions per microservice. Don't focus on the features of the apps; they are here just to show the subsequent layers of the installation process: we start with the operators and CRDs, then move on to the Istio configuration, and finally install our custom apps.

// (1)
resource "kubernetes_namespace" "demo-apps" {
  metadata {
    name = "demo-apps"
  }
}

resource "kubernetes_secret" "person-db-secret" {
  depends_on = [kubernetes_namespace.demo-apps]
  metadata {
    name      = "person-db"
    namespace = "demo-apps"
  }
  data = {
    postgres-password = "123456"
    password          = "123456"
    database-user     = "person-db"
    database-name     = "person-db"
  }
}

resource "kubernetes_secret" "insurance-db-secret" {
  depends_on = [kubernetes_namespace.demo-apps]
  metadata {
    name      = "insurance-db"
    namespace = "demo-apps"
  }
  data = {
    postgres-password = "123456"
    password          = "123456"
    database-user     = "insurance-db"
    database-name     = "insurance-db"
  }
}

// (2)
resource "helm_release" "person-db" {
  depends_on = [kubernetes_namespace.demo-apps]
  chart      = "postgresql"
  name       = "person-db"
  namespace  = "demo-apps"
  repository = "https://charts.bitnami.com/bitnami"

  values = [
    file("manifests/person-db-values.yaml")
  ]
}

resource "helm_release" "insurance-db" {
  depends_on = [kubernetes_namespace.demo-apps]
  chart      = "postgresql"
  name       = "insurance-db"
  namespace  = "demo-apps"
  repository = "https://charts.bitnami.com/bitnami"

  values = [
    file("manifests/insurance-db-values.yaml")
  ]
}

// (3)
resource "kubernetes_deployment" "quarkus-insurance-app" {
  depends_on = [helm_release.insurance-db, time_sleep.wait_60_seconds_2]
  metadata {
    name      = "quarkus-insurance-app"
    namespace = "demo-apps"
    annotations = {
      "sidecar.istio.io/inject": "true"
    }
  }
  spec {
    selector {
      match_labels = {
        app = "quarkus-insurance-app"
        version = "v1"
      }
    }
    template {
      metadata {
        labels = {
          app = "quarkus-insurance-app"
          version = "v1"
        }
        annotations = {
          "sidecar.istio.io/inject": "true"
        }
      }
      spec {
        container {
          name = "quarkus-insurance-app"
          image = "piomin/quarkus-insurance-app:v1"
          port {
            container_port = 8080
          }
          env {
            name = "POSTGRES_USER"
            value_from {
              secret_key_ref {
                key = "database-user"
                name = "insurance-db"
              }
            }
          }
          env {
            name = "POSTGRES_PASSWORD"
            value_from {
              secret_key_ref {
                key = "password"
                name = "insurance-db"
              }
            }
          }
          env {
            name = "POSTGRES_DB"
            value_from {
              secret_key_ref {
                key = "database-name"
                name = "insurance-db"
              }
            }
          }
        }
      }
    }
  }
}

resource "kubernetes_service" "quarkus-insurance-app" {
  depends_on = [helm_release.insurance-db, time_sleep.wait_60_seconds_2]
  metadata {
    name = "quarkus-insurance-app"
    namespace = "demo-apps"
    labels = {
      app = "quarkus-insurance-app"
    }
  }
  spec {
    type = "ClusterIP"
    selector = {
      app = "quarkus-insurance-app"
    }
    port {
      port = 8080
      name = "http"
    }
  }
}

resource "kubernetes_deployment" "quarkus-person-app-v1" {
  depends_on = [helm_release.person-db, time_sleep.wait_60_seconds_2]
  metadata {
    name      = "quarkus-person-app-v1"
    namespace = "demo-apps"
    annotations = {
      "sidecar.istio.io/inject": "true"
    }
  }
  spec {
    selector {
      match_labels = {
        app = "quarkus-person-app"
        version = "v1"
      }
    }
    template {
      metadata {
        labels = {
          app = "quarkus-person-app"
          version = "v1"
        }
        annotations = {
          "sidecar.istio.io/inject": "true"
        }
      }
      spec {
        container {
          name = "quarkus-person-app"
          image = "piomin/quarkus-person-app:v1"
          port {
            container_port = 8080
          }
          env {
            name = "POSTGRES_USER"
            value_from {
              secret_key_ref {
                key = "database-user"
                name = "person-db"
              }
            }
          }
          env {
            name = "POSTGRES_PASSWORD"
            value_from {
              secret_key_ref {
                key = "password"
                name = "person-db"
              }
            }
          }
          env {
            name = "POSTGRES_DB"
            value_from {
              secret_key_ref {
                key = "database-name"
                name = "person-db"
              }
            }
          }
        }
      }
    }
  }
}

resource "kubernetes_deployment" "quarkus-person-app-v2" {
  depends_on = [helm_release.person-db, time_sleep.wait_60_seconds_2]
  metadata {
    name      = "quarkus-person-app-v2"
    namespace = "demo-apps"
    annotations = {
      "sidecar.istio.io/inject": "true"
    }
  }
  spec {
    selector {
      match_labels = {
        app = "quarkus-person-app"
        version = "v2"
      }
    }
    template {
      metadata {
        labels = {
          app = "quarkus-person-app"
          version = "v2"
        }
        annotations = {
          "sidecar.istio.io/inject": "true"
        }
      }
      spec {
        container {
          name = "quarkus-person-app"
          image = "piomin/quarkus-person-app:v2"
          port {
            container_port = 8080
          }
          env {
            name = "POSTGRES_USER"
            value_from {
              secret_key_ref {
                key = "database-user"
                name = "person-db"
              }
            }
          }
          env {
            name = "POSTGRES_PASSWORD"
            value_from {
              secret_key_ref {
                key = "password"
                name = "person-db"
              }
            }
          }
          env {
            name = "POSTGRES_DB"
            value_from {
              secret_key_ref {
                key = "database-name"
                name = "person-db"
              }
            }
          }
        }
      }
    }
  }
}

resource "kubernetes_service" "quarkus-person-app" {
  depends_on = [helm_release.person-db, time_sleep.wait_60_seconds_2]
  metadata {
    name = "quarkus-person-app"
    namespace = "demo-apps"
    labels = {
      app = "quarkus-person-app"
    }
  }
  spec {
    type = "ClusterIP"
    selector = {
      app = "quarkus-person-app"
    }
    port {
      port = 8080
      name = "http"
    }
  }
}

Applying Terraform Scripts

Finally, we can apply the whole Terraform configuration described in this article. Here's the aro-with-servicemesh.sh script responsible for running the required Terraform commands. It is placed in the repository root directory. In the first step, we go to the aro directory to apply the script responsible for creating the OpenShift cluster. The domain name is automatically generated by Terraform, so we export it using the terraform output command. After that, we can apply the scripts with the operators and the Istio configuration. In order to do everything automatically, we pass the location of the kubeconfig file and the generated domain name as variables.

#! /bin/bash

cd aro
terraform init
terraform apply -auto-approve
domain="apps.$(terraform output -raw domain).eastus.aroapp.io"

cd ../servicemesh
terraform init
terraform apply -auto-approve -var kubeconfig=../aro/kubeconfig -var domain=$domain
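The domain variable is assembled from the raw Terraform output via command substitution. For example, assuming the generated cluster suffix is p2pvg (a made-up value standing in for the real terraform output), the interpolation produces:

```shell
# Hypothetical value returned by `terraform output -raw domain`
raw_domain="p2pvg"

# The same interpolation as in the script above
domain="apps.${raw_domain}.eastus.aroapp.io"
echo "$domain"   # → apps.p2pvg.eastus.aroapp.io
```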

Let’s run the aro-with-servicemesh.sh script. Once you do, you should see output similar to the one below. In the beginning, Terraform creates several objects required by the ARO cluster, like a virtual network or a service principal. Once those resources are ready, it starts the main part – the ARO installation.

Let’s switch to the Azure Portal. As you can see, the installation is in progress. There are several other newly created resources. Of course, there is also the resource representing the OpenShift cluster.

[Image: the ARO installation in progress in the Azure Portal]

Now, arm yourself with patience. You can easily go get a coffee…

You can verify the progress, e.g. by displaying the list of virtual machines. If you see all three master and three worker VMs running, it means that we are slowly approaching the end.

[Image: the list of cluster virtual machines in the Azure Portal]

It may take even more than 40 minutes. That’s why I overrode the default timeout for the azapi resource to 75 minutes. Once the cluster is ready, Terraform connects to the OpenShift instance to install the operators there. In the meantime, we can switch to the Azure Portal and see the details of the ARO cluster. Among other things, it displays the OpenShift Console URL. Let’s log in to the console.

In order to obtain the admin password, we need to run the following command (with my cluster and resource group names):

$ az aro list-credentials -n aro-cluster-p2pvg -g openenv-p2pvg

Here’s our OpenShift console:

Let’s get back to the installation process. The first part has just finished. Now, the script executes the terraform commands in the servicemesh directory. As you can see, it installed our operators.

Let’s check out how it looks in the OpenShift Console. Go to the Operators -> Installed Operators menu item.

[Image: the installed operators in the OpenShift Console]

Of course, the installation continues in the background. After installing the operators, the script created the Istio control plane using the CRD object.

Let’s switch to the OpenShift Console once again. Go to the istio project. In the list of installed operators, find Red Hat OpenShift Service Mesh and then go to the Istio Service Mesh Control Plane tab. You should see the basic object. As you can see, all 9 required components, including the Istio, Kiali, and Jaeger instances, are successfully installed.

[Image: the Istio Service Mesh Control Plane components in the OpenShift Console]

And finally, the last part of our exercise. The installation is finished. Terraform applied the Deployments with our Postgres databases and some Quarkus apps.

In order to see the list of apps, we can go to the Topology view in the Developer perspective. All the pods are running. As you can see, there is also a link to the Kiali console available. Let’s click it.

[Image: the Topology view with the running apps]

In the Kiali dashboard, we can see a detailed view of our service mesh. For example, there is a diagram showing a graphical visualization of traffic between the services.

Final Thoughts

If you use Terraform to manage your cloud infrastructure, this article is for you. Have you ever doubted whether it is possible to easily create and configure an OpenShift cluster with Terraform? This article should dispel those doubts. You can easily create your own ARO cluster just by cloning this repository and running a single script on your cloud account. Enjoy 🙂

The post Manage OpenShift with Terraform appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2023/09/29/manage-openshift-with-terraform/feed/ 0 14531
HTTPS on Kubernetes with Spring Boot, Istio and Cert Manager https://piotrminkowski.com/2022/06/01/https-on-kubernetes-with-spring-boot-istio-and-cert-manager/ https://piotrminkowski.com/2022/06/01/https-on-kubernetes-with-spring-boot-istio-and-cert-manager/#respond Wed, 01 Jun 2022 11:15:26 +0000 https://piotrminkowski.com/?p=11568 In this article, you will learn how to create secure HTTPS gateways on Kubernetes. We will use Cert Manager to generate TLS/SSL certificates. With Istio we can create secure HTTPS gateways and expose them outside a Kubernetes cluster. Our test application is built on top of Spring Boot. We will consider two different ways of […]

The post HTTPS on Kubernetes with Spring Boot, Istio and Cert Manager appeared first on Piotr's TechBlog.

]]>
In this article, you will learn how to create secure HTTPS gateways on Kubernetes. We will use Cert Manager to generate TLS/SSL certificates. With Istio, we can create secure HTTPS gateways and expose them outside a Kubernetes cluster. Our test application is built on top of Spring Boot. We will consider two different ways of securing that app. In the first scenario, we are going to set up mutual TLS on the gateway and a plain HTTP port on the app side. Then, we run a scenario with 2-way SSL configured on the Spring Boot app itself. Let’s see how it looks.

[Image: the architecture of our HTTPS scenarios on Kubernetes]

There are several topics you should be familiar with before reading this article. At the very least, it is worth reading about the basics of a service mesh on Kubernetes with Istio and Spring Boot here. You may also be interested in the article about security best practices for Spring Boot apps.

Prerequisites

Before we begin, we need to prepare a test cluster. Personally, I’m using a local OpenShift cluster, which simplifies the installation of several tools we need. But you can just as well use any other Kubernetes distribution, like Minikube or kind. No matter which distribution you choose, you need to install at least:

  1. Istio – you can do it in several ways, here’s an instruction with Istio CLI
  2. Cert Manager – the simplest way is to install it with kubectl
  3. Skaffold (optionally) – a CLI tool to simplify deploying the Spring Boot app on Kubernetes and applying all the manifests in that exercise using a single command. You can find installation instructions here
  4. Helm – used to install additional tools on Kubernetes (or OpenShift)

Since I’m using OpenShift I can install everything using the UI console and Operator Hub. Here’s the list of my tools:

[Image: the installed tools in the OpenShift console]

Source Code

If you would like to try this exercise yourself, you may always take a look at my source code. In order to do that, you need to clone my GitHub repository. Then switch to the tls branch. After that, you should just follow my instructions. Let’s begin.

Generate Certificates with Cert Manager

You could as well generate TLS/SSL certs using e.g. openssl and then apply them to Kubernetes. However, Cert Manager simplifies that process. It automatically creates a Kubernetes Secret with all the required stuff. Before we create a certificate, we should configure a default ClusterIssuer. I’m using the simplest option – self-signed.

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-cluster-issuer
spec:
  selfSigned: {}

In the next step, we are going to create the Certificate object. It is important to set the commonName parameter properly. It should be the same as the hostname of our gateway. Since I’m running the local instance of OpenShift the default domain suffix is apps-crc.testing. The name of the application is sample-spring-kotlin in that case. We will also need to generate a keystore and truststore for configuring 2-way SSL in the second scenario with the passthrough gateway. Finally, we should have the same Kubernetes Secret with a certificate placed in the namespace with the app (demo-apps) and the namespace with Istio workloads (istio-system in our case). We can sync secrets across namespaces in several different ways. I’ll use the built-in feature of Cert Manager based on the secretTemplate field and a tool called reflector. Here’s the final version of the Certificate object:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: sample-spring-boot-cert
  namespace: demo-apps
spec:
  keystores:
    jks:
      passwordSecretRef:
        name: jks-password-secret
        key: password
      create: true
  issuerRef:
    name: selfsigned-cluster-issuer
    group: cert-manager.io
    kind: ClusterIssuer
  privateKey:
    algorithm: ECDSA
    size: 256
  dnsNames:
    - sample-spring-kotlin.apps-crc.testing
  secretName: sample-spring-boot-cert
  commonName: sample-spring-kotlin.apps-crc.testing
  secretTemplate:
    annotations:
      reflector.v1.k8s.emberstack.com/reflection-allowed: "true"  
      reflector.v1.k8s.emberstack.com/reflection-allowed-namespaces: "istio-system"
      reflector.v1.k8s.emberstack.com/reflection-auto-enabled: "true"
      reflector.v1.k8s.emberstack.com/reflection-auto-namespaces: "istio-system"
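Under the hood, cert-manager generates an ECDSA P-256 key and a self-signed certificate for this Certificate object. A rough local equivalent with openssl looks like the following sketch (the file names and the 90-day validity period are arbitrary choices, not what cert-manager uses internally):

```shell
# Generate an ECDSA P-256 private key, mirroring privateKey.algorithm/size above
openssl ecparam -name prime256v1 -genkey -noout -out tls.key

# Issue a self-signed certificate with the same commonName as the Certificate object
openssl req -new -x509 -key tls.key -out tls.crt -days 90 \
  -subj "/CN=sample-spring-kotlin.apps-crc.testing"

# Inspect the subject of the generated certificate
openssl x509 -in tls.crt -noout -subject
```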

Before we apply the Certificate object we need to create the Secret with a keystore password:

kind: Secret
apiVersion: v1
metadata:
  name: jks-password-secret
  namespace: demo-apps
data:
  password: MTIzNDU2
type: Opaque
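Note that, in a raw Secret manifest like this one, the values under data must be base64-encoded. The MTIzNDU2 value is simply the encoded form of the plain password used throughout this example:

```shell
# Encode the plain keystore password the way the manifest expects
echo -n '123456' | base64        # prints MTIzNDU2

# Decode the value stored in the Secret back to plaintext
echo -n 'MTIzNDU2' | base64 -d   # prints 123456
```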

Of course, we also need to install the reflector tool on our Kubernetes cluster. We can easily do it using the following Helm commands:

$ helm repo add emberstack https://emberstack.github.io/helm-charts
$ helm repo update
$ helm upgrade --install reflector emberstack/reflector

Here’s the final result. There is the Secret sample-spring-boot-cert in the demo-apps namespace. It contains a TLS certificate, a private key, the CA, a JKS keystore, and a truststore. You can also verify that the same Secret is available in the istio-system namespace.

[Image: the generated certificate Secret and its contents]

Create Istio Gateway with Mutual TLS

Let’s begin with our first scenario. In order to create a gateway with mTLS, we should set the mode to MUTUAL and provide the name of the Secret containing the certificate and private key. The name of the Istio Gateway host is sample-spring-kotlin.apps-crc.testing. The gateway is available on Kubernetes under the default HTTPS port.

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: microservices-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 443
        name: https
        protocol: HTTPS
      hosts:
        - sample-spring-kotlin.apps-crc.testing
      tls:
        mode: MUTUAL
        credentialName: sample-spring-boot-cert

The Spring Boot application is available under the HTTP port. Therefore we should create a standard Istio VirtualService that refers to the already created gateway.

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: sample-spring-boot-vs-via-gw
spec:
  hosts:
    - sample-spring-kotlin.apps-crc.testing
  gateways:
    - microservices-gateway
  http:
    - route:
        - destination:
            host: sample-spring-kotlin-microservice
            port:
              number: 8080

Finally, we can run the Spring Boot app on Kubernetes using skaffold. We need to activate the istio-mutual profile that exposes the HTTP (8080) port.

$ skaffold dev -p istio-mutual

OpenShift Service Mesh automatically exposes the Istio Gateway as a Route object. Let’s just verify that everything works fine. Assuming the app has started successfully, we can display the list of Istio gateways in the demo-apps namespace:

$ kubectl get gw -n demo-apps
NAME                    AGE
microservices-gateway   3m4s

Then let’s display a list of Istio virtual services in the same namespace:

$ kubectl get vs -n demo-apps
NAME                           GATEWAYS                    HOSTS                                       AGE
sample-spring-boot-vs-via-gw   ["microservices-gateway"]   ["sample-spring-kotlin.apps-crc.testing"]   4m2s

If you are running it on OpenShift you should also check the Route object in the istio-system namespace:

$ oc get route -n istio-system | grep sample-spring-kotlin

If you test it on Kubernetes, you just need to set the Host header on your request. Here’s the curl command for testing our secure gateway. Since we enabled mutual TLS auth, we need to provide the client key and certificate. We can copy them to the local machine from the Kubernetes Secret generated by Cert Manager.

Let’s call the REST endpoint GET /persons exposed by our sample Spring Boot app:

$ curl -v https://sample-spring-kotlin.apps-crc.testing/persons \
    --key tls.key \
    --cert tls.crt

You will probably receive the following response:

Ok, we forgot to add our CA to the trusted certificates. Let’s do that. Alternatively, we can pass the --cacert parameter to the curl command.

Now, we can run the same curl command again. Secure communication with our app should work perfectly fine. We can proceed to the second scenario.

Mutual Auth for Spring Boot and Passthrough Istio Gateway

Let’s proceed to the second scenario. Now, our sample Spring Boot app exposes the HTTPS port with client certificate verification. In order to enable it on the app side, we need to provide some configuration settings.

Here’s our application.yml file. Firstly, we need to enable SSL and set the default port to 8443. It is important to force client certificate authentication with the server.ssl.client-auth property. As a result, we also need to provide a truststore file location. Finally, we don’t want to expose the Spring Boot Actuator endpoints over SSL, so we expose them on the default plain HTTP port.

server.port: 8443
server.ssl.enabled: true
server.ssl.key-store: /opt/secret/keystore.jks
server.ssl.key-store-password: ${PASSWORD}

server.ssl.trust-store: /opt/secret/truststore.jks
server.ssl.trust-store-password: ${PASSWORD}
server.ssl.client-auth: NEED

management.server.port: 8080
management.server.ssl.enabled: false

In the next step, we need to inject both keystore.jks and truststore.jks into the app container. Therefore, we need to modify the Deployment to mount the Secret generated by Cert Manager as a volume. Once again, the name of that Secret is sample-spring-boot-cert. Its content is available to the app under the /opt/secret directory. Of course, we also need to expose port 8443 outside the container and inject the Secret with the keystore and truststore password.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-spring-kotlin-microservice
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: sample-spring-kotlin-microservice
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "true"
      labels:
        app.kubernetes.io/name: sample-spring-kotlin-microservice
    spec:
      containers:
      - image: piomin/sample-spring-kotlin
        name: sample-spring-kotlin-microservice
        ports:
        - containerPort: 8080
          name: http
        - containerPort: 8443
          name: https
        env:
          - name: PASSWORD
            valueFrom:
              secretKeyRef:
                key: password
                name: jks-password-secret
        volumeMounts:
          - mountPath: /opt/secret
            name: sample-spring-boot-cert
      volumes:
        - name: sample-spring-boot-cert
          secret:
            secretName: sample-spring-boot-cert

Here’s the definition of our Istio Gateway:

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: microservices-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 443
        name: https
        protocol: HTTPS
      hosts:
        - sample-spring-kotlin.apps-crc.testing
      tls:
        mode: PASSTHROUGH

The definition of the Istio VirtualService is slightly different than before. It contains a TLS route with the host name to match against the SNI.

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: sample-spring-boot-vs-via-gw
spec:
  hosts:
    - sample-spring-kotlin.apps-crc.testing
  gateways:
    - microservices-gateway
  tls:
    - route:
        - destination:
            host: sample-spring-kotlin-microservice
            port:
              number: 8443
          weight: 100
      match:
        - port: 443
          sniHosts:
            - sample-spring-kotlin.apps-crc.testing

Finally, we can deploy the current version of the app using the following skaffold command, which enables the HTTPS port for the app running on Kubernetes:

$ skaffold dev -p istio-passthrough

Now, you can repeat the same steps as in the previous section to verify that the current configuration works fine.

Final Thoughts

In this article, I showed how to use some interesting tools that simplify the configuration of HTTPS for apps running on Kubernetes. You could see how Istio, Cert Manager, or reflector can work together. We considered two variants of making secure HTTPS Istio gateways for the Spring Boot application.

The post HTTPS on Kubernetes with Spring Boot, Istio and Cert Manager appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2022/06/01/https-on-kubernetes-with-spring-boot-istio-and-cert-manager/feed/ 0 11568
Distributed Tracing with Istio, Quarkus and Jaeger https://piotrminkowski.com/2022/01/31/distributed-tracing-with-istio-quarkus-and-jaeger/ https://piotrminkowski.com/2022/01/31/distributed-tracing-with-istio-quarkus-and-jaeger/#respond Mon, 31 Jan 2022 10:26:05 +0000 https://piotrminkowski.com/?p=10540 In this article, you will learn how to configure distributed tracing for your service mesh with Istio and Quarkus. For test purposes, we will build and run Quarkus microservices on Kubernetes. The communication between them is going to be managed by Istio. Istio service mesh uses Jaeger as a distributed tracing system. This time I […]

The post Distributed Tracing with Istio, Quarkus and Jaeger appeared first on Piotr's TechBlog.

]]>
In this article, you will learn how to configure distributed tracing for your service mesh with Istio and Quarkus. For test purposes, we will build and run Quarkus microservices on Kubernetes. The communication between them is going to be managed by Istio. Istio service mesh uses Jaeger as a distributed tracing system.

This time I won’t tell you about Istio basics. Although our configuration is not complicated, you may want to read the following introduction before we start.

Source Code

If you would like to try it yourself, you may always take a look at my source code. In order to do that, you need to clone my GitHub repository. Then you should go to the mesh-with-db directory. After that, you should just follow my instructions 🙂

Service Mesh Architecture

Let’s start with our microservices architecture. There are two applications: person-app and insurance-app. As you probably guessed, the person-app stores and returns information about insured people. On the other hand, the insurance-app keeps insurance data. Each service has a separate database. We deploy the person-app in two versions. The v2 version contains one additional field, externalId.

The following picture illustrates our scenario. Istio splits traffic between the two versions of the person-app. By default, it splits the traffic 50% to 50%. If it receives the X-Version header in the request, it calls the particular version of the person-app. Of course, the possible values of that header are v1 and v2.
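A 50/50 split means each request is routed to one of the two subsets by a weighted random choice. A tiny shell sketch of that decision (the function name is made up; only the extreme weights are deterministic):

```shell
# Pick a person-app subset given v1's weight out of 100 (illustrative only)
pick_subset() {
  w_v1=$1
  roll=$(( RANDOM % 100 ))                       # pseudo-random number in 0..99
  if [ "$roll" -lt "$w_v1" ]; then echo v1; else echo v2; fi
}

pick_subset 50    # v1 or v2, roughly half of the time each
pick_subset 100   # always v1
pick_subset 0     # always v2
```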

[Image: the service mesh architecture with two versions of the person-app]

Distributed Tracing with Istio

Istio generates distributed trace spans for each managed service. This means that every request sent inside the Istio mesh carries a set of B3 propagation headers.

So, every single request incoming from the Istio gateway contains X-B3-SpanId, X-B3-TraceId, and some other B3 headers. The X-B3-SpanId header indicates the position of the current operation in the trace tree, while every span in a trace shares the same X-B3-TraceId header. At first glance, you may be surprised that Istio does not propagate B3 headers in client calls. To clarify, if one service communicates with another using e.g. a REST client, you will see two different traces: the first is related to the API endpoint call, while the second relates to the client call of the other API endpoint. That’s not exactly what we would like to achieve, right?
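For reference, a B3 trace ID is 16 random bytes (8 for 64-bit IDs) and a span ID is 8 random bytes, both hex-encoded. You can generate well-formed sample values locally (illustrative only; real IDs come from the tracer):

```shell
# 32 hex characters, shared by every span in the trace
trace_id=$(openssl rand -hex 16)
# 16 hex characters, identifying a single operation in the trace tree
span_id=$(openssl rand -hex 8)

echo "X-B3-TraceId: $trace_id"
echo "X-B3-SpanId: $span_id"
```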

Let’s visualize our problem. If you call the insurance-app through the Istio gateway, you will see the first trace in Jaeger. During that call, the insurance-app calls an endpoint of the person-app using the Quarkus REST client. That's another, separate trace in Jaeger. Our goal here is to propagate all required B3 headers to the person-app as well. You can find a list of required headers in the Istio documentation.

quarkus-istio-tracing-details

Of course, that’s not the only thing we will do today. We will also prepare Istio rules in order to simulate latency in our communication. It is a good scenario for using a tracing tool. Also, I’m going to show you how to easily deploy microservices on Kubernetes using the Quarkus features for that.

I’m running Istio and Jaeger on OpenShift. More precisely, I’m using OpenShift Service Mesh, which is Red Hat's service mesh implementation based on Istio. It doesn't have any impact on the exercise, so you may as well repeat all the steps on plain Kubernetes.

Create Microservices with Quarkus

Let’s begin with the insurance-app. Here’s the class responsible for the REST endpoint implementation. There are several methods there. However, the most important for us is the getInsuranceDetailsById method, which calls the GET /persons/{id} endpoint in the person-app. In order to use the Quarkus REST client extension, we need to inject the client bean with the @RestClient annotation.

@Path("/insurances")
public class InsuranceResource {

   @Inject
   Logger log;
   @Inject
   InsuranceRepository insuranceRepository;
   @Inject @RestClient
   PersonService personService;

   @POST
   @Transactional
   public Insurance addInsurance(Insurance insurance) {
      insuranceRepository.persist(insurance);
      return insurance;
   }

   @GET
   public List<Insurance> getInsurances() {
      return insuranceRepository.listAll();
   }

   @GET
   @Path("/{id}")
   public Insurance getInsuranceById(@PathParam("id") Long id) {
      return insuranceRepository.findById(id);
   }

   @GET
   @Path("/{id}/details")
   public InsuranceDetails getInsuranceDetailsById(@PathParam("id") Long id, @HeaderParam("X-Version") String version) {
      log.infof("getInsuranceDetailsById: id=%d, version=%s", id, version);
      Insurance insurance = insuranceRepository.findById(id);
      InsuranceDetails insuranceDetails = new InsuranceDetails();
      insuranceDetails.setPersonId(insurance.getPersonId());
      insuranceDetails.setAmount(insurance.getAmount());
      insuranceDetails.setType(insurance.getType());
      insuranceDetails.setExpiry(insurance.getExpiry());
      insuranceDetails.setPerson(personService.getPersonById(insurance.getPersonId()));
      return insuranceDetails;
   }

}

As you probably remember from the previous section, we need to propagate several headers responsible for Istio tracing to the downstream Quarkus service. Let’s take a look at the implementation of the REST client in the PersonService interface. In order to send some additional headers to the request, we need to annotate the interface with @RegisterClientHeaders. Then we have two options. We can provide our custom headers factory as shown below. Otherwise, we may use the property org.eclipse.microprofile.rest.client.propagateHeaders with a list of headers.
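If you prefer the property-based approach instead of a factory class, a single entry in application.properties is enough. The sketch below assumes you want to forward the same headers as the custom factory shown in this article; the header names come from the Istio documentation:

```properties
org.eclipse.microprofile.rest.client.propagateHeaders=X-Version,X-B3-TraceId,X-B3-SpanId,X-B3-ParentSpanId,X-B3-Sampled
```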

@Path("/persons")
@RegisterRestClient
@RegisterClientHeaders(RequestHeaderFactory.class)
public interface PersonService {

   @GET
   @Path("/{id}")
   Person getPersonById(@PathParam("id") Long id);
}

Let’s take a look at the implementation of the REST endpoint in the person-app. The getPersonById method at the bottom is responsible for finding a person by the id field. It is the target method called by the PersonService client.

@Path("/persons")
public class PersonResource {

   @Inject
   Logger log;
   @Inject
   PersonRepository personRepository;

   @POST
   @Transactional
   public Person addPerson(Person person) {
      personRepository.persist(person);
      return person;
   }

   @GET
   public List<Person> getPersons() {
      return personRepository.listAll();
   }

   @GET
   @Path("/{id}")
   public Person getPersonById(@PathParam("id") Long id) {
      log.infof("getPersonById: id=%d", id);
      Person p = personRepository.findById(id);
      log.infof("getPersonById: %s", p);
      return p;
   }
}

Finally, here's the implementation of our custom client headers factory. It needs to implement the ClientHeadersFactory interface and its update() method. We are not doing anything complicated here: we forward the B3 tracing headers and the X-Version header used by Istio to route between the two versions of the person-app. I also added some logs. Note that here I didn't use the already mentioned header propagation based on the org.eclipse.microprofile.rest.client.propagateHeaders property.

@ApplicationScoped
public class RequestHeaderFactory implements ClientHeadersFactory {

   @Inject
   Logger log;

   @Override
   public MultivaluedMap<String, String> update(MultivaluedMap<String, String> inHeaders,
                                                 MultivaluedMap<String, String> outHeaders) {
      String version = inHeaders.getFirst("x-version");
      log.infof("Version Header: %s", version);
      String traceId = inHeaders.getFirst("x-b3-traceid");
      log.infof("Trace Header: %s", traceId);
      MultivaluedMap<String, String> result = new MultivaluedHashMap<>();
      result.add("X-Version", version);
      result.add("X-B3-TraceId", traceId);
      result.add("X-B3-SpanId", inHeaders.getFirst("x-b3-spanid"));
      result.add("X-B3-ParentSpanId", inHeaders.getFirst("x-b3-parentspanid"));
      return result;
   }
}
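To see the propagation logic in isolation, here is a self-contained sketch (not part of the sample project) that mimics what the factory does, using plain maps instead of JAX-RS types. Note that the Istio documentation also lists x-b3-sampled among the headers to propagate, so it is included here:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Standalone illustration of the forwarding performed by a client headers
// factory: copy the Istio routing header and the B3 tracing headers from the
// incoming request into the outgoing REST client request.
public class HeaderPropagationSketch {

    // Header names per the Istio distributed tracing documentation,
    // plus the X-Version header used for routing in this article.
    static final List<String> PROPAGATED = List.of(
        "x-version", "x-b3-traceid", "x-b3-spanid",
        "x-b3-parentspanid", "x-b3-sampled");

    static Map<String, String> propagate(Map<String, String> inHeaders) {
        Map<String, String> out = new HashMap<>();
        for (String name : PROPAGATED) {
            String value = inHeaders.get(name);
            if (value != null) { // forward only headers that are actually present
                out.put(name, value);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> in = Map.of(
            "x-b3-traceid", "80f198ee56343ba8",
            "x-version", "v2",
            "accept", "application/json");
        System.out.println(propagate(in));
    }
}
```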

Run Quarkus Applications on Kubernetes

Before we test Istio tracing, we need to deploy our Quarkus microservices on Kubernetes. We may do it in several different ways. One of the methods is provided directly by Quarkus: it can generate the Deployment manifests automatically during the build. It turns out that we can apply Istio manifests to the Kubernetes cluster as well. Firstly, we need to include the following two modules.

<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-openshift</artifactId>
</dependency>
<dependency>
  <groupId>me.snowdrop</groupId>
  <artifactId>istio-client</artifactId>
  <version>1.7.7.1</version>
</dependency>

If you deploy on Kubernetes, just replace the quarkus-openshift module with the quarkus-kubernetes module. In the next step, we need to provide some configuration settings in the application.properties file. In order to enable deployment during the build, we need to set the property quarkus.kubernetes.deploy to true. We can configure several aspects of the Kubernetes Deployment. For example, we may enable the Istio proxy by setting the annotation sidecar.istio.io/inject to true, or add the labels required for routing: app and version (in this case). Finally, our application connects to a database, so we need to inject values from a Kubernetes Secret.

quarkus.container-image.group = demo-mesh
quarkus.container-image.build = true
quarkus.kubernetes.deploy = true
quarkus.kubernetes.deployment-target = openshift
quarkus.kubernetes-client.trust-certs = true

quarkus.openshift.deployment-kind = Deployment
quarkus.openshift.labels.app = quarkus-insurance-app
quarkus.openshift.labels.version = v1
quarkus.openshift.annotations."sidecar.istio.io/inject" = true
quarkus.openshift.env.mapping.postgres_user.from-secret = insurance-db
quarkus.openshift.env.mapping.postgres_user.with-key = database-user
quarkus.openshift.env.mapping.postgres_password.from-secret = insurance-db
quarkus.openshift.env.mapping.postgres_password.with-key = database-password
quarkus.openshift.env.mapping.postgres_db.from-secret = insurance-db
quarkus.openshift.env.mapping.postgres_db.with-key = database-name

Here’s the part of the configuration responsible for the database connection settings. The name of the key after quarkus.openshift.env.mapping maps to the name of the environment variable; for example, the postgres_password key maps to the POSTGRES_PASSWORD environment variable.

quarkus.datasource.db-kind = postgresql
quarkus.datasource.username = ${POSTGRES_USER}
quarkus.datasource.password = ${POSTGRES_PASSWORD}
quarkus.datasource.jdbc.url = jdbc:postgresql://person-db:5432/${POSTGRES_DB}

If there are any additional manifests to apply, we should place them inside the src/main/kubernetes directory. This applies, for example, to the Istio configuration. So, now the only thing we need to do is build the applications. Firstly, go to the quarkus-person-app directory and run the following command. Then, go to the quarkus-insurance-app directory and do the same.

$ mvn clean package

Traffic Management with Istio

There are two versions of the person-app application. So, let’s create the DestinationRule object containing two subsets v1 and v2 based on the version label.

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: quarkus-person-app-dr
spec:
  host: quarkus-person-app
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2

In the next step, we need to create the VirtualService object for the quarkus-person-app service. The routing between versions is based on the X-Version header. If it is not set, Istio sends 50% of the traffic to the v1 version and 50% to the v2 version. Also, we will inject a delay into the v2 route using the Istio HTTPFaultInjection object. It adds a 3-second delay for 50% of incoming requests.

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: quarkus-person-app-vs
spec:
  hosts:
    - quarkus-person-app
  http:
    - match:
        - headers:
            X-Version:
              exact: v1
      route:
        - destination:
            host: quarkus-person-app
            subset: v1
    - match:
        - headers:
            X-Version:
              exact: v2
      route:
        - destination:
            host: quarkus-person-app
            subset: v2
      fault:
        delay:
          fixedDelay: 3s
          percentage:
            value: 50
    - route:
        - destination:
            host: quarkus-person-app
            subset: v1
          weight: 50
        - destination:
            host: quarkus-person-app
            subset: v2
          weight: 50

Now, let’s create the Istio Gateway object. Replace the CLUSTER_DOMAIN variable with your cluster’s domain name:

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: microservices-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - quarkus-insurance-app.apps.$CLUSTER_DOMAIN
        - quarkus-person-app.apps.$CLUSTER_DOMAIN

In order to forward traffic from the gateway, the VirtualService needs to refer to that gateway.

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: quarkus-insurance-app-vs
spec:
  hosts:
    - quarkus-insurance-app.apps.$CLUSTER_DOMAIN
  gateways:
    - microservices-gateway
  http:
    - match:
        - uri:
            prefix: "/insurance"
      rewrite:
        uri: " "
      route:
        - destination:
            host: quarkus-insurance-app
          weight: 100

Now you can call the insurance-app service:

$ curl http://quarkus-insurance-app.apps.${CLUSTER_DOMAIN}/insurance/insurances/1/details

We can verify all the existing Istio objects using Kiali.

Testing Istio Tracing with Quarkus

First of all, you can generate many requests using the siege tool. There are multiple ways to run it. We can prepare the file with example requests as shown below:

http://quarkus-person-app.apps.${CLUSTER_DOMAIN}/person/persons/1
http://quarkus-person-app.apps.${CLUSTER_DOMAIN}/person/persons/2
http://quarkus-person-app.apps.${CLUSTER_DOMAIN}/person/persons/3
http://quarkus-person-app.apps.${CLUSTER_DOMAIN}/person/persons/4
http://quarkus-person-app.apps.${CLUSTER_DOMAIN}/person/persons/5
http://quarkus-person-app.apps.${CLUSTER_DOMAIN}/person/persons/6
http://quarkus-person-app.apps.${CLUSTER_DOMAIN}/person/persons/7
http://quarkus-person-app.apps.${CLUSTER_DOMAIN}/person/persons/8
http://quarkus-person-app.apps.${CLUSTER_DOMAIN}/person/persons/9
http://quarkus-person-app.apps.${CLUSTER_DOMAIN}/person/persons
http://quarkus-insurance-app.apps.${CLUSTER_DOMAIN}/insurance/insurances/1/details
http://quarkus-insurance-app.apps.${CLUSTER_DOMAIN}/insurance/insurances/2/details
http://quarkus-insurance-app.apps.${CLUSTER_DOMAIN}/insurance/insurances/3/details
http://quarkus-insurance-app.apps.${CLUSTER_DOMAIN}/insurance/insurances/4/details
http://quarkus-insurance-app.apps.${CLUSTER_DOMAIN}/insurance/insurances/5/details
http://quarkus-insurance-app.apps.${CLUSTER_DOMAIN}/insurance/insurances/6/details
http://quarkus-insurance-app.apps.${CLUSTER_DOMAIN}/insurance/insurances/1
http://quarkus-insurance-app.apps.${CLUSTER_DOMAIN}/insurance/insurances/2
http://quarkus-insurance-app.apps.${CLUSTER_DOMAIN}/insurance/insurances/3
http://quarkus-insurance-app.apps.${CLUSTER_DOMAIN}/insurance/insurances/4
http://quarkus-insurance-app.apps.${CLUSTER_DOMAIN}/insurance/insurances/5
http://quarkus-insurance-app.apps.${CLUSTER_DOMAIN}/insurance/insurances/6

Now we need to set that file as an input for the siege command. We can also set the number of repeats (-r) and concurrent threads (-c).

$ siege -f k8s/traffic/urls.txt -i -v -r 500 -c 10 --no-parser

It takes some time before the command finishes. In the meantime, let's try to send a single request to the insurance-app with the X-Version header set to v2, for example:

$ curl -H "X-Version: v2" http://quarkus-insurance-app.apps.${CLUSTER_DOMAIN}/insurance/insurances/1/details

50% of such requests are delayed by the Istio quarkus-person-app-vs VirtualService. Repeat the request until you get a response with a 3-second delay:

Here’s the access log from the Quarkus application:

Then, let’s switch to the Jaeger console. We can find the request by the guid:x-request-id tag.

Here’s the result of our search:

quarkus-istio-tracing-jaeger

We have a full trace of the request, including the communication between the Istio gateway and the insurance-app, and also between the insurance-app and the person-app. In order to see the details of a trace, just click the record. You will see the trace timeline and the request/response structure. You can easily verify that the latency occurs somewhere between the insurance-app and the person-app, because the person-app processed the request in only 4.46 ms.

quarkus-istio-tracing-jaeger-timeline

The post Distributed Tracing with Istio, Quarkus and Jaeger appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2022/01/31/distributed-tracing-with-istio-quarkus-and-jaeger/feed/ 0 10540
Multicluster Traffic Mirroring with Istio and Kind https://piotrminkowski.com/2021/07/12/multicluster-traffic-mirroring-with-istio-and-kind/ https://piotrminkowski.com/2021/07/12/multicluster-traffic-mirroring-with-istio-and-kind/#comments Mon, 12 Jul 2021 08:27:13 +0000 https://piotrminkowski.com/?p=9902 In this article, you will learn how to create an Istio mesh with mirroring between multiple Kubernetes clusters running on Kind. We will deploy the same application in two Kubernetes clusters, and then we will mirror the traffic between those clusters. When such a scenario might be useful? Let’s assume we have two Kubernetes clusters. […]

The post Multicluster Traffic Mirroring with Istio and Kind appeared first on Piotr's TechBlog.

]]>
In this article, you will learn how to create an Istio mesh with mirroring between multiple Kubernetes clusters running on Kind. We will deploy the same application in two Kubernetes clusters, and then we will mirror the traffic between those clusters. When might such a scenario be useful?

Let’s assume we have two Kubernetes clusters. The first of them is a production cluster, while the second is a test cluster. While there is huge incoming traffic to the production cluster, there is no traffic to the test cluster. What can we do in such a situation? We can just send a portion of production traffic to the test cluster. With Istio, you can also mirror internal traffic between e.g. microservices.

To simulate the scenario described above, we will create two Kubernetes clusters locally with Kind. Then, we will install the Istio mesh in multi-primary mode across different networks. The Kubernetes API server and the Istio gateway need to be accessible by pods running on the other cluster. We have two applications. The caller-service application is running on the c1 cluster. It calls the callme-service. The v1 version of the callme-service application is deployed on the c1 cluster, while the v2 version is deployed on the c2 cluster. We will mirror 50% of the traffic coming to the v1 version of our application to the v2 version running on the other cluster. The following picture illustrates our architecture.

istio-mirroring-arch

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that you need to clone my GitHub repository. Then you should just follow my instructions.

Both applications are configured to be deployed with Skaffold, so you just need to download the Skaffold CLI following the instructions available here. Of course, you also need to have Java and Maven available on your PC.

Create Kubernetes clusters with Kind

Firstly, let’s create two Kubernetes clusters using Kind. We don’t have to override any default settings, so we can just use the following command to create clusters.

$ kind create cluster --name c1
$ kind create cluster --name c2

Kind automatically creates a Kubernetes context and adds it to the config file. Just to verify, let’s display a list of running clusters.

$ kind get clusters
c1
c2

Also, we can display a list of contexts created by Kind.

$ kubectx | grep kind
kind-c1
kind-c2

Install MetalLB on Kubernetes clusters

To establish a connection between multiple clusters locally, we need to expose some services as LoadBalancer. That's why we need to install MetalLB, a load-balancer implementation for bare-metal Kubernetes clusters. Firstly, we have to create the metallb-system namespace. Those operations should be performed on both of our clusters.

$ kubectl apply -f \
  https://raw.githubusercontent.com/metallb/metallb/master/manifests/namespace.yaml

Then, we are going to create the memberlist secret required by MetalLB.

$ kubectl create secret generic -n metallb-system memberlist \
  --from-literal=secretkey="$(openssl rand -base64 128)" 

Finally, let’s install MetalLB.

$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/master/manifests/metallb.yaml

In order to complete the configuration, we need to provide a range of IP addresses that MetalLB controls. We want this range to be on the Docker kind network.

$ docker network inspect -f '{{.IPAM.Config}}' kind

For me, it is the CIDR 172.20.0.0/16. Based on it, we can configure the MetalLB IP pool for each cluster. For the first cluster, c1, I'm setting addresses starting from 172.20.255.200 and ending with 172.20.255.250.

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.20.255.200-172.20.255.250

We need to apply the configuration to the first cluster.

$ kubectl apply -f k8s/metallb-c1.yaml --context kind-c1

For the second cluster c2 I’m setting addresses starting from 172.20.255.150 and ending with 172.20.255.199.

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.20.255.150-172.20.255.199

Finally, we can apply the configuration to the second cluster.

$ kubectl apply -f k8s/metallb-c2.yaml --context kind-c2

Install Istio on Kubernetes in multicluster mode

A multicluster service mesh deployment requires establishing trust between all clusters in the mesh. In order to do that, we should configure the Istio certificate authority (CA) with a root certificate, signing certificate, and key. We can easily do it using the Istio tools. First, we go to the Istio installation directory. After that, we may use the Makefile.selfsigned.mk script available inside the tools/certs directory.

$ cd $ISTIO_HOME/tools/certs/

The following command generates the root certificate and key.

$ make -f Makefile.selfsigned.mk root-ca

The following command generates an intermediate certificate and key for the Istio CA of each cluster. It creates the required files in a directory named after the cluster.

$ make -f Makefile.selfsigned.mk kind-c1-cacerts
$ make -f Makefile.selfsigned.mk kind-c2-cacerts

Then we may create a Kubernetes Secret based on the generated certificates. The same operation should be performed for the second cluster, kind-c2.

$ kubectl create namespace istio-system
$ kubectl create secret generic cacerts -n istio-system \
      --from-file=kind-c1/ca-cert.pem \
      --from-file=kind-c1/ca-key.pem \
      --from-file=kind-c1/root-cert.pem \
      --from-file=kind-c1/cert-chain.pem

We are going to install Istio using the operator. It is important to set the same meshID for both clusters and different networks. We also need to create an Istio gateway for communication between the two clusters inside a single mesh. It should be labeled with topology.istio.io/network=network1. The Gateway definition also contains two environment variables: ISTIO_META_ROUTER_MODE and ISTIO_META_REQUESTED_NETWORK_VIEW. The first of them sets the sni-dnat router mode, which adds the clusters required for AUTO_PASSTHROUGH mode.

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: kind-c1
      network: network1
  components:
    ingressGateways:
      - name: istio-eastwestgateway
        label:
          istio: eastwestgateway
          app: istio-eastwestgateway
          topology.istio.io/network: network1
        enabled: true
        k8s:
          env:
            - name: ISTIO_META_ROUTER_MODE
              value: "sni-dnat"
            - name: ISTIO_META_REQUESTED_NETWORK_VIEW
              value: network1
          service:
            ports:
              - name: status-port
                port: 15021
                targetPort: 15021
              - name: tls
                port: 15443
                targetPort: 15443
              - name: tls-istiod
                port: 15012
                targetPort: 15012
              - name: tls-webhook
                port: 15017
                targetPort: 15017

Before installing Istio we should label the istio-system namespace with topology.istio.io/network=network1. The Istio installation manifest is available in the repository as the k8s/istio-c1.yaml file.

$ kubectl --context kind-c1 label namespace istio-system \
      topology.istio.io/network=network1
$ istioctl install --config k8s/istio-c1.yaml \
      --context kind-c1

There is a similar IstioOperator definition for the second cluster. The only differences are the name of the network, which is now network2, and the name of the cluster.

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: kind-c2
      network: network2
  components:
    ingressGateways:
      - name: istio-eastwestgateway
        label:
          istio: eastwestgateway
          app: istio-eastwestgateway
          topology.istio.io/network: network2
        enabled: true
        k8s:
          env:
            - name: ISTIO_META_ROUTER_MODE
              value: "sni-dnat"
            - name: ISTIO_META_REQUESTED_NETWORK_VIEW
              value: network2
          service:
            ports:
              - name: status-port
                port: 15021
                targetPort: 15021
              - name: tls
                port: 15443
                targetPort: 15443
              - name: tls-istiod
                port: 15012
                targetPort: 15012
              - name: tls-webhook
                port: 15017
                targetPort: 15017

The same as for the first cluster, let’s label the istio-system namespace with topology.istio.io/network and install Istio using operator manifest.

$ kubectl --context kind-c2 label namespace istio-system \
      topology.istio.io/network=network2
$ istioctl install --config k8s/istio-c2.yaml \
      --context kind-c2

Configure multicluster connectivity

Since the clusters are on separate networks, we need to expose all local services on the gateway in both clusters. Services behind that gateway can be accessed only by services with a trusted TLS certificate and workload ID. The definition of the cross-network gateway is exactly the same for both clusters. You can find that manifest in the repository as k8s/istio-cross-gateway.yaml.

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: cross-network-gateway
spec:
  selector:
    istio: eastwestgateway
  servers:
    - port:
        number: 15443
        name: tls
        protocol: TLS
      tls:
        mode: AUTO_PASSTHROUGH
      hosts:
        - "*.local"

Let’s apply the Gateway object to both clusters.

$ kubectl apply -f k8s/istio-cross-gateway.yaml \
      --context kind-c1
$ kubectl apply -f k8s/istio-cross-gateway.yaml \
      --context kind-c2

In the last step in this scenario, we enable endpoint discovery between Kubernetes clusters. To do that, we have to install a remote secret in the kind-c2 cluster that provides access to the kind-c1 API server. And vice versa. Fortunately, Istio provides an experimental feature for generating remote secrets.

$ istioctl x create-remote-secret --context=kind-c1 --name=kind-c1 
$ istioctl x create-remote-secret --context=kind-c2 --name=kind-c2

Before applying the generated secrets, we need to change the address of the cluster. Instead of localhost and a dynamically generated port, we have to use c1-control-plane:6443 for the first cluster and, respectively, c2-control-plane:6443 for the second cluster. The remote secrets generated for my clusters are committed in the project repository as k8s/secret1.yaml and k8s/secret2.yaml. You can compare them with the secrets generated for your clusters. Replace them with your secrets, but remember to change the address of your clusters.
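For illustration, the relevant fragment of the generated secret's embedded kubeconfig looks roughly like this after the change. The field values below are placeholders, not data from my clusters:

```yaml
clusters:
  - cluster:
      certificate-authority-data: <generated-ca-data>
      # replace the localhost address generated by istioctl with the
      # Kind control-plane endpoint reachable from the other cluster
      server: https://c1-control-plane:6443
    name: kind-c1
```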

$ kubectl apply -f k8s/secret1.yaml --context kind-c2
$ kubectl apply -f k8s/secret2.yaml --context kind-c1

Configure Mirroring with Istio

We are going to deploy our sample applications in the default namespace. Therefore, automatic sidecar injection should be enabled for that namespace.

$ kubectl label --context kind-c1 namespace default \
    istio-injection=enabled
$ kubectl label --context kind-c2 namespace default \
    istio-injection=enabled

Before configuring the Istio rules, let's deploy the v1 version of the callme-service application on the kind-c1 cluster.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: callme-service-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: callme-service
      version: v1
  template:
    metadata:
      labels:
        app: callme-service
        version: v1
    spec:
      containers:
        - name: callme-service
          image: piomin/callme-service
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          env:
            - name: VERSION
              value: "v1"

Then, we will deploy the v2 version of the callme-service application on the kind-c2 cluster.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: callme-service-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: callme-service
      version: v2
  template:
    metadata:
      labels:
        app: callme-service
        version: v2
    spec:
      containers:
        - name: callme-service
          image: piomin/callme-service
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          env:
            - name: VERSION
              value: "v2"

Of course, we should also create Kubernetes Service on both clusters.

apiVersion: v1
kind: Service
metadata:
  name: callme-service
  labels:
    app: callme-service
spec:
  type: ClusterIP
  ports:
  - port: 8080
    name: http
  selector:
    app: callme-service

The Istio DestinationRule defines two Subsets for callme-service basing on the version label.

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: callme-service-destination
spec:
  host: callme-service
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2

Finally, we may configure traffic mirroring with Istio. 50% of the traffic coming to the callme-service deployed on the kind-c1 cluster is mirrored to the callme-service deployed on the kind-c2 cluster.

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: callme-service-route
spec:
  hosts:
    - callme-service
  http:
    - route:
      - destination:
          host: callme-service
          subset: v1
        weight: 100
      mirror:
        host: callme-service
        subset: v2
      mirrorPercentage:
        value: 50.0

We will also deploy the caller-service application on the kind-c1 cluster. It calls the GET /callme/ping endpoint exposed by the callme-service application.
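The caller-service manifest is analogous to the callme-service manifests shown above. Here's a sketch of it; the image name and container port are assumptions based on the naming convention of the sample repository, so check the repository for the exact values:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: caller-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: caller-service
  template:
    metadata:
      labels:
        app: caller-service
    spec:
      containers:
        - name: caller-service
          image: piomin/caller-service   # assumed image name
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
```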

$ kubectl get pod --context kind-c1
NAME                                 READY   STATUS    RESTARTS   AGE
caller-service-b9dbbd6c8-q6dpg       2/2     Running   0          1h
callme-service-v1-7b65795f48-w7zlq   2/2     Running   0          1h

Let’s verify the list of running pods in the default namespace in the kind-c2 cluster.

$ kubectl get pod --context kind-c2
NAME                                 READY   STATUS    RESTARTS   AGE
callme-service-v2-665b876579-rsfks   2/2     Running   0          1h

In order to test Istio mirroring across multiple Kubernetes clusters, we call the GET /caller/ping endpoint exposed by the caller-service. As I mentioned before, it calls a similar endpoint exposed by the callme-service application with an HTTP client. The simplest way to test it is to enable port-forwarding, for example:

$ kubectl port-forward svc/caller-service 8080:8080 --context kind-c1

Thanks to that, the caller-service Service is available on local port 8080. Let's call that endpoint 20 times with siege.

$ siege -r 20 -c 1 http://localhost:8080/caller/service

After that, you can verify the logs for callme-service-v1 and callme-service-v2 deployments.

$ kubectl logs pod/callme-service-v1-7b65795f48-w7zlq --context kind-c1
$ kubectl logs pod/callme-service-v2-665b876579-rsfks --context kind-c2

You should see the following log 20 times for the kind-c1 cluster.

I'm callme-service v1

Respectively, you should see the following log around 10 times for the kind-c2 cluster, because we mirror 50% of the traffic from v1 to v2.

I'm callme-service v2
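As a quick sanity check on those numbers, the expected count of mirrored requests is just the request total scaled by mirrorPercentage, as illustrated by this standalone snippet (not part of the sample project):

```java
// Every request is routed to the v1 subset (weight: 100); mirrorPercentage
// decides what fraction is additionally copied, fire-and-forget, to v2.
public class MirrorMath {

    static long expectedMirrored(long totalRequests, double mirrorPercentage) {
        return Math.round(totalRequests * mirrorPercentage / 100.0);
    }

    public static void main(String[] args) {
        // 20 siege requests with mirrorPercentage 50 give about 10 mirrored calls
        System.out.println(expectedMirrored(20, 50.0));
    }
}
```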

Final Thoughts

This article shows how to create an Istio multicluster mesh with traffic mirroring between different networks. If you would like to simulate a similar scenario within the same network, you may use a tool called Submariner. You may find more details about running Submariner on Kubernetes in the article Kubernetes Multicluster with Kind and Submariner.

The post Multicluster Traffic Mirroring with Istio and Kind appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2021/07/12/multicluster-traffic-mirroring-with-istio-and-kind/feed/ 16 9902
Blue-green deployment with a database on Kubernetes https://piotrminkowski.com/2021/02/18/blue-green-deployment-with-a-database-on-kubernetes/ https://piotrminkowski.com/2021/02/18/blue-green-deployment-with-a-database-on-kubernetes/#comments Thu, 18 Feb 2021 11:03:33 +0000 https://piotrminkowski.com/?p=9450 In this article, you will learn how to use a blue-green deployment strategy on Kubernetes to propagate changes in the database. The database change process is an essential task for every software project. If your application connects to a database, the code with a database schema is as important as the application source code. Therefore, […]

The post Blue-green deployment with a database on Kubernetes appeared first on Piotr's TechBlog.

]]>
In this article, you will learn how to use a blue-green deployment strategy on Kubernetes to propagate changes in a database. The database change process is an essential task for every software project. If your application connects to a database, the database schema code is as important as the application source code. Therefore, you should store it in your version control system, and it should be a part of your CI/CD process. How can Kubernetes help in this process?

First of all, you can easily implement various deployment strategies on Kubernetes. One of them is blue-green deployment. In this approach, you maintain two copies of your production environment: blue and green. As a result, this technique reduces risk and minimizes downtime. It also fits the database schema and application model change process perfectly. Databases can often be a challenge, particularly if you need to change the schema to support a new version of the software.

Our scenario, however, will be very simple. After changing the data model, we release the second version of our application. At this point, all traffic is still forwarded to the first version of the application ("blue"). Then, we migrate the database to the new version. Finally, we switch all traffic to the latest version ("green").

Before starting with this article, it is a good idea to read a little more about Istio. You can find some interesting information about the technologies used in this example, like Istio, Kubernetes, or Spring Boot, in this article.

Source Code

If you would like to try it yourself, you can always take a look at my source code. To do that, you need to clone my GitHub repository. Then just follow my instructions 🙂

Tools used for blue-green deployment and database changes

We need two tools to perform a blue-green deployment with a database on Kubernetes. The first of them is Liquibase. It automates database schema change management and allows versioning of those changes. Moreover, with Liquibase we can easily roll back all previously applied schema modifications.

The second essential tool is Istio. It allows us to easily switch TCP traffic between various versions of deployments.

Should we use framework integration with Liquibase?

Modern Java frameworks like Spring Boot or Quarkus offer built-in integration with Liquibase. With that approach, we just need to create a Liquibase changelog and set its location, and the framework runs the script on application startup. While this is a very useful approach in development, I would not recommend it for production deployments, especially if you deploy your application on Kubernetes. Why?

Firstly, you are not able to determine how long it takes to run such a script against your database. It depends on the size of the database, the number of changes, and the current load. That makes it difficult to set an initial delay for the liveness probe. Defining a liveness probe for a deployment is obviously a good practice on Kubernetes. But if you set too long an initial delay, it will slow down your redeployment process. On the other hand, if you set too low a value, Kubernetes may kill your pod before the application starts.

Even if everything goes well, you may have downtime between applying the changes to the database and the start of the new application version. Ok, so what's the best solution in that case? We should include the Liquibase script in our deployment pipeline. It has to be executed after deploying the latest version of the application, but just before switching traffic to that version ("green").
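If the application still has Liquibase support on its classpath, it is also worth making sure it does not run migrations at startup on its own. In Spring Boot this can be done with a standard property (shown here for application.yml):

```yaml
spring:
  liquibase:
    enabled: false
```

With this set, the schema is changed only by the pipeline job, and application startup time stays predictable.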

Prerequisites

Before proceeding to the further steps you need to:

  1. Start your Kubernetes cluster (local or remote)
  2. Install Istio on Kubernetes following this guide
  3. Run PostgreSQL on Kubernetes. You can use the script from my GitHub repository.
  4. Prepare a Docker image with the Liquibase update command (instructions below)

Prepare a Docker image with Liquibase

In the first step, we need to create a Docker image with Liquibase that can be easily run on Kubernetes. It needs to execute the update command. We will use the official Liquibase image as the base image in our Dockerfile. There are four parameters that can be overridden: the address of the target database, the username, the password, and the location of the Liquibase changelog file. Here's our Dockerfile.

FROM liquibase/liquibase
ENV URL=jdbc:postgresql://postgresql:5432/test
ENV USERNAME=postgres
ENV PASSWORD=postgres
ENV CHANGELOGFILE=changelog.xml
CMD ["sh", "-c", "docker-entrypoint.sh --url=${URL} --username=${USERNAME} --password=${PASSWORD} --classpath=/liquibase/changelog --changeLogFile=${CHANGELOGFILE} update"]

Then, we just need to build it. However, you can skip that step, as I have already pushed this version of the image to my public Docker repository.

$ docker build -t piomin/liquibase .

Step 1. Create a table in the database with Liquibase

Let's create the first version of the database schema for our application. To do that, we need to define a Liquibase script. We will put that script inside a Kubernetes ConfigMap named liquibase-changelog-v1. It contains a simple CREATE TABLE SQL command.

apiVersion: v1
kind: ConfigMap
metadata:
  name: liquibase-changelog-v1
data:
  changelog.sql: |-
    --liquibase formatted sql

    --changeset piomin:1
    create table person (
      id serial primary key,
      firstname varchar(255),
      lastname varchar(255),
      age int
    );
    --rollback drop table person;

Then, let's create a Kubernetes Job that mounts the ConfigMap created in the previous step. The job runs immediately after it is created with the kubectl apply command.

apiVersion: batch/v1
kind: Job
metadata:
  name: liquibase-job-v1
spec:
  template:
    spec:
      containers:
        - name: liquibase
          image: piomin/liquibase
          env:
            - name: URL
              value: jdbc:postgresql://postgres:5432/bluegreen
            - name: USERNAME
              value: bluegreen
            - name: PASSWORD
              value: bluegreen
            - name: CHANGELOGFILE
              value: changelog.sql
          volumeMounts:
            - name: config-vol
              mountPath: /liquibase/changelog
      restartPolicy: Never
      volumes:
        - name: config-vol
          configMap:
            name: liquibase-changelog-v1

Finally, we can verify that the job has been executed successfully. To do that, we need to check the logs of the pod created by the liquibase-job-v1 job.

blue-green-deployment-on-kubernetes-liquibase

Step 2. Deploy the first version of the application

In the next step, we proceed to the deployment of our application. This simple Spring Boot application exposes a REST API and connects to a PostgreSQL database. Here's the entity class that corresponds to the previously created database schema.

@Entity
@Getter
@Setter
@NoArgsConstructor
public class Person {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    @Column(name = "firstname")
    private String firstName;
    @Column(name = "lastname")
    private String lastName;
    private int age;
}

In short, this is the first version (v1) of our application. Let's take a look at the Deployment manifest. We need to inject the database connection settings with environment variables. We also expose liveness and readiness probes using Spring Boot Actuator.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: person-v1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: person
      version: v1
  template:
    metadata:
      labels:
        app: person
        version: v1
    spec:
      containers:
      - name: person
        image: piomin/person-service
        ports:
        - containerPort: 8080
        env:
          - name: DATABASE_USER
            valueFrom:
              configMapKeyRef:
                key: POSTGRES_USER
                name: postgres-config
          - name: DATABASE_NAME
            valueFrom:
              configMapKeyRef:
                key: POSTGRES_DB
                name: postgres-config
          - name: DATABASE_PASSWORD
            valueFrom:
              secretKeyRef:
                key: POSTGRES_PASSWORD
                name: postgres-secret
        livenessProbe:
          httpGet:
            port: 8080
            path: /actuator/health/liveness
        readinessProbe:
          httpGet:
            port: 8080
            path: /actuator/health/readiness
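Because migrations run outside the application, startup time is predictable, so the probe timing discussed earlier can stay modest. The probes above could be tuned, for example, like this (values are illustrative and not taken from the original manifest):

```yaml
livenessProbe:
  httpGet:
    port: 8080
    path: /actuator/health/liveness
  initialDelaySeconds: 20
  periodSeconds: 10
  failureThreshold: 3
readinessProbe:
  httpGet:
    port: 8080
    path: /actuator/health/readiness
  initialDelaySeconds: 10
  periodSeconds: 5
```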

The readiness health check exposed by the application includes the status of the connection to the PostgreSQL database. Therefore, you can be sure that it works properly.

spring:
  application:
    name: person-service
  datasource:
    url: jdbc:postgresql://postgres:5432/${DATABASE_NAME}
    username: ${DATABASE_USER}
    password: ${DATABASE_PASSWORD}

management:
  endpoint:
    health:
      show-details: always
      group:
        readiness:
          include: db
      probes:
        enabled: true

We are running 2 instances of our application.

blue-green-deployment-on-kubernetes-kubectl

To conclude, here's our current status after Step 2.

blue-green-deployment-on-kubernetes-picture-arch

Step 3. Deploy the second version of the application with a blue-green strategy

Firstly, we perform a trivial modification of our entity model class: we change the names of two columns in the database using the @Column annotation. We replace firstname with first_name, and lastname with last_name, as shown below.

@Entity
@Getter
@Setter
@NoArgsConstructor
public class Person {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    @Column(name = "first_name")
    private String firstName;
    @Column(name = "last_name")
    private String lastName;
    private int age;
}

The Deployment manifest is very similar to the previous version of our application. Of course, the only difference is in the version label.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: person-v2
spec:
  replicas: 2
  selector:
    matchLabels:
      app: person
      version: v2
  template:
    metadata:
      labels:
        app: person
        version: v2
    spec:
      containers:
      - name: person
        image: piomin/person-service
        ports:
        - containerPort: 8080
        env:
          - name: DATABASE_USER
            valueFrom:
              configMapKeyRef:
                key: POSTGRES_USER
                name: postgres-config
          - name: DATABASE_NAME
            valueFrom:
              configMapKeyRef:
                key: POSTGRES_DB
                name: postgres-config
          - name: DATABASE_PASSWORD
            valueFrom:
              secretKeyRef:
                key: POSTGRES_PASSWORD
                name: postgres-secret
        livenessProbe:
          httpGet:
            port: 8080
            path: /actuator/health/liveness
        readinessProbe:
          httpGet:
            port: 8080
            path: /actuator/health/readiness

However, before deploying that version of the application, we need to apply the Istio rules. Istio should forward all traffic to person-v1. First, let's define a DestinationRule with two subsets related to versions v1 and v2.

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: person-destination
spec:
  host: person
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2

Then, we apply the following Istio rule. It forwards 100% of the incoming traffic to the person pods labelled with version=v1.

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: person-virtualservice
spec:
  hosts:
    - person
  http:
    - route:
      - destination:
          host: person
          subset: v1
        weight: 100
      - destination:
          host: person
          subset: v2
        weight: 0

Here’s our current status after applying changes in Step 3.

blue-green-deployment-on-kubernetes-picture-arch-new

Also, let's verify the current list of deployments.

Step 4. Modify database and switch traffic to the latest version

Currently, both versions of our application are running. So, if we modify the database schema and then switch traffic to the latest version, we achieve a zero-downtime deployment. The same as before, we create a ConfigMap that contains the Liquibase changelog file.

apiVersion: v1
kind: ConfigMap
metadata:
  name: liquibase-changelog-v2
data:
  changelog.sql: |-
    --liquibase formatted sql

    --changeset piomin:2
    alter table person rename column firstname to first_name;
    alter table person rename column lastname to last_name;

Then, we create a Kubernetes Job that uses the changelog file from the liquibase-changelog-v2 ConfigMap.

apiVersion: batch/v1
kind: Job
metadata:
  name: liquibase-job-v2
spec:
  template:
    spec:
      containers:
        - name: liquibase
          image: piomin/liquibase
          env:
            - name: URL
              value: jdbc:postgresql://postgres:5432/bluegreen
            - name: USERNAME
              value: bluegreen
            - name: PASSWORD
              value: bluegreen
            - name: CHANGELOGFILE
              value: changelog.sql
          volumeMounts:
            - name: config-vol
              mountPath: /liquibase/changelog
      restartPolicy: Never
      volumes:
        - name: config-vol
          configMap:
            name: liquibase-changelog-v2

Once the Kubernetes job has finished, we just need to update the Istio VirtualService to forward all traffic to the v2 version of the application.

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: person-virtualservice
spec:
  hosts:
    - person
  http:
    - route:
      - destination:
          host: person
          subset: v1
        weight: 0
      - destination:
          host: person
          subset: v2
        weight: 100

This is the last step in our blue-green deployment process on Kubernetes. The picture below illustrates the current status. The database schema has been updated, and all traffic is now sent to the v2 version of person-service.

Also, here's the current list of deployments.

Testing Blue-green deployment on Kubernetes

In order to easily test our blue-green deployment process, I created a second application, caller-service. It calls the endpoint GET /persons/{id} exposed by person-service.

@RestController
@RequestMapping("/caller")
public class CallerController {

    private RestTemplate restTemplate;

    public CallerController(RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    @GetMapping("/call")
    public String call() {
        ResponseEntity<String> response = restTemplate
                .getForEntity("http://person:8080/persons/1", String.class);
        if (response.getStatusCode().is2xxSuccessful())
            return response.getBody();
        else
            return "Error: HTTP " + response.getStatusCodeValue();
    }
}

Before testing, you should add at least one person to the database. You can use POST /persons for that. In this example, I'm using the port-forwarding feature.

$ curl http://localhost:8081/persons -H "Content-Type: application/json" -d '{"firstName":"John","lastName":"Smith","age":33}'

Ok, so here's the list of deployments you have before starting Step 4. You can see that every application pod contains two containers (except PostgreSQL). It means that the Istio sidecar has been injected into those pods.

Finally, just before executing Step 4, run the following script that calls the caller-service endpoint. As before, I'm using the port-forwarding feature.

$ siege -r 200 -c 1 http://localhost:8080/caller/call

Conclusion

In this article, I described step-by-step how to update your database and application data model on Kubernetes using a blue-green deployment strategy. I chose a scenario with conflicting changes, such as the modification of table column names.

The post Blue-green deployment with a database on Kubernetes appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2021/02/18/blue-green-deployment-with-a-database-on-kubernetes/feed/ 6 9450
Intro to OpenShift Service Mesh https://piotrminkowski.com/2020/08/06/intro-to-openshift-service-mesh-with-istio/ https://piotrminkowski.com/2020/08/06/intro-to-openshift-service-mesh-with-istio/#respond Thu, 06 Aug 2020 15:03:22 +0000 http://piotrminkowski.com/?p=8307 OpenShift 4 has introduced official support for service mesh based on the Istio framework. This support is built on top of Maistra operator. Maistra is an opinionated distribution of Istio designed to work with Openshift. It combines Kiali, Jaeger, and Prometheus into a platform managed by the operator. The current version of OpenShift Service Mesh […]

The post Intro to OpenShift Service Mesh appeared first on Piotr's TechBlog.

]]>
OpenShift 4 has introduced official support for a service mesh based on the Istio framework. This support is built on top of the Maistra operator. Maistra is an opinionated distribution of Istio designed to work with OpenShift. It combines Kiali, Jaeger, and Prometheus into a platform managed by the operator. The current version of OpenShift Service Mesh is 1.1.5. According to the documentation, this version of the service mesh supports Istio 1.4.8. For this tutorial, I was using OpenShift 4.4 installed on Azure.

In this article, I will not explain the basics of the Istio framework. If you do not have experience using Istio for building a service mesh on Kubernetes, you may refer to my article Service Mesh on Kubernetes with Istio and Spring Boot.

1. Install OpenShift Service Mesh operators

OpenShift 4 provides extensive support for Kubernetes operators. You can install them using the OpenShift Console. To do that, navigate to Operators -> OperatorHub, and then find Red Hat OpenShift Service Mesh. You can install some other operators to enable integration between the service mesh and additional components like Jaeger, Prometheus, or Kiali. In this tutorial, we are going to discuss the Kiali component. That's why we have to search for and install the Kiali Operator.

kiali

2. OpenShift Service mesh configuration

By default, Istio is installed inside the istio-system namespace. Although you can install it on OpenShift in any project, we will use istio-system, in line with best practices. Once the operator is installed inside the istio-system namespace, we can create the Service Mesh Control Plane. The only component that will be enabled is Kiali.

istio-controlplane
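The control plane itself is defined by a ServiceMeshControlPlane resource. A minimal sketch that enables only Kiali might look like the following (field layout per the maistra.io/v1 API; treat the exact keys as illustrative):

```yaml
apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
metadata:
  name: basic-install
  namespace: istio-system
spec:
  istio:
    kiali:
      enabled: true
    tracing:
      enabled: false
    grafana:
      enabled: false
```

The name basic-install is referenced later by the ServiceMeshMember's controlPlaneRef.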

In the next step, we are going to create the Service Mesh Member Roll and Service Mesh Member components. This time we have to prepare the YAML manifests. It is important to start with the ServiceMeshMemberRoll object. In this object, we define the list of projects that are part of our service mesh. Currently, there is only one project – microservices.

apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll
metadata:
  name: default
  namespace: istio-system
spec:
  members:
    - microservices

In the ServiceMeshMember definition, it is important to set the right namespace for the control plane. In our case, that is the istio-system namespace.

apiVersion: maistra.io/v1
kind: ServiceMeshMember
metadata:
  name: default
spec:
  controlPlaneRef:
    name: basic-install
    namespace: istio-system

Finally, we can take a look at the configuration of OpenShift Service Mesh Operator inside istio-system project.

openshift-service-mesh-configuration

Here’s the list of deployments inside istio-system namespace after installation of OpenShift Service Mesh.

openshift-service-mesh-deployments

3. Deploy applications on OpenShift

Let's switch to the microservices namespace. We will deploy our example microservices that communicate with each other. Each application will be deployed in two versions. We are using the same codebase, so the versions will be distinguished by the version label. Labels can be injected into the container using the Downward API.
The most important thing in the following Deployment definition is the sidecar.istio.io/inject annotation. It is responsible for enabling Istio sidecar injection for the application. The other applications and their versions have a similar deployment manifest structure.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: department-deployment-v1
spec:
  selector:
    matchLabels:
      app: department
      version: v1
  template:
    metadata:
      labels:
        app: department
        version: v1
      annotations:
        sidecar.istio.io/inject: "true"
    spec:
      containers:
      - name: department
        image: piomin/department-service
        ports:
        - containerPort: 8080
        volumeMounts:
          - mountPath: /etc/podinfo
            name: podinfo
      volumes:
        - name: podinfo
          downwardAPI:
            items:
              - path: "labels"
                fieldRef:
                  fieldPath: metadata.labels

The sample system consists of three microservices: employee-service, department-service, and organization-service. The source code of these applications is available on GitHub in the repository https://github.com/piomin/course-kubernetes-microservices/tree/openshift/simple-microservices. Each application is built on top of Spring Boot and uses the H2 in-memory database as a data store. You can build and deploy them on OpenShift using Skaffold (by executing the command skaffold dev), whose manifest (skaffold.yaml) is included in the repository.

I won't describe the implementation details of the example applications. They are written in Kotlin and use OpenJDK as a base image. If you are interested in more details, you may watch two parts of my online course: Microservices on Kubernetes: Inter-communication & gateway, and Microservices on Kubernetes: Service mesh.

Here's the list of applications deployed inside the microservices project.

openshift-service-mesh-apps

4. Istio configuration

After running all the sample applications, we can proceed to the Istio configuration. Because each application is deployed in two versions, we will define a DestinationRule per application with two subsets based on the value of the version label. Here's the example DestinationRule for employee-service.

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: employee-service-destination
spec:
  host: employee-service.microservices.svc.cluster.local
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2

Now, we can proceed to the definition of the VirtualService. Routing between different versions of the application is based on the value of the HTTP header X-Version.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: employee-service-route
spec:
  hosts:
    - employee-service.microservices.svc.cluster.local
  http:
    - match:
        - headers:
            X-Version:
              exact: v1
      route:
        - destination:
            host: employee-service.microservices.svc.cluster.local
            subset: v1
    - match:
        - headers:
            X-Version:
              exact: v2
      route:
        - destination:
            host: employee-service.microservices.svc.cluster.local
            subset: v2
    - route:
        - destination:
            host: employee-service.microservices.svc.cluster.local
            subset: v1

Similar YAML manifests will be prepared for the other microservices: department-service and organization-service.

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: department-service-destination
spec:
  host: department-service.microservices.svc.cluster.local
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2

And here's the VirtualService for department-service.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: department-service-route
spec:
  hosts:
    - department-service.microservices.svc.cluster.local
  http:
    - match:
        - headers:
            X-Version:
              exact: v1
      route:
        - destination:
            host: department-service.microservices.svc.cluster.local
            subset: v1
    - match:
        - headers:
            X-Version:
              exact: v2
      route:
        - destination:
            host: department-service.microservices.svc.cluster.local
            subset: v2
    - route:
        - destination:
            host: department-service.microservices.svc.cluster.local
            subset: v1

The whole configuration created so far is responsible for internal communication. Now, we will expose our application outside the OpenShift cluster. To do that, we need to create an Istio Gateway. It references the ingress gateway Service available in the istio-system namespace. That Service is exposed outside the cluster using an OpenShift Route.

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: microservices-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
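The Route that exposes the ingress gateway is created automatically by the operator, but conceptually it is equivalent to a manifest like this (a sketch; the generated name and target port may differ on your cluster):

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: istio-ingressgateway
  namespace: istio-system
spec:
  to:
    kind: Service
    name: istio-ingressgateway
  port:
    targetPort: http2
```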

Let's take a look at the list of available routes. Besides the Istio Gateway, we can also access the Kiali console outside the OpenShift cluster.

mesh-routes

The last thing we need to do is create the Istio virtual services responsible for routing from the Istio gateway to the applications. The configuration is pretty similar to the internal virtual services. The difference is that it references the Istio Gateway and routes to the downstream services based on the path prefix. If the path starts with /employee, the request is forwarded to employee-service, and so on. Here's the configuration for employee-service. A similar configuration has been prepared for both department-service and organization-service.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: employee-service-gateway-route
spec:
  hosts:
    - "*"
  gateways:
    - microservices-gateway
  http:
    - match:
        - headers:
            X-Version:
              exact: v1
          uri:
            prefix: "/employee"
      rewrite:
        uri: " "
      route:
        - destination:
            host: employee-service.microservices.svc.cluster.local
            subset: v1
    - match:
        - uri:
            prefix: "/employee"
          headers:
            X-Version:
              exact: v2
      rewrite:
        uri: " "
      route:
        - destination:
            host: employee-service.microservices.svc.cluster.local
            subset: v2

5. Test requests

The default hostname for my OpenShift cluster is istio-ingressgateway-istio-system.apps.np9zir0r.westeurope.aroapp.io. To add some test data I’m sending the following requests.

$ curl -X POST http://istio-ingressgateway-istio-system.apps.np9zir0r.westeurope.aroapp.io/department/departments -d "{\"name\":\"Test1\"}" -H "Content-Type: application/json" -H "X-Version:v1"
$ curl -X POST http://istio-ingressgateway-istio-system.apps.np9zir0r.westeurope.aroapp.io/department/departments -d "{\"name\":\"Test1\"}" -H "Content-Type: application/json" -H "X-Version:v2"
$ curl -X POST http://istio-ingressgateway-istio-system.apps.np9zir0r.westeurope.aroapp.io/organization/organizations -d "{\"name\":\"Test1\"}" -H "Content-Type: application/json" -H "X-Version:v1"
$ curl -X POST http://istio-ingressgateway-istio-system.apps.np9zir0r.westeurope.aroapp.io/organization/organizations -d "{\"name\":\"Test1\"}" -H "Content-Type: application/json" -H "X-Version:v2"
$ curl -X POST http://istio-ingressgateway-istio-system.apps.np9zir0r.westeurope.aroapp.io/employee/employees -d "{\"firstName\":\"John\",\"lastName\":\"Smith\",\"position\":\"director\",\"organizationId\":1,\"departmentId\":1}" -H "Content-Type: application/json" -H "X-Version:v1"
$ curl -X POST http://istio-ingressgateway-istio-system.apps.np9zir0r.westeurope.aroapp.io/employee/employees -d "{\"firstName\":\"Paul\",\"lastName\":\"Walker\",\"position\":\"architect\",\"organizationId\":1,\"departmentId\":1}" -H "Content-Type: application/json" -H "X-Version:v1"
$ curl -X POST http://istio-ingressgateway-istio-system.apps.np9zir0r.westeurope.aroapp.io/employee/employees -d "{\"firstName\":\"John\",\"lastName\":\"Smith\",\"position\":\"director\",\"organizationId\":1,\"departmentId\":1}" -H "Content-Type: application/json" -H "X-Version:v2"
$ curl -X POST http://istio-ingressgateway-istio-system.apps.np9zir0r.westeurope.aroapp.io/employee/employees -d "{\"firstName\":\"Paul\",\"lastName\":\"Walker\",\"position\":\"architect\",\"organizationId\":1,\"departmentId\":1}" -H "Content-Type: application/json" -H "X-Version:v2"

Now, we can test internal communication between the microservices. The following requests verify communication between department-service and employee-service.

$ curl http://istio-ingressgateway-istio-system.apps.np9zir0r.westeurope.aroapp.io/department/departments/1/with-employees -H "X-Version:v1"
$ curl http://istio-ingressgateway-istio-system.apps.np9zir0r.westeurope.aroapp.io/department/departments/1/with-employees -H "X-Version:v2"

6. Kiali

Finally, we access Kiali to take a look at the communication diagram. We first had to generate some test traffic.

openshift-service-mesh-kiali

Kiali also allows us to verify the Istio configuration. For example, we may see the list of virtual services per project.

openshift-service-mesh-kiali-services

We may also take a look at the details of each VirtualService.

openshift-service-mesh-kiali-service

The post Intro to OpenShift Service Mesh appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2020/08/06/intro-to-openshift-service-mesh-with-istio/feed/ 0 8307
Spring Boot Library for integration with Istio https://piotrminkowski.com/2020/06/10/spring-boot-library-for-integration-with-istio/ https://piotrminkowski.com/2020/06/10/spring-boot-library-for-integration-with-istio/#comments Wed, 10 Jun 2020 15:17:10 +0000 http://piotrminkowski.com/?p=8102 In this article I’m going to present an annotation-based Spring Boot library for integration with Istio. The Spring Boot Istio library provides auto-configuration, so you don’t have to do anything more than including it to your dependencies to be able to use it. The library is using Istio Java Client me.snowdrop:istio-client for communication with Istio […]

The post Spring Boot Library for integration with Istio appeared first on Piotr's TechBlog.

]]>
In this article, I'm going to present an annotation-based Spring Boot library for integration with Istio. The Spring Boot Istio library provides auto-configuration, so you don't have to do anything more than include it in your dependencies to be able to use it.
The library uses the Istio Java client me.snowdrop:istio-client for communication with the Istio API on Kubernetes. The following picture illustrates the architecture of the presented solution on Kubernetes. Spring Boot Istio works only during application startup. It is able to modify existing Istio resources or create new ones if no matching rules are found.
spring-boot-istio-arch

Source code

The source code of the library is available in my GitHub repository: https://github.com/piomin/spring-boot-istio.git.

How to use it

To use it in your Spring Boot application, include the following dependency.

<dependency>
   <groupId>com.github.piomin</groupId>
   <artifactId>spring-boot-istio</artifactId>
   <version>0.1.0.RELEASE</version>
</dependency>

After that, you should annotate one of your classes with @EnableIstio. The annotation contains several fields used to build the Istio DestinationRule and VirtualService objects.

@RestController
@RequestMapping("/caller")
@EnableIstio(version = "v1", timeout = 3, numberOfRetries = 3)
public class CallerController {

    private static final Logger LOGGER = LoggerFactory.getLogger(CallerController.class);

    @Autowired
    BuildProperties buildProperties;
    @Autowired
    RestTemplate restTemplate;
    @Value("${VERSION}")
    private String version;

    @GetMapping("/ping")
    public String ping() {
        LOGGER.info("Ping: name={}, version={}", buildProperties.getName(), version);
        String response = restTemplate.getForObject("http://callme-service:8080/callme/ping", String.class);
        LOGGER.info("Calling: response={}", response);
        return "I'm caller-service " + version + ". Calling... " + response;
    }
   
}

The names of the Istio objects are generated based on spring.application.name, so you need to set that property in your application.yml.

spring:
  application:
    name: caller-service

Currently, there are five fields that may be set on @EnableIstio.

@Target({ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
@Documented
public @interface EnableIstio {
    int timeout() default 0;
    String version() default "";
    int weight() default 0;
    int numberOfRetries() default 0;
    int circuitBreakerErrors() default 0;
}

Here's a detailed description of the available parameters.

  • version – indicates the version of the Istio subset. We may define multiple versions of the same application. The label name is version
  • weight – sets the weight assigned to the subset indicated by the version label
  • timeout – the total read timeout in seconds on the client side, including retries
  • numberOfRetries – enables the retry mechanism. By default, all 5XX HTTP codes are retried. The timeout of a single retry is calculated as timeout / numberOfRetries
  • circuitBreakerErrors – enables the circuit breaker mechanism. It is based on the number of consecutive HTTP 5XX errors. If the circuit is open for an application instance, it is ejected from the load-balancing pool for 30 seconds
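The relationship between timeout and numberOfRetries described above can be sketched in a few lines of Java. This is only an illustration of the stated rule (perTryTimeout = timeout / numberOfRetries); the method name and rounding are assumptions, not the library's actual code:

```java
// Hypothetical sketch of the per-try timeout derivation described above:
// perTryTimeout = timeout / numberOfRetries (0 means "not set").
public class PerTryTimeoutSketch {

    // Returns the per-try timeout in milliseconds, or 0 if either field is unset.
    static long perTryTimeoutMillis(int timeoutSeconds, int numberOfRetries) {
        if (timeoutSeconds == 0 || numberOfRetries == 0) {
            return 0; // annotation defaults: feature disabled
        }
        return (timeoutSeconds * 1000L) / numberOfRetries;
    }

    public static void main(String[] args) {
        // @EnableIstio(timeout = 3, numberOfRetries = 3) -> 1s per attempt
        System.out.println(perTryTimeoutMillis(3, 3)); // prints 1000
    }
}
```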

How it works

Here’s the Deployment definition of our sample service. It should be labelled with the same version as set inside @EnableIstio.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: caller-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: caller-service
  template:
    metadata:
      name: caller-service
      labels:
        app: caller-service
        version: v1
    spec:
      containers:
      - name: caller-service
        image: piomin/caller-service
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080

Let's deploy our sample application on Kubernetes. After deployment, we can verify the status of the Deployment.

spring-boot-istio-deployment

The name of the created DestinationRule is a concatenation of the spring.application.name property value and the suffix -destination.

spring-boot-istio-destinationrule

The name of the created VirtualService is a concatenation of the spring.application.name property value and the suffix -route. Here's the definition of the VirtualService created for the annotation @EnableIstio(version = "v1", timeout = 3, numberOfRetries = 3) and the caller-service application.

spring-boot-istio-virtualservice
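The naming convention described above can be expressed as a trivial helper. This is a sketch mirroring the description, not the library's actual IstioService implementation:

```java
// Sketch of the naming convention described above: resource names are the
// spring.application.name value plus a fixed suffix.
public class IstioNamesSketch {

    static String destinationRuleName(String applicationName) {
        return applicationName + "-destination";
    }

    static String virtualServiceName(String applicationName) {
        return applicationName + "-route";
    }

    public static void main(String[] args) {
        System.out.println(destinationRuleName("caller-service")); // caller-service-destination
        System.out.println(virtualServiceName("caller-service"));  // caller-service-route
    }
}
```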

How it is implemented

We need to define a bean that implements the BeanPostProcessor interface. On application startup, it looks for the @EnableIstio annotation. If the annotation exists, it reads the values of its fields and then creates new Istio objects or edits the currently existing ones.

public class EnableIstioAnnotationProcessor implements BeanPostProcessor {

    private final Logger LOGGER = LoggerFactory.getLogger(EnableIstioAnnotationProcessor.class);
    private ConfigurableListableBeanFactory configurableBeanFactory;
    private IstioClient istioClient;
    private IstioService istioService;

    public EnableIstioAnnotationProcessor(ConfigurableListableBeanFactory configurableBeanFactory, IstioClient istioClient, IstioService istioService) {
        this.configurableBeanFactory = configurableBeanFactory;
        this.istioClient = istioClient;
        this.istioService = istioService;
    }

    public Object postProcessBeforeInitialization(Object bean, String beanName) throws BeansException {
        EnableIstio enableIstioAnnotation =  bean.getClass().getAnnotation(EnableIstio.class);
        if (enableIstioAnnotation != null) {
            LOGGER.info("Istio feature enabled: {}", enableIstioAnnotation);

            Resource<DestinationRule, DoneableDestinationRule> resource = istioClient.v1beta1DestinationRule()
                    .withName(istioService.getDestinationRuleName());
            if (resource.get() == null) {
                createNewDestinationRule(enableIstioAnnotation);
            } else {
                editDestinationRule(enableIstioAnnotation, resource);
            }

            Resource<VirtualService, DoneableVirtualService> resource2 = istioClient.v1beta1VirtualService()
                    .withName(istioService.getVirtualServiceName());
            if (resource2.get() == null) {
                 createNewVirtualService(enableIstioAnnotation);
            } else {
                editVirtualService(enableIstioAnnotation, resource2);
            }
        }
        return bean;
    }
   
}

We are using the API provided by the Istio client library. It provides a set of builders dedicated to creating the elements of Istio objects.

private void createNewDestinationRule(EnableIstio enableIstioAnnotation) {
   DestinationRule dr = new DestinationRuleBuilder()
      .withMetadata(istioService.buildDestinationRuleMetadata())
      .withNewSpec()
      .withNewHost(istioService.getApplicationName())
      .withSubsets(istioService.buildSubset(enableIstioAnnotation))
      .withTrafficPolicy(istioService.buildCircuitBreaker(enableIstioAnnotation))
      .endSpec()
      .build();
   istioClient.v1beta1DestinationRule().create(dr);
   LOGGER.info("New DestinationRule created: {}", dr);
}

private void editDestinationRule(EnableIstio enableIstioAnnotation, Resource<DestinationRule, DoneableDestinationRule> resource) {
   LOGGER.info("Found DestinationRule: {}", resource.get());
   if (!enableIstioAnnotation.version().isEmpty()) {
      Optional<Subset> subset = resource.get().getSpec().getSubsets().stream()
         .filter(s -> s.getName().equals(enableIstioAnnotation.version()))
         .findAny();
      resource.edit()
         .editSpec()
         .addAllToSubsets(subset.isEmpty() ? List.of(istioService.buildSubset(enableIstioAnnotation)) :
                  Collections.emptyList())
            .editOrNewTrafficPolicyLike(istioService.buildCircuitBreaker(enableIstioAnnotation)).endTrafficPolicy()
         .endSpec()
         .done();
   }
}

private void createNewVirtualService(EnableIstio enableIstioAnnotation) {
   VirtualService vs = new VirtualServiceBuilder()
         .withNewMetadata().withName(istioService.getVirtualServiceName()).endMetadata()
      .withNewSpec()
      .addToHosts(istioService.getApplicationName())
      .addNewHttp()
      .withTimeout(enableIstioAnnotation.timeout() == 0 ? null : new Duration(0, (long) enableIstioAnnotation.timeout()))
      .withRetries(istioService.buildRetry(enableIstioAnnotation))
         .addNewRoute().withNewDestinationLike(istioService.buildDestination(enableIstioAnnotation)).endDestination().endRoute()
      .endHttp()
      .endSpec()
      .build();
   istioClient.v1beta1VirtualService().create(vs);
   LOGGER.info("New VirtualService created: {}", vs);
}

private void editVirtualService(EnableIstio enableIstioAnnotation, Resource<VirtualService, DoneableVirtualService> resource) {
   LOGGER.info("Found VirtualService: {}", resource.get());
   if (!enableIstioAnnotation.version().isEmpty()) {
      istioClient.v1beta1VirtualService().withName(istioService.getVirtualServiceName())
         .edit()
         .editSpec()
         .editFirstHttp()
         .withTimeout(enableIstioAnnotation.timeout() == 0 ? null : new Duration(0, (long) enableIstioAnnotation.timeout()))
         .withRetries(istioService.buildRetry(enableIstioAnnotation))
         .editFirstRoute()
         .withWeight(enableIstioAnnotation.weight() == 0 ? null: enableIstioAnnotation.weight())
            .editOrNewDestinationLike(istioService.buildDestination(enableIstioAnnotation)).endDestination()
         .endRoute()
         .endHttp()
         .endSpec()
         .done();
   }
}

The post Spring Boot Library for integration with Istio appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2020/06/10/spring-boot-library-for-integration-with-istio/feed/ 2 8102
Circuit breaker and retries on Kubernetes with Istio and Spring Boot https://piotrminkowski.com/2020/06/03/circuit-breaker-and-retries-on-kubernetes-with-istio-and-spring-boot/ https://piotrminkowski.com/2020/06/03/circuit-breaker-and-retries-on-kubernetes-with-istio-and-spring-boot/#comments Wed, 03 Jun 2020 07:29:41 +0000 http://piotrminkowski.com/?p=8072 An ability to handle communication failures in an inter-service communication is an absolute necessity for every single service mesh framework. It includes handling of timeouts and HTTP error codes. In this article I’m going to show how to configure retry and circuit breaker mechanisms using Istio. The same as for the previous article about Istio […]

The post Circuit breaker and retries on Kubernetes with Istio and Spring Boot appeared first on Piotr's TechBlog.

]]>
The ability to handle communication failures in inter-service communication is an absolute necessity for every service mesh framework. This includes the handling of timeouts and HTTP error codes. In this article, I'm going to show how to configure retry and circuit breaker mechanisms using Istio. As in the previous article, Service mesh on Kubernetes with Istio and Spring Boot, we will analyze the communication between two simple Spring Boot applications deployed on Kubernetes. But instead of a very basic example, we are going to discuss more advanced topics.

Example

To demonstrate the usage of Istio and Spring Boot, I created a repository on GitHub with two sample applications: callme-service and caller-service. The address of this repository is https://github.com/piomin/sample-istio-services.git. The same repository was used for the first article about the service mesh with Istio, already mentioned in the preface.

Architecture

The architecture of our sample system is pretty similar to the one in the previous article. However, there are some differences. We are not injecting a fault or delay using Istio components, but directly inside the application source code. Why? This way we can apply the rules created for callme-service directly, not on the client side as before. Also, we are running two instances of version v2 of the callme-service application to test how the circuit breaker works for more than one instance of the same service (or rather the same Deployment). The following picture illustrates the described architecture.

retries-and-circuit-breaker-on-kubernetes-istio-springboot-arch (1)

Spring Boot applications

We are starting with the implementation of the sample applications. The callme-service application exposes two endpoints that return information about the version and instance id. The endpoint GET /ping-with-random-error sets the HTTP 504 error code as a response for ~50% of requests. The endpoint GET /ping-with-random-delay returns a response with a random delay between 0s and 3s. Here's the implementation of the @RestController on the callme-service side.

@RestController
@RequestMapping("/callme")
public class CallmeController {

    private static final Logger LOGGER = LoggerFactory.getLogger(CallmeController.class);
    private static final String INSTANCE_ID = UUID.randomUUID().toString();
    private Random random = new Random();

    @Autowired
    BuildProperties buildProperties;
    @Value("${VERSION}")
    private String version;

    @GetMapping("/ping-with-random-error")
    public ResponseEntity<String> pingWithRandomError() {
        int r = random.nextInt(100);
        if (r % 2 == 0) {
            LOGGER.info("Ping with random error: name={}, version={}, random={}, httpCode={}",
                    buildProperties.getName(), version, r, HttpStatus.GATEWAY_TIMEOUT);
            return new ResponseEntity<>("Surprise " + INSTANCE_ID + " " + version, HttpStatus.GATEWAY_TIMEOUT);
        } else {
            LOGGER.info("Ping with random error: name={}, version={}, random={}, httpCode={}",
                    buildProperties.getName(), version, r, HttpStatus.OK);
            return new ResponseEntity<>("I'm callme-service" + INSTANCE_ID + " " + version, HttpStatus.OK);
        }
    }

    @GetMapping("/ping-with-random-delay")
    public String pingWithRandomDelay() throws InterruptedException {
        int r = new Random().nextInt(3000);
        LOGGER.info("Ping with random delay: name={}, version={}, delay={}", buildProperties.getName(), version, r);
        Thread.sleep(r);
        return "I'm callme-service " + version;
    }

}

The caller-service application also exposes two GET endpoints. It uses RestTemplate to call the corresponding GET endpoints exposed by callme-service. It also returns the version of caller-service, but there is only a single Deployment of that application, labeled with version=v1.

@RestController
@RequestMapping("/caller")
public class CallerController {

    private static final Logger LOGGER = LoggerFactory.getLogger(CallerController.class);

    @Autowired
    BuildProperties buildProperties;
    @Autowired
    RestTemplate restTemplate;
    @Value("${VERSION}")
    private String version;


    @GetMapping("/ping-with-random-error")
    public ResponseEntity<String> pingWithRandomError() {
        LOGGER.info("Ping with random error: name={}, version={}", buildProperties.getName(), version);
        ResponseEntity<String> responseEntity =
                restTemplate.getForEntity("http://callme-service:8080/callme/ping-with-random-error", String.class);
        LOGGER.info("Calling: responseCode={}, response={}", responseEntity.getStatusCode(), responseEntity.getBody());
        return new ResponseEntity<>("I'm caller-service " + version + ". Calling... " + responseEntity.getBody(), responseEntity.getStatusCode());
    }

    @GetMapping("/ping-with-random-delay")
    public String pingWithRandomDelay() {
        LOGGER.info("Ping with random delay: name={}, version={}", buildProperties.getName(), version);
        String response = restTemplate.getForObject("http://callme-service:8080/callme/ping-with-random-delay", String.class);
        LOGGER.info("Calling: response={}", response);
        return "I'm caller-service " + version + ". Calling... " + response;
    }

}

Handling retries in Istio

The definition of the Istio DestinationRule is the same as in my article Service mesh on Kubernetes with Istio and Spring Boot. There are two subsets created for instances labeled with version=v1 and version=v2. Retries and timeouts may be configured on the VirtualService. We may set the number of retries and the conditions under which a retry takes place (a list of enum strings). The following configuration also sets a 3s timeout for the whole request. Both these settings are available inside the HTTPRoute object. We also need to set a timeout per single attempt; in this case I set 1s. How does it work in practice? We will analyze it in a simple example.

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: callme-service-destination
spec:
  host: callme-service
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: callme-service-route
spec:
  hosts:
    - callme-service
  http:
    - route:
      - destination:
          host: callme-service
          subset: v2
        weight: 80
      - destination:
          host: callme-service
          subset: v1
        weight: 20
      retries:
        attempts: 3
        perTryTimeout: 1s
        retryOn: 5xx
      timeout: 3s
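To make the interplay between attempts, perTryTimeout, and the overall timeout concrete, here is a small client-side simulation of the retry policy above. It is only a sketch of the semantics, not Envoy's implementation, and the attempt durations are made-up values:

```java
import java.util.List;

// Simulates the retry policy from the VirtualService above: an initial call plus
// up to `attempts` retries, each attempt cut off after perTryTimeout, and the
// whole request failed once the overall timeout budget is exhausted.
public class RetryPolicySketch {

    // Returns the 1-based index of the attempt that succeeded, or -1 on failure.
    static int simulate(List<Long> attemptDurationsMs, int attempts,
                        long perTryTimeoutMs, long overallTimeoutMs) {
        long elapsed = 0;
        for (int i = 0; i < attemptDurationsMs.size() && i <= attempts; i++) {
            long d = Math.min(attemptDurationsMs.get(i), perTryTimeoutMs);
            if (elapsed + d > overallTimeoutMs) {
                return -1; // overall 3s timeout exceeded
            }
            elapsed += d;
            if (attemptDurationsMs.get(i) <= perTryTimeoutMs) {
                return i + 1; // attempt finished before its per-try timeout
            }
            // otherwise the attempt timed out after perTryTimeout and is retried
        }
        return -1;
    }

    public static void main(String[] args) {
        // Two slow attempts (timed out at 1s each), third succeeds in ~400ms,
        // matching the scenario analyzed later in the article.
        System.out.println(simulate(List.of(2500L, 1800L, 400L), 3, 1000L, 3000L)); // prints 3
    }
}
```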

Before deploying the sample applications we should increase the logging level. We may easily enable Istio access logging. Thanks to that, Envoy proxies print access logs with all incoming requests and outgoing responses to their standard output. Analyzing the log entries will be especially useful for detecting retry attempts.

$ istioctl manifest apply --set profile=default --set meshConfig.accessLogFile="/dev/stdout"

Now, let’s send a test request to the HTTP endpoint GET /caller/ping-with-random-delay. It calls the randomly delayed callme-service endpoint GET /callme/ping-with-random-delay. Here’s the request and response for that operation.

circuit-breaker-and-retries-on-kubernetes-istio-springboot-retry-curl

Seemingly, it's a very clear situation. But let's check out what is happening under the hood. I have highlighted the sequence of retries. As you can see, Istio performed two retries, since the first two attempts took longer than perTryTimeout, which has been set to 1s. Both attempts were timed out by Istio, which can be verified in its access logs. The third attempt was successful, since it took around 400ms.

kubernetes-istio-springboot-retry-timeout

Retrying on timeout is not the only retry option available in Istio. In fact, we may retry on all 5XX or even 4XX codes. The VirtualService for testing just error codes is much simpler, since we don't have to configure any timeouts.

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: callme-service-route
spec:
  hosts:
    - callme-service
  http:
    - route:
      - destination:
          host: callme-service
          subset: v2
        weight: 80
      - destination:
          host: callme-service
          subset: v1
        weight: 20
      retries:
        attempts: 3
        retryOn: gateway-error,connect-failure,refused-stream

We are going to call the HTTP endpoint GET /caller/ping-with-random-error, which calls the endpoint GET /callme/ping-with-random-error exposed by callme-service. It returns HTTP 504 for around 50% of incoming requests. Here's the request and a successful response with the 200 OK HTTP code.

kubernetes-istio-springboot-retry-curl2

Here are the logs, which illustrate what happened on the callme-service side. The request was retried twice, since the first two attempts resulted in an HTTP error code.

circuit-breaker-and-retries-on-kubernetes-istio-springboot-retry-error

Istio circuit breaker

A circuit breaker is configured on the DestinationRule object. We are using TrafficPolicy for that. First, we remove the retries used in the previous sample from the VirtualService definition. We should also disable any retries on the connectionPool inside TrafficPolicy. And now the most important part. To configure a circuit breaker in Istio we use the OutlierDetection object. The Istio circuit breaker implementation is based on consecutive errors returned by the downstream service. The number of subsequent errors may be configured using the properties consecutive5xxErrors or consecutiveGatewayErrors. The only difference between them is in the HTTP errors they handle. While consecutiveGatewayErrors covers just 502, 503 and 504, consecutive5xxErrors covers all 5XX codes. In the following configuration of callme-service-destination I set consecutive5xxErrors to 3. It means that after 3 errors in a row, an instance (pod) of the application is removed from load balancing for 1 minute (baseEjectionTime=1m). Because we are running two pods of callme-service in version v2, we also need to override the default value of maxEjectionPercent to 100%. The default value of that property is 10%, and it indicates the maximum percentage of hosts in the load-balancing pool that can be ejected.

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: callme-service-destination
spec:
  host: callme-service
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
  trafficPolicy:
    connectionPool:
      http:
        http1MaxPendingRequests: 1
        maxRequestsPerConnection: 1
        maxRetries: 0
    outlierDetection:
      consecutive5xxErrors: 3
      interval: 30s
      baseEjectionTime: 1m
      maxEjectionPercent: 100
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: callme-service-route
spec:
  hosts:
    - callme-service
  http:
    - route:
      - destination:
          host: callme-service
          subset: v2
        weight: 80
      - destination:
          host: callme-service
          subset: v1
        weight: 20
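The ejection behavior configured above can be modeled in a few lines of Java. This sketch only mirrors the consecutive5xxErrors / baseEjectionTime semantics described in the article; the real logic lives in Envoy:

```java
// Models Istio outlier detection as configured above: after 3 consecutive
// 5XX responses a host is ejected from the load-balancing pool for
// baseEjectionTime (1 minute in the DestinationRule).
public class OutlierDetectorSketch {

    private final int consecutive5xxErrors;
    private final long baseEjectionTimeMs;
    private int errorStreak = 0;
    private long ejectedUntil = -1;

    OutlierDetectorSketch(int consecutive5xxErrors, long baseEjectionTimeMs) {
        this.consecutive5xxErrors = consecutive5xxErrors;
        this.baseEjectionTimeMs = baseEjectionTimeMs;
    }

    // Record a response status observed at time `nowMs`.
    void record(int httpStatus, long nowMs) {
        if (httpStatus >= 500) {
            errorStreak++;
            if (errorStreak >= consecutive5xxErrors) {
                ejectedUntil = nowMs + baseEjectionTimeMs; // eject the host
            }
        } else {
            errorStreak = 0; // any success resets the streak
        }
    }

    boolean isEjected(long nowMs) {
        return nowMs < ejectedUntil;
    }

    public static void main(String[] args) {
        OutlierDetectorSketch host = new OutlierDetectorSketch(3, 60_000);
        host.record(504, 0);
        host.record(504, 10);
        System.out.println(host.isEjected(20));     // false: only 2 errors in a row
        host.record(504, 20);
        System.out.println(host.isEjected(30));     // true: ejected for 1 minute
        System.out.println(host.isEjected(70_000)); // false: ejection expired
    }
}
```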

The fastest way to deploy both applications is with Jib and Skaffold. First, go to the callme-service directory and execute the skaffold dev command with the optional --port-forward parameter.

$ cd callme-service
$ skaffold dev --port-forward

Then do the same for caller-service.

$ cd caller-service
$ skaffold dev --port-forward

Before sending some test requests, let's run a second instance of the v2 version of callme-service, since the Deployment sets the replicas parameter to 1. To do that, we need to run the following command.

$ kubectl scale --replicas=2 deployment/callme-service-v2

Now, let's verify the status of the deployment on Kubernetes. There are 3 deployments. The deployment callme-service-v2 has two running pods.

istio-circuit-breaker-and-retries-on-kubernetes-springboot-deployment

After that, we are ready to send some test requests. We are calling the endpoint GET /caller/ping-with-random-error exposed by caller-service, which calls the endpoint GET /callme/ping-with-random-error exposed by callme-service. The endpoint exposed by callme-service returns HTTP 504 for 50% of requests. I have already set port forwarding for caller-service on 8080, so the command used to call the application is: curl http://localhost:8080/caller/ping-with-random-error.
Now, let's analyze the responses from caller-service. I have highlighted the responses with the HTTP 504 error code from the instance of callme-service with version v2 and generated id 98c068bb-8d02-4d2a-9999-23951bbed6ad. After 3 error responses in a row from that instance, it is immediately removed from the load-balancing pool, which results in sending all subsequent requests to the second instance of callme-service v2, with id 00653617-58e1-4d59-9e36-3f98f9d403b8. Of course, there is still a single instance of callme-service v1 available, which receives 20% of the total requests sent by caller-service.

istio-circuit-breaker-and-retries-on-kubernetes-springboot-logs1

Ok, let's check what happens if the single instance of callme-service v1 returns 3 errors in a row. I have also highlighted those error responses in the picture with the logs visible below. Because there is only one instance of callme-service v1 in the pool, there is no way to redirect incoming traffic to other instances. That's why Istio returns HTTP 503 for subsequent requests sent to callme-service v1. The same response is returned for the next minute, while the circuit is open.

circuit-breaker-and-retries-on-kubernetes-istio-springboot-logs2

The post Circuit breaker and retries on Kubernetes with Istio and Spring Boot appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2020/06/03/circuit-breaker-and-retries-on-kubernetes-with-istio-and-spring-boot/feed/ 2 8072
Service mesh on Kubernetes with Istio and Spring Boot https://piotrminkowski.com/2020/06/01/service-mesh-on-kubernetes-with-istio-and-spring-boot/ https://piotrminkowski.com/2020/06/01/service-mesh-on-kubernetes-with-istio-and-spring-boot/#comments Mon, 01 Jun 2020 07:26:39 +0000 http://piotrminkowski.com/?p=8017 Istio is currently the leading solution for building service mesh on Kubernetes. Thanks to Istio you can take control of a communication process between microservices. It also lets you secure and observe your services. Spring Boot is still the most popular JVM framework for building microservice applications. In this article, I’m going to show how […]

The post Service mesh on Kubernetes with Istio and Spring Boot appeared first on Piotr's TechBlog.

]]>
Istio is currently the leading solution for building a service mesh on Kubernetes. Thanks to Istio, you can take control of the communication between microservices. It also lets you secure and observe your services. Spring Boot is still the most popular JVM framework for building microservice applications. In this article, I'm going to show how to use both these tools to build applications and provide communication between them over HTTP on Kubernetes.

Example of Istio Spring Boot

To demonstrate the usage of Istio and Spring Boot, I created a repository on GitHub with two sample applications: callme-service and caller-service. The address of this repository is https://github.com/piomin/sample-istio-services.git. The same repository was used for my previous article about Istio: Service Mesh on Kubernetes with Istio in 5 steps. I moved that example to the old_master branch, so if for any reason you are interested in traffic management with a previous major version of Istio (0.X), please refer to that branch and the article on my blog.
The source code is prepared to be used with the Skaffold and Jib tools. Both these tools simplify development on local Kubernetes. All you need to do to use them is download and install Skaffold, because the Jib plugin is already included in the Maven pom.xml, as shown below. For details about development using both these tools, please refer to my article Local Java development on Kubernetes.

<plugin>
   <groupId>com.google.cloud.tools</groupId>
   <artifactId>jib-maven-plugin</artifactId>
   <version>2.1.0</version>
</plugin>

Installing Istio

To install Istio on your Kubernetes cluster, you need to run two commands after downloading it. The first of them is the istioctl command.


$ istioctl manifest apply --set profile=demo

To execute the second command, you also need the kubectl tool. I was running my samples on Kubernetes with Docker Desktop, and I had to set 4 CPUs with 8GB RAM, which are the recommended settings for testing Istio. You should run the following command against the namespace where you deploy your applications. I'm using the default namespace.

$ kubectl label namespace default istio-injection=enabled

Create Spring Boot applications

Now, let's consider the architecture visible in the picture below. There are two running instances of the callme-service application. These are two different versions of this application: v1 and v2. In our case, the only difference is in the Deployment, not in the code. The caller-service application communicates with callme-service. That traffic is managed by Istio, which sends 20% of requests to the v1 version of the application and 80% to the v2 version. It also adds a 3s delay to 33% of the traffic.

service-mesh-on-kubernetes-istio-springboot-arch1

Here’s the structure of application callme-service.

service-mesh-on-kubernetes-istio-spring-boot-sourcecode

As I mentioned before, there is no difference in the code, just a difference in the environment variables injected into the application. The implementation of the Spring @RestController responsible for handling incoming HTTP requests is very simple. It just injects the value of the environment variable VERSION and returns it in the response from the GET /ping endpoint.

@RestController
@RequestMapping("/callme")
public class CallmeController {

    private static final Logger LOGGER = LoggerFactory.getLogger(CallmeController.class);
    private static final String INSTANCE_ID = UUID.randomUUID().toString();
    private Random random = new Random();

    @Autowired
    BuildProperties buildProperties;
    @Value("${VERSION}")
    private String version;

    @GetMapping("/ping")
    public String ping() {
        LOGGER.info("Ping: name={}, version={}", buildProperties.getName(), version);
        return "I'm callme-service " + version;
    }
   
}

On the other side, there is caller-service with a similar GET /ping endpoint that calls the endpoint exposed by callme-service using Spring RestTemplate. It uses the name of the Kubernetes Service as the address of the target application.

@RestController
@RequestMapping("/caller")
public class CallerController {

    private static final Logger LOGGER = LoggerFactory.getLogger(CallerController.class);

    @Autowired
    BuildProperties buildProperties;
    @Autowired
    RestTemplate restTemplate;
    @Value("${VERSION}")
    private String version;
   
    @GetMapping("/ping")
    public String ping() {
        LOGGER.info("Ping: name={}, version={}", buildProperties.getName(), version);
        String response = restTemplate.getForObject("http://callme-service:8080/callme/ping", String.class);
        LOGGER.info("Calling: response={}", response);
        return "I'm caller-service " + version + ". Calling... " + response;
    }
   
}

Deploy Spring Boot application on Kubernetes

We are creating two Deployments on Kubernetes for two different versions of the same application, named callme-service-v1 and callme-service-v2. For the first of them we inject the environment variable VERSION=v1 into the container, while for the second VERSION=v2.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: callme-service-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: callme-service
      version: v1
  template:
    metadata:
      labels:
        app: callme-service
        version: v1
    spec:
      containers:
        - name: callme-service
          image: piomin/callme-service
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          env:
            - name: VERSION
              value: "v1"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: callme-service-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: callme-service
      version: v2
  template:
    metadata:
      labels:
        app: callme-service
        version: v2
    spec:
      containers:
        - name: callme-service
          image: piomin/callme-service
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          env:
            - name: VERSION
              value: "v2"
---
apiVersion: v1
kind: Service
metadata:
  name: callme-service
  labels:
    app: callme-service
spec:
  type: ClusterIP
  ports:
  - port: 8080
    name: http
  selector:
    app: callme-service

Of course there is also caller-service. We also need to deploy it. But this time there is only a single Deployment.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: caller-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: caller-service
  template:
    metadata:
      name: caller-service
      labels:
        app: caller-service
        version: v1
    spec:
      containers:
      - name: caller-service
        image: piomin/caller-service
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
        env:
          - name: VERSION
            value: "v1"
---
apiVersion: v1
kind: Service
metadata:
  name: caller-service
  labels:
    app: caller-service
spec:
  type: NodePort
  ports:
    - port: 8080
      name: http
  selector:
    app: caller-service

Istio rules

Finally, we are creating two Istio components: a DestinationRule and a VirtualService. The callme-service-destination destination rule contains definitions of subsets based on the version label from the Deployment. The callme-service-route virtual service uses these subsets and sets a weight for each of them. Additionally, it injects a 3s delay into the route for 33% of requests.

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: callme-service-destination
spec:
  host: callme-service
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: callme-service-route
spec:
  hosts:
    - callme-service
  http:
    - route:
      - destination:
          host: callme-service
          subset: v2
        weight: 80
      - destination:
          host: callme-service
          subset: v1
        weight: 20
      fault:
        delay:
          percentage:
            value: 33
          fixedDelay: 3s

Since Istio injects a delay into the route, we have to set a timeout on the client side (caller-service). To test that timeout we can't use port forwarding to call the endpoint directly on the pod, and calling the Kubernetes Service won't work either, because in both cases the request bypasses the Istio routing rules. That's why we will also create an Istio Gateway for caller-service. It is exposed on port 80 and uses the hostname caller.example.com.

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: caller-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "caller.example.com"
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: caller-service-destination
spec:
  host: caller-service
  subsets:
    - name: v1
      labels:
        version: v1
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: caller-service-route
spec:
  hosts:
    - "caller.example.com"
  gateways:
    - caller-gateway
  http:
    - route:
        - destination:
            host: caller-service
            subset: v1
      timeout: 0.5s

Testing Istio Spring Boot communication

The fastest way to deploy an application is with Jib and Skaffold. First, go to the callme-service directory and execute the skaffold dev command with the optional --port-forward parameter.

$ cd callme-service
$ skaffold dev --port-forward

Then do the same for caller-service.

$ cd caller-service
$ skaffold dev --port-forward

Both of our applications should now be successfully built and deployed on Kubernetes, with the Kubernetes and Istio manifests applied. Let's check out the list of deployments and running pods.

(Screenshot: kubectl output with the list of deployments and running pods.)

We can also verify the list of created Istio objects.

(Screenshot: kubectl output with the list of Istio objects.)

How can we access our Istio Ingress Gateway? Let’s take a look at its configuration.

(Screenshot: the configuration of the istio-ingressgateway Service.)

The Ingress Gateway is available on localhost:80. We just need to set the HTTP Host header on the call to the value defined in caller-gateway: caller.example.com. Here's a successful call without any delay.

(Screenshot: a successful curl call through the gateway.)
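
The Host-header trick can also be reproduced outside curl. Below is a small, self-contained Python sketch; it spins up a throwaway local server instead of the real ingress gateway so that it runs anywhere, but the request side is exactly what matters here: overriding the Host header on a call to localhost. Against the real cluster you would point it at localhost:80.

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Throwaway stand-in for the Istio ingress gateway: it simply echoes
# back the Host header it received, so we can see the override works.
class EchoHost(BaseHTTPRequestHandler):
    def do_GET(self):
        payload = self.headers.get("Host", "").encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), EchoHost)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The important part: set the Host header explicitly, exactly like
# curl -H "Host: caller.example.com" http://localhost:80/...
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/", headers={"Host": "caller.example.com"})
resp = conn.getresponse()
body = resp.read().decode()
print(resp.status, body)
server.shutdown()
```

Istio's ingress gateway matches incoming requests against the hosts declared in the Gateway and VirtualService, which is why the header must carry caller.example.com rather than localhost.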

Here's a call that has been delayed by 3 seconds on the callme-service side. Since we have set the timeout to 0.5s on the caller-service route, it finishes with HTTP 504 Gateway Timeout.

(Screenshot: a curl call failing with HTTP 504 after the 0.5s timeout.)

Now let's perform several of the same calls in a row. Because traffic from caller-service to callme-service is split 80% to 20% between the v2 and v1 versions, most responses read: I'm caller-service v1. Calling… I'm callme-service v2. Additionally, around ⅓ of the calls finish with the 0.5s timeout.
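
Those proportions follow directly from the manifests, and can be sanity-checked with a quick simulation (plain Python, assuming the mesh behaves exactly as configured: an 80/20 subset split, a 3s delay injected into 33% of calls, and a 0.5s timeout on the gateway route):

```python
import random

# Probabilities taken straight from the VirtualService manifests above.
V2_WEIGHT, FAULT_PCT, DELAY_S, TIMEOUT_S = 0.80, 0.33, 3.0, 0.5

def call(rng):
    """One simulated request through the mesh."""
    subset = "v2" if rng.random() < V2_WEIGHT else "v1"
    latency = DELAY_S if rng.random() < FAULT_PCT else 0.01
    # The injected 3s delay always exceeds the 0.5s route timeout.
    return "HTTP 504" if latency > TIMEOUT_S else f"ok ({subset})"

rng = random.Random(1)
outcomes = [call(rng) for _ in range(10_000)]
for result in ("ok (v2)", "ok (v1)", "HTTP 504"):
    print(result, round(outcomes.count(result) / len(outcomes), 2))
```

Roughly a third of the calls end in HTTP 504 regardless of the chosen subset, and among the successful ones v2 dominates v1 about four to one, matching what the screenshots show.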

(Screenshot: a series of curl calls showing the 80/20 traffic split and the timeouts.)

The post Service mesh on Kubernetes with Istio and Spring Boot appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2020/06/01/service-mesh-on-kubernetes-with-istio-and-spring-boot/feed/ 12 8017