service mesh Archives - Piotr's TechBlog
https://piotrminkowski.com/tag/service-mesh/
Java, Spring, Kotlin, microservices, Kubernetes, containers

Istio Spring Boot Library Released
https://piotrminkowski.com/2026/01/06/istio-spring-boot-library-released/
Tue, 06 Jan 2026
This article explains how to use my Spring Boot Istio library to generate and create Istio resources on a Kubernetes cluster during application startup. The library is primarily intended for development purposes. It aims to make it easier for developers to quickly and easily launch their applications within the Istio mesh. Of course, you can also use this library in production. However, its purpose is to generate resources from annotations in Java application code automatically.

You can also find many other articles on my blog about Istio. For example, this article is about Quarkus and tracing with Istio.

Source Code

Feel free to use my source code if you’d like to try it out yourself. To do that, you must clone my sample GitHub repository. Two sample applications for this exercise are available in the spring-boot-istio directory.

Prerequisites

We will start with the most straightforward Istio installation on a local Kubernetes cluster. This could be Minikube, which you can run with the following command. You can set slightly lower resource limits than I did.

minikube start --memory='8gb' --cpus='6'
ShellSession

To complete the exercise below, you need to install istioctl in addition to kubectl. Here you will find the available distributions for the latest versions of kubectl and istioctl. I install them on my laptop using Homebrew.

$ brew install kubectl
$ brew install istioctl
ShellSession

To install Istio with default parameters, run the following command:

istioctl install
ShellSession

After a moment, Istio should be running in the istio-system namespace.

It is also worth installing Kiali to verify the Istio resources we have created. Kiali is an observability and management tool for Istio that provides a web-based dashboard for service mesh monitoring. It visualizes service-to-service traffic, validates Istio configuration, and integrates with tools like Prometheus, Grafana, and Jaeger.

kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.28/samples/addons/kiali.yaml
ShellSession

Once Kiali is successfully installed on Kubernetes, we can expose its web dashboard locally with the following istioctl command:

istioctl dashboard kiali
ShellSession

To test access to the Istio mesh from outside the cluster, you need to expose the ingress gateway. To do this, run the minikube tunnel command.

Use Spring Boot Istio Library

To test our library’s functionality, we will create a simple Spring Boot application that exposes a single REST endpoint.

@RestController
@RequestMapping("/callme")
public class CallmeController {

    private static final Logger LOGGER = LoggerFactory
        .getLogger(CallmeController.class);

    @Autowired
    Optional<BuildProperties> buildProperties;
    @Value("${VERSION}")
    private String version;

    @GetMapping("/ping")
    public String ping() {
        LOGGER.info("Ping: name={}, version={}", buildProperties.isPresent() ?
            buildProperties.get().getName() : "callme-service", version);
        return "I'm callme-service " + version;
    }
    
}
Java

Then, in addition to the standard Spring Web starter, add the istio-spring-boot-starter dependency.

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
  <groupId>com.github.piomin</groupId>
  <artifactId>istio-spring-boot-starter</artifactId>
  <version>1.2.1</version>
</dependency>
XML

Finally, we must add the @EnableIstio annotation to our application’s main class. We can also enable Istio Gateway to expose the REST endpoint outside the cluster. An Istio Gateway is a component that controls how external traffic enters or leaves a service mesh.

@SpringBootApplication
@EnableIstio(enableGateway = true)
public class CallmeApplication {

	public static void main(String[] args) {
		SpringApplication.run(CallmeApplication.class, args);
	}
	
}
Java
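With enableGateway = true, the library also creates an Istio Gateway during startup. As a rough sketch of what such a generated resource might look like (the name and the .ext host are inferred from the defaults described later in this article; the exact generated object may differ):

```yaml
apiVersion: networking.istio.io/v1
kind: Gateway
metadata:
  name: callme-service-with-starter
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - callme-service-with-starter.ext
```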

Let’s deploy our application on the Kubernetes cluster. To do this, we must first create a role with the necessary permissions to manage Istio resources in the cluster. The role must be assigned to the ServiceAccount used by the application.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: callme-service-with-starter
rules:
  - apiGroups: ["networking.istio.io"]
    resources: ["virtualservices", "destinationrules", "gateways"]
    verbs: ["create", "get", "list", "watch", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: callme-service-with-starter
subjects:
  - kind: ServiceAccount
    name: callme-service-with-starter
    namespace: spring
roleRef:
  kind: Role
  name: callme-service-with-starter
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: callme-service-with-starter
YAML

Here are the Deployment and Service resources.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: callme-service-with-starter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: callme-service-with-starter
  template:
    metadata:
      labels:
        app: callme-service-with-starter
    spec:
      containers:
        - name: callme-service-with-starter
          image: piomin/callme-service-with-starter
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          env:
            - name: VERSION
              value: "v1"
      serviceAccountName: callme-service-with-starter
---
apiVersion: v1
kind: Service
metadata:
  name: callme-service-with-starter
  labels:
    app: callme-service-with-starter
spec:
  type: ClusterIP
  ports:
  - port: 8080
    name: http
  selector:
    app: callme-service-with-starter
YAML

The application repository is configured to run with Skaffold. Of course, you can also apply the YAML manifests to the cluster with kubectl apply. To do this, simply navigate to the callme-service-with-starter/k8s directory and apply the deployment.yaml file. As part of the exercise, we will run our applications in the spring namespace.

skaffold dev -n spring
ShellSession

The sample Spring Boot application creates two Istio objects at startup: a VirtualService and a Gateway. We can verify them in the Kiali dashboard.

[Image: spring-boot-istio-kiali]

The default host name generated for the gateway includes the deployment name and the .ext suffix. We can change the suffix name using the domain field in the @EnableIstio annotation. Assuming you run the minikube tunnel command, you can call the service using the Host header with the hostname in the following way:

$ curl http://localhost/callme/ping -H "Host:callme-service-with-starter.ext"
I'm callme-service v1
ShellSession
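For instance, assuming the domain field described above accepts the replacement suffix, the configuration could look like this (the value example.local is only an illustration):

```java
@SpringBootApplication
@EnableIstio(enableGateway = true, domain = "example.local")
public class CallmeApplication {

	public static void main(String[] args) {
		SpringApplication.run(CallmeApplication.class, args);
	}

}
```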

Additional Capabilities with Spring Boot Istio

The library’s behavior can be customized by modifying the @EnableIstio annotation. For example, you can enable the fault injection mechanism using the fault field. Both abort and delay are possible. You can make this change without redeploying the app. The library updates the existing VirtualService.

@SpringBootApplication
@EnableIstio(enableGateway = true, fault = @Fault(percentage = 50))
public class CallmeApplication {

	public static void main(String[] args) {
		SpringApplication.run(CallmeApplication.class, args);
	}
	
}
Java

Now, you can call the same endpoint through the gateway. There is a 50% chance that you will receive this response.

[Image: spring-boot-istio-curl]
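Conceptually, a percentage-based fault is an independent per-request decision. The following stand-alone Java sketch illustrates an abort-style fault (returning HTTP 500 as an example status; this is an illustration, not the library's or Envoy's implementation):

```java
import java.util.Random;

// Per-request fault decision: abort with the configured probability,
// otherwise let the request pass through.
public class FaultInjector {
    private final double percentage;
    private final Random random;

    public FaultInjector(double percentage, Random random) {
        this.percentage = percentage;
        this.random = random;
    }

    // Returns an HTTP status: 500 when the injected fault fires, 200 otherwise.
    public int handle() {
        return random.nextDouble() * 100 < percentage ? 500 : 200;
    }
}
```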

Now let’s analyze a slightly more complex scenario. Let’s assume that we are running two different versions of the same application on Kubernetes and use Istio to manage the versioning. Traffic is forwarded to a particular version based on the X-Version header in the incoming request. If the header value is v1, the request is sent to the application Pod labeled version=v1, and similarly for v2. Here’s the annotation for the v1 application’s main class.

@SpringBootApplication
@EnableIstio(enableGateway = true, version = "v1",
	matches = { 
	  @Match(type = MatchType.HEADERS, key = "X-Version", value = "v1") })
public class CallmeApplication {

	public static void main(String[] args) {
		SpringApplication.run(CallmeApplication.class, args);
	}
	
}
Java

The Deployment manifest for the callme-service-with-starter-v1 should define two labels: app and version.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: callme-service-with-starter-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: callme-service-with-starter
      version: v1
  template:
    metadata:
      labels:
        app: callme-service-with-starter
        version: v1
    spec:
      containers:
        - name: callme-service-with-starter
          image: piomin/callme-service-with-starter
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          env:
            - name: VERSION
              value: "v1"
      serviceAccountName: callme-service-with-starter
YAML

Unlike the skaffold dev command, the skaffold run command simply launches the application on the cluster and terminates. Let’s first release version v1, and then move on to version v2.

skaffold run -n spring
ShellSession

Then, we can deploy the v2 version of our sample Spring Boot application. In this exercise, it is just an “artificial” version, since we deploy the same source code, but with different labels and environment variables injected in the Deployment manifest.

@SpringBootApplication
@EnableIstio(enableGateway = true, version = "v2",
	matches = { 
	  @Match(type = MatchType.HEADERS, key = "X-Version", value = "v2") })
public class CallmeApplication {

	public static void main(String[] args) {
		SpringApplication.run(CallmeApplication.class, args);
	}
	
}
Java

Here’s the callme-service-with-starter-v2 Deployment manifest.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: callme-service-with-starter-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: callme-service-with-starter
      version: v2
  template:
    metadata:
      labels:
        app: callme-service-with-starter
        version: v2
    spec:
      containers:
        - name: callme-service-with-starter
          image: piomin/callme-service-with-starter
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          env:
            - name: VERSION
              value: "v2"
      serviceAccountName: callme-service-with-starter
YAML

The Service, on the other hand, remains unchanged. It still refers to Pods labeled with app=callme-service-with-starter. However, this time it covers both application instances, v1 and v2.

apiVersion: v1
kind: Service
metadata:
  name: callme-service-with-starter
  labels:
    app: callme-service-with-starter
spec:
  type: ClusterIP
  ports:
  - port: 8080
    name: http
  selector:
    app: callme-service-with-starter
YAML

Version v2 should be run in the same way as before, using the skaffold run command. Three Istio objects are generated during application startup: Gateway, VirtualService, and DestinationRule.

[Image: spring-boot-istio-versioning]

The generated DestinationRule contains two subsets, for the v1 and v2 versions.

[Image: spring-boot-istio-subsets]
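A DestinationRule with two such subsets typically looks like the following (a sketch based on Istio's API; the resource name is hypothetical and the generated object may differ in details):

```yaml
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: callme-service-with-starter-destination
spec:
  host: callme-service-with-starter
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
```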

The automatically generated VirtualService looks as follows.

kind: VirtualService
apiVersion: networking.istio.io/v1
metadata:
  name: callme-service-with-starter-route
spec:
  hosts:
  - callme-service-with-starter
  - callme-service-with-starter.ext
  gateways:
  - callme-service-with-starter
  http:
  - match:
    - headers:
        X-Version:
          prefix: v1
    route:
    - destination:
        host: callme-service-with-starter
        subset: v1
    timeout: 6s
    retries:
      attempts: 3
      perTryTimeout: 2s
      retryOn: 5xx
  - match:
    - headers:
        X-Version:
          prefix: v2
    route:
    - destination:
        host: callme-service-with-starter
        subset: v2
      weight: 100
    timeout: 6s
    retries:
      attempts: 3
      perTryTimeout: 2s
      retryOn: 5xx
YAML
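The match/route pairs above boil down to a per-request decision on the X-Version header. A stand-alone Java sketch of that decision (an illustration only, not code from the library or Envoy); note that the generated rule uses prefix matching on the header value:

```java
// Mirrors the VirtualService above: pick a subset by the X-Version
// header prefix; with no match, no route applies.
public class VersionRouter {
    public static String chooseSubset(String xVersion) {
        if (xVersion != null && xVersion.startsWith("v1")) return "v1";
        if (xVersion != null && xVersion.startsWith("v2")) return "v2";
        return "none";
    }
}
```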

To test the versioning mechanism generated using the Spring Boot Istio library, set the X-Version header for each call.

$ curl http://localhost/callme/ping -H "Host:callme-service-with-starter.ext" -H "X-Version:v1"
I'm callme-service v1

$ curl http://localhost/callme/ping -H "Host:callme-service-with-starter.ext" -H "X-Version:v2"
I'm callme-service v2
ShellSession

Conclusion

I am still working on this library, and new features will be added in the near future. I hope it will be helpful for those of you who want to get started with Istio without getting into the details of its configuration.

Multicluster Traffic Mirroring with Istio and Kind
https://piotrminkowski.com/2021/07/12/multicluster-traffic-mirroring-with-istio-and-kind/
Mon, 12 Jul 2021
In this article, you will learn how to create an Istio mesh with mirroring between multiple Kubernetes clusters running on Kind. We will deploy the same application in two Kubernetes clusters, and then we will mirror the traffic between those clusters. When might such a scenario be useful?

Let’s assume we have two Kubernetes clusters. The first of them is a production cluster, while the second is a test cluster. While the production cluster receives heavy incoming traffic, the test cluster receives none. What can we do in such a situation? We can simply send a portion of the production traffic to the test cluster. With Istio, you can also mirror internal traffic, e.g. between microservices.
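The mirroring idea can be sketched in plain Java (a deterministic stand-in for the percentage-based decision the sidecar proxy actually makes; the caller always receives the primary's response, while the mirror's response is discarded):

```java
import java.util.ArrayList;
import java.util.List;

// Fire-and-forget mirroring: the caller always gets the primary's response,
// while a share of requests is additionally copied to a mirror whose
// response is never returned to the caller.
public class TrafficMirror {
    private final List<String> primaryLog = new ArrayList<>();
    private final List<String> mirrorLog = new ArrayList<>();
    private final int mirrorPercentage;
    private int counter;

    public TrafficMirror(int mirrorPercentage) {
        this.mirrorPercentage = mirrorPercentage;
    }

    public String handle(String request) {
        primaryLog.add(request);
        // Deterministic stand-in for the percentage decision:
        // with 50%, every second request is mirrored.
        if (mirrorPercentage > 0 && counter % (100 / mirrorPercentage) == 0) {
            mirrorLog.add(request); // the mirror's response is discarded
        }
        counter++;
        return "I'm callme-service v1"; // the response always comes from v1
    }

    public int primaryCount() { return primaryLog.size(); }
    public int mirrorCount() { return mirrorLog.size(); }
}
```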

To simulate the scenario described above, we will create two Kubernetes clusters locally with Kind. Then, we will install the Istio mesh in multi-primary mode between different networks. The Kubernetes API server and the Istio gateway need to be accessible to pods running on the other cluster. We have two applications. The caller-service application runs on the c1 cluster and calls callme-service. The v1 version of the callme-service application is deployed on the c1 cluster, while the v2 version is deployed on the c2 cluster. We will mirror 50% of the traffic coming to the v1 version of our application to the v2 version running on the other cluster. The following picture illustrates our architecture.

[Image: istio-mirroring-arch]

Source Code

If you would like to try it yourself, you can always take a look at my source code. To do that, you need to clone my GitHub repository and then just follow my instructions.

Both applications are configured to be deployed with Skaffold. You just need to download the Skaffold CLI following the instructions available here. Of course, you also need to have Java and Maven available on your PC.

Create Kubernetes clusters with Kind

Firstly, let’s create two Kubernetes clusters using Kind. We don’t have to override any default settings, so we can just use the following command to create clusters.

$ kind create cluster --name c1
$ kind create cluster --name c2

Kind automatically creates a Kubernetes context and adds it to the config file. Just to verify, let’s display a list of running clusters.

$ kind get clusters
c1
c2

Also, we can display a list of contexts created by Kind.

$ kubectx | grep kind
kind-c1
kind-c2

Install MetalLB on Kubernetes clusters

To establish a connection between multiple clusters locally, we need to expose some services as LoadBalancer. That’s why we need to install MetalLB. MetalLB is a load-balancer implementation for bare-metal Kubernetes clusters. First, we have to create the metallb-system namespace. These operations should be performed on both clusters.

$ kubectl apply -f \
  https://raw.githubusercontent.com/metallb/metallb/master/manifests/namespace.yaml

Then, we are going to create the memberlist secret required by MetalLB.

$ kubectl create secret generic -n metallb-system memberlist \
  --from-literal=secretkey="$(openssl rand -base64 128)" 

Finally, let’s install MetalLB.

$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/master/manifests/metallb.yaml

In order to complete the configuration, we need to provide a range of IP addresses MetalLB controls. We want this range to be on the docker kind network.

$ docker network inspect -f '{{.IPAM.Config}}' kind

For me, it is the CIDR 172.20.0.0/16. Based on it, we can configure the MetalLB IP pool for each cluster. For the first cluster, c1, I’m setting addresses starting from 172.20.255.200 and ending with 172.20.255.250.

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.20.255.200-172.20.255.250

We need to apply the configuration to the first cluster.

$ kubectl apply -f k8s/metallb-c1.yaml --context kind-c1

For the second cluster c2 I’m setting addresses starting from 172.20.255.150 and ending with 172.20.255.199.

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.20.255.150-172.20.255.199

Finally, we can apply the configuration to the second cluster.

$ kubectl apply -f k8s/metallb-c2.yaml --context kind-c2
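Both address pools must lie inside the CIDR reported for the docker kind network. Here is a small, self-contained Java check of that containment (just a convenience for verification; it is not part of the MetalLB setup):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Checks whether an IPv4 address falls inside a CIDR block, so a MetalLB
// pool can be validated against the docker "kind" network range.
public class CidrCheck {
    public static boolean contains(String cidr, String address) {
        try {
            String[] parts = cidr.split("/");
            int prefix = Integer.parseInt(parts[1]);
            int net = toInt(InetAddress.getByName(parts[0]).getAddress());
            int addr = toInt(InetAddress.getByName(address).getAddress());
            int mask = prefix == 0 ? 0 : -1 << (32 - prefix);
            return (net & mask) == (addr & mask);
        } catch (UnknownHostException e) {
            throw new IllegalArgumentException(e);
        }
    }

    private static int toInt(byte[] b) {
        return ((b[0] & 0xFF) << 24) | ((b[1] & 0xFF) << 16)
                | ((b[2] & 0xFF) << 8) | (b[3] & 0xFF);
    }
}
```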

Install Istio on Kubernetes in multicluster mode

A multicluster service mesh deployment requires establishing trust between all clusters in the mesh. In order to do that, we should configure the Istio certificate authority (CA) with a root certificate, signing certificate, and key. We can easily do it using Istio tools. First, we need to go to the Istio installation directory. After that, we may use the Makefile.selfsigned.mk script available inside the tools/certs directory.

$ cd $ISTIO_HOME/tools/certs/

The following command generates the root certificate and key.

$ make -f Makefile.selfsigned.mk root-ca

The following command generates an intermediate certificate and key for the Istio CA for each cluster. This will generate the required files in a directory named with a cluster name.

$ make -f Makefile.selfsigned.mk kind-c1-cacerts
$ make -f Makefile.selfsigned.mk kind-c2-cacerts

Then we may create a Kubernetes Secret based on the generated certificates. The same operation should be performed for the second cluster, kind-c2.

$ kubectl create namespace istio-system
$ kubectl create secret generic cacerts -n istio-system \
      --from-file=kind-c1/ca-cert.pem \
      --from-file=kind-c1/ca-key.pem \
      --from-file=kind-c1/root-cert.pem \
      --from-file=kind-c1/cert-chain.pem

We are going to install Istio using the operator. It is important to set the same meshID for both clusters, but different networks. We also need to create an Istio gateway for communication between the two clusters inside a single mesh. It should be labeled with topology.istio.io/network=network1. The gateway definition also contains two environment variables: ISTIO_META_ROUTER_MODE and ISTIO_META_REQUESTED_NETWORK_VIEW. The first one sets the sni-dnat value, which adds the clusters required for AUTO_PASSTHROUGH mode.

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: kind-c1
      network: network1
  components:
    ingressGateways:
      - name: istio-eastwestgateway
        label:
          istio: eastwestgateway
          app: istio-eastwestgateway
          topology.istio.io/network: network1
        enabled: true
        k8s:
          env:
            - name: ISTIO_META_ROUTER_MODE
              value: "sni-dnat"
            - name: ISTIO_META_REQUESTED_NETWORK_VIEW
              value: network1
          service:
            ports:
              - name: status-port
                port: 15021
                targetPort: 15021
              - name: tls
                port: 15443
                targetPort: 15443
              - name: tls-istiod
                port: 15012
                targetPort: 15012
              - name: tls-webhook
                port: 15017
                targetPort: 15017

Before installing Istio we should label the istio-system namespace with topology.istio.io/network=network1. The Istio installation manifest is available in the repository as the k8s/istio-c1.yaml file.

$ kubectl --context kind-c1 label namespace istio-system \
      topology.istio.io/network=network1
$ istioctl install --config k8s/istio-c1.yaml \
      --context kind-c1

There is a similar IstioOperator definition for the second cluster. The only differences are the name of the network, which is now network2, and the name of the cluster.

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: kind-c2
      network: network2
  components:
    ingressGateways:
      - name: istio-eastwestgateway
        label:
          istio: eastwestgateway
          app: istio-eastwestgateway
          topology.istio.io/network: network2
        enabled: true
        k8s:
          env:
            - name: ISTIO_META_ROUTER_MODE
              value: "sni-dnat"
            - name: ISTIO_META_REQUESTED_NETWORK_VIEW
              value: network2
          service:
            ports:
              - name: status-port
                port: 15021
                targetPort: 15021
              - name: tls
                port: 15443
                targetPort: 15443
              - name: tls-istiod
                port: 15012
                targetPort: 15012
              - name: tls-webhook
                port: 15017
                targetPort: 15017

The same as for the first cluster, let’s label the istio-system namespace with topology.istio.io/network and install Istio using operator manifest.

$ kubectl --context kind-c2 label namespace istio-system \
      topology.istio.io/network=network2
$ istioctl install --config k8s/istio-c2.yaml \
      --context kind-c2

Configure multicluster connectivity

Since the clusters are on separate networks, we need to expose all local services on the gateway in both clusters. Services behind that gateway can be accessed only by services with a trusted TLS certificate and workload ID. The definition of the cross-network gateway is exactly the same for both clusters. You can find that manifest in the repository as k8s/istio-cross-gateway.yaml.

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: cross-network-gateway
spec:
  selector:
    istio: eastwestgateway
  servers:
    - port:
        number: 15443
        name: tls
        protocol: TLS
      tls:
        mode: AUTO_PASSTHROUGH
      hosts:
        - "*.local"

Let’s apply the Gateway object to both clusters.

$ kubectl apply -f k8s/istio-cross-gateway.yaml \
      --context kind-c1
$ kubectl apply -f k8s/istio-cross-gateway.yaml \
      --context kind-c2

In the last step in this scenario, we enable endpoint discovery between Kubernetes clusters. To do that, we have to install a remote secret in the kind-c2 cluster that provides access to the kind-c1 API server. And vice versa. Fortunately, Istio provides an experimental feature for generating remote secrets.

$ istioctl x create-remote-secret --context=kind-c1 --name=kind-c1 
$ istioctl x create-remote-secret --context=kind-c2 --name=kind-c2

Before applying the generated secrets, we need to change the address of the cluster. Instead of localhost and a dynamically generated port, we have to use c1-control-plane:6443 for the first cluster and, respectively, c2-control-plane:6443 for the second cluster. The remote secrets generated for my clusters are committed to the project repository as k8s/secret1.yaml and k8s/secret2.yaml. You can compare them with the secrets generated for your clusters. Replace them with your secrets, but remember to change the address of your clusters.
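For reference, the part that needs editing is the server field inside the kubeconfig embedded in the secret. After the change, the relevant fragment should look roughly like this for the first cluster (a sketch; the certificate data is omitted and the rest of the secret stays as generated):

```yaml
clusters:
  - cluster:
      certificate-authority-data: <generated>
      server: https://c1-control-plane:6443
    name: kind-c1
```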

$ kubectl apply -f k8s/secret1.yaml --context kind-c2
$ kubectl apply -f k8s/secret2.yaml --context kind-c1

Configure Mirroring with Istio

We are going to deploy our sample applications in the default namespace. Therefore, automatic sidecar injection should be enabled for that namespace.

$ kubectl label --context kind-c1 namespace default \
    istio-injection=enabled
$ kubectl label --context kind-c2 namespace default \
    istio-injection=enabled

Before configuring Istio rules, let’s deploy the v1 version of the callme-service application on the kind-c1 cluster.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: callme-service-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: callme-service
      version: v1
  template:
    metadata:
      labels:
        app: callme-service
        version: v1
    spec:
      containers:
        - name: callme-service
          image: piomin/callme-service
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          env:
            - name: VERSION
              value: "v1"

Then, we will deploy the v2 version of the callme-service application on the kind-c2 cluster.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: callme-service-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: callme-service
      version: v2
  template:
    metadata:
      labels:
        app: callme-service
        version: v2
    spec:
      containers:
        - name: callme-service
          image: piomin/callme-service
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          env:
            - name: VERSION
              value: "v2"

Of course, we should also create Kubernetes Service on both clusters.

apiVersion: v1
kind: Service
metadata:
  name: callme-service
  labels:
    app: callme-service
spec:
  type: ClusterIP
  ports:
  - port: 8080
    name: http
  selector:
    app: callme-service

The Istio DestinationRule defines two subsets for callme-service based on the version label.

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: callme-service-destination
spec:
  host: callme-service
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2

Finally, we may configure traffic mirroring with Istio. 50% of the traffic coming to the callme-service deployed on the kind-c1 cluster is also sent to the callme-service deployed on the kind-c2 cluster.

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: callme-service-route
spec:
  hosts:
    - callme-service
  http:
    - route:
      - destination:
          host: callme-service
          subset: v1
        weight: 100
      mirror:
        host: callme-service
        subset: v2
      mirrorPercentage:
        value: 50.0

We will also deploy the caller-service application on the kind-c1 cluster. It calls the endpoint GET /callme/ping exposed by the callme-service application. Here’s the list of running pods in the default namespace in the kind-c1 cluster.

$ kubectl get pod --context kind-c1
NAME                                 READY   STATUS    RESTARTS   AGE
caller-service-b9dbbd6c8-q6dpg       2/2     Running   0          1h
callme-service-v1-7b65795f48-w7zlq   2/2     Running   0          1h

Let’s verify the list of running pods in the default namespace in the kind-c2 cluster.

$ kubectl get pod --context kind-c2
NAME                                 READY   STATUS    RESTARTS   AGE
callme-service-v2-665b876579-rsfks   2/2     Running   0          1h

In order to test Istio mirroring across multiple Kubernetes clusters, we call the endpoint GET /caller/ping exposed by caller-service. As I mentioned before, it calls a similar endpoint exposed by the callme-service application with an HTTP client. The simplest way to test it is by enabling port forwarding. Thanks to that, the caller-service Service is available on local port 8080. Let’s call that endpoint 20 times with siege.

$ siege -r 20 -c 1 http://localhost:8080/caller/ping

After that, you can verify the logs for callme-service-v1 and callme-service-v2 deployments.

$ kubectl logs pod/callme-service-v1-7b65795f48-w7zlq --context kind-c1
$ kubectl logs pod/callme-service-v2-665b876579-rsfks --context kind-c2

You should see the following log 20 times for the kind-c1 cluster.

I'm callme-service v1

Respectively, you should see the following log 10 times for the kind-c2 cluster, because we mirror 50% of the traffic from v1 to v2.

I'm callme-service v2

Final Thoughts

This article shows how to create an Istio multicluster mesh with traffic mirroring between different networks. If you would like to simulate a similar scenario within a single network, you can use a tool called Submariner. You can find more details about running Submariner on Kubernetes in the article Kubernetes Multicluster with Kind and Submariner.

Intro to OpenShift Service Mesh
https://piotrminkowski.com/2020/08/06/intro-to-openshift-service-mesh-with-istio/
Thu, 06 Aug 2020
OpenShift 4 has introduced official support for a service mesh based on the Istio framework. This support is built on top of the Maistra operator. Maistra is an opinionated distribution of Istio designed to work with OpenShift. It combines Kiali, Jaeger, and Prometheus into a platform managed by the operator. The current version of OpenShift Service Mesh is 1.1.5. According to the documentation, this version of the service mesh supports Istio 1.4.8. For this tutorial, I was using OpenShift 4.4 installed on Azure.

In this article, I will not explain the basics of the Istio framework. If you do not have experience using Istio to build a service mesh on Kubernetes, you may refer to my article Service Mesh on Kubernetes with Istio and Spring Boot.

1. Install OpenShift Service Mesh operators

OpenShift 4 provides extensive support for Kubernetes operators. You can install them using the OpenShift Console. To do that, navigate to Operators -> Operator Hub and find Red Hat OpenShift Service Mesh. You can install some other operators to enable integration between the service mesh and additional components like Jaeger, Prometheus, or Kiali. In this tutorial, we are going to discuss the Kiali component. That's why we have to search for and install the Kiali Operator.

kiali

2. OpenShift Service mesh configuration

By default, Istio is always installed inside the istio-system namespace. Although you can install it on OpenShift in any project, we will use istio-system – in line with best practices. Once the operator is installed inside the istio-system namespace, we may create the Service Mesh Control Plane. The only component that will be enabled is Kiali.

istio-controlplane

In the next step, we are going to create the Service Mesh Member Roll and Service Mesh Member components. This time we have to prepare the YAML manifests. It is important to start with the ServiceMeshMemberRoll object. In this object, we define the list of projects that are part of our service mesh. Currently, there is only one project – microservices.

apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll
metadata:
  name: default
  namespace: istio-system
spec:
  members:
    - microservices

In the ServiceMeshMember definition, it is important to set the right namespace for the control plane. In our case, that is the istio-system namespace.

apiVersion: maistra.io/v1
kind: ServiceMeshMember
metadata:
  name: default
spec:
  controlPlaneRef:
    name: basic-install
    namespace: istio-system

Finally, we can take a look at the configuration of OpenShift Service Mesh Operator inside istio-system project.

openshift-service-mesh-configuration

Here’s the list of deployments inside istio-system namespace after installation of OpenShift Service Mesh.

openshift-service-mesh-deployments

3. Deploy applications on OpenShift

Let’s switch to the microservices namespace. We will deploy our example microservices that communicate with each other. Each application would be deployed in two versions. We are using the same codebase, so each version would be distinguished based on the label version. Labels may be injected into the container using DownwardAPI.
The most important thing in the following Deployment definition is annotation sidecar.istio.io/inject. It is responsible for enabling Istio sidecar injection for the application. Other applications and their versions have a similar deployment manifest structure.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: department-deployment-v1
spec:
  selector:
    matchLabels:
      app: department
      version: v1
  template:
    metadata:
      labels:
        app: department
        version: v1
      annotations:
        sidecar.istio.io/inject: "true"
    spec:
      containers:
      - name: department
        image: piomin/department-service
        ports:
        - containerPort: 8080
        volumeMounts:
          - mountPath: /etc/podinfo
            name: podinfo
      volumes:
        - name: podinfo
          downwardAPI:
            items:
              - path: "labels"
                fieldRef:
                  fieldPath: metadata.labels
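As a side note, the application itself could read the mounted labels. Here's a minimal plain-Java sketch (not part of the sample repository) of parsing the Downward API labels file, assuming the usual key="value" line format it uses.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical helper: parses the labels file mounted by the Downward API
// at /etc/podinfo/labels, where each line has the form key="value".
public class PodLabels {

    static Map<String, String> parse(String fileContent) {
        Map<String, String> labels = new HashMap<>();
        for (String line : fileContent.split("\n")) {
            int eq = line.indexOf('=');
            if (eq < 0) {
                continue; // skip malformed lines
            }
            String key = line.substring(0, eq).trim();
            // strip the surrounding double quotes around the value
            String value = line.substring(eq + 1).trim().replaceAll("^\"|\"$", "");
            labels.put(key, value);
        }
        return labels;
    }

    public static void main(String[] args) {
        String content = "app=\"department\"\nversion=\"v1\"";
        System.out.println("version label: " + parse(content).get("version")); // prints: version label: v1
    }
}
```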

The sample system consists of three microservices: employee-service, department-service, and organization-service. The source code of those applications is available on GitHub in the repository https://github.com/piomin/course-kubernetes-microservices/tree/openshift/simple-microservices. Each application is built on top of Spring Boot and uses the H2 database as an in-memory data store. You can build and deploy them on OpenShift using Skaffold (by executing the skaffold dev command), whose manifest (skaffold.yaml) is included in the repository.

I won’t describe the implementation details about example applications. They are written in Kotlin, and use OpenJDK as a base image. If you are interested in more detailed pieces of information you may watch two parts of my online course Microservices on Kubernetes: Inter-communication & gateway, and Microservices on Kubernetes: Service mesh.

Here’s a list of applications deployed inside project microservices.

openshift-service-mesh-apps

4. Istio configuration

After running all the sample applications, we may proceed to the Istio configuration. Because each application is deployed in two versions, we will define a DestinationRule with a list of two subsets per application, based on the value of the version label. Here's the example DestinationRule for employee-service.

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: employee-service-destination
spec:
  host: employee-service.microservices.svc.cluster.local
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2

Now, we may proceed to the definition of the VirtualService. Routing between different versions of the application will be based on the value of the HTTP header X-Version.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: employee-service-route
spec:
  hosts:
    - employee-service.microservices.svc.cluster.local
  http:
    - match:
        - headers:
            X-Version:
              exact: v1
      route:
        - destination:
            host: employee-service.microservices.svc.cluster.local
            subset: v1
    - match:
        - headers:
            X-Version:
              exact: v2
      route:
        - destination:
            host: employee-service.microservices.svc.cluster.local
            subset: v2
    - route:
        - destination:
            host: employee-service.microservices.svc.cluster.local
            subset: v1
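The match rules above are evaluated in order, with the last route acting as a default. The same decision can be sketched in plain Java (illustration only, not part of the deployment):

```java
import java.util.Map;

// Illustration of the VirtualService logic: route on the X-Version header,
// falling back to the v1 subset when no match rule applies.
public class HeaderRouter {

    static String chooseSubset(Map<String, String> headers) {
        String version = headers.getOrDefault("X-Version", "");
        if (version.equals("v1")) {
            return "v1"; // first match rule
        }
        if (version.equals("v2")) {
            return "v2"; // second match rule
        }
        return "v1"; // final route without a match condition
    }

    public static void main(String[] args) {
        System.out.println(chooseSubset(Map.of("X-Version", "v2"))); // prints: v2
        System.out.println(chooseSubset(Map.of()));                  // prints: v1
    }
}
```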

Similar YAML manifests will be prepared for the other microservices: department-service and organization-service. Here's the DestinationRule for department-service.

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: department-service-destination
spec:
  host: department-service.microservices.svc.cluster.local
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2

And here's the VirtualService for department-service.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: department-service-route
spec:
  hosts:
    - department-service.microservices.svc.cluster.local
  http:
    - match:
        - headers:
            X-Version:
              exact: v1
      route:
        - destination:
            host: department-service.microservices.svc.cluster.local
            subset: v1
    - match:
        - headers:
            X-Version:
              exact: v2
      route:
        - destination:
            host: department-service.microservices.svc.cluster.local
            subset: v2
    - route:
        - destination:
            host: department-service.microservices.svc.cluster.local
            subset: v1

The whole configuration created so far was responsible for internal communication. Now, we will expose our application outside the OpenShift cluster. To do that, we need to create an Istio Gateway. It references the ingress gateway Service available in the istio-system namespace. That Service is exposed outside the cluster using an OpenShift Route.

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: microservices-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"

Let’s take a look at the list of available routes. Besides Istio Gateway we can also access Kiali console outside the OpenShift cluster.

mesh-routes

The last thing we need to do is create the Istio virtual services responsible for routing from the Istio Gateway to the applications. The configuration is pretty similar to the internal virtual services. The difference is that it references the Istio Gateway and routes to the downstream services based on the path prefix. If the path starts with /employee, the request is forwarded to employee-service, etc. Here's the configuration for employee-service. A similar configuration has been prepared for both department-service and organization-service.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: employee-service-gateway-route
spec:
  hosts:
    - "*"
  gateways:
    - microservices-gateway
  http:
    - match:
        - headers:
            X-Version:
              exact: v1
          uri:
            prefix: "/employee"
      rewrite:
        uri: " "
      route:
        - destination:
            host: employee-service.microservices.svc.cluster.local
            subset: v1
    - match:
        - uri:
            prefix: "/employee"
          headers:
            X-Version:
              exact: v2
      rewrite:
        uri: " "
      route:
        - destination:
            host: employee-service.microservices.svc.cluster.local
            subset: v2
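The prefix match and rewrite in the route above can be sketched in plain Java. This is an illustration of the routing logic only; note that the YAML uses a single space as the rewrite target – a common workaround, since an empty rewrite value is not accepted – while the sketch simply strips the matched prefix:

```java
// Illustration of the gateway route: a request matches on the /employee
// prefix, and the matched prefix is stripped before forwarding.
public class GatewayRewrite {

    static String route(String path, String prefix) {
        if (!path.startsWith(prefix)) {
            return null; // no match - Istio would evaluate the next rule
        }
        return path.substring(prefix.length());
    }

    public static void main(String[] args) {
        // /employee/employees reaches employee-service as /employees
        System.out.println(route("/employee/employees", "/employee")); // prints: /employees
    }
}
```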

5. Test requests

The default hostname for my OpenShift cluster is istio-ingressgateway-istio-system.apps.np9zir0r.westeurope.aroapp.io. To add some test data I’m sending the following requests.

$ curl -X POST http://istio-ingressgateway-istio-system.apps.np9zir0r.westeurope.aroapp.io/department/departments -d "{\"name\":\"Test1\"}" -H "Content-Type: application/json" -H "X-Version:v1"
$ curl -X POST http://istio-ingressgateway-istio-system.apps.np9zir0r.westeurope.aroapp.io/department/departments -d "{\"name\":\"Test1\"}" -H "Content-Type: application/json" -H "X-Version:v2"
$ curl -X POST http://istio-ingressgateway-istio-system.apps.np9zir0r.westeurope.aroapp.io/organization/organizations -d "{\"name\":\"Test1\"}" -H "Content-Type: application/json" -H "X-Version:v1"
$ curl -X POST http://istio-ingressgateway-istio-system.apps.np9zir0r.westeurope.aroapp.io/organization/organizations -d "{\"name\":\"Test1\"}" -H "Content-Type: application/json" -H "X-Version:v2"
$ curl -X POST http://istio-ingressgateway-istio-system.apps.np9zir0r.westeurope.aroapp.io/employee/employees -d "{\"firstName\":\"John\",\"lastName\":\"Smith\",\"position\":\"director\",\"organizationId\":1,\"departmentId\":1}" -H "Content-Type: application/json" -H "X-Version:v1"
$ curl -X POST http://istio-ingressgateway-istio-system.apps.np9zir0r.westeurope.aroapp.io/employee/employees -d "{\"firstName\":\"Paul\",\"lastName\":\"Walker\",\"position\":\"architect\",\"organizationId\":1,\"departmentId\":1}" -H "Content-Type: application/json" -H "X-Version:v1"
$ curl -X POST http://istio-ingressgateway-istio-system.apps.np9zir0r.westeurope.aroapp.io/employee/employees -d "{\"firstName\":\"John\",\"lastName\":\"Smith\",\"position\":\"director\",\"organizationId\":1,\"departmentId\":1}" -H "Content-Type: application/json" -H "X-Version:v2"
$ curl -X POST http://istio-ingressgateway-istio-system.apps.np9zir0r.westeurope.aroapp.io/employee/employees -d "{\"firstName\":\"Paul\",\"lastName\":\"Walker\",\"position\":\"architect\",\"organizationId\":1,\"departmentId\":1}" -H "Content-Type: application/json" -H "X-Version:v2"

Now, we can test internal communication between the microservices. The following requests verify communication between department-service and employee-service.

$ curl http://istio-ingressgateway-istio-system.apps.np9zir0r.westeurope.aroapp.io/department/departments/1/with-employees -H "X-Version:v1"
$ curl http://istio-ingressgateway-istio-system.apps.np9zir0r.westeurope.aroapp.io/department/departments/1/with-employees -H "X-Version:v2"

6. Kiali

Finally, we access Kiali to take a look at the communication diagram. We had to generate some test traffic beforehand.

openshift-service-mesh-kiali

Kiali allows us to verify the Istio configuration. For example, we may see the list of virtual services per project.

openshift-service-mesh-kiali-services

We may also take a look at the details of each VirtualService.

openshift-service-mesh-kiali-service

The post Intro to OpenShift Service Mesh appeared first on Piotr's TechBlog.

Spring Boot Library for integration with Istio https://piotrminkowski.com/2020/06/10/spring-boot-library-for-integration-with-istio/ https://piotrminkowski.com/2020/06/10/spring-boot-library-for-integration-with-istio/#comments Wed, 10 Jun 2020 15:17:10 +0000 http://piotrminkowski.com/?p=8102 In this article I’m going to present an annotation-based Spring Boot library for integration with Istio. The Spring Boot Istio library provides auto-configuration, so you don’t have to do anything more than including it to your dependencies to be able to use it. The library is using Istio Java Client me.snowdrop:istio-client for communication with Istio […]

The post Spring Boot Library for integration with Istio appeared first on Piotr's TechBlog.

In this article I’m going to present an annotation-based Spring Boot library for integration with Istio. The Spring Boot Istio library provides auto-configuration, so you don’t have to do anything more than including it to your dependencies to be able to use it.
The library is using Istio Java Client me.snowdrop:istio-client for communication with Istio API on Kubernetes. The following picture illustrates an architecture of the presented solution on Kubernetes. The Spring Boot Istio is working just during application startup. It is able to modify existing Istio resources or create the new one if there are no matching rules found.
spring-boot-istio-arch

Source code

The source code of the library is available in my GitHub repository https://github.com/piomin/spring-boot-istio.git.

How to use it

To use it in your Spring Boot application, include the following dependency.

<dependency>
   <groupId>com.github.piomin</groupId>
   <artifactId>spring-boot-istio</artifactId>
   <version>0.1.0.RELEASE</version>
</dependency>

After that, you should annotate one of your classes with @EnableIstio. The annotation contains several fields used for the Istio DestinationRule and VirtualService objects.

@RestController
@RequestMapping("/caller")
@EnableIstio(version = "v1", timeout = 3, numberOfRetries = 3)
public class CallerController {

    private static final Logger LOGGER = LoggerFactory.getLogger(CallerController.class);

    @Autowired
    BuildProperties buildProperties;
    @Autowired
    RestTemplate restTemplate;
    @Value("${VERSION}")
    private String version;

    @GetMapping("/ping")
    public String ping() {
        LOGGER.info("Ping: name={}, version={}", buildProperties.getName(), version);
        String response = restTemplate.getForObject("http://callme-service:8080/callme/ping", String.class);
        LOGGER.info("Calling: response={}", response);
        return "I'm caller-service " + version + ". Calling... " + response;
    }
   
}

The names of the Istio objects are generated based on spring.application.name, so you need to provide that name in your application.yml.

spring:
  application:
    name: caller-service

Currently, there are five fields available on @EnableIstio.

@Target({ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
@Documented
public @interface EnableIstio {
    int timeout() default 0;
    String version() default "";
    int weight() default 0;
    int numberOfRetries() default 0;
    int circuitBreakerErrors() default 0;
}

Here’s the detailed description of available parameters.

  • version – it indicates the version of IstioSubset. We may define multiple versions of the same application. The name of label is version
  • weight – it sets a weight assigned to the Subset indicated by the version label
  • timeout – a total read timeout in seconds on the client side – including retries
  • numberOfRetries – it enables retry mechanism. By default we are retrying all 5XX HTTP codes. The timeout of a single retry is calculated as timeout / numberOfRetries
  • circuitBreakerErrors – it enables circuit breaker mechanism. It is based on a number of consecutive HTTP 5XX errors. If circuit is open for a single application it is ejected from the pool for 30 seconds
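The per-try timeout rule from the numberOfRetries description can be sketched as follows (a plain-Java illustration, not the library's code):

```java
// Sketch of the per-try timeout rule: the total read timeout is split
// evenly across the configured number of retries.
public class RetryTimeouts {

    static double perTryTimeoutSeconds(int timeoutSeconds, int numberOfRetries) {
        if (numberOfRetries <= 0) {
            return timeoutSeconds; // no retries - the whole budget for one try
        }
        return (double) timeoutSeconds / numberOfRetries;
    }

    public static void main(String[] args) {
        // @EnableIstio(timeout = 3, numberOfRetries = 3) -> 1s per attempt
        System.out.println(perTryTimeoutSeconds(3, 3)); // prints: 1.0
    }
}
```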

How it works

Here’s the Deployment definition of our sample service. It should be labelled with the same version as set inside @EnableIstio.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: caller-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: caller-service
  template:
    metadata:
      name: caller-service
      labels:
        app: caller-service
        version: v1
    spec:
      containers:
      - name: caller-service
        image: piomin/caller-service
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080

Let’s deploy our sample application on Kubernetes. After deploy we may verify the status of Deployment.

spring-boot-istio-deployment

The name of the created DestinationRule is a concatenation of the spring.application.name property value and the suffix -destination.

spring-boot-istio-destinationrule

The name of the created VirtualService is a concatenation of the spring.application.name property value and the suffix -route. Here's the definition of the VirtualService created for the annotation @EnableIstio(version = "v1", timeout = 3, numberOfRetries = 3) and the caller-service application.

spring-boot-istio-virtualservice
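The naming convention described above can be summarized in a small sketch. The helper method names below are mine for illustration, not the library's actual API:

```java
// Sketch of the naming convention: resource names are derived from
// spring.application.name plus a fixed suffix.
public class IstioNames {

    static String destinationRuleName(String applicationName) {
        return applicationName + "-destination";
    }

    static String virtualServiceName(String applicationName) {
        return applicationName + "-route";
    }

    public static void main(String[] args) {
        // with spring.application.name=caller-service
        System.out.println(destinationRuleName("caller-service")); // prints: caller-service-destination
        System.out.println(virtualServiceName("caller-service"));  // prints: caller-service-route
    }
}
```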

How it is implemented

We need to define a bean that implements the BeanPostProcessor interface. On application startup, it looks for the @EnableIstio annotation. If the annotation exists, it reads the values of its fields and then creates new Istio objects or edits the existing ones.

public class EnableIstioAnnotationProcessor implements BeanPostProcessor {

    private final Logger LOGGER = LoggerFactory.getLogger(EnableIstioAnnotationProcessor.class);
    private ConfigurableListableBeanFactory configurableBeanFactory;
    private IstioClient istioClient;
    private IstioService istioService;

    public EnableIstioAnnotationProcessor(ConfigurableListableBeanFactory configurableBeanFactory, IstioClient istioClient, IstioService istioService) {
        this.configurableBeanFactory = configurableBeanFactory;
        this.istioClient = istioClient;
        this.istioService = istioService;
    }

    public Object postProcessBeforeInitialization(Object bean, String beanName) throws BeansException {
        EnableIstio enableIstioAnnotation =  bean.getClass().getAnnotation(EnableIstio.class);
        if (enableIstioAnnotation != null) {
            LOGGER.info("Istio feature enabled: {}", enableIstioAnnotation);

            Resource<DestinationRule, DoneableDestinationRule> resource = istioClient.v1beta1DestinationRule()
                    .withName(istioService.getDestinationRuleName());
            if (resource.get() == null) {
                createNewDestinationRule(enableIstioAnnotation);
            } else {
                editDestinationRule(enableIstioAnnotation, resource);
            }

            Resource<VirtualService, DoneableVirtualService> resource2 = istioClient.v1beta1VirtualService()
                    .withName(istioService.getVirtualServiceName());
            if (resource2.get() == null) {
                 createNewVirtualService(enableIstioAnnotation);
            } else {
                editVirtualService(enableIstioAnnotation, resource2);
            }
        }
        return bean;
    }
   
}

We are using the API provided by the Istio client library. It provides a set of builders dedicated to creating the elements of Istio objects.

private void createNewDestinationRule(EnableIstio enableIstioAnnotation) {
   DestinationRule dr = new DestinationRuleBuilder()
      .withMetadata(istioService.buildDestinationRuleMetadata())
      .withNewSpec()
      .withNewHost(istioService.getApplicationName())
      .withSubsets(istioService.buildSubset(enableIstioAnnotation))
      .withTrafficPolicy(istioService.buildCircuitBreaker(enableIstioAnnotation))
      .endSpec()
      .build();
   istioClient.v1beta1DestinationRule().create(dr);
   LOGGER.info("New DestinationRule created: {}", dr);
}

private void editDestinationRule(EnableIstio enableIstioAnnotation, Resource<DestinationRule, DoneableDestinationRule> resource) {
   LOGGER.info("Found DestinationRule: {}", resource.get());
   if (!enableIstioAnnotation.version().isEmpty()) {
      Optional<Subset> subset = resource.get().getSpec().getSubsets().stream()
         .filter(s -> s.getName().equals(enableIstioAnnotation.version()))
         .findAny();
      resource.edit()
         .editSpec()
         .addAllToSubsets(subset.isEmpty() ? List.of(istioService.buildSubset(enableIstioAnnotation)) :
                  Collections.emptyList())
            .editOrNewTrafficPolicyLike(istioService.buildCircuitBreaker(enableIstioAnnotation)).endTrafficPolicy()
         .endSpec()
         .done();
   }
}

private void createNewVirtualService(EnableIstio enableIstioAnnotation) {
   VirtualService vs = new VirtualServiceBuilder()
         .withNewMetadata().withName(istioService.getVirtualServiceName()).endMetadata()
      .withNewSpec()
      .addToHosts(istioService.getApplicationName())
      .addNewHttp()
      .withTimeout(enableIstioAnnotation.timeout() == 0 ? null : new Duration(0, (long) enableIstioAnnotation.timeout()))
      .withRetries(istioService.buildRetry(enableIstioAnnotation))
         .addNewRoute().withNewDestinationLike(istioService.buildDestination(enableIstioAnnotation)).endDestination().endRoute()
      .endHttp()
      .endSpec()
      .build();
   istioClient.v1beta1VirtualService().create(vs);
   LOGGER.info("New VirtualService created: {}", vs);
}

private void editVirtualService(EnableIstio enableIstioAnnotation, Resource<VirtualService, DoneableVirtualService> resource) {
   LOGGER.info("Found VirtualService: {}", resource.get());
   if (!enableIstioAnnotation.version().isEmpty()) {
      istioClient.v1beta1VirtualService().withName(istioService.getVirtualServiceName())
         .edit()
         .editSpec()
         .editFirstHttp()
         .withTimeout(enableIstioAnnotation.timeout() == 0 ? null : new Duration(0, (long) enableIstioAnnotation.timeout()))
         .withRetries(istioService.buildRetry(enableIstioAnnotation))
         .editFirstRoute()
         .withWeight(enableIstioAnnotation.weight() == 0 ? null: enableIstioAnnotation.weight())
            .editOrNewDestinationLike(istioService.buildDestination(enableIstioAnnotation)).endDestination()
         .endRoute()
         .endHttp()
         .endSpec()
         .done();
   }
}

The post Spring Boot Library for integration with Istio appeared first on Piotr's TechBlog.

Service mesh on Kubernetes with Istio and Spring Boot https://piotrminkowski.com/2020/06/01/service-mesh-on-kubernetes-with-istio-and-spring-boot/ https://piotrminkowski.com/2020/06/01/service-mesh-on-kubernetes-with-istio-and-spring-boot/#comments Mon, 01 Jun 2020 07:26:39 +0000 http://piotrminkowski.com/?p=8017 Istio is currently the leading solution for building service mesh on Kubernetes. Thanks to Istio you can take control of a communication process between microservices. It also lets you secure and observe your services. Spring Boot is still the most popular JVM framework for building microservice applications. In this article, I’m going to show how […]

The post Service mesh on Kubernetes with Istio and Spring Boot appeared first on Piotr's TechBlog.

Istio is currently the leading solution for building a service mesh on Kubernetes. Thanks to Istio, you can take control of the communication between microservices. It also lets you secure and observe your services. Spring Boot is still the most popular JVM framework for building microservice applications. In this article, I'm going to show how to use both these tools to build applications and provide communication between them over HTTP on Kubernetes.

Example of Istio Spring Boot

To demonstrate the usage of Istio and Spring Boot, I created a repository on GitHub with two sample applications: callme-service and caller-service. The address of this repository is https://github.com/piomin/sample-istio-services.git. The same repository was used for my previous article about Istio: Service Mesh on Kubernetes with Istio in 5 steps. I moved that example to the branch old_master, so if you are for any reason interested in traffic management with a previous major version of Istio (0.X), please refer to that branch and the article on my blog.
The source code is prepared to be used with the Skaffold and Jib tools. Both tools simplify development on local Kubernetes. All you need to do to use them is download and install Skaffold, because the Jib plugin is already included in the Maven pom.xml, as shown below. For details about development with both tools, please refer to my article Local Java development on Kubernetes.

<plugin>
   <groupId>com.google.cloud.tools</groupId>
   <artifactId>jib-maven-plugin</artifactId>
   <version>2.1.0</version>
</plugin>

Installing Istio

To install Istio on your Kubernetes cluster, you need to run two commands after downloading it. The first of them is the istioctl command.

$ istioctl manifest apply --set profile=demo

To execute the second command, you also need the kubectl tool. I was running my samples on Kubernetes with Docker Desktop, and I had to set 4 CPUs with 8GB RAM, which are the recommended settings for testing Istio. You should run the following command against the namespace where you deploy your applications. I'm using the namespace default.

$ kubectl label namespace default istio-injection=enabled

Create Spring Boot applications

Now, let’s consider the architecture visible in the picture below. There are two running instances of application callme-service. These are two different versions of this application v1 and v2. In our case the only difference is in Deployment – not in the code. Application caller-service is communicating with callme-service. That traffic is managed by Istio, which sends 20% of requests to the v1 version of the application, and 80% to the v2 version. Tt also adds 3s delay to 33% of traffic.

service-mesh-on-kubernetes-istio-springboot-arch1

Here’s the structure of application callme-service.

service-mesh-on-kubernetes-istio-spring-boot-sourcecode

As I mentioned before, there is no difference in the code – there is just a difference in the environment variables injected into the application. The implementation of the Spring @Controller responsible for handling incoming HTTP requests is very simple. It just injects the value of the environment variable VERSION and returns it in the response from the GET /ping endpoint.

@RestController
@RequestMapping("/callme")
public class CallmeController {

    private static final Logger LOGGER = LoggerFactory.getLogger(CallmeController.class);
    private static final String INSTANCE_ID = UUID.randomUUID().toString();
    private Random random = new Random();

    @Autowired
    BuildProperties buildProperties;
    @Value("${VERSION}")
    private String version;

    @GetMapping("/ping")
    public String ping() {
        LOGGER.info("Ping: name={}, version={}", buildProperties.getName(), version);
        return "I'm callme-service " + version;
    }
   
}

On the other side, there is caller-service with a similar GET /ping endpoint that calls the endpoint exposed by callme-service using Spring RestTemplate. It uses the name of the Kubernetes Service as the address of the target application.

@RestController
@RequestMapping("/caller")
public class CallerController {

    private static final Logger LOGGER = LoggerFactory.getLogger(CallerController.class);

    @Autowired
    BuildProperties buildProperties;
    @Autowired
    RestTemplate restTemplate;
    @Value("${VERSION}")
    private String version;
   
    @GetMapping("/ping")
    public String ping() {
        LOGGER.info("Ping: name={}, version={}", buildProperties.getName(), version);
        String response = restTemplate.getForObject("http://callme-service:8080/callme/ping", String.class);
        LOGGER.info("Calling: response={}", response);
        return "I'm caller-service " + version + ". Calling... " + response;
    }
   
}

Deploy Spring Boot application on Kubernetes

We are creating two Deployments on Kubernetes for two different versions of the same application, named callme-service-v1 and callme-service-v2. For the first of them we inject the env variable VERSION=v1 into the container, and for the second VERSION=v2.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: callme-service-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: callme-service
      version: v1
  template:
    metadata:
      labels:
        app: callme-service
        version: v1
    spec:
      containers:
        - name: callme-service
          image: piomin/callme-service
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          env:
            - name: VERSION
              value: "v1"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: callme-service-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: callme-service
      version: v2
  template:
    metadata:
      labels:
        app: callme-service
        version: v2
    spec:
      containers:
        - name: callme-service
          image: piomin/callme-service
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          env:
            - name: VERSION
              value: "v2"
---
apiVersion: v1
kind: Service
metadata:
  name: callme-service
  labels:
    app: callme-service
spec:
  type: ClusterIP
  ports:
  - port: 8080
    name: http
  selector:
    app: callme-service

Of course, there is also caller-service, which we need to deploy as well. This time there is only a single Deployment.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: caller-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: caller-service
  template:
    metadata:
      name: caller-service
      labels:
        app: caller-service
        version: v1
    spec:
      containers:
      - name: caller-service
        image: piomin/caller-service
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
        env:
          - name: VERSION
            value: "v1"
---
apiVersion: v1
kind: Service
metadata:
  name: caller-service
  labels:
    app: caller-service
spec:
  type: NodePort
  ports:
    - port: 8080
      name: http
  selector:
    app: caller-service

Istio rules

Finally, we are creating two Istio components: a DestinationRule and a VirtualService. The callme-service-destination destination rule contains the definitions of subsets based on the version label from the Deployment. The callme-service-route virtual service uses these rules and sets a weight for each subset. Additionally, it injects a 3s delay into the route for 33% of the requests.

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: callme-service-destination
spec:
  host: callme-service
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: callme-service-route
spec:
  hosts:
    - callme-service
  http:
    - route:
      - destination:
          host: callme-service
          subset: v2
        weight: 80
      - destination:
          host: callme-service
          subset: v1
        weight: 20
      fault:
        delay:
          percentage:
            value: 33
          fixedDelay: 3s

Since Istio injects a delay into the route, we have to set a timeout on the client side (caller-service). To test that timeout on caller-service, we can't use port forwarding to call the endpoint directly on the pod. It also won't work if we call the Kubernetes Service. That's why we will also create an Istio Gateway for caller-service. It is exposed on port 80 and uses the hostname caller.example.com.

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: caller-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "caller.example.com"
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: caller-service-destination
spec:
  host: caller-service
  subsets:
    - name: v1
      labels:
        version: v1
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: caller-service-route
spec:
  hosts:
    - "caller.example.com"
  gateways:
    - caller-gateway
  http:
    - route:
        - destination:
            host: caller-service
            subset: v1
      timeout: 0.5s

Testing Istio Spring Boot communication

The fastest way of deploying the applications is with Jib and Skaffold. First, go to the callme-service directory and execute the skaffold dev command with the optional --port-forward parameter.

$ cd callme-service
$ skaffold dev --port-forward

Then do the same for caller-service.

$ cd caller-service
$ skaffold dev --port-forward

Both of our applications should be successfully built and deployed on Kubernetes, with the Kubernetes and Istio manifests applied. Let's check the list of deployments and running pods.

[screenshot: kubectl output with the list of deployments and pods]

We can also verify a list of Istio components.

[screenshot: list of created Istio resources]

How can we access our Istio Ingress Gateway? Let’s take a look at its configuration.

[screenshot: Istio Ingress Gateway configuration]

The Ingress Gateway is available on localhost:80. We just need to set the HTTP Host header on the call to the value defined in caller-gateway: caller.example.com. Here's a successful call without any delay.

[screenshot: successful curl call]

Here's a call that was delayed by 3s on the callme-service side. Since we have set the timeout to 0.5s on the caller-service, it finishes with HTTP 504.

[screenshot: curl call finished with HTTP 504]

Now let's perform several of the same calls in a row. Because traffic from caller-service to callme-service is split 80/20 between the v2 and v1 versions, most of the logs read I'm caller-service v1. Calling… I'm callme-service v2. Additionally, around ⅓ of the calls finish with the 0.5s timeout.

[screenshot: series of curl calls]
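The numbers above can be sanity-checked with a small, standalone simulation. This is not part of the original example; it simply models each request as two independent draws, one for the 80/20 routing weights and one for the 33% fault injection (Python is used here just for brevity):

```python
import random

def simulate(calls=10_000, seed=42):
    """Model each request with two independent draws: an 80/20 routing
    split between callme-service v2 and v1, and a 33% chance of the
    injected 3s delay, which always exceeds the 0.5s client timeout."""
    rng = random.Random(seed)
    v2_hits = timeouts = 0
    for _ in range(calls):
        if rng.random() < 0.80:   # VirtualService weight 80 -> subset v2
            v2_hits += 1
        if rng.random() < 0.33:   # fault.delay.percentage.value: 33
            timeouts += 1         # 3s delay > 0.5s timeout -> HTTP 504
    return v2_hits / calls, timeouts / calls

v2_share, timeout_share = simulate()
print(f"v2 share: {v2_share:.2f}, timed out: {timeout_share:.2f}")
```

With enough calls, roughly 80% of the log lines mention v2 and about a third of the calls time out, matching what the curl session shows.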

The post Service mesh on Kubernetes with Istio and Spring Boot appeared first on Piotr's TechBlog.

Best Practices For Microservices on Kubernetes (https://piotrminkowski.com/2020/03/10/best-practices-for-microservices-on-kubernetes/, Tue, 10 Mar 2020)

The post Best Practices For Microservices on Kubernetes appeared first on Piotr's TechBlog.

There are several best practices for building microservices architecture properly. You may find many articles about it online. One of them is my previous article Spring Boot Best Practices For Microservices. I focused there on the most important aspects that should be considered when running microservice applications built on top of Spring Boot on production. I didn’t assume there is any platform used for orchestration or management, but just a group of independent applications. In this article, I’m going to extend the list of already introduced best practices with some new rules dedicated especially to microservices deployed on the Kubernetes platform.
The first question is whether it makes any difference when you deploy your microservices on Kubernetes instead of running them independently without any platform. Well, actually yes and no… Yes, because now you have a platform that is responsible for running and monitoring your applications and that enforces some rules of its own. No, because you still have a microservices architecture, a group of loosely coupled, independent applications, and you should not forget about it! In fact, many of the previously introduced best practices still apply, though some of them need to be redefined a little. There are also some new, platform-specific rules that should be mentioned.
One more thing needs to be explained before proceeding. This list of Kubernetes microservices best practices is built on my experience in running microservices-based architectures on cloud platforms like Kubernetes. I didn't copy it from other articles or books. In my organization, we have already migrated our microservices from Spring Cloud (Eureka, Zuul, Spring Cloud Config) to OpenShift. We are continuously improving this architecture based on our experience in maintaining it.

Example

The sample Spring Boot application that implements currently described Kubernetes microservices best practices is written in Kotlin. It is available on GitHub in repository sample-spring-kotlin-microservice under branch kubernetes: https://github.com/piomin/sample-spring-kotlin-microservice/tree/kubernetes.

1. Allow platform to collect metrics

I have also put a similar section in my article about best practices for Spring Boot. However, metrics are also one of the important Kubernetes microservices best practices. We were using InfluxDB as a target metrics store. Since our approach to gathering metrics data changed after the migration to Kubernetes, I redefined the title of this point to Allow platform to collect metrics. The main difference between the current and previous approaches is in the way of collecting data. We now use Prometheus, because that process may be managed by the platform. InfluxDB is a push-based system, where your application actively pushes data into the monitoring system. Prometheus is a pull-based system, where a server fetches the metric values from the running application periodically. So, our main responsibility at this point is to provide endpoints on the application side for Prometheus to scrape.
Fortunately, it is very easy to provide metrics for Prometheus with Spring Boot. You need to include Spring Boot Actuator and a dedicated Micrometer library for integration with Prometheus.

<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
   <groupId>io.micrometer</groupId>
   <artifactId>micrometer-registry-prometheus</artifactId>
</dependency>

We should also enable exposing Actuator HTTP endpoints outside the application. You can enable a single endpoint dedicated to Prometheus or just expose all Actuator endpoints as shown below.

management.endpoints.web.exposure.include: '*'

After running your application, the metrics endpoint is available by default under the path /actuator/prometheus.

[screenshot: response from the /actuator/prometheus endpoint]
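Stepping back from Spring for a moment: the pull model only requires the application to serve a plain-text page that Prometheus fetches on its schedule. Here is a toy, framework-free sketch of what such an endpoint boils down to (Python instead of the Spring stack, and the metric name and value are made up for illustration):

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# A toy counter standing in for a real metrics registry.
REQUESTS_TOTAL = 42

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/actuator/prometheus":
            # Prometheus text exposition: one "name value" pair per line.
            body = f"http_requests_total {REQUESTS_TOTAL}\n".encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; version=0.0.4")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

# Bind to an ephemeral port and serve in the background;
# "Prometheus" is then just an HTTP GET against this port.
server = HTTPServer(("127.0.0.1", 0), MetricsHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()
```

The application never pushes anything; it only answers GET requests, which is exactly why the platform can own the collection schedule.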

Assuming you run your application on Kubernetes, you need to deploy and configure Prometheus to scrape metrics from your pods. The configuration may be delivered as a Kubernetes ConfigMap. The prometheus.yml file should contain a scrape_configs section with the path of the endpoint serving metrics and the Kubernetes discovery settings. Prometheus tries to locate all application pods via Kubernetes Endpoints. The application should be labeled with app=sample-spring-kotlin-microservice and have a port named http exposed outside the container.

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus
  labels:
    name: prometheus
data:
  prometheus.yml: |-
    scrape_configs:
      - job_name: 'springboot'
        metrics_path: /actuator/prometheus
        scrape_interval: 5s
        kubernetes_sd_configs:
        - role: endpoints
          namespaces:
            names:
              - default

        relabel_configs:
          - source_labels: [__meta_kubernetes_service_label_app]
            separator: ;
            regex: sample-spring-kotlin-microservice
            replacement: $1
            action: keep
          - source_labels: [__meta_kubernetes_endpoint_port_name]
            separator: ;
            regex: http
            replacement: $1
            action: keep
          - source_labels: [__meta_kubernetes_namespace]
            separator: ;
            regex: (.*)
            target_label: namespace
            replacement: $1
            action: replace
          - source_labels: [__meta_kubernetes_pod_name]
            separator: ;
            regex: (.*)
            target_label: pod
            replacement: $1
            action: replace
          - source_labels: [__meta_kubernetes_service_name]
            separator: ;
            regex: (.*)
            target_label: service
            replacement: $1
            action: replace
          - source_labels: [__meta_kubernetes_service_name]
            separator: ;
            regex: (.*)
            target_label: job
            replacement: ${1}
            action: replace
          - separator: ;
            regex: (.*)
            target_label: endpoint
            replacement: http
            action: replace

The last step is to deploy Prometheus on Kubernetes. You should attach the ConfigMap with the Prometheus configuration to the Deployment via a mounted volume. After that, you may set the location of the configuration file using the --config.file parameter: --config.file=/prometheus2/prometheus.yml.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus
  labels:
    app: prometheus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus:latest
          args:
            - "--config.file=/prometheus2/prometheus.yml"
            - "--storage.tsdb.path=/prometheus/"
          ports:
            - containerPort: 9090
              name: http
          volumeMounts:
            - name: prometheus-storage-volume
              mountPath: /prometheus/
            - name: prometheus-config-map
              mountPath: /prometheus2/
      volumes:
        - name: prometheus-storage-volume
          emptyDir: {}
        - name: prometheus-config-map
          configMap:
            name: prometheus

Now you can verify if Prometheus has discovered your application running on Kubernetes by accessing endpoint /targets.

[screenshot: Prometheus /targets view]

2. Prepare logs in right format

The approach to collecting logs is pretty similar to collecting metrics. Our application should not handle the process of sending logs by itself; it should just take care of properly formatting the logs sent to the output stream. Since Docker has a built-in logging driver for Fluentd, it is very convenient to use it as a log collector for applications running on Kubernetes. This means no additional agent is required in the container to push logs to Fluentd. Logs are shipped directly to the Fluentd service from STDOUT, and no additional log file or persistent storage is required. Fluentd tries to structure data as JSON to unify logging across different sources and destinations.
In order to format our logs as JSON readable by Fluentd, we may include the Logstash Logback Encoder library in our dependencies.

<dependency>
   <groupId>net.logstash.logback</groupId>
   <artifactId>logstash-logback-encoder</artifactId>
   <version>6.3</version>
</dependency>

Then we just need to set a default console log appender for our Spring Boot application in the file logback-spring.xml.

<configuration>
    <appender name="consoleAppender" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
    </appender>
    <logger name="jsonLogger" additivity="false" level="DEBUG">
        <appender-ref ref="consoleAppender"/>
    </logger>
    <root level="INFO">
        <appender-ref ref="consoleAppender"/>
    </root>
</configuration>

The logs are printed into STDOUT in the format visible below.

[screenshot: JSON logs printed to STDOUT]
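The essential idea is one JSON object per line on STDOUT. The field names below mimic, but do not exactly replicate, what LogstashEncoder emits; a minimal language-agnostic sketch (in Python rather than Logback) looks like this:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON object per line,
    roughly the shape LogstashEncoder produces for Fluentd."""
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "@timestamp": self.formatTime(record),
            "level": record.levelname,
            "logger_name": record.name,
            "message": record.getMessage(),
        })

# Build a sample record by hand and format it, as a console
# handler would before writing the line to STDOUT.
record = logging.LogRecord("PersonController", logging.INFO, __file__, 1,
                           "Person created", None, None)
line = JsonFormatter().format(record)
print(line)
```

Because each line is a complete JSON document, Fluentd can parse it without any multi-line stitching, which is what makes the STDOUT-only approach work.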

It is very simple to install Fluentd, Elasticsearch and Kibana on Minikube. The disadvantage of this approach is that it installs older versions of these tools.

$ minikube addons enable efk
* efk was successfully enabled
$ minikube addons enable logviewer
* logviewer was successfully enabled

After enabling efk and logviewer addons Kubernetes pulls and starts all the required pods as shown below.

[screenshot: logging pods started on Kubernetes]

Thanks to the logstash-logback-encoder library, we may automatically create Fluentd-compatible logs, including MDC fields. Here's a screen from Kibana that shows logs from our test application.

[screenshot: Kibana with application logs]

Optionally, you can add my library for logging requests/responses for Spring Boot application.

<dependency>
   <groupId>com.github.piomin</groupId>
   <artifactId>logstash-logging-spring-boot-starter</artifactId>
   <version>1.2.2.RELEASE</version>
</dependency>

3. Implement both liveness and readiness health check

It is important to understand the difference between liveness and readiness probes in Kubernetes. If these probes are not implemented carefully, they can degrade the overall operation of a service, for example by causing unnecessary restarts. The liveness probe is used to decide whether to restart the container. If an application is unavailable for any reason, restarting the container can sometimes make sense. On the other hand, the readiness probe is used to decide if a container can handle incoming traffic. If a pod has been recognized as not ready, it is removed from load balancing. A failing readiness probe does not result in a pod restart. The most typical liveness or readiness probe for web applications is realized via an HTTP endpoint.
In a typical web application running outside a platform like Kubernetes, you won't distinguish between liveness and readiness health checks. That's why most web frameworks provide only a single built-in health check implementation. For a Spring Boot application, you may easily enable health checks by including Spring Boot Actuator in your dependencies. The important thing about the Actuator health check is that it may behave differently depending on the integrations between your application and third-party systems. For example, if you define a Spring data source for connecting to a database or declare a connection to a message broker, the health check may automatically include such validation through auto-configuration. Therefore, if you set the default Spring Actuator health check implementation as a liveness probe endpoint, it may result in unnecessary restarts if the application is unable to connect to the database or message broker. Since such behavior is not desired, I suggest implementing a very simple liveness endpoint that just verifies the availability of the application without checking connections to other external systems.
Adding a custom implementation of a health check is not very hard with Spring Boot. There are some different ways to do that. One of them is visible below. We are using the mechanism provided within Spring Boot Actuator. It is worth noting that we won’t override a default health check, but we are adding another, custom implementation. The following implementation is just checking if an application is able to handle incoming requests.

@Component
@Endpoint(id = "liveness")
class LivenessHealthEndpoint {

    @ReadOperation
    fun health() : Health = Health.up().build()

    @ReadOperation
    fun name(@Selector name: String) : String = "liveness"

    @WriteOperation
    fun write(@Selector name: String) {

    }

    @DeleteOperation
    fun delete(@Selector name: String) {

    }

}

In turn, the default Spring Boot Actuator health check may be the right solution for a readiness probe. Assuming your application connects to a Postgres database and a RabbitMQ message broker, you should add the following dependencies to your Maven pom.xml.

<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-amqp</artifactId>
</dependency>
<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
   <groupId>org.postgresql</groupId>
   <artifactId>postgresql</artifactId>
   <scope>runtime</scope>
</dependency>

Now, for information purposes, add the following property to your application.yml. It enables displaying detailed information for the auto-configured Actuator /health endpoint.

management:
  endpoint:
    health:
      show-details: always

Finally, let’s call /actuator/health to see the detailed result. As you see in the picture below, a health check returns information about Postgres and RabbitMQ connections.

[screenshot: detailed /actuator/health response]
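Neither probe takes effect until it is declared in the pod spec. Here is a sketch of the corresponding container fragment; the paths assume the custom liveness endpoint and the default Actuator health check described above, and the timing values are arbitrary examples, not recommendations from the original article:

```yaml
spec:
  containers:
  - name: sample-spring-kotlin-microservice
    image: piomin/sample-spring-kotlin-microservice
    ports:
    - containerPort: 8080
      name: http
    livenessProbe:
      httpGet:
        path: /actuator/liveness   # simple check, no external systems
        port: 8080
      initialDelaySeconds: 30
      periodSeconds: 10
    readinessProbe:
      httpGet:
        path: /actuator/health     # default check incl. Postgres/RabbitMQ
        port: 8080
      initialDelaySeconds: 30
      periodSeconds: 10
```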

There is another aspect of using liveness and readiness probes in your web application, related to thread pooling. In a standard web container like Tomcat, each request is handled by a thread from the HTTP thread pool. If you process each request on that thread and you have some long-running tasks in your application, you may block all available HTTP threads. If your liveness probe then fails several times in a row, the application pod will be restarted. Therefore, you should consider running long-running tasks on another thread pool. Here's an example of an HTTP endpoint implementation with DeferredResult and Kotlin coroutines.

@PostMapping("/long-running")
fun addLongRunning(@RequestBody person: Person): DeferredResult<Person> {
   val result: DeferredResult<Person> = DeferredResult()
   GlobalScope.launch {
      logger.info("Person long-running: {}", person)
      delay(10000L)
      result.setResult(repository.save(person))
   }
   return result
}

4. Consider your integrations

Our application is hardly ever able to exist without external systems like databases, message brokers, or other applications. There are two aspects of integration with third-party applications that should be carefully considered: connection settings and auto-creation of resources.
Let's start with connection settings. As you probably remember, in the previous section we were using the default implementation of the Spring Boot Actuator /health endpoint as a readiness probe. However, if you leave the default connection settings for Postgres and RabbitMQ, each call of the readiness probe takes a long time when they are unavailable. That's why I suggest decreasing these timeouts to lower values as shown below.

spring:
  application:
    name: sample-spring-kotlin-microservice
  datasource:
    url: jdbc:postgresql://postgres:5432/postgres
    username: postgres
    password: postgres123
    hikari:
      connection-timeout: 2000
      initialization-fail-timeout: 0
  jpa:
    database-platform: org.hibernate.dialect.PostgreSQLDialect
  rabbitmq:
    host: rabbitmq
    port: 5672
    connection-timeout: 2000

Besides properly configured connection timeouts, you should also guarantee the auto-creation of resources required by the application. For example, if you use a RabbitMQ queue for asynchronous messaging between two applications, you should guarantee that the queue is created on startup if it does not exist. To do that, first declare the queue, usually on the listener side.

@Configuration
class RabbitMQConfig {

    @Bean
    fun myQueue(): Queue {
        return Queue("myQueue", false)
    }

}

Here’s a listener bean with receiving method implementation.

@Component
class PersonListener {

    val logger: Logger = LoggerFactory.getLogger(PersonListener::class.java)

    @RabbitListener(queues = ["myQueue"])
    fun listen(msg: String) {
        logger.info("Received: {}", msg)
    }

}

A similar case is database integration. First, you should ensure that your application starts even if the connection to the database fails. That's why I declared PostgreSQLDialect explicitly: it is required if the application is not able to connect to the database. Moreover, each change in the entity model should be applied to the tables before application startup.
Fortunately, Spring Boot has auto-configured support for popular tools for managing database schema changes: Liquibase and Flyway. To enable Liquibase, we just need to include the following dependency in the Maven pom.xml.

<dependency>
   <groupId>org.liquibase</groupId>
   <artifactId>liquibase-core</artifactId>
</dependency>

Then you just need to create a changelog and put it in the default location db/changelog/db.changelog-master.yaml. Here's a sample Liquibase YAML changelog for creating the table person.

databaseChangeLog:
  - changeSet:
      id: 1
      author: piomin
      changes:
        - createTable:
            tableName: person
            columns:
              - column:
                  name: id
                  type: int
                  autoIncrement: true
                  constraints:
                    primaryKey: true
                    nullable: false
              - column:
                  name: name
                  type: varchar(50)
                  constraints:
                    nullable: false
              - column:
                  name: age
                  type: int
                  constraints:
                    nullable: false
              - column:
                  name: gender
                  type: smallint
                  constraints:
                    nullable: false

5. Use Service Mesh

If you are building a microservices architecture outside Kubernetes, mechanisms like load balancing, circuit breaking, fallbacks, or retries are realized on the application side. Popular cloud-native frameworks like Spring Cloud simplify the implementation of these patterns in your application and reduce it to just adding a dedicated library to your project. However, if you migrate your microservices to Kubernetes, you should no longer use these libraries for traffic management; it becomes a kind of anti-pattern. Traffic management in communication between microservices should be delegated to the platform. This approach on Kubernetes is known as a service mesh. One of the most important Kubernetes microservices best practices is to use dedicated software for building a service mesh.
Since Kubernetes was not originally dedicated to microservices, it does not provide any built-in mechanism for advanced management of traffic between many applications. However, there are additional solutions dedicated to traffic management that may easily be installed on Kubernetes. One of the most popular of them is Istio. Besides traffic management, it also solves problems related to security, monitoring, tracing and metrics collection.
Istio can be easily installed on your cluster or on standalone development instances like Minikube. After downloading it just run the following command.

$ istioctl manifest apply

Istio components need to be injected into a deployment manifest. After that, we can define traffic rules using YAML manifests. Istio gives many interesting configuration options. The following example shows how to inject faults into an existing route. These can be either delays or aborts. We can define the error percentage using the percent field for both types of fault. In the Istio resource, I have defined a 2-second delay for every single request sent to the Service account-service.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: account-service
spec:
  hosts:
    - account-service
  http:
  - fault:
      delay:
        fixedDelay: 2s
        percent: 100
    route:
    - destination:
        host: account-service
        subset: v1

Besides VirtualService we also need to define DestinationRule for account-service. It is really simple – we have just defined the version label of the target service.

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: account-service
spec:
  host: account-service
  subsets:
  - name: v1
    labels:
      version: v1

6. Be open for framework-specific solutions

There are many interesting tools and solutions around Kubernetes which may help you run and manage applications. However, you should not forget about the tools and solutions offered by the framework you use. Let me give you some examples. One of them is Spring Boot Admin, a useful tool designed for monitoring Spring Boot applications registered in a single discovery. Assuming you are running microservices on Kubernetes, you may also install Spring Boot Admin there.
There is another interesting project within Spring Cloud: Spring Cloud Kubernetes. It provides some useful features that simplify the integration between a Spring Boot application and Kubernetes. One of them is discovery across all namespaces. If you use that feature together with Spring Boot Admin, you may easily create a powerful tool that is able to monitor all Spring Boot microservices running on your Kubernetes cluster. For more details about the implementation, you may refer to my article Spring Boot Admin on Kubernetes.
Sometimes you may use Spring Boot integrations with third-party tools to easily deploy such a solution on Kubernetes without building a separate Deployment. You can even build a cluster of multiple instances. This approach may be used for products that can be embedded in a Spring Boot application, for example RabbitMQ or Hazelcast (a popular in-memory data grid). If you are interested in more details about running a Hazelcast cluster on Kubernetes using this approach, please refer to my article Hazelcast with Spring Boot on Kubernetes.

7. Be prepared for a rollback

Kubernetes provides a convenient way to roll back an application to an older version based on ReplicaSet and Deployment objects. By default, Kubernetes keeps 10 previous ReplicaSets and lets you roll back to any of them. However, one thing needs to be pointed out: a rollback does not include the configuration stored inside ConfigMaps and Secrets. Sometimes it is desirable to roll back not only the application binaries, but also the configuration.
Fortunately, Spring Boot gives us really great possibilities for managing externalized configuration. We may keep configuration files inside the application and also load them from an external location. On Kubernetes, we may use ConfigMaps and Secrets for defining Spring configuration files. The following ConfigMap definition creates an application-rollbacktest.yml Spring configuration file containing only a single property. This configuration is loaded by the application only if the Spring profile rollbacktest is active.

apiVersion: v1
kind: ConfigMap
metadata:
  name: sample-spring-kotlin-microservice
data:
  application-rollbacktest.yml: |-
    property1: 123456

The ConfigMap is attached to the application through a mounted volume.

spec:
  containers:
  - name: sample-spring-kotlin-microservice
    image: piomin/sample-spring-kotlin-microservice
    ports:
    - containerPort: 8080
      name: http
    volumeMounts:
    - name: config-map-volume
      mountPath: /config/
  volumes:
  - name: config-map-volume
    configMap:
      name: sample-spring-kotlin-microservice

We also have application.yml on the classpath. The first version contains only a single property.

property1: 123

In the second version, we activate the rollbacktest profile. Since a profile-specific configuration file has a higher priority than application.yml, the value of the property1 property is overridden with the value taken from application-rollbacktest.yml.

property1: 123
spring.profiles.active: rollbacktest

Let’s test the mechanism using a simple HTTP endpoint that prints the value of the property.

@RestController
@RequestMapping("/properties")
class TestPropertyController(@Value("\${property1}") val property1: String) {

    @GetMapping
    fun printProperty1(): String  = property1
    
}

Let's take a look at how we roll back a Deployment to a previous version. First, let's see how many revisions we have.

$ kubectl rollout history deployment/sample-spring-kotlin-microservice
deployment.apps/sample-spring-kotlin-microservice
REVISION  CHANGE-CAUSE
1         
2         
3         

Now we call the /properties endpoint of the current deployment, which returns the value of the property1 property. Since the rollbacktest profile is active, it returns the value from the file application-rollbacktest.yml.

$ curl http://localhost:8080/properties
123456

Let’s roll back to the previous revision.


$ kubectl rollout undo deployment/sample-spring-kotlin-microservice --to-revision=2
deployment.apps/sample-spring-kotlin-microservice rolled back

As you can see below, revision 2 is no longer visible; it has been redeployed as the newest revision 4.

$ kubectl rollout history deployment/sample-spring-kotlin-microservice
deployment.apps/sample-spring-kotlin-microservice
REVISION  CHANGE-CAUSE
1         
3         
4         

In this version of the application, the rollbacktest profile wasn't active, so the value of the property1 property is taken from application.yml.

$ curl http://localhost:8080/properties
123

Microservices traffic management using Istio on Kubernetes (https://piotrminkowski.com/2018/05/09/microservices-traffic-management-using-istio-on-kubernetes/, Wed, 09 May 2018)

The post Microservices traffic management using Istio on Kubernetes appeared first on Piotr's TechBlog.

I have already described a simple example of route configuration between two microservices deployed on Kubernetes in one of my previous articles: Service Mesh with Istio on Kubernetes in 5 steps. You can refer to this article if you are interested in the basic information about Istio, and its deployment on Kubernetes via Minikube. Today we will create some more advanced traffic management rules based on the same sample applications as used in the previous article about Istio.

The source code of the sample applications is available on GitHub in the repository sample-istio-services (https://github.com/piomin/sample-istio-services.git). There are two sample applications, callme-service and caller-service, deployed in two different versions: 1.0 and 2.0. Version 1.0 is available in the branch v1 (https://github.com/piomin/sample-istio-services/tree/v1), while version 2.0 is in the branch v2 (https://github.com/piomin/sample-istio-services/tree/v2). Using these sample applications in different versions, I'm going to show you different strategies of traffic management depending on an HTTP header set in the incoming requests.

We may force caller-service to route all requests to a specific version of callme-service by setting the header x-version to v1 or v2. We may also omit this header, which results in splitting traffic between all existing versions of the service. If the request comes to version v1 of caller-service, the traffic is split 50-50 between the two instances of callme-service. If the request is received by the v2 instance of caller-service, 75% of the traffic is forwarded to version v2 of callme-service, and only 25% to v1. The scenario described above is illustrated in the following diagram.

istio-advanced-1
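The percentages above determine how requests are distributed between service versions. The weighted selection Istio performs can be illustrated with a short, deterministic sketch (a hypothetical helper, not part of Istio or the sample code): each request is mapped to a bucket in the range 0–99, and the bucket is compared against cumulative weights.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Picks a service version for a request based on weight percentages,
// mimicking the 50-50 and 75-25 splits described above.
public class WeightedRoute {

    // bucket is a number in [0, 100) derived from the request (e.g. a hash);
    // versions maps version name -> weight; weights must sum to 100
    static String choose(int bucket, Map<String, Integer> versions) {
        int cumulative = 0;
        for (Map.Entry<String, Integer> e : versions.entrySet()) {
            cumulative += e.getValue();
            if (bucket < cumulative) {
                return e.getKey();
            }
        }
        throw new IllegalArgumentException("weights do not sum to 100");
    }

    public static void main(String[] args) {
        // 75-25 split in favor of v2, as for the v2 instance of caller-service
        Map<String, Integer> weights = new LinkedHashMap<>();
        weights.put("v1", 25);
        weights.put("v2", 75);
        System.out.println(choose(10, weights)); // bucket 10 falls into v1's 25% share
        System.out.println(choose(60, weights)); // bucket 60 falls into v2's 75% share
    }
}
```

With uniformly distributed buckets, roughly 25% of requests land on v1 and 75% on v2, matching the proportions in the diagram.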

Before we proceed to the example, I should say a few words about traffic management with Istio. If you have read my previous article about Istio, you probably know that each rule is assigned to a destination. Rules control the process of request routing within a service mesh. One very important fact about them, especially for the purposes of the example illustrated in the diagram above, is that multiple rules can be applied to the same destination. The priority of every rule is determined by its precedence field. There is one principle related to the value of this field: the higher the value of this integer field, the greater the priority of the rule. As you may guess, if there is more than one rule with the same precedence value, the order of rule evaluation is undefined. In addition to a destination, we may also define a source of the request in order to restrict a rule to a specific caller. If there are multiple deployments of a calling service, we can even filter them out by setting the source’s labels field. Of course, we can also specify the attributes of an HTTP request, such as uri, scheme or headers, that are used for matching a request against a defined rule.

Ok, now let’s take a look at the rule with the highest priority. Its name is callme-service-v1 (1). It applies to callme-service (2) and has the highest priority of all the rules (3). It is applied only to requests sent by caller-service (4) that contain the HTTP header x-version with value v1 (5). This route rule applies only to version v1 of callme-service (6).

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: callme-service-v1 # (1)
spec:
  destination:
    name: callme-service # (2)
  precedence: 4 # (3)
  match:
    source:
      name: caller-service # (4)
    request:
      headers:
        x-version:
          exact: "v1" # (5)
  route:
  - labels:
      version: v1 # (6)

Here’s the fragment of the first diagram, which is handled by this route rule.

istio-advanced-7

The next rule, callme-service-v2 (1), has a lower priority (2). However, it does not conflict with the first rule, because it applies only to requests containing the x-version header with value v2 (3). It forwards all requests to version v2 of callme-service (4).

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: callme-service-v2 # (1)
spec:
  destination:
    name: callme-service
  precedence: 3 # (2)
  match:
    source:
      name: caller-service
    request:
      headers:
        x-version:
          exact: "v2" # (3)
  route:
  - labels:
      version: v2 # (4)

As before, here’s the fragment of the first diagram, which is handled by this route rule.

istio-advanced-6

The rule callme-service-v1-default (1), visible in the code fragment below, has a lower priority (2) than the two previously described rules. In practice, this means that it is executed only if the conditions defined in the two previous rules were not fulfilled. Such a situation occurs if you do not pass the x-version header inside the HTTP request, or if it has a value other than v1 or v2. The rule visible below applies only to the instance of the calling service labeled with version v1 (3). Finally, the traffic to callme-service is load balanced 50-50 between the two versions of that service (4).

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: callme-service-v1-default # (1)
spec:
  destination:
    name: callme-service
  precedence: 2 # (2)
  match:
    source:
      name: caller-service
      labels:
        version: v1 # (3)
  route: # (4)
  - labels:
      version: v1
    weight: 50
  - labels:
      version: v2
    weight: 50

Here’s the fragment of the first diagram, which is handled by this route rule.

istio-advanced-4

The last rule is pretty similar to the previously described callme-service-v1-default. Its name is callme-service-v2-default (1), and it applies only to version v2 of caller-service (3). It has the lowest priority (2), and splits traffic between the two versions of callme-service in a 75-25 proportion in favor of version v2 (4).

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: callme-service-v2-default # (1)
spec:
  destination:
    name: callme-service
  precedence: 1 # (2)
  match:
    source:
      name: caller-service
      labels:
        version: v2 # (3)
  route: # (4)
  - labels:
      version: v1
    weight: 25
  - labels:
      version: v2
    weight: 75

The same as before, I have also included the diagram illustrating the behaviour of this rule.

istio-advanced-5
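The evaluation order of the four rules above can be modeled in a few lines: sort the rules by descending precedence, then take the first one whose match conditions hold. This is an illustrative sketch, not Istio's actual implementation (rule names and fields mirror the YAML above):

```java
import java.util.List;

// Simulates how the four route rules are evaluated: rules are ordered by
// descending precedence and the first one whose match conditions hold wins.
public class RuleEvaluator {

    // requiredHeader / requiredSourceVersion are null when the rule does not match on them
    record Rule(String name, int precedence, String requiredHeader, String requiredSourceVersion) {}

    static final List<Rule> RULES = List.of(
        new Rule("callme-service-v1", 4, "v1", null),
        new Rule("callme-service-v2", 3, "v2", null),
        new Rule("callme-service-v1-default", 2, null, "v1"),
        new Rule("callme-service-v2-default", 1, null, "v2")
    );

    // xVersionHeader may be null (header absent); sourceVersion is the caller's version label
    static String match(String xVersionHeader, String sourceVersion) {
        return RULES.stream()
            .sorted((a, b) -> Integer.compare(b.precedence(), a.precedence()))
            .filter(r -> r.requiredHeader() == null || r.requiredHeader().equals(xVersionHeader))
            .filter(r -> r.requiredSourceVersion() == null || r.requiredSourceVersion().equals(sourceVersion))
            .findFirst()
            .map(Rule::name)
            .orElseThrow();
    }

    public static void main(String[] args) {
        System.out.println(match("v1", "v2")); // header set: the high-precedence rule wins
        System.out.println(match(null, "v1")); // no header: falls through to a default rule
    }
}
```

Note how the header-based rules shadow the default rules purely through their higher precedence values; the default rules only fire when no header matches.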

All the rules may be placed inside a single file. In that case, they should be separated with a --- line. This file is available in the code repository inside the callme-service module as multi-rule.yaml. To deploy all the defined rules on Kubernetes, just execute the following command.

$ kubectl apply -f multi-rule.yaml

After a successful deployment, you may check the list of available rules by running the command istioctl get routerule.

istio-advanced-2

Before we start any tests, we obviously need to have the sample applications deployed on Kubernetes. These applications are really simple and pretty similar to the applications used for tests in my previous article about Istio. The controller visible below implements the method GET /callme/ping, which prints the version of the application taken from pom.xml and the value of the x-version HTTP header received in the request.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.info.BuildProperties;
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/callme")
public class CallmeController {

    private static final Logger LOGGER = LoggerFactory.getLogger(CallmeController.class);

    @Autowired
    BuildProperties buildProperties;

    @GetMapping("/ping")
    public String ping(@RequestHeader(name = "x-version", required = false) String version) {
        LOGGER.info("Ping: name={}, version={}, header={}", buildProperties.getName(), buildProperties.getVersion(), version);
        return buildProperties.getName() + ":" + buildProperties.getVersion() + " with version " + version;
    }

}

Here’s the controller class that implements the method GET /caller/ping. It prints the version of caller-service taken from pom.xml and calls the method GET /callme/ping exposed by callme-service. It needs to include the x-version header in the request when sending it to the downstream service.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.info.BuildProperties;
import org.springframework.http.*;
import org.springframework.web.bind.annotation.*;
import org.springframework.web.client.RestTemplate;

@RestController
@RequestMapping("/caller")
public class CallerController {

    private static final Logger LOGGER = LoggerFactory.getLogger(CallerController.class);

    @Autowired
    BuildProperties buildProperties;
    @Autowired
    RestTemplate restTemplate;

    @GetMapping("/ping")
    public String ping(@RequestHeader(name = "x-version", required = false) String version) {
        LOGGER.info("Ping: name={}, version={}, header={}", buildProperties.getName(), buildProperties.getVersion(), version);
        HttpHeaders headers = new HttpHeaders();
        if (version != null)
            headers.set("x-version", version);
        HttpEntity<Void> entity = new HttpEntity<>(headers);
        ResponseEntity<String> response = restTemplate.exchange("http://callme-service:8091/callme/ping", HttpMethod.GET, entity, String.class);
        return buildProperties.getName() + ":" + buildProperties.getVersion() + ". Calling... " + response.getBody() + " with header " + version;
    }

}
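The essential part of the controller above is propagating the x-version header to the downstream request. The same idea can be sketched without any Spring dependencies using the JDK's built-in java.net.http API (Java 11+); the request is only built here, not sent:

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Builds the downstream request to callme-service, attaching the x-version
// header only when it was present in the incoming request.
public class CallmeRequestBuilder {

    static HttpRequest build(String version) {
        HttpRequest.Builder builder = HttpRequest.newBuilder()
            .uri(URI.create("http://callme-service:8091/callme/ping"))
            .GET();
        if (version != null) {
            builder.header("x-version", version);
        }
        return builder.build();
    }

    public static void main(String[] args) {
        HttpRequest request = build("v2");
        // the header is attached only when the caller received it
        System.out.println(request.headers().firstValue("x-version").orElse("<none>"));
    }
}
```

Whichever HTTP client you use, forgetting to forward the header breaks the header-based routing rules, since the Envoy sidecar matches on what it actually receives.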

Now, we may proceed to building and deploying the applications on Kubernetes. Here are the next steps.

1. Building application

First, switch to branch v1 and build the whole sample-istio-services project by executing the mvn clean install command.

2. Building Docker image

The Dockerfiles are placed in the root directory of every application. Build their Docker images by executing the following commands.

$ docker build -t piomin/callme-service:1.0 .
$ docker build -t piomin/caller-service:1.0 .

Alternatively, you may omit this step, because images piomin/callme-service and piomin/caller-service are available on my Docker Hub account.

3. Inject Istio components to Kubernetes deployment file

The Kubernetes YAML deployment file is available in the root directory of every application as deployment.yaml. The result of the following command should be saved as a separate file, for example deployment-with-istio.yaml.

$ istioctl kube-inject -f deployment.yaml

4. Deployment on Kubernetes

Finally, you can execute the well-known kubectl command in order to deploy the container with our sample application.


$ kubectl apply -f deployment-with-istio.yaml

Then switch to branch v2, and repeat the steps described above for version 2.0 of the sample applications. The final deployment result is visible in the picture below.

istio-advanced-3

One very useful thing when running Istio on Kubernetes is the out-of-the-box integration with tools like Zipkin, Grafana or Prometheus. Istio automatically sends some metrics that are collected by Prometheus, for example the total number of requests in the istio_request_count metric. YAML deployment files for these plugins are available inside the directory ${ISTIO_HOME}/install/kubernetes/addons. Before installing Prometheus using the kubectl command, I suggest changing the service type from the default ClusterIP to NodePort by adding the line type: NodePort.

apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/scrape: 'true'
  labels:
    name: prometheus
  name: prometheus
  namespace: istio-system
spec:
  type: NodePort
  selector:
    app: prometheus
  ports:
  - name: prometheus
    protocol: TCP
    port: 9090

Then we should run the command kubectl apply -f prometheus.yaml in order to deploy Prometheus on Kubernetes. The deployment is available inside the istio-system namespace. To check the external port of the service, run the following command. For me, it is available under the address http://192.168.99.100:32293.

istio-advanced-14

In the following diagram, visualized using Prometheus, I filtered out only the requests sent to callme-service. The green color indicates requests received by version v2 of the service, while the red color indicates requests processed by version v1. As you can see in this diagram, in the beginning I sent the requests to caller-service with the HTTP header x-version set to v2, then I did not set this header and the traffic was split between the two deployed instances of the service. Finally, I set it to v1. I defined the expression rate(istio_request_count{destination_service="callme-service.default.svc.cluster.local"}[1m]), which returns the per-second rate of requests received by callme-service.

istio-advanced-13
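The rate() function returns the per-second increase of a counter over the given window. For a monotonically increasing counter like istio_request_count, this boils down to (last sample - first sample) / window, as the simplified sketch below illustrates (real Prometheus additionally handles counter resets and extrapolation):

```java
// Computes the per-second rate of a monotonically increasing counter over a
// time window, the way PromQL's rate() does in the simplest case.
public class CounterRate {

    static double rate(double firstSample, double lastSample, double windowSeconds) {
        return (lastSample - firstSample) / windowSeconds;
    }

    public static void main(String[] args) {
        // istio_request_count grew from 120 to 300 within a 1-minute window
        System.out.println(rate(120, 300, 60)); // 3.0 requests per second
    }
}
```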

Testing

Before sending some test requests to caller-service, we need to obtain its address on Kubernetes. After executing the following command, you can see that it is available under the address http://192.168.99.100:32237/caller/ping.

istio-services-16

We have four possible scenarios. First, when we set the header x-version to v1, the request is always routed to callme-service-v1.

istio-advanced-10

If the x-version header is not included in the request, the traffic is split between callme-service-v1…

istio-advanced-11

… and callme-service-v2.

istio-advanced-12

Finally, if we set the header x-version to v2, the request is always routed to callme-service-v2.

istio-advanced-14

Conclusion

Using Istio, you can easily create and apply both simple and more advanced traffic management rules to applications deployed on Kubernetes. You can also monitor metrics and traces thanks to the integration between Istio and Zipkin, Prometheus and Grafana.

The post Microservices traffic management using Istio on Kubernetes appeared first on Piotr's TechBlog.

]]>
https://piotrminkowski.com/2018/05/09/microservices-traffic-management-using-istio-on-kubernetes/feed/ 0 6513