secrets Archives - Piotr's TechBlog https://piotrminkowski.com/tag/secrets/

Vault with Secrets Store CSI Driver on Kubernetes
https://piotrminkowski.com/2023/03/20/vault-with-secrets-store-csi-driver-on-kubernetes/
Mon, 20 Mar 2023

The post Vault with Secrets Store CSI Driver on Kubernetes appeared first on Piotr's TechBlog.

This article will teach you how to use the Secrets Store CSI Driver to integrate your app with HashiCorp Vault on Kubernetes. The main goal of that project is to integrate the secrets store with Kubernetes via a Container Storage Interface (CSI) volume. It allows mounting multiple secrets or keys retrieved from secure external providers like AWS Secrets Manager, Google Secret Manager, or HashiCorp Vault. In order to test the solution, we will create a simple Spring Boot app that reads the content of a file on a mounted volume. We will also use Terraform with the Helm provider to install and configure both Secrets Store CSI Driver and HashiCorp Vault. Finally, we are going to consider a secret rotation scenario.

The solution presented in this article is not the only way to work with HashiCorp Vault on Kubernetes. If you are interested in other approaches, you may refer to some of my previous articles. One of them is a guide on integrating Vault secrets with Argo CD through a plugin. If you are running Spring Boot apps on Kubernetes, you may also be interested in Spring Cloud Vault support, which I described in a separate article.

How it works

You may not be very familiar with the Container Storage Interface (CSI) pattern. At a high level, CSI is a standard for exposing block or file storage to containers. It is implemented by many different storage providers.

The Secrets Store CSI Driver runs on Kubernetes as a DaemonSet. It interacts with every instance of the kubelet on the Kubernetes nodes. When a pod starts, the Secrets Store CSI Driver communicates with the external secrets provider to retrieve the secret content. The following diagram illustrates how the Secrets Store CSI Driver works on Kubernetes.

(Diagram: Secrets Store CSI Driver architecture with Vault)

It provides the SecretProviderClass CRD to manage that process. In this provider class, we need to set the Vault address and the location of the secret keys. Here's the SecretProviderClass for our scenario. We will use HashiCorp Vault running on Kubernetes.

apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: vault-database
  namespace: default
spec:
  parameters:
    objects: |-
      - objectName: "db-password"
        secretPath: "secret/data/db-pass"
        secretKey: "password"
    roleName: webapp
    vaultAddress: 'http://vault.vault.svc:8200'
  provider: vault

Here's the location of our secret in HashiCorp Vault. As you see, the current value of the password entry is test1.

(Screenshot: the db-pass secret in the Vault UI)

Source Code

As usual, if you would like to try it by yourself, you may always take a look at my source code. In order to do that, you need to clone two of my GitHub repositories. The first of them contains Terraform scripts for installing Vault and the Secrets Store CSI Driver. After cloning it, go to the vault-ocp directory. The second repository contains a simple Spring Boot app for the test scenario. Once you clone it, go to the spring-util-app directory. Then you should just follow my instructions 🙂

Install Vault and Secrets Store CSI Driver with Terraform

As I mentioned before, we will use Terraform to set up almost the whole test scenario today. In the last step, we will just leverage Skaffold to deploy the Spring Boot app on the Kubernetes cluster.

In order to install both Vault and the Secrets Store CSI Driver we will use Helm charts. For that part of the exercise, we need kubernetes (1) and helm (2) as the Terraform providers. The third step (3) is required only if you run the scenario on OpenShift. It changes the default service account access restrictions and security context constraints (SCCs) to ensure that a pod has sufficient permissions to start on OpenShift. Then we may proceed to the Helm chart installation. For the Secrets Store CSI Driver chart (4), it is important to enable secret rotation, since we will test this feature at the end of the article. Finally, we can install the HashiCorp Vault chart (5).

# (1)
provider "kubernetes" {
  config_path = "~/.kube/config"
  config_context = var.cluster-context
}

# (2)
provider "helm" {
  kubernetes {
    config_path = "~/.kube/config"
    config_context = var.cluster-context
  }
}

# (3)
resource "kubernetes_cluster_role_binding" "privileged" {
  metadata {
    name = "system:openshift:scc:privileged"
  }
  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "system:openshift:scc:privileged"
  }
  subject {
    kind      = "ServiceAccount"
    name      = "secrets-store-csi-driver"
    namespace = "k8s-secrets-store-csi"
  }
  subject {
    kind      = "ServiceAccount"
    name      = "vault-csi-provider"
    namespace = "vault"
  }
}

resource "kubernetes_namespace" "vault" {
  metadata {
    name = "vault"
  }
}

resource "kubernetes_service_account" "vault-sa" {
  depends_on = [kubernetes_namespace.vault]
  metadata {
    name      = "vault"
    namespace = "vault"
  }
}

resource "kubernetes_secret_v1" "vault-secret" {
  depends_on = [kubernetes_namespace.vault]
  metadata {
    name = "vault-token"
    namespace = "vault"
    annotations = {
      "kubernetes.io/service-account.name" = "vault"
    }
  }

  type = "kubernetes.io/service-account-token"
}

# (4)
resource "helm_release" "secrets-store-csi-driver" {
  chart            = "secrets-store-csi-driver"
  name             = "csi-secrets-store"
  namespace        = "k8s-secrets-store-csi"
  create_namespace = true
  repository       = "https://kubernetes-sigs.github.io/secrets-store-csi-driver/charts"

  set {
    name  = "linux.providersDir"
    value = "/var/run/secrets-store-csi-providers"
  }

  set {
    name  = "syncSecret.enabled"
    value = "true"
  }

  set {
    name  = "enableSecretRotation"
    value = "true"
  }
}

# (5)
resource "helm_release" "vault" {
  chart            = "vault"
  name             = "vault"
  namespace        = "vault"
  create_namespace = true
  repository       = "https://helm.releases.hashicorp.com"

  values = [
    file("values.yaml")
  ]
}

I'm using the OpenShift platform to run the scenario. In some cases, that requires additional configuration. Without those OpenShift-specific extensions, you can easily run the scenario on vanilla Kubernetes.

The Helm values.yaml file used by the Vault chart is visible below. In order to simplify deployment, we will enable the development mode (1). It generates a root token automatically and runs a single instance of Vault. We can enable a Route for OpenShift (2) and use the image supported by Red Hat (3). Of course, we also need to enable the global OpenShift configuration (4). You can omit steps (2), (3), and (4) when running the scenario on vanilla Kubernetes. Finally, we need to enable CSI support (5) and disable the Vault sidecar injector, which is not needed in this exercise (7). The path in the csi.daemonSet.providersDir property should be the same as linux.providersDir in the Helm chart parameters (6).

server:
  dev:
    enabled: true # (1)
  route: # (2)
    enabled: true
    host: ""
    tls: null
  image: # (3)
    repository: "registry.connect.redhat.com/hashicorp/vault"
    tag: "1.12.4-ubi"
  serviceAccount:
    name: vault
    create: false
global: # (4)
  openshift: true
csi: 
  debug: true
  enabled: true # (5)
  daemonSet:
    providersDir: /var/run/secrets-store-csi-providers # (6)
    securityContext:
      container:
        privileged: true
injector: # (7)
  enabled: false

Finally, let’s apply the configuration to the target cluster.

$ terraform apply -auto-approve -compact-warnings

Here’s the result of the terraform apply command for my cluster:

In order to verify that everything has been installed successfully, we can display the details of the vault-csi-provider DaemonSet in the vault namespace.

$ kubectl describe ds vault-csi-provider -n vault

Then, we can do a very similar thing for the Secrets Store CSI Driver. We need to display the details of the csi-secrets-store-secrets-store-csi-driver DaemonSet.

$ kubectl describe ds csi-secrets-store-secrets-store-csi-driver \
  -n k8s-secrets-store-csi

Configure Vault with Terraform

The big advantage of using Terraform in our scenario is its integration with Vault. There is a dedicated Terraform provider for interacting with HashiCorp Vault. In order to set up the provider, we need to pass the Vault token (root in dev mode) and address. We will still need the kubernetes provider in that part of the exercise.

provider "kubernetes" {
  config_path = "~/.kube/config"
  config_context = var.cluster-context
}

provider "vault" {
  token = "root"
  address = var.vault-addr
}

We need to set the Kubernetes context path name and Vault API address in the variables.tf file. Here’s my Terraform variables.tf file:

variable "cluster-context" {
  type    = string
  default = "default/api-cluster-6sccr-6sccr-sandbox1544-opentlc-com:6443/opentlc-mgr"
}

variable "vault-addr" {
  type = string
  default = "http://vault-vault.apps.cluster-6sccr.6sccr.sandbox1544.opentlc.com"
}

The Terraform script responsible for configuring Vault is visible below. There are several things we need to do before deploying the sample Spring Boot app. Here's the list of required steps:

  1. We need to enable Kubernetes authentication in Vault. The Secrets Store CSI Driver will use it to authenticate against the Vault instance.
  2. In the second step, we create a test secret. Its name is password and its value is test1. It is stored in Vault under the /secret/data/db-pass path.
  3. Then, we have to configure the Kubernetes authentication method.
  4. In the fourth step, we create the policy for our app. It grants read access to the secret created in step 2.
  5. We create the ServiceAccount for our sample Spring Boot app in the default namespace. The name of the ServiceAccount object is webapp-sa.
  6. Finally, we can proceed to the last step of the Vault configuration: the authentication role required to access the secret. The name of the role is webapp, and it is then used by the Secrets Store CSI SecretProviderClass CR. The authentication role refers to the previously created policy and the webapp-sa ServiceAccount in the default namespace.
  7. Once the Vault backend is configured properly, we create the Secrets Store CSI SecretProviderClass CR.

# (1)
resource "vault_auth_backend" "kubernetes" {
  type = "kubernetes"
}

# (2)
resource "vault_kv_secret_v2" "secret" {
  mount = "secret"
  name = "db-pass"
  data_json = jsonencode(
    {
      password = "test1"
    }
  )
}

data "kubernetes_secret" "vault-token" {
  metadata {
    name      = "vault-token"
    namespace = "vault"
  }
}

# (3)
resource "vault_kubernetes_auth_backend_config" "example" {
  backend                = vault_auth_backend.kubernetes.path
  kubernetes_host        = "https://172.30.0.1:443"
  kubernetes_ca_cert     = data.kubernetes_secret.vault-token.data["ca.crt"]
  token_reviewer_jwt     = data.kubernetes_secret.vault-token.data.token
}

# (4)
resource "vault_policy" "internal-app" {
  name = "internal-app"

  policy = <<EOT
path "secret/data/db-pass" {
  capabilities = ["read"]
}
EOT
}

# (5)
resource "kubernetes_service_account" "webapp-sa" {
  metadata {
    name      = "webapp-sa"
    namespace = "default"
  }
}

# (6)
resource "vault_kubernetes_auth_backend_role" "internal-role" {
  backend                          = vault_auth_backend.kubernetes.path
  role_name                        = "webapp"
  bound_service_account_names      = ["webapp-sa"]
  bound_service_account_namespaces = ["default"]
  token_ttl                        = 3600
  token_policies                   = ["internal-app"]
}

# (7)
resource "kubernetes_manifest" "vault-database" {
  manifest = {
    "apiVersion" = "secrets-store.csi.x-k8s.io/v1alpha1"
    "kind"       = "SecretProviderClass"
    "metadata" = {
      "name"      = "vault-database"
      "namespace" = "default"
    }
    "spec" = {
      "provider"   = "vault"
      "parameters" = {
        "vaultAddress" = "http://vault.vault.svc:8200"
        "roleName"     = "webapp"
        "objects"      = "- objectName: \"db-password\"\n  secretPath: \"secret/data/db-pass\"\n  secretKey: \"password\""
      }
    }
  }
}
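The escaped objects string in step (7) is just the flattened, single-line form of the YAML block shown earlier in the SecretProviderClass manifest. If you want to double-check what the escape sequences expand to, you can print it in a shell:

```shell
# Expand the escaped "objects" value from the Terraform manifest to verify
# it matches the multi-line YAML used in the SecretProviderClass earlier.
printf -- '- objectName: "db-password"\n  secretPath: "secret/data/db-pass"\n  secretKey: "password"\n'
```

The printed text should match the objects block from the YAML version of the SecretProviderClass line for line.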

Once again, to apply the configuration we need to execute the terraform apply command.

Of course, we could have applied the whole configuration visible above using the Vault CLI or UI instead of Terraform. Either way, we can verify the result with the Vault UI. In order to log in there, we must use the root token. After login, we need to go to the Access tab and then to the Auth Methods menu. As you see, the Kubernetes auth method with the webapp role defined in the Terraform scripts is there.

(Screenshot: auth methods in the Vault UI)

Let’s switch to the Policies tab. Then we can check out if the internal-app policy exists.

Run the App with Mounted Secrets

Once we have applied the whole configuration with Terraform, we may proceed to the sample Spring Boot app. The idea is pretty simple. Our app just reads data from a file and exposes it through a REST endpoint. Here's the @RestController implementation:

@RestController
@RequestMapping("/api")
class SampleUtilController {

    @GetMapping("/db-password")
    fun resourceString(): String {
        val file = File("/mnt/secrets-store/db-password")
        return if(file.exists()) file.readText()
        else "none"
    }
}

Here's the app Deployment manifest. As you see, we are using the secrets-store.csi.k8s.io CSI driver for the mounted volume. It refers to the vault-database SecretProviderClass object created with the Terraform script. The volume contains the file with the value of our secret. We mount it under the /mnt/secrets-store path, which is accessed by the Spring Boot application.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-util-app
spec:
  selector:
    matchLabels:
      app: sample-util-app
  template:
    metadata:
      labels:
        app: sample-util-app
    spec:
      serviceAccountName: webapp-sa
      containers:
        - name: sample-util-app
          image: piomin/sample-util-app
          ports:
            - containerPort: 8080
          volumeMounts:
            - mountPath: /mnt/secrets-store
              name: secrets-store
              readOnly: true
      volumes:
        - name: secrets-store
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: "vault-database"

Here’s the Kubernetes Service manifest:

apiVersion: v1
kind: Service
metadata:
  name: sample-util-app
spec:
  type: ClusterIP
  selector:
    app: sample-util-app
  ports:
    - port: 8080
      targetPort: 8080

We can easily build and deploy the app with Skaffold. It also allows exposing a port outside the cluster as a local port with the port-forward option.

$ skaffold dev --port-forward

Finally, we can call our test endpoint GET /api/db-password. It returns the value we have already set in Vault for the db-pass/password secret.

$ curl http://localhost:8080/api/db-password
test1

Now, let's test the secret rotation feature. In order to do that, we need to change the value of the db-pass/password secret. We can do it using the Vault UI. Let's set the test2 value:

The Secrets Store CSI Driver periodically polls managed secrets to detect changes. So, after the change, we may need to wait a moment until our app refreshes the value. The default poll interval is 2 minutes. We can override it with the rotation poll interval parameter (e.g. on the Helm chart). Most importantly, everything happens without restarting the pod. The only trace of the change is in the events:
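As an illustration, the poll interval could be shortened by extending the helm_release for the driver with one more set block. This is a sketch, not part of the original scripts: the 30s value is arbitrary, and you should check your chart version's values file for the exact parameter name (it is rotationPollInterval in recent versions of the Secrets Store CSI Driver chart):

```hcl
# Hypothetical addition to the secrets-store-csi-driver helm_release:
# shorten the rotation poll interval from the default 2 minutes.
set {
  name  = "rotationPollInterval"
  value = "30s"
}
```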

Now, let’s query for the latest value of the key used by the app. As you see the value has been refreshed.

Final Thoughts

If you are looking for a solution that injects Vault secrets into your app without creating a Kubernetes Secret, the Secrets Store CSI Driver is the solution for you. It is able to refresh the value of a secret in your app without restarting the container. In this article, I showed how to install and configure it with Terraform to simplify the installation and configuration process.

Sealed Secrets on Kubernetes with ArgoCD and Terraform
https://piotrminkowski.com/2022/12/14/sealed-secrets-on-kubernetes-with-argocd-and-terraform/
Wed, 14 Dec 2022

The post Sealed Secrets on Kubernetes with ArgoCD and Terraform appeared first on Piotr's TechBlog.

In this article, you will learn how to manage secrets securely on Kubernetes in the GitOps approach using Sealed Secrets, ArgoCD, and Terraform. We will use Terraform for setting up both Sealed Secrets and ArgoCD on the Kubernetes cluster. ArgoCD will realize the GitOps model by synchronizing encrypted secrets from the Git repository to the cluster. Sealed Secrets decrypts the data and creates a standard Kubernetes Secret object from the encrypted SealedSecret CRD stored in the Git repository.

How it works

Let’s discuss our architecture in greater detail. In the first step, we are installing ArgoCD and Sealed Secrets on Kubernetes with Terraform. In order to install both these tools, we will leverage Terraform support for Helm charts. During ArgoCD installation we will also create the Application that refers to the Git repository with configuration (1). This repository will contain YAML manifests including an encrypted version of our Kubernetes Secret. When Terraform installs Sealed Secrets it sets the private key for secrets decryption and the public key for encryption (2).

Once we successfully install Sealed Secrets, we can interact with its controller running on the cluster using the kubeseal CLI. With the kubeseal command, we can get an encrypted version of an input Kubernetes Secret (3). Then we place the encrypted secret inside the repository with the app deployment manifests (4). Argo CD will automatically apply the latest configuration to the Kubernetes cluster (5). Once the new encrypted secret appears, Sealed Secrets detects it and tries to decrypt it using the previously set private key (6). As a result, a new Secret object is created and then injected into our sample app (7). That's the last step of our exercise. We will test the result using the HTTP endpoint exposed by the app.

(Diagram: Sealed Secrets with ArgoCD and Terraform architecture)

Prerequisites

To proceed with the exercise you need to have a running instance of Kubernetes. It can be a local or a cloud instance – it doesn’t matter. Additionally, you also need to install two CLI tools on your laptop:

  1. kubeseal – the client-side part of Sealed Secrets. You will find the installation instructions here.
  2. terraform – to run Terraform HCL scripts you need its CLI. You will find the installation instructions here.

Source Code

If you would like to try it by yourself, you can always take a look at my source code. In order to do that you need to clone my GitHub repository. This repository contains sample Terraform scripts for initializing Kubernetes. You should go to the sealedsecrets directory. Then just follow my instructions.

Install ArgoCD and Sealed Secrets with Terraform

Assuming you already have the terraform CLI installed the first thing you need to do is to define the Helm provider with the path to your Kube config and the name of the Kube context. Here’s our providers.tf file:

provider "kubernetes" {
  config_path = "~/.kube/config"
  config_context = var.cluster-context
}

provider "helm" {
  kubernetes {
    config_path = "~/.kube/config"
    config_context = var.cluster-context
  }
}

Since I’m using Kubernetes on the Docker Desktop the name of my context is docker-desktop. Here’s the variables.tf file:

variable "cluster-context" {
  type    = string
  default = "docker-desktop"
}

Here’s the Terraform script for installing ArgoCD and Sealed Secrets. For Sealed Secrets, we need to set keys for encryption and decryption. By default, the Sealed Secrets chart detects an existing TLS secret with the name sealed-secrets-key inside the target namespace. If it does not exist the chart creates a new one containing generated keys. In order to define the secret with the predefined TLS keys, we first need to create the namespace (1). Then we create the Secret sealed-secrets-key that contains our tls.crt and tls.key (2). After that we may install Sealed Secrets in the sealed-secrets namespace using Helm chart (3).
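The script below assumes the key pair already exists in the keys/ directory as tls.crt and tls.key. One way to generate a self-signed pair for this purpose (a sketch, not the only option – any valid RSA certificate and key should work, and the subject name below is arbitrary) is with openssl:

```shell
# Generate a self-signed certificate and private key for Sealed Secrets.
# The subject is arbitrary; the controller only needs the RSA key material.
openssl req -x509 -days 365 -nodes -newkey rsa:4096 \
  -keyout keys/tls.key -out keys/tls.crt \
  -subj "/CN=sealed-secrets/O=sealed-secrets"
```

Terraform then reads these two files with file("keys/tls.crt") and file("keys/tls.key") when creating the sealed-secrets-key Secret.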

At the same time, we are installing ArgoCD in the argocd namespace also using Helm chart (4). The chart automatically creates the namespace thanks to the create_namespace parameter. Once we install ArgoCD we can create the Application object responsible for synchronization between the Git repository and Kubernetes cluster. We can also do it using the same Terraform script thanks to the argocd-apps Helm chart (5). It allows us to define a list of ArgoCD Applications inside the Helm values file (6).

# Sealed Secrets Installation

# (1)
resource "kubernetes_namespace" "sealed-secrets-ns" {
  metadata {
    name = "sealed-secrets"
  }
}

# (2)
resource "kubernetes_secret" "sealed-secrets-key" {
  depends_on = [kubernetes_namespace.sealed-secrets-ns]
  metadata {
    name      = "sealed-secrets-key"
    namespace = "sealed-secrets"
  }
  data = {
    "tls.crt" = file("keys/tls.crt")
    "tls.key" = file("keys/tls.key")
  }
  type = "kubernetes.io/tls"
}

# (3)
resource "helm_release" "sealed-secrets" {
  depends_on = [kubernetes_secret.sealed-secrets-key]
  chart      = "sealed-secrets"
  name       = "sealed-secrets"
  namespace  = "sealed-secrets"
  repository = "https://bitnami-labs.github.io/sealed-secrets"
}

# ArgoCD Installation

# (4)
resource "helm_release" "argocd" {
  chart            = "argo-cd"
  name             = "argocd"
  namespace        = "argocd"
  repository       = "https://argoproj.github.io/argo-helm"
  create_namespace = true
}

# (5)
resource "helm_release" "argocd-apps" {
  depends_on = [helm_release.argocd]
  chart      = "argocd-apps"
  name       = "argocd-apps"
  namespace  = "argocd"
  repository = "https://argoproj.github.io/argo-helm"

  # (6)
  values = [
    file("argocd/applications.yaml")
  ]
}

We store Helm values inside the argocd/applications.yaml file. In fact, we are going to apply the same set of YAML manifests into two different namespaces: demo-1 and demo-2. The namespaces are automatically created during synchronization.

applications:
 - name: sample-app-1
   namespace: argocd
   project: default
   source:
     repoURL: https://github.com/piomin/openshift-cluster-config.git
     targetRevision: HEAD
     path: apps/simple
   destination:
     server: https://kubernetes.default.svc
     namespace: demo-1
   syncPolicy:
     automated:
       prune: false
       selfHeal: false
     syncOptions:
      - CreateNamespace=true
 - name: sample-app-2
   namespace: argocd
   project: default
   source:
     repoURL: https://github.com/piomin/openshift-cluster-config.git
     targetRevision: HEAD
     path: apps/simple
   destination:
     server: https://kubernetes.default.svc
     namespace: demo-2
   syncPolicy:
     automated:
       prune: false
       selfHeal: false
     syncOptions:
       - CreateNamespace=true

Now, the only thing left is to apply the configuration to the Kubernetes cluster. Before we do that, we need to initialize the Terraform working directory with the following command:

$ cd sealed-secrets
$ terraform init

Finally, we can apply the configuration:

$ terraform apply

Here’s the output of the terraform apply command:

Encrypt Secret with Kubeseal

Assuming you have already installed Sealed Secrets with Terraform on your Kubernetes cluster and kubeseal CLI on your laptop you can encrypt your secret for the first time. Here’s our Kubernetes Secret. It contains just the single field password with the base64-encoded value 123456. We are going to create the SealedSecret object from that Secret using the kubeseal command.

apiVersion: v1
kind: Secret
metadata:
  name: sample-secret
type: Opaque
data:
  password: MTIzNDU2
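Note that the data values in a Kubernetes Secret are only base64-encoded, not encrypted, which is exactly why we need Sealed Secrets before committing them to Git. You can reproduce the encoded value above from the shell:

```shell
# Base64-encode the plaintext password; -n avoids a trailing newline
# ending up inside the encoded value.
echo -n '123456' | base64
```

This prints MTIzNDU2, the value used in the manifest above.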

By default, kubeseal tries to find the Sealed Secrets controller under the sealed-secrets-controller name inside the kube-system namespace. As you see we have already installed it in the sealed-secrets namespace under the sealed-secrets name.

We need to override both the controller name and namespace in the kubeseal command with the --controller-name and --controller-namespace parameters. Here’s our command:

$ kubeseal -f sample-secret.yaml -w sample-sealed-secret.yaml \
   --controller-name sealed-secrets \
   --controller-namespace sealed-secrets

The result may be quite surprising. Sealed Secrets doesn’t allow encrypting secrets without a namespace set in the YAML manifest. That’s because, by default, it uses a strict scope. With that scope, the secret must be sealed with exactly the same name and namespace. These attributes become part of the encrypted data. For me, it adds the default namespace as shown below.

Therefore, it won't be possible to decrypt the secret in a namespace different from the one set for the input Kubernetes Secret. On the other hand, we want to apply the same configuration in two different namespaces: demo-1 and demo-2. In that case, we have to change the default scope to cluster-wide. With that kubeseal parameter, the secret can be unsealed in any namespace and can be given any name. Here's the command we should use to generate the SealedSecret object:

$ kubeseal -f sample-secret.yaml -w sample-sealed-secret.yaml \
   --controller-name sealed-secrets \
   --controller-namespace sealed-secrets \
   --scope cluster-wide

The output file of the command visible above contains the encrypted secret inside the SealedSecret object. Now, we should just add that YAML manifest to our Git repository.

Apply Sealed Secret with ArgoCD

Our sample Git repository with the configuration for ArgoCD is available here. You should go to the apps/simple-with-secret directory. There you will find Deployment, Service, and SealedSecret objects. What's important, they don't have any namespace set. Here's our SealedSecret object:

apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  annotations:
    sealedsecrets.bitnami.com/cluster-wide: "true"
  creationTimestamp: null
  name: sample-secret
spec:
  encryptedData:
    password: AgCW2Nf1gZzn42QQai/zr0VAtb5ZFyOjxMC8ghYcp5bu4EiYmJupX726zTx4XHQThrPgi/jHvzJoymToYJMIYuMegfKmZGcyMMZxJavYFTtlF9CIegPCkD3kjrJMCWcOadyDkBNIIfFAO6ljPwMD+stpsoBZ6WT8fGokxSwE/poKPpWFozC5RImf7HsYjGYVd8onxCySmcJZFYERi2G0qSWBlFDUsJ/ao5vyxIeiS25DBV1Bn475Lgyv6uTfvY6mesvrxw7OWjJmve2xRD/hS87Wp7cdBE264M/NMk1z24VysQr/ezSSI6S14NgzbcWo/hsKwWLmy6u259o8Xot5nVYpo2EhKFm/r62rko0eC2XMkjXhntMLKLpML3mTdadIFK50OauJvyVZS21sgeTlIMeSq6A6trekYyZvBtQaVixIthGHa/ymJXlIBZVJRL7/SJXquaX+J75AXUzPD3Hag8Kt5R5F6TVY2ox8RkMCpAVVAsiztMbyfRgzel6cAfDyj6l5f8GWI2T7gu5uHXgZFwVeyESn3aTO8qqws6NpLlwrtnjLwoCiXXC1Qo39wXaSJoH7fdJwihvOyiwbfaHkjhQwavNHpBoMEbKYQTV6DXSOTN8eeT1ZPoTN8AM+DtMdS2IpvMxZRsgaanh3O7gf5L02nGEq2WyP75s5sLoa7F8dQ27ZUeznqxIrNzrLqNM4dJuqZTbL4AM=
  template:
    metadata:
      annotations:
        sealedsecrets.bitnami.com/cluster-wide: "true"
      creationTimestamp: null
      name: sample-secret
    type: Opaque

Once it is applied to the cluster, the Sealed Secrets controller will decrypt it and create a Kubernetes Secret object. Our sample app just takes the Kubernetes Secret and prints the value of the password key.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app
spec:
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
      - name: sample-app
        image: quay.io/pminkows/sample-kotlin-spring:1.4.30
        ports:
        - containerPort: 8080
          name: http
        env:
          - name: PASS
            valueFrom:
              secretKeyRef:
                key: password
                name: sample-secret

We will test the app’s HTTP endpoint through the Kubernetes Service:

apiVersion: v1
kind: Service
metadata:
  name: sample-app
spec:
  type: ClusterIP
  selector:
    app: sample-app
  ports:
  - port: 8080
    name: http

Let’s verify what happened. Go to the ArgoCD dashboard. As you see there are two applications created by the Terraform script during installation. Both of them automatically applied the configuration from the Git repository to the cluster.

(Screenshot: the two ArgoCD applications in the dashboard)

Let’s display details of the selected ArgoCD Application. As you see the sample-secret Secret object has already been created from the sample-secret SealedSecret object.

(Screenshot: ArgoCD application details with the decrypted Secret)

Now, let’s enable port-forward for the Kubernetes Service on port 8080:

$ kubectl port-forward svc/sample-app 8080:8080 -n demo-1

The app is able to display a list of environment variables. We can also display just a selected variable by calling the following endpoint:

$ curl http://localhost:8080/actuator/env/PASS

Final Thoughts

In general, there are two popular approaches to managing secrets on Kubernetes in the GitOps style. In the first one, we store the encrypted value of the secret in the Git repository. Then, software running on the cluster decrypts the value and creates a Kubernetes Secret. That approach is represented by Sealed Secrets and has been described today. In the second one, we store just a reference to the secret in the Git repository, not its value. The value of the secret is kept in a third-party tool. Based on that reference, software running on the cluster retrieves the value.

The most popular example of such a third-party tool is HashiCorp Vault. You can read more about managing secrets with Vault and ArgoCD in the following article. There is also another very promising project in that area – External Secrets. You can expect my article about it soon 🙂

Manage Secrets on Kubernetes with ArgoCD and Vault
https://piotrminkowski.com/2022/08/08/manage-secrets-on-kubernetes-with-argocd-and-vault/
Mon, 08 Aug 2022

In this article, you will learn how to integrate ArgoCD with HashiCorp Vault to manage secrets on Kubernetes. In order to use ArgoCD and Vault together during the GitOps process, we will use the following plugin. It replaces the placeholders inside YAML or JSON manifests with values taken from Vault. Importantly for our case, it also supports Helm templates.

You can use Vault in several different ways on Kubernetes. For example, you may integrate it directly with your Spring Boot app using the Spring Cloud Vault project. To read more about it, please refer to the dedicated post on my blog.

Prerequisites

In this exercise, we are going to use Helm a lot. Our sample app Deployment is based on a Helm template. We also use Helm to install Vault and ArgoCD on Kubernetes. Finally, we need to customize the ArgoCD Helm chart parameters to enable and configure the ArgoCD Vault Plugin. So, before proceeding, please ensure you have basic knowledge of Helm. Of course, you should also install it on your laptop. For me, it is possible with Homebrew:

$ brew install helm

Source Code

If you would like to try it out yourself, you can always take a look at my source code. In order to do that, you need to clone my GitHub repository. Then you should just follow my instructions 🙂

Run and Configure Vault on Kubernetes

In the first step, we are going to install Vault on Kubernetes. We can easily do it using its official Helm chart. In order to simplify our exercise, we run it in development mode with a single server instance. Normally, you would configure it in HA mode. Let's add the HashiCorp Helm repository:

$ helm repo add hashicorp https://helm.releases.hashicorp.com

In order to enable development mode, the parameter server.dev.enabled should be set to true. We don't need to override any other default values:

$ helm install vault hashicorp/vault \
    --set "server.dev.enabled=true"

To check if Vault is successfully installed on the Kubernetes cluster we can display a list of running pods:

$ kubectl get pod 
NAME                                   READY   STATUS    RESTARTS   AGE
vault-0                                1/1     Running   0          25s
vault-agent-injector-9456c6d55-hx2fd   1/1     Running   0          21s

We can configure Vault in several different ways. One of the options is through the UI. To log in there, we may use the root token, which is generated only in development mode. Vault also exposes an HTTP API. The last available option is the CLI. The CLI is available inside the Vault pod, but we can also install it locally. For me, it is possible using the brew install vault command. Then we need to enable port-forwarding and export the local Vault address as the VAULT_ADDR environment variable:

$ kubectl port-forward vault-0 8200
$ export VAULT_ADDR=http://127.0.0.1:8200
$ vault status

Then just log in to the Vault server using the root token.

Enable Kubernetes Authentication on Vault

There are several authentication methods on Vault. However, since we run it on Kubernetes, we should use the method dedicated to that platform. What is important, this method is also supported by the ArgoCD Vault Plugin. Firstly, let's enable the Kubernetes auth method:

$ vault auth enable kubernetes

Then, we need to configure our authentication method. There are three required parameters: the URL of the Kubernetes API server, Kubernetes CA certificate, and a token reviewer JWT.

$ vault write auth/kubernetes/config \
    token_reviewer_jwt="<your reviewer service account JWT>" \
    kubernetes_host=<your Kubernetes API address> \
    kubernetes_ca_cert=@ca.crt

In order to easily obtain all those parameters, you can run the following three commands. Then you can also set them, e.g. using the Vault UI.

$ echo "https://$( kubectl exec vault-0 -- env | grep KUBERNETES_PORT_443_TCP_ADDR| cut -f2 -d'='):443"
$ kubectl exec vault-0 \
  -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
$ echo $(kubectl exec vault-0 -- cat /var/run/secrets/kubernetes.io/serviceaccount/token)
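To see what the first command actually extracts, here is the same cut pipeline run against a sample env line instead of a live pod (the 10.96.0.1 address is just an illustrative in-cluster API address, not a value from this setup):

```shell
# The pod's env contains a line like this; cut takes everything after '='.
ADDR="$(echo 'KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1' | cut -f2 -d'=')"
# Wrap it the same way the first command does to build the API server URL.
echo "https://${ADDR}:443"
# → https://10.96.0.1:443
```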

Once we pass all the required parameters, we may proceed to the named role creation. ArgoCD Vault Plugin will use that role to authenticate against the Vault server. We need to provide the namespace with ArgoCD and the name of the Kubernetes service account used by the ArgoCD Repo Server. Our token expires after 24 hours.

$ vault write auth/kubernetes/role/argocd \
  bound_service_account_names=argocd-repo-server \
  bound_service_account_namespaces=argocd \
  policies=argocd \
  ttl=24h

That’s all for now. We will also need to create a test secret on Vault and configure a policy for the argocd role. Before that, let’s take a look at our sample Spring Boot app and its Helm template.

Helm Template for Spring Boot App

Our app is very simple. It just exposes a single HTTP endpoint that returns the value of the environment variable inside a container. Here’s the REST controller class written in Kotlin.

@RestController
@RequestMapping("/persons")
class PersonController {

    @Value("\${PASS:none}")
    lateinit var pass: String

    @GetMapping("/pass")
    fun printPass() = pass

}

We will use a generic Helm chart to deploy our app on Kubernetes. Our Deployment template contains a list of environment variables defined for the container.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.app.name }}
spec:
  replicas: {{ .Values.app.replicas }}
  selector:
    matchLabels:
      app: {{ .Values.app.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.app.name }}
    spec:
      containers:
        - name: {{ .Values.app.name }}
          image: {{ .Values.image.registry }}/{{ .Values.image.repository }}:{{ .Values.image.tag }}
          ports:
          {{- range .Values.app.ports }}
            - containerPort: {{ .value }}
              name: {{ .name }}
          {{- end }}
          {{- if .Values.app.envs }}
          env:
          {{- range .Values.app.envs }}
            - name: {{ .name }}
              value: {{ .value }}
          {{- end }}
          {{- end }}

There is also another file in the templates directory. It contains a definition of the Kubernetes Service.

apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.app.name }}
spec:
  type: ClusterIP
  selector:
    app: {{ .Values.app.name }}
  ports:
  {{- range .Values.app.ports }}
  - port: {{ .value }}
    name: {{ .name }}
  {{- end }}

Finally, let’s take a look at the Chart.yaml file.

apiVersion: v2
name: sample-with-envs
description: A Helm chart for Kubernetes
type: application
version: 1.0
appVersion: "1.0"

Our goal is to use this Helm chart to deploy the sample Spring Boot app on Kubernetes with ArgoCD and Vault. Of course, before we do it we need to install ArgoCD.

Install ArgoCD with Vault Plugin on Kubernetes

Normally, it would be a very simple installation. But this time we need to customize the ArgoCD template to install it with the Vault plugin. Or more precisely, we have to customize the configuration of the ArgoCD Repository Server. It is one of the ArgoCD internal services. It maintains the local cache of the Git repository and generates Kubernetes manifests.


There are several options for installing the Vault plugin on ArgoCD. The full list of options is available here. Starting with version 2.4.0 of ArgoCD, it is possible to install it via a sidecar container. We will choose the option based on a sidecar and an initContainer. You may read more about it here. However, our case is a little different, since we use Helm instead of Kustomize for installing ArgoCD. To clarify, we need to do three things to install the Vault plugin on ArgoCD. Let's analyze those steps:

  • define initContainer on the ArgoCD Repository Server Deployment to download argocd-vault-plugin binaries and mount them on the volume
  • define the ConfigMap containing the ConfigManagementPlugin CRD overriding a default behavior of Helm on ArgoCD
  • customize argocd-repo-server Deployment to mount the volume with argocd-vault-plugin and the ConfigMap created in the previous step

After those steps, we would have to integrate the plugin with the running instance of the Vault server. We will use the previously created Vault argocd role.

Firstly, let’s create the ConfigMap to customize the default behavior of Helm on ArgoCD. After running the helm template command we will also run the argocd-vault-plugin generate command to replace all inline placeholders with the secrets defined in Vault. The address and auth configuration of Vault are defined in the vault-configuration secret.

apiVersion: v1
kind: ConfigMap
metadata:
  name: cmp-plugin
data:
  plugin.yaml: |
    ---
    apiVersion: argoproj.io/v1alpha1
    kind: ConfigManagementPlugin
    metadata:
      name: argocd-vault-plugin-helm
    spec:
      allowConcurrency: true
      discover:
        find:
          command:
            - sh
            - "-c"
            - "find . -name 'Chart.yaml' && find . -name 'values.yaml'"
      generate:
        command:
          - bash
          - "-c"
          - |
            helm template $ARGOCD_APP_NAME -n $ARGOCD_APP_NAMESPACE -f <(echo "$ARGOCD_ENV_HELM_VALUES") . |
            argocd-vault-plugin generate -s vault-configuration -
      lockRepo: false
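Note the `<(echo "$ARGOCD_ENV_HELM_VALUES")` construct in the generate command above: Bash process substitution presents the variable's content as a temporary file, which is how the inline values reach `helm template -f`. A minimal standalone illustration (plain bash, with hypothetical values):

```shell
#!/usr/bin/env bash
# Process substitution: <( ... ) behaves like a file containing the command's
# output, so `-f <(echo "$HELM_VALUES")` feeds inline values to helm as a file.
HELM_VALUES='app:
  name: sample-app'
cat <(echo "$HELM_VALUES")
# prints:
# app:
#   name: sample-app
```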

Here’s the vault-configuration Secret:

apiVersion: v1
kind: Secret
metadata:
  name: vault-configuration
  namespace: argocd 
data:
  AVP_AUTH_TYPE: azhz
  AVP_K8S_ROLE: YXJnb2Nk
  AVP_TYPE: dmF1bHQ=
  VAULT_ADDR: aHR0cDovL3ZhdWx0LmRlZmF1bHQ6ODIwMA==
type: Opaque

To see the values let’s display the Secret in Lens. Vault is running in the default namespace, so its address is http://vault.default:8200. The name of our role in Vault is argocd. We also need to set the auth type as k8s.
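Alternatively, we can decode the values directly in the terminal with base64 (the encoded strings below are copied from the Secret manifest above):

```shell
# Decode the data entries of the vault-configuration Secret.
echo 'azhz' | base64 -d                                   # AVP_AUTH_TYPE → k8s
echo 'YXJnb2Nk' | base64 -d                               # AVP_K8S_ROLE → argocd
echo 'dmF1bHQ=' | base64 -d                               # AVP_TYPE → vault
echo 'aHR0cDovL3ZhdWx0LmRlZmF1bHQ6ODIwMA==' | base64 -d   # VAULT_ADDR → http://vault.default:8200
```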

Finally, we need to customize the ArgoCD Helm installation. To achieve that, let's define the Helm values.yaml file. It contains the definition of the initContainer and the sidecar for argocd-repo-server. We also mount the cmp-plugin ConfigMap into the Deployment, and add additional privileges to the argocd-repo-server ServiceAccount to allow reading Secrets.

repoServer:
  rbac:
    - verbs:
        - get
        - list
        - watch
      apiGroups:
        - ''
      resources:
        - secrets
        - configmaps
  initContainers:
    - name: download-tools
      image: registry.access.redhat.com/ubi8
      env:
        - name: AVP_VERSION
          value: 1.11.0
      command: [sh, -c]
      args:
        - >-
          curl -L https://github.com/argoproj-labs/argocd-vault-plugin/releases/download/v$(AVP_VERSION)/argocd-vault-plugin_$(AVP_VERSION)_linux_amd64 -o argocd-vault-plugin &&
          chmod +x argocd-vault-plugin &&
          mv argocd-vault-plugin /custom-tools/
      volumeMounts:
        - mountPath: /custom-tools
          name: custom-tools

  extraContainers:
    - name: avp-helm
      command: [/var/run/argocd/argocd-cmp-server]
      image: quay.io/argoproj/argocd:v2.4.8
      securityContext:
        runAsNonRoot: true
        runAsUser: 999
      volumeMounts:
        - mountPath: /var/run/argocd
          name: var-files
        - mountPath: /home/argocd/cmp-server/plugins
          name: plugins
        - mountPath: /tmp
          name: tmp-dir
        - mountPath: /home/argocd/cmp-server/config
          name: cmp-plugin
        - name: custom-tools
          subPath: argocd-vault-plugin
          mountPath: /usr/local/bin/argocd-vault-plugin

  volumes:
    - configMap:
        name: cmp-plugin
      name: cmp-plugin
    - name: custom-tools
      emptyDir: {}
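One detail worth noting about the initContainer above: `$(AVP_VERSION)` in its args is expanded by Kubernetes itself (container env-var substitution), not by the shell. Rebuilt in plain shell for the pinned version 1.11.0, the download URL it assembles looks like this:

```shell
# Rebuild the argocd-vault-plugin download URL for AVP_VERSION=1.11.0.
AVP_VERSION=1.11.0
echo "https://github.com/argoproj-labs/argocd-vault-plugin/releases/download/v${AVP_VERSION}/argocd-vault-plugin_${AVP_VERSION}_linux_amd64"
# → https://github.com/argoproj-labs/argocd-vault-plugin/releases/download/v1.11.0/argocd-vault-plugin_1.11.0_linux_amd64
```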

In order to install ArgoCD on Kubernetes add the following Helm repository:

$ helm repo add argo https://argoproj.github.io/argo-helm

Let’s install it in the argocd namespace using customized parameters in the values.yaml file:

$ kubectl create ns argocd
$ helm install argocd argo/argo-cd -n argocd -f values.yaml

Sync Vault Secrets with ArgoCD

Once we have deployed Vault and ArgoCD on Kubernetes, we may proceed to the next step. Now, we are going to create a secret on Vault. Firstly, let's enable the KV engine:

$ vault secrets enable kv-v2

Then, we can create a sample secret with the argocd name and a single password key:

$ vault kv put kv-v2/argocd password="123456"

ArgoCD Vault Plugin uses the argocd policy to read secrets. So, in the next step, we need to create the following policy to enable reading the previously created secret. Note that with the KV v2 engine, the full read path contains an additional data/ segment, hence kv-v2/data/argocd:

$ vault policy write argocd - <<EOF
path "kv-v2/data/argocd" {
  capabilities = ["read"]
}
EOF

Then, we may define the ArgoCD Application for deploying our Spring Boot app on Kubernetes. The Helm template for the Kubernetes manifests is available in the GitHub repository under the simple-with-envs directory (1). As the tool for creating manifests, we choose a plugin (2). However, we won't set its name, since we use a sidecar container with argocd-vault-plugin. The ArgoCD Vault Plugin allows passing inline values in the application manifest. It reads the content defined inside the HELM_VALUES environment variable (3) (the variable name depends on what is set inside the cmp-plugin ConfigMap). And finally, the most important thing: the ArgoCD Vault Plugin looks for placeholders inside <> brackets. For inline values, a placeholder has the following structure: <path:path_to_the_vault_secret#name_of_the_key> (4). In our case, we define the environment variable PASS that uses the argocd secret and the password key stored inside the KV engine.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: simple-helm
  namespace: argocd
spec:
  destination:
    name: ''
    namespace: default
    server: https://kubernetes.default.svc
  project: default
  source:
    path: simple-with-envs # (1)
    repoURL: https://github.com/piomin/sample-generic-helm-charts.git 
    targetRevision: HEAD
    plugin: # (2)
      env:
        - name: HELM_VALUES # (3)
          value: |
            image:
              registry: quay.io
              repository: pminkows/sample-kotlin-spring
              tag: "1.4.30"

            app:
              name: sample-spring-boot-kotlin
              replicas: 1
              ports:
                - name: http
                  value: 8080
              envs:
                - name: PASS
                  value: <path:kv-v2/data/argocd#password> # (4)
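To make step (4) concrete, the substitution the plugin performs can be mimicked with a sed one-liner. This is an illustration only: the real plugin resolves the value by querying Vault, while here we hard-code the 123456 password created earlier:

```shell
# Illustration: replace the AVP placeholder the way the plugin would,
# with the secret value hard-coded instead of fetched from Vault.
line='value: <path:kv-v2/data/argocd#password>'
echo "$line" | sed 's|<path:kv-v2/data/argocd#password>|123456|'
# → value: 123456
```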

Finally, we can create the ArgoCD Application. It should initially have the OutOfSync status.

Let's synchronize its state with the Git repository. We can do it using e.g. the ArgoCD UI. If everything works fine, you should see a green tile with your application name.


Then, let’s just verify the structure of our app Deployment. You should see the value 123456 instead of the placeholder defined inside the ArgoCD Application.


It is just a formality, but in the end, you can test the GET /persons/pass endpoint exposed by our Spring Boot app. It prints the value of the PASS environment variable. To do that, you should also enable port-forwarding for the app.

$ kubectl port-forward svc/simple-helm 8080:8080
$ curl http://localhost:8080/persons/pass

Final Thoughts

The GitOps approach has become very popular in Kubernetes-based environments. As always, one of the greatest challenges with that approach is security. HashiCorp Vault is one of the best tools for managing and protecting sensitive data. It can easily be installed on Kubernetes and included in your GitOps process. In this article, I showed how to use it together with other very popular solutions for deploying apps: ArgoCD and Helm.

The post Manage Secrets on Kubernetes with ArgoCD and Vault appeared first on Piotr's TechBlog.
