Loki Archives - Piotr's TechBlog (https://piotrminkowski.com/tag/loki/)

Logging in Kubernetes with Loki (20 Jul 2023)
In this article, you will learn how to install, configure, and use Loki to collect logs from apps running on Kubernetes. Together with Loki, we will use the Promtail agent to ship the logs and the Grafana dashboard to display them in graphical form. We will also create a simple app written in Quarkus that prints logs in JSON format. Of course, Loki will collect the logs from the whole cluster. If you are interested in other approaches for integrating your apps with Loki, you can read my article showing how to send Spring Boot app logs to Loki using the Loki4j Logback appender. You can also find an article about the Grafana Agent used to send logs from a Spring Boot app to Loki on Grafana Cloud here.

Source Code

If you would like to try it by yourself, you may always take a look at my source code. In order to do that, you need to clone my GitHub repository. Then you should just follow my instructions.

Install Loki Stack on Kubernetes

In the first step, we will install Loki Stack on Kubernetes. The most convenient way to do it is through the Helm chart. Fortunately, there is a single Helm chart that installs and configures all the tools required in our exercise: Loki, Promtail, and Grafana. Let’s add the following Helm repository:

$ helm repo add grafana https://grafana.github.io/helm-charts

Then, we can install the loki-stack chart. By default, it does not install Grafana. In order to enable Grafana we need to set the grafana.enabled parameter to true. Our Loki Stack is installed in the loki-stack namespace:

$ helm install loki grafana/loki-stack \
  -n loki-stack \
  --set grafana.enabled=true \
  --create-namespace

Here’s a list of running pods in the loki-stack namespace:

$ kubectl get po -n loki-stack
NAME                           READY   STATUS    RESTARTS   AGE
loki-0                         1/1     Running   0          78s
loki-grafana-bf598db67-czcds   2/2     Running   0          93s
loki-promtail-vt25p            1/1     Running   0          30s

Let’s enable port forwarding to access the Grafana dashboard on the local port:

$ kubectl port-forward svc/loki-grafana 3000:80 -n loki-stack

The Helm chart automatically generates a password for the admin user. We can obtain it with the following command:

$ kubectl get secret -n loki-stack loki-grafana \
    -o jsonpath="{.data.admin-password}" | \
    base64 --decode ; echo

Once we log in to the dashboard, we will see the auto-configured Loki data source. We can use it to query the latest logs from the Kubernetes cluster:
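For example, a minimal LogQL query in the Explore view that shows the latest logs from a single namespace looks like this (here for the `loki-stack` namespace itself):

```logql
{namespace="loki-stack"}
```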

It seems that the `loki-stack` Helm chart is no longer maintained. As a replacement, we can use three separate Helm charts for Loki, Promtail, and Grafana. It is described in the last section of this article. Although `loki-stack` simplifies installation, it is no longer a suitable method for production. Instead, we should use the `loki-distributed` chart.

Create and Deploy Quarkus App on Kubernetes

In the next step, we will install our sample Quarkus app on Kubernetes. It connects to the Postgres database. Therefore, we will also install Postgres with the Bitnami Helm chart:

$ helm install person-db bitnami/postgresql -n sample-quarkus \
  --set auth.username=quarkus  \
  --set auth.database=quarkus  \
  --set fullnameOverride=person-db \
  --create-namespace

With Quarkus, we can easily change the log format to JSON. We just need to include the following Maven dependency:

<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-logging-json</artifactId>
</dependency>

And also enable JSON logging in the application properties:

quarkus.log.console.json = true

Besides the static logging fields, we will include a single dynamic field. We will use the MDC mechanism for that (1) (2). That field indicates the id of the person for whom we make the GET or POST request. Here’s the code of the REST controller:

@Path("/persons")
public class PersonResource {

    private PersonRepository repository;
    private Logger logger;

    public PersonResource(PersonRepository repository, Logger logger) {
        this.repository = repository;
        this.logger = logger;
    }

    @POST
    @Transactional
    public Person add(Person person) {
        repository.persist(person);
        MDC.put("personId", person.id); // (1)
        logger.infof("IN -> add(%s)", person);
        return person;
    }

    @GET
    @APIResponseSchema(Person.class)
    public List<Person> findAll() {
        logger.info("IN -> findAll");
        return repository.findAll()
                .list();
    }

    @GET
    @Path("/{id}")
    public Person findById(@PathParam("id") Long id) {
        MDC.put("personId", id); // (2)
        logger.infof("IN -> findById(%d)", id);
        return repository.findById(id);
    }
}

Here’s the sample log for the GET endpoint. Now, our goal is to parse and index it properly in Loki with Promtail.
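A Quarkus JSON log line for the GET endpoint looks roughly like the following; the logger name, timestamps, and ids are illustrative, and only a subset of the fields emitted by quarkus-logging-json is shown:

```json
{
  "timestamp": "2023-07-20T22:50:37.123+02:00",
  "sequence": 1201,
  "loggerName": "pl.piomin.services.person.PersonResource",
  "level": "INFO",
  "message": "IN -> findById(1)",
  "threadName": "executor-thread-1",
  "mdc": {
    "personId": "1"
  }
}
```

Note that the `sequence`, `level`, `message`, and `mdc` fields are the ones we will extract in the Promtail pipeline later on.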

Now, we need to deploy our sample app on Kubernetes. Fortunately, with Quarkus we can build and deploy the app using a single Maven command. We just need to activate the following custom profile, which includes the quarkus-kubernetes dependency and enables deployment with the quarkus.kubernetes.deploy property. It also activates the image build using the Jib Maven plugin.

<profile>
  <id>kubernetes</id>
  <activation>
    <property>
      <name>kubernetes</name>
    </property>
  </activation>
  <dependencies>
    <dependency>
      <groupId>io.quarkus</groupId>
      <artifactId>quarkus-container-image-jib</artifactId>
    </dependency>
    <dependency>
      <groupId>io.quarkus</groupId>
      <artifactId>quarkus-kubernetes</artifactId>
    </dependency>
  </dependencies>
  <properties>
    <quarkus.kubernetes.deploy>true</quarkus.kubernetes.deploy>
  </properties>
</profile>

Let’s build and deploy the app:

$ mvn clean package -DskipTests -Pkubernetes

Here’s the list of running pods (database and app):

$ kubectl get po -n sample-quarkus
NAME                             READY   STATUS    RESTARTS   AGE
person-db-0                      1/1     Running   0          48s
person-service-9f67b6d57-gvbs6   1/1     Running   0          18s

Configure Promtail to Parse JSON Logs

Let’s take a look at the Promtail configuration. We can find it inside the loki-promtail Secret. As you can see, it uses only the cri component.

server:
  log_level: info
  http_listen_port: 3101


clients:
  - url: http://loki:3100/loki/api/v1/push

positions:
  filename: /run/promtail/positions.yaml

scrape_configs:
  - job_name: kubernetes-pods
    pipeline_stages:
      - cri: {}
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels:
          - __meta_kubernetes_pod_controller_name
        regex: ([0-9a-z-.]+?)(-[0-9a-f]{8,10})?
        action: replace
        target_label: __tmp_controller_name
      - source_labels:
          - __meta_kubernetes_pod_label_app_kubernetes_io_name
          - __meta_kubernetes_pod_label_app
          - __tmp_controller_name
          - __meta_kubernetes_pod_name
        regex: ^;*([^;]+)(;.*)?$
        action: replace
        target_label: app
      - source_labels:
          - __meta_kubernetes_pod_label_app_kubernetes_io_instance
          - __meta_kubernetes_pod_label_release
        regex: ^;*([^;]+)(;.*)?$
        action: replace
        target_label: instance
      - source_labels:
          - __meta_kubernetes_pod_label_app_kubernetes_io_component
          - __meta_kubernetes_pod_label_component
        regex: ^;*([^;]+)(;.*)?$
        action: replace
        target_label: component
      - action: replace
        source_labels:
          - __meta_kubernetes_pod_node_name
        target_label: node_name
      - action: replace
        source_labels:
          - __meta_kubernetes_namespace
        target_label: namespace
      - action: replace
        replacement: $1
        separator: /
        source_labels:
          - namespace
          - app
        target_label: job
      - action: replace
        source_labels:
          - __meta_kubernetes_pod_name
        target_label: pod
      - action: replace
        source_labels:
          - __meta_kubernetes_pod_container_name
        target_label: container
      - action: replace
        replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
          - __meta_kubernetes_pod_uid
          - __meta_kubernetes_pod_container_name
        target_label: __path__
      - action: replace
        regex: true/(.*)
        replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
          - __meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash
          - __meta_kubernetes_pod_annotation_kubernetes_io_config_hash
          - __meta_kubernetes_pod_container_name
        target_label: __path__

The result for our app is not what we want. Loki stores the full Kubernetes pod log lines and doesn’t recognize our logging fields.

In order to change that behavior, we will parse the data using the json component. This action will be limited to our sample application only (1). We will label the log records with the level, sequence, and personId MDC fields (2) after extracting them from the Kubernetes log line. The mdc field contains a nested JSON object, so we need to perform additional JSON parsing (3) to extract the personId field. As the output, Promtail should return the log message field (4). Here’s the required transformation in the configuration file:

- job_name: kubernetes-pods
  pipeline_stages:
    - cri: {}
    - match:
        selector: '{app="person-service"}' # (1)
        stages:
          - json:
              expressions:
                log:
          - json: # (2)
              expressions:
                sequence: sequence
                message: message
                level: level
                mdc:
              source: log
          - json: # (3)
              expressions:
                personId: personId
              source: mdc
          - labels:
              sequence:
              level:
              personId:
          - output: # (4)
              source: message

After setting a new value of the loki-promtail Secret, we should restart the Promtail pod. Let’s also restart our app and perform some test calls of the REST API:

$ curl http://localhost:8080/persons/1

$ curl http://localhost:8080/persons/6

$ curl -X 'POST' http://localhost:8080/persons \
  -H 'Content-Type: application/json' \
  -d '{
  "name": "John Wick",
  "age": 18,
  "gender": "MALE",
  "externalId": 100,
  "address": {
    "street": "Test Street",
    "city": "Warsaw",
    "flatNo": 18,
    "buildingNo": 100
  }
}'

Let’s see how it looks in Grafana:

kubernetes-loki-list-of-logs

As you can see, the log record for the GET request is labeled with the level, sequence, and personId MDC fields. That’s exactly what we wanted to achieve!

kubernetes-loki-labels-log-line

Now, we are able to filter results using the fields from our JSON log line:

kubernetes-loki-search-logs
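For example, since personId is now an indexed label, we can use it directly in the stream selector to show only the requests for a particular person:

```logql
{app="person-service", personId="1"}
```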

Distributed Installation of Loki Stack

In the previously described installation method, we ran a single instance of Loki. In order to use a more cloud-native and scalable approach, we should switch to the loki-distributed Helm chart. It divides a single Loki instance into several independent components. That division also separates the read and write streams. Let’s install it in the loki-distributed namespace with the following command:

$ helm install loki grafana/loki-distributed \
  -n loki-distributed --create-namespace

When installing Promtail, we should modify the default address of the write endpoint. We use the Loki gateway component for that. In our case, the name of the gateway Service is loki-loki-distributed-gateway. That component listens on port 80.

config:
  clients:
  - url: http://loki-loki-distributed-gateway/loki/api/v1/push

Let’s install Promtail using the following command:

$ helm install promtail grafana/promtail -n loki-distributed \
  -f values.yml

Finally, we should install Grafana. As before, we will use a dedicated Helm chart:

$ helm install grafana grafana/grafana -n loki-distributed

Here’s a list of running pods:

$ kubectl get pod -n loki-distributed
NAME                                                    READY   STATUS    RESTARTS   AGE
grafana-6cd56666b9-6hvqg                                1/1     Running   0          42m
loki-loki-distributed-distributor-59767b5445-n59bq      1/1     Running   0          48m
loki-loki-distributed-gateway-7867bc8ddb-kgdfk          1/1     Running   0          48m
loki-loki-distributed-ingester-0                        1/1     Running   0          48m
loki-loki-distributed-querier-0                         1/1     Running   0          48m
loki-loki-distributed-query-frontend-86c944647c-vl2bz   1/1     Running   0          48m
promtail-c6dxj                                          1/1     Running   0          37m

After logging in to Grafana, we should add the Loki data source (we could also do it during the installation with Helm values). This time we have to connect to the query-frontend component available under the address loki-loki-distributed-query-frontend:3100.
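Instead of adding the data source manually, we could provision it during installation; here is a sketch of a values file for the grafana chart, with the structure assumed from the chart’s datasources value:

```yaml
datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
      - name: Loki
        type: loki
        access: proxy
        url: http://loki-loki-distributed-query-frontend:3100
        isDefault: true
```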

Final Thoughts

Loki Stack is an interesting alternative to the Elastic Stack for collecting and aggregating logs on Kubernetes. Loki has been designed to be very cost-effective and easy to operate. Since it does not index the contents of the logs, it uses resources such as disk space or RAM more sparingly than Elasticsearch. In this article, I showed you how to install Loki Stack on Kubernetes and how to configure it to analyze app logs in practice.

Logging in Spring Boot with Loki (5 Jul 2023)

The post Logging in Spring Boot with Loki appeared first on Piotr's TechBlog.

]]>
In this article, you will learn how to collect and send Spring Boot app logs to Grafana Loki. We will use the Loki4j Logback appender for that. Loki is a horizontally scalable, highly available log aggregation system inspired by Prometheus. I’ll show how to configure the integration between the app and Loki step by step. However, you can also use my auto-configured library for logging HTTP requests and responses, which will do all those steps for you.

Source Code

If you would like to try it by yourself, you may always take a look at my source code. To do that, you need to clone my GitHub repository. The source code repository of my custom Spring Boot Logging library is available here. Then you should just follow my instructions.

Using Loki4j Logback Appender

In order to use the Loki4j Logback appender, we need to include a single dependency in the Maven pom.xml. The current version of that library is 1.4.1:

<dependency>
    <groupId>com.github.loki4j</groupId>
    <artifactId>loki-logback-appender</artifactId>
    <version>1.4.1</version>
</dependency>

Then we need to create the logback-spring.xml file in the src/main/resources directory. Our instance of Loki is available at the http://localhost:3100 address (1). Loki does not index the contents of the logs, only metadata labels. There are some static labels like the app name, log level, or hostname. We can set them in the format.label field (2). We will also set some dynamic labels, and therefore we enable the Logback markers feature (3). Finally, we set the log message pattern (4). In order to simplify potential transformations with LogQL (the Loki query language), we will use JSON notation.

<?xml version="1.0" encoding="UTF-8"?>
<configuration>

  <springProperty name="name" source="spring.application.name" />

  <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>
        %d{HH:mm:ss.SSS} %-5level %logger{36} %X{X-Request-ID} - %msg%n
      </pattern>
    </encoder>
  </appender>

  <appender name="LOKI" class="com.github.loki4j.logback.Loki4jAppender">
    <!-- (1) -->
    <http>
      <url>http://localhost:3100/loki/api/v1/push</url>
    </http>
    <format>
      <!-- (2) -->
      <label>
        <pattern>app=${name},host=${HOSTNAME},level=%level</pattern>
        <!-- (3) -->
        <readMarkers>true</readMarkers>
      </label>
      <message>
        <!-- (4) -->
        <pattern>
{
   "level":"%level",
   "class":"%logger{36}",
   "thread":"%thread",
   "message": "%message",
   "requestId": "%X{X-Request-ID}"
}
        </pattern>
      </message>
    </format>
  </appender>

  <root level="INFO">
    <appender-ref ref="CONSOLE" />
    <appender-ref ref="LOKI" />
  </root>

</configuration>

Besides the static labels, we may send dynamic data, e.g. something specific to the current request. Assuming we have a service that manages persons, we want to log the id of the target person from the request. As I mentioned before, with Loki4j we can use Logback markers for that. In classic Logback, markers are mostly used to filter log records. With Loki, we just need to define a LabelMarker object containing a key/value Map of dynamic fields (1). Then we pass the object to the current log line (2).

@RestController
@RequestMapping("/persons")
public class PersonController {

    private final Logger LOG = LoggerFactory
       .getLogger(PersonController.class);
    private final List<Person> persons = new ArrayList<>();

    @GetMapping
    public List<Person> findAll() {
        return persons;
    }

    @GetMapping("/{id}")
    public Person findById(@PathVariable("id") Long id) {
        Person p = persons.stream().filter(it -> it.getId().equals(id))
                .findFirst()
                .orElseThrow();
        LabelMarker marker = LabelMarker.of("personId", () -> 
           String.valueOf(p.getId())); // (1)
        LOG.info(marker, "Person successfully found"); // (2)
        return p;
    }

    @GetMapping("/name/{firstName}/{lastName}")
    public List<Person> findByName(
       @PathVariable("firstName") String firstName,
       @PathVariable("lastName") String lastName) {
       
       return persons.stream()
          .filter(it -> it.getFirstName().equals(firstName)
                        && it.getLastName().equals(lastName))
          .toList();
    }

    @PostMapping
    public Person add(@RequestBody Person p) {
        p.setId((long) (persons.size() + 1));
        LabelMarker marker = LabelMarker.of("personId", () -> 
           String.valueOf(p.getId()));
        LOG.info(marker, "New person successfully added");
        persons.add(p);
        return p;
    }

    @DeleteMapping("/{id}")
    public void delete(@PathVariable("id") Long id) {
        Person p = persons.stream()
           .filter(it -> it.getId().equals(id))
           .findFirst()
           .orElseThrow();
        persons.remove(p);
        LabelMarker marker = LabelMarker.of("personId", () -> 
           String.valueOf(id));
        LOG.info(marker, "Person successfully removed");
    }

    @PutMapping
    public void update(@RequestBody Person p) {
        Person person = persons.stream()
                .filter(it -> it.getId().equals(p.getId()))
                .findFirst()
                .orElseThrow();
        persons.set(persons.indexOf(person), p);
        LabelMarker marker = LabelMarker.of("personId", () -> 
            String.valueOf(p.getId()));
        LOG.info(marker, "Person successfully updated");
    }

}

Assuming we have multiple dynamic fields in a single log line, we have to create the LabelMarker object in the following way:

LabelMarker marker = LabelMarker.of(() -> Map.of("audit", "true",
                    "X-Request-ID", MDC.get("X-Request-ID"),
                    "X-Correlation-ID", MDC.get("X-Correlation-ID")));

Running Loki with Spring Boot App

The simplest way to run Loki on the local machine is with a Docker container. Besides the Loki instance itself, we will also run Grafana to display and search logs. Here’s the docker-compose.yml with all the required services. You can run them with the docker compose up command. However, there is another way: directly with the Spring Boot app.

version: "3"

networks:
  loki:

services:
  loki:
    image: grafana/loki:2.8.2
    ports:
      - "3100:3100"
    command: -config.file=/etc/loki/local-config.yaml
    networks:
      - loki

  grafana:
    environment:
      - GF_PATHS_PROVISIONING=/etc/grafana/provisioning
      - GF_AUTH_ANONYMOUS_ENABLED=true
      - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
    entrypoint:
      - sh
      - -euc
      - |
        mkdir -p /etc/grafana/provisioning/datasources
        cat <<EOF > /etc/grafana/provisioning/datasources/ds.yaml
        apiVersion: 1
        datasources:
        - name: Loki
          type: loki
          access: proxy
          orgId: 1
          url: http://loki:3100
          basicAuth: false
          isDefault: true
          version: 1
          editable: false
        EOF
        /run.sh
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    networks:
      - loki

In order to take advantage of the Spring Boot Docker Compose support, we need to place the docker-compose.yml in the app root directory. Then, we have to include the spring-boot-docker-compose dependency in the Maven pom.xml:

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-docker-compose</artifactId>
  <optional>true</optional>
</dependency>

Once we do all the required things, we can run the app. For example, with the following Maven command:

$ mvn spring-boot:run

Now, Spring Boot starts the containers defined in docker-compose.yml before the app itself.

spring-boot-loki-docker-compose

Let’s just display the list of running containers. As you can see, everything will work fine, since Loki listens on the local port 3100:

$ docker ps            
CONTAINER ID   IMAGE                    COMMAND                  CREATED         STATUS         PORTS                    NAMES
d23390fbee06   grafana/loki:2.8.2       "/usr/bin/loki -conf…"   4 minutes ago   Up 2 minutes   0.0.0.0:3100->3100/tcp   sample-spring-boot-web-loki-1
84a47637a50b   grafana/grafana:latest   "sh -euc 'mkdir -p /…"   2 days ago      Up 2 minutes   0.0.0.0:3000->3000/tcp   sample-spring-boot-web-grafana-1

Testing Logging on the Spring Boot REST App

After running the app, we can make some test calls of our REST API. In the beginning, let’s add some persons:

$ curl 'http://localhost:8080/persons' \
  -H 'Content-Type: application/json' \
  -d '{"firstName": "AAA","lastName": "BBB","age": 20,"gender": "MALE"}'

$ curl 'http://localhost:8080/persons' \
  -H 'Content-Type: application/json' \
  -d '{"firstName": "CCC","lastName": "DDD","age": 30,"gender": "FEMALE"}'

$ curl 'http://localhost:8080/persons' \
  -H 'Content-Type: application/json' \
  -d '{"firstName": "EEE","lastName": "FFF","age": 40,"gender": "MALE"}'

Then, we may call the “find” endpoints several times with different criteria:

$ curl http://localhost:8080/persons/1
$ curl http://localhost:8080/persons/2
$ curl http://localhost:8080/persons/3
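Given the three POST requests above, the first call should return the person with id 1; the response might look like this (the exact field order depends on the JSON serializer):

```json
{"id":1,"firstName":"AAA","lastName":"BBB","age":20,"gender":"MALE"}
```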

Here are the application logs from the console. They are just simple log lines, not formatted as JSON.

Now, let’s switch to Grafana. We already have integration with Loki configured. In the new dashboard, we need to choose Loki.

spring-boot-loki-grafana-datasource

Here’s the history of app logs stored in Loki.

As you can see, we are logging in JSON format. Some log lines contain dynamic labels included with the Loki4j Logback appender.

spring-boot-loki-logs

We added the personId label to some logs, so we can easily filter only the records related to requests for a particular person. Here’s the LogQL query that filters records for personId=1:

{app="first-service"} |= `` | personId = `1`

Here’s the result visible in the Grafana dashboard:

spring-boot-loki-labels

We can also format logs using LogQL. Thanks to the JSON format, it is possible to prepare a query that parses the whole log message.

{app="first-service"} |= `` | json

As you can see, now Loki treats all the JSON fields as metadata labels:
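Once the fields are extracted with the json parser, we can also filter on them; for example, this query (based on the log pattern above) keeps only the lines that carry a non-empty requestId:

```logql
{app="first-service"} | json | requestId != ""
```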

Using Spring Boot Loki Starter Library

If you don’t want to configure those things by yourself, you can use my Spring Boot library, which provides auto-configuration for all of that. Additionally, it automatically logs all incoming HTTP requests and outgoing HTTP responses. If the default settings are enough, you just need to include a single Spring Boot starter as a dependency:

<dependency>
  <groupId>com.github.piomin</groupId>
  <artifactId>logstash-logging-spring-boot-starter</artifactId>
  <version>2.0.2</version>
</dependency>

The library logs each request and response with several default labels including e.g. requestId or correlationId.

If you need more information about my Spring Boot Logging library, you can refer to my previous articles about it. Here’s an article with a detailed explanation of some implementation details. Another one is more focused on the usage guide.
