☸️ Web Application on Kubernetes: A Tutorial to Observability with the Elastic Stack

Benoit COUETIL 💫 - Nov 27 '23 - Dev Community

This article uses the legacy Helm charts to deploy the Elastic Stack; those charts are now archived and read-only. A later version of this article will be published using the recommended ECK Kubernetes operator. Stay tuned 🤓

Initial thoughts

The Elastic Stack is a group of open-source products from Elastic designed to help users extract data from any type of source and in any format, enabling them to search, analyze, and visualize that data in real-time.

Observability refers to the ability to understand a system's internal state by examining its external outputs, particularly its data.

To efficiently administer a web application deployed to a Kubernetes cluster, we need to make the invisible visible across multiple complementary aspects, employing the three pillars of observability: logs, metrics, and traces. These pillars are essential for gaining insights into the application's performance and health.

From a tooling perspective, a paid approach would involve tools like Datadog, New Relic, or AppDynamics, following the latest Magic Quadrant for Application Performance Monitoring and Observability:

observability magic quadrant

On the open-source side, a popular stack is Prometheus/Grafana/Loki/Tempo.

However, in this article, we will explore an alternative route that we have successfully experimented with on multiple projects over the past few years: The Elastic Stack. One of the great advantages of this collection of tools is how seamlessly they integrate with each other.

By following the guidance in this article, you will be able to observe the following with the Elastic Stack:

  • Logs:
    • Kubernetes technical logs
    • Application logs
  • Metrics:
    • Kubernetes generic health indicators
    • Application technical KPIs
    • Application business KPIs
  • Traces:
    • End-to-end traces of application user requests
    • Traces of application internal routines

Architecture

Below, you'll discover the architecture that emerged from our careful considerations:

  • Efficiency: A design that strikes a balance between minimalism and robustness suitable for production environments.
  • Unified Cluster Approach: While best practices often suggest separate clusters for applications and observability data, our recommended architecture proposes a unified cluster. We've chosen this approach to begin with, opting for simplicity and cost-effectiveness, reserving the split-cluster strategy for when the application experiences significant growth and reaches a critical scale.

Kubernetes elastic architecture

Prerequisites

To follow this guide, you need the following prerequisites in place:

  • A running Kubernetes cluster, with kubectl configured to access it
  • Helm installed locally

To add the official Elastic Helm repo as a remote, execute the following command in your shell:

helm repo add elastic https://helm.elastic.co
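Then refresh your local chart cache so the chart versions pinned below are available:

helm repo update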

teslapunkai telescope observatory

1. Observability database and frontend

Install Elasticsearch

Elasticsearch is a distributed, free and open search and analytics engine for all types of data. We will use it as the centralized database of the stack.

Create a configuration file overriding some of the default values:

elasticsearch.helm.values.yml

replicas: 3 # default

minimumMasterNodes: 2 # default

esJavaOpts: "-Xmx4g -Xms4g"

ingress:
  enabled: false

resources:
  requests:
    cpu: 500m
    memory: 5Gi
  limits:
    cpu: 4
    memory: 10Gi

volumeClaimTemplate:
  resources:
    requests:
      storage: 100Gi # default 30Gi

Replace the XXX placeholder with a password of your choice in the command below, then use it to install or upgrade Elasticsearch in the elk namespace:

helm -n elk upgrade --install es elastic/elasticsearch --create-namespace --version 8.5.1 -f elasticsearch.helm.values.yml --set secret.password=XXX
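Before moving on, you can wait for the cluster to be fully up; here is a sanity-check sketch, assuming the chart's default elasticsearch-master naming:

kubectl -n elk rollout status statefulset/elasticsearch-master --timeout=10m
# the chart stores the credentials in a secret, reused by the other components below
kubectl -n elk get secret elasticsearch-master-credentials -o jsonpath='{.data.password}' | base64 -d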

Install Kibana

Kibana is a free and open frontend application that sits on top of the Elastic Stack. All observability data will be presented through it. It is also the administration tool for the whole stack.

Create a configuration file overriding some of the default values:

kibana.helm.values.yml

imageTag: 8.5.1

resources:
  requests:
    cpu: 10m
    memory: 768Mi
  limits:
    cpu: 1900m
    memory: 1024Mi

ingress:
  enabled: false

# Allows you to add any config files in /usr/share/kibana/config/
kibanaConfig:
  kibana.yml: |
    ### default
    server.host: "0.0.0.0"
    server.shutdownTimeout: "5s"
    elasticsearch.hosts: ["http://elasticsearch:9200"]
    monitoring.ui.container.elasticsearch.enabled: true

    ### custom
    # from https://github.com/elastic/apm-server/issues/10361
    xpack.fleet.packages:
    - name: apm
      version: 8.5.1

NOTE: notice the disabled ingress; you can enable it depending on your architecture. Notice also the custom configuration to enable Elastic APM automatically; the APM server itself will be installed in a later section.

Install Kibana with these commands (the deletion handles edge cases):

kubectl -n elk delete secret kib-kibana-es-token || true
helm -n elk upgrade --install kib elastic/kibana --version 8.5.1 -f kibana.helm.values.yml
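With ingress disabled, a simple way to reach Kibana is a local port-forward (the kib-kibana service name derives from the kib release name):

kubectl -n elk port-forward svc/kib-kibana 5601:5601
# then browse http://localhost:5601 and log in as 'elastic' with the password set earlier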

teslapunkai telescope observatory

2. Logs

Logs are a fundamental component of observability, providing a chronological record of events and activities within a system or application. These records typically contain textual information, including error messages, status updates, and user actions. Logs play a crucial role in diagnosing issues, monitoring system behavior, and troubleshooting problems. They are essential for understanding the historical context of events and are commonly used for auditing, compliance, and debugging purposes. Logs are often collected and aggregated in centralized systems, making it easier for developers and operators to analyze and gain insights into the health and performance of complex systems.

Logs are handled by Filebeat, stored by the Elasticsearch cluster, and presented by Kibana.

Display logs data with Kibana

Once logs are collected as described in the next paragraph, we will be able to create visualizations of our choice in Kibana. Here are some examples:

  • Log level heatmaps and log volume per container
  • Generic logs
  • Tokenized logs
  • Log volume per level per namespace

Collect logs with Filebeat

Filebeat is a log shipping tool that efficiently collects logs from a device and ships them to external storage.

When installed in a Kubernetes cluster, Filebeat will automatically fetch container logs and ship them directly to Elasticsearch. While it's possible to use a Logstash instance in between, we chose not to for the sake of simplicity (and lack of need at the moment).

Filebeat also boasts the capability to split logs on the fly, which we'll leverage to separate logs produced by our developed application modules. However, all other logs will remain raw, including the generic Kubernetes-added fields.

Create a configuration file resembling this:

filebeat.helm.values.yml

resources:
  requests:
    cpu: 10m
    memory: 250Mi

# from https://stackoverflow.com/questions/73788395/elasticsearch-filebeat-how-to-define-multiline-in-filebeat-inputs-with-condi
filebeatConfig:
  filebeat.yml: |
    max_procs: 1 # default = virtual CPU count (8 in production, too much for OVH API servers)
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          resource: pod # pod / service / node
          scope: node # node / cluster
          add_resource_metadata: # add labels and annotations from these resources
            node.enabled: false
            namespace.enabled: false
          kube_client_options:
            qps: 1
            burst: 10
          templates:
            - condition:
                or:
                  - equals.kubernetes.container.name: server # all our app containers have this name
                  - equals.kubernetes.container.name: teleport # same message start
              config:
                - type: container
                  paths:
                    - /var/log/containers/*-${data.kubernetes.container.id}.log
                  multiline:
                    pattern: "^[0-9]{4}-[0-9]{2}-[0-9]{2}" # starts with our date pattern
                    negate: true
                    match: after
                  processors:
                    - add_kubernetes_metadata:
                        host: ${NODE_NAME}
                        matchers:
                          - logs_path:
                              logs_path: "/var/log/containers/"
                    - dissect:
                        # searchable fields are defined here: https://www.elastic.co/guide/en/ecs/8.7/ecs-field-reference.html
                        # 2023-04-13T09:44:59.013Z  INFO user[1f4ba76d-c76f-4d4f-bd91-453b5313708d] 1 --- [io-8080-exec-10] c.p.b.m.t.a.LyraMessageBuilder: Start LyraMessageBuilder.build(..)
                        tokenizer: "%{event.start} %{log.level} user[%{process.real_user.id}] %{log.syslog.msgid} --- [%{log.syslog.procid}] %{log.origin.function}: %{event.reason}"
                        field: "message"
                        target_prefix: ""
                        ignore_failure: true
                        overwrite_keys: true
                        trim_values: "all" # values are trimmed for leading and trailing
            - condition: # other non app pods
                and:
                  - not.equals.kubernetes.container.name: apm-server
                  - not.equals.kubernetes.container.name: autoscaler # kube-dns-autoscaler
                  - not.equals.kubernetes.container.name: aws-cluster-autoscaler
                  - not.equals.kubernetes.container.name: calico-node
                  - not.equals.kubernetes.container.name: cert-manager-webhook-ovh
                  - not.equals.kubernetes.container.name: coredns
                  - not.equals.kubernetes.container.name: csi-snapshotter
                  - not.equals.kubernetes.container.name: filebeat
                  - not.equals.kubernetes.container.name: ingress-nginx-default-backend
                  - not.equals.kubernetes.container.name: logstash
                  - not.equals.kubernetes.container.name: metricbeat
                  - not.equals.kubernetes.container.name: pgadmin4
                  - not.equals.kubernetes.container.name: server # <-- above condition
                  - not.equals.kubernetes.container.name: teleport # <-- above condition
                  - not.equals.kubernetes.container.name: wormhole
              config:
                - type: container
                  paths:
                    - /var/log/containers/*-${data.kubernetes.container.id}.log
                  processors:
                    - add_kubernetes_metadata:
                        host: ${NODE_NAME}
                        matchers:
                          - logs_path:
                              logs_path: "/var/log/containers/"

    output.elasticsearch:
      hosts: '["https://${ELASTICSEARCH_HOSTS:elasticsearch-master:9200}"]'
      username: '${ELASTICSEARCH_USERNAME}'
      password: '${ELASTICSEARCH_PASSWORD}'
      protocol: https
      ssl.certificate_authorities: ["/usr/share/filebeat/certs/ca.crt"]

A few notes on the above configuration:

  • The scope: node configuration of the Kubernetes provider is mandatory for master nodes with limited performance. While there's no difference on AWS, it becomes crucial on OVH managed clusters to ensure optimal performance.
  • There are two template blocks defined: one for our application and another for logs from all other applications (excluding the ones we choose to ignore).
  • Multiline logs are handled using config[0].multiline. The negate: true solution proves to be the most reliable approach to address edge case issues effectively.
  • The tokenizer plays a crucial role in populating known Filebeat fields, enabling them to be searchable in Kibana, as illustrated below.
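To make the tokenizer concrete, here is roughly how the sample log line from the configuration comment is dissected into ECS fields (a sketch; exact values depend on your log format):

# input: 2023-04-13T09:44:59.013Z  INFO user[1f4ba76d-c76f-4d4f-bd91-453b5313708d] 1 --- [io-8080-exec-10] c.p.b.m.t.a.LyraMessageBuilder: Start LyraMessageBuilder.build(..)
event.start:          2023-04-13T09:44:59.013Z
log.level:            INFO
process.real_user.id: 1f4ba76d-c76f-4d4f-bd91-453b5313708d
log.syslog.msgid:     1
log.syslog.procid:    io-8080-exec-10
log.origin.function:  c.p.b.m.t.a.LyraMessageBuilder
event.reason:         Start LyraMessageBuilder.build(..)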

Once your configuration file is ready, install Filebeat:

helm -n elk upgrade --install fb elastic/filebeat --version 8.5.1 -f filebeat.helm.values.yml
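Once pods are running and logs start flowing, a quick sanity check from Kibana's Dev Tools confirms that Filebeat indices are being created:

GET _cat/indices/filebeat-*?v&h=index,docs.count,store.size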

Handle logs retention in Kibana

Sooner or later, you will need to configure data retention in Elasticsearch. Elasticsearch offers a well-structured data lifecycle, ranging from Hot to Warm to Cold phases, to balance performance and CPU usage effectively.

To configure Filebeat lifecycle policies in Kibana, run this command in the Dev Tools:

PUT _ilm/policy/filebeat
{
  "policy": {
    "phases": {
      "hot": { "actions": { "rollover": { "max_primary_shard_size": "10GB", "max_age": "7d" } } },
      "delete": { "min_age": "30d", "actions": { "delete": {} } }
    }
  }
}
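To confirm that the policy is effectively applied to the current Filebeat indices, the ILM explain API helps:

GET filebeat-*/_ilm/explain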

teslapunkai telescope observatory

3. Technical metrics

Metrics are quantitative measurements that provide valuable numerical data about the performance and behavior of a system or application. These measurements are usually collected at regular intervals, representing various aspects such as resource utilization, response times, throughput, and error rates. Metrics enable monitoring and alerting systems to detect anomalies and deviations from expected behavior, facilitating proactive problem identification and mitigation. By visualizing metrics over time, teams can identify trends and patterns, optimize resource allocation, and make informed decisions to improve the overall system performance and user experience.

Technical metrics are handled by Metricbeat, stored by the Elasticsearch cluster, and presented by Kibana.

Display technical metrics in Kibana

Once technical metrics are collected as described in the next paragraph, we will be able to access default Metricbeat dashboards in Kibana, and/or create custom ones.

Here are examples from official documentation:

Metricbeat cluster overview

Metricbeat controller manager

Collect technical metrics with Metricbeat

Metricbeat is a lightweight shipper that you can install to periodically collect metrics.

When installed in a Kubernetes cluster, Metricbeat will automatically fetch metrics and statistics and ship them directly to Elasticsearch.

Create a configuration file resembling this:

metricbeat.helm.values.yml

daemonset:
  resources:
    requests:
      cpu: 10m
      memory: 250Mi
    limits: # re-definition of chart default values
      memory: 500Mi

  metricbeatConfig:
    metricbeat.yml: | # test connection in pod with command 'metricbeat test output'
      metricbeat.modules:

      - module: kubernetes
        metricsets:
          - node
          - pod
          - system
        period: 1m
        hosts: ["https://${HOSTNAME}:10250"]
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        ssl.verification_mode: "none"
        processors:
        - add_kubernetes_metadata: ~

      - module: kubernetes
        enabled: true
        metricsets:
          - event
        period: 1m

      output.elasticsearch:
        hosts: '["https://${ELASTICSEARCH_HOSTS:elasticsearch-master:9200}"]'
        ssl.enabled: true
        ssl.certificate_authorities: ["/usr/share/metricbeat/certs/ca.crt"]
        username: '${ELASTICSEARCH_USERNAME}'
        password: '${ELASTICSEARCH_PASSWORD}'

deployment:
  enabled: true
  resources:
    requests:
      cpu: 10m
      memory: 250Mi
    limits: # re-definition of chart default values
      memory: 500Mi

  metricbeatConfig:
    metricbeat.yml: |
      metricbeat.modules:
      - module: kubernetes
        enabled: true
        metricsets:
          - state_node
          - state_deployment
          - state_pod
        period: 30s
        hosts: ["${KUBE_STATE_METRICS_HOSTS}"]
      output.elasticsearch:
        hosts: '["https://${ELASTICSEARCH_HOSTS:elasticsearch-master:9200}"]'
        ssl.enabled: true
        ssl.certificate_authorities: ["/usr/share/metricbeat/certs/ca.crt"]
        username: '${ELASTICSEARCH_USERNAME}'
        password: '${ELASTICSEARCH_PASSWORD}'

Once your configuration file is ready, install Metricbeat:

helm -n elk upgrade --install mb elastic/metricbeat --version 8.5.1 -f metricbeat.helm.values.yml
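As with Filebeat, you can check from Kibana's Dev Tools that Metricbeat indices are being populated:

GET _cat/indices/metricbeat-*?v&h=index,docs.count,store.size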

Handle technical metrics retention in Kibana

To configure Metricbeat lifecycle policies in Kibana, you can run a command like this in the Dev Tools:

PUT _ilm/policy/metricbeat
{
  "policy": {
    "phases": {
      "hot": { "actions": { "rollover": { "max_primary_shard_size": "10GB", "max_age": "7d" } } },
      "delete": { "min_age": "30d", "actions": { "delete": {} } }
    }
  }
}

teslapunkai telescope observatory

4. Business metrics

Business metrics, distinct from technical metrics, are key performance indicators (KPIs) that provide insights into the overall health and success of a business. These metrics focus on measuring the business's performance, growth, revenue, and customer engagement. Unlike technical metrics, which primarily assess the performance and efficiency of systems and processes, business metrics are tied directly to the organization's strategic goals and objectives. Examples of business metrics include customer acquisition cost (CAC), customer lifetime value (CLV), conversion rates, revenue growth, and customer satisfaction scores. Analyzing and tracking business metrics is essential for making informed decisions, identifying areas of improvement, and aligning business efforts to achieve long-term success and profitability.

Business metrics could be collected from data added to traces through additional development, or harvested automatically from your databases.

In this section, business metrics will be harvested from the application's database by Logstash, stored in the Elasticsearch cluster, and presented by Kibana.

Display business metrics in Kibana

Once business metrics are collected as described in the next paragraph, we will be able to create visualizations of our choice in Kibana.

Here is an example:

Business metrics

Collect business metrics with Logstash

Logstash is a data processing pipeline that ingests data from a multitude of sources, transforms it, and then sends it to Elasticsearch.

Logstash can fetch data directly from a PostgreSQL server. It is probably capable of handling multiple PostgreSQL queries in a single instance, but we decided to run one Logstash instance per query.

Create a configuration file resembling this:

logstash.helm.values.yml

fullnameOverride: ls-finance

logstashConfig:
  logstash.yml: |
    http.host: 0.0.0.0
    xpack.monitoring.enabled: false

# Allows you to add any pipeline files in /usr/share/logstash/pipeline/logstash.conf
### ***warn*** there is a hardcoded logstash.conf in the image, override it first
logstashPipeline:
  logstash.conf: |
    input {
      jdbc {
        jdbc_connection_string => "jdbc:postgresql://${BACKEND_DB_HOST_PORT}/backend"
        jdbc_user => "${BACKEND_DB_USER}"
        jdbc_password => "${BACKEND_DB_PASSWORD}"
        jdbc_driver_class => "org.postgresql.Driver"
        jdbc_driver_library => "/usr/share/logstash/drivers/postgresql-42.6.0.jar"
        schedule => "* * * * *"
        statement => "WITH balance_aggregation as ( select id AS movement_id FROM wallet_movement)
                      SELECT '${NAMESPACE}' AS namespace, date_time, wm.id as movement_id, wallet_movement_type as movement_type, amount, current_balance, wallet_type, pf.email AS user_email, w.user_id, mp.name AS marketplace, pay.local_ref, w.id AS wallet_id
                      FROM wallet_movement wm
                      LEFT JOIN balance_aggregation b ON wm.id = b.movement_id
                      WHERE amount <> 0"
      }
    }
    filter {
      mutate {
            add_field => { "kubernetes.namespace" => "%{namespace}" }
      }
      if [user_id] =~ /.+/ {
        mutate {
            add_field => { "process.real_user.id" => "%{user_id}" }
        }
      }
    }
    output {
      elasticsearch {
        hosts => ["https://elasticsearch-master.elk.svc.cluster.local:9200"]
        user => '${ELASTICSEARCH_USERNAME}'
        password => '${ELASTICSEARCH_PASSWORD}'
        index => 'my-logstash-index'
        document_id => '%{movement_id}'
        doc_as_upsert => true
        ssl_certificate_verification => false
      }
    }

extraEnvs:
  - name: "ELASTICSEARCH_USERNAME"
    valueFrom:
      secretKeyRef:
        name: elasticsearch-master-credentials
        key: username
  - name: "ELASTICSEARCH_PASSWORD"
    valueFrom:
      secretKeyRef:
        name: elasticsearch-master-credentials
        key: password
  - name: BACKEND_DB_HOST_PORT
    valueFrom:
      secretKeyRef:
        name: app-backend
        key: BACKEND_DB_HOST_PORT
  - name: BACKEND_DB_USER
    valueFrom:
      secretKeyRef:
        name: app-backend
        key: BACKEND_DB_USER
  - name: BACKEND_DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: app-backend
        key: BACKEND_DB_PASSWORD
  - name: NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace

### init container and volume to be able to do:
# curl --location https://jdbc.postgresql.org/download/postgresql-42.6.0.jar --output /usr/share/logstash/drivers/postgresql-42.6.0.jar

extraVolumes:
  - name: postgres-driver
    emptyDir: {}

extraInitContainers:
  - name: init-script-downloader
    image: curlimages/curl:8.00.1
    args:
      - --location
      - --insecure
      - https://jdbc.postgresql.org/download/postgresql-42.6.0.jar
      - --output
      - /tmp/drivers/postgresql-42.6.0.jar
    volumeMounts:
      - name: postgres-driver
        mountPath: /tmp/drivers/

extraVolumeMounts:
  - name: postgres-driver
    mountPath: /usr/share/logstash/drivers/

A few notes on the above configuration:

  • The fullnameOverride parameter lets you install multiple instances, one per query
  • The mutate filters let us ship fields with . in their names, something we did not manage to do directly in the SQL query
  • The default Logstash image does not contain any JDBC driver; extraVolumes, extraVolumeMounts and extraInitContainers help us get around that limitation

Once your configuration file is ready, install Logstash:

helm -n elk upgrade --install ls-finance elastic/logstash --version 8.5.1 -f logstash.helm.values.yml

It will then send data to Elasticsearch every minute, overwriting obsolete documents in the associated index.

By default, Elasticsearch creates two versions of each text field, one analyzed and one keyword. We do not need both, since we do not use full-text search on Logstash fields. To avoid this, apply the following in Kibana's Dev Tools console:

DELETE my-logstash-index
PUT my-logstash-index
{
  "mappings": {
    "dynamic": "runtime"
  }
}
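After the next scheduled run, you can verify that documents are upserted with the expected fields:

GET my-logstash-index/_search?size=1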

teslapunkai telescope observatory

5. Traces

Traces are detailed records of the end-to-end flow of a specific user request or transaction as it traverses through various components and services within a distributed system. These traces provide a holistic view of the entire request's journey, capturing information about service interactions, latency, and potential bottlenecks. Traces are vital for understanding the complexities of microservices architecture and identifying performance issues in distributed systems. They help in pinpointing the root cause of problems and optimizing the flow of requests, leading to better system reliability and user satisfaction. Traces are particularly valuable in environments with high interconnectivity, enabling teams to visualize and comprehend the flow of data across a vast network of services.

Traces are collected by APM Agents (https://www.elastic.co/guide/en/apm/agent/index.html), which depend on the technology. If no agent is provided for a given technology, we can still send data using OpenTelemetry. These agents send data to the APM Server, which populates Elasticsearch. As usual, data is presented through Kibana: not only traces are displayed, but also metrics, improving the observability capabilities.
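As an illustration, for a Java/Spring module like the one producing the logs above, attaching an agent is typically a matter of JVM flags (a sketch; the agent jar path and service name are placeholders, and the server URL assumes the default in-cluster service name from the install further below):

java -javaagent:/opt/elastic-apm-agent.jar \
  -Delastic.apm.service_name=backend \
  -Delastic.apm.server_url=http://apm-apm-server.elk.svc.cluster.local:8200 \
  -Delastic.apm.environment=production \
  -jar app.jar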

Display traces and metrics in Kibana

There is a whole APM section in Kibana, where you can inspect user requests in great detail.

Notably, we can see trace details:

transaction sample

And also lots of metrics, for example:

transactions overview

Collect traces with APM agents and APM Server

Traces are harvested by an agent in each application module, which sends data to a dedicated APM server, which in turn stores it in Elasticsearch.

When observing a web application with client-side rendering, the APM server has to be exposed through an ingress so that the frontend can reach it.

Create a configuration file resembling this:

apm-server.helm.values.yml

ingress:
  enabled: true
  annotations:
    nginx.ingress.kubernetes.io/enable-access-log: "false" # too many NGINX logs
  # hosts:
  #   - TODO # in command line
apmConfig:
  apm-server.yml: | # test connection in pod with command 'apm-server test output'
    ### default

    queue: {}
    output.elasticsearch:
      hosts: ["https://elasticsearch-master:9200"] # http=>https else does not work (even with 'protocol: https')
      username: "${ELASTICSEARCH_USERNAME}"
      password: "${ELASTICSEARCH_PASSWORD}"

    ### custom
      protocol: https
      ssl.verification_mode: none
    apm-server:
      host: "0.0.0.0:8200" # default
      rum.enabled: true
      kibana:
        enabled: true
        host: http://kib-kibana:5601
        protocol: "http"
        username: "${ELASTICSEARCH_USERNAME}"
        password: "${ELASTICSEARCH_PASSWORD}"

Note the Kibana configuration applied directly by the APM server at startup via apm-server.kibana.enabled. Note also the enable-access-log annotation, which avoids polluting the NGINX ingress controller logs.

Once your configuration file is ready, install APM Server with this command, configuring the right ingress.hosts for the right environment:

helm -n elk upgrade --install apm elastic/apm-server --version 8.5.1 -f apm-server.helm.values.yml --set "ingress.hosts[0]=apm.my-app.com"
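The configuration comment above mentions the built-in connectivity check; once the pod is up, you can run it directly (assuming the chart's default apm-apm-server naming):

kubectl -n elk exec deploy/apm-apm-server -- apm-server test output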

Reduce traces footprint in Kibana

You will experience a lot of space being taken up by APM documents. If you want to avoid unnecessary data, you can drop some documents from the ingestion pipelines.

We found that data stored in metrics-apm.internal and metrics-apm.app grows very fast and is not of much use to us. You can completely ignore them by running these commands in Kibana's Dev Tools:

PUT _ingest/pipeline/metrics-apm.internal@custom
{
  "description": "remove unused internal live data",
  "processors": [ { "drop": { "if" : "ctx.processor?.event == 'metric'" } } ]
}

PUT _ingest/pipeline/metrics-apm.app@custom
{
  "description": "remove unused app live data",
  "processors": [ { "drop": { "if" : "ctx.processor?.event == 'metric'" } } ]
}

Another optimization is dropping specific traces/spans. For example, if the Kubernetes liveness and readiness probes of each application module have /health/ in their URL and you want to get rid of them, you can filter them out at ingestion with this command:

PUT _ingest/pipeline/traces-apm@custom
{
  "description": "remove kubernetes liveness and readiness spans",
  "processors": [ { "drop": { "if" : "ctx.url?.path?.contains('/health/')" } } ]
}

Handle traces retention in Kibana

First, create a lifecycle policy:

PUT _ilm/policy/custom-30d
{
  "policy": {
    "phases": {
      "hot": { "actions": { "rollover": { "max_primary_shard_size": "10GB", "max_age": "7d" } } },
      "delete": { "min_age": "30d", "actions": { "delete": {} } }
    }
  }
}

And use it for traces-apm:

PUT _component_template/traces-apm@custom
{
  "template": {
    "settings": {
      "lifecycle": {
        "name": "custom-30d"
      }
    }
  },
  "_meta": {
    "package": {
      "name": "apm"
    },
    "managed_by": "fleet",
    "managed": true
  }
}

Wrapping up

With this tutorial, we managed to implement the three pillars of observability for a web application on Kubernetes: logs, metrics, and traces. All of it uses a single free, open-source, integrated stack, yet with nearly infinite display customization at hand.

Are you struggling with the instructions provided? Did you obtain better results with another free, efficient stack? Are you satisfied with your custom dashboards and want to share them? Feel free to start a discussion in the comments section below 🤓.

teslapunkai telescope observatory

Illustrations generated locally by Automatic1111 using MajicMixFantasy model with TeslapunkAI LoRA

Further reading

This article was enhanced with the assistance of an AI language model to ensure clarity and accuracy in the content, as English is not my native language.
