To keep your clusters and applications healthy and drive productivity forward, you need to stay informed of all events occurring in your cluster. DKP helps you stay informed of these events by using the Alertmanager component of the kube-prometheus-stack.

Kommander is configured with pre-defined alerts that monitor four categories of events. You receive alerts related to:

  • State of your nodes

  • System services managing the Kubernetes cluster

  • Resource events from specific system services

  • Prometheus expressions exceeding some pre-defined thresholds

Some examples of the alerts currently available are:

  • CPUThrottlingHigh

  • TargetDown

  • KubeletNotReady

  • KubeAPIDown

  • CoreDNSDown

  • KubeVersionMismatch

A complete list with all the pre-defined alerts can be found on GitHub.

Prerequisites

  • Determine the name of the workspace where you wish to perform the actions. You can use the dkp get workspaces command to see the list of workspace names and their corresponding namespaces.

  • Set the WORKSPACE_NAMESPACE environment variable to the name of the workspace’s namespace where the cluster is attached:

    export WORKSPACE_NAMESPACE=<workspace_namespace>

Use overrides configMaps to configure alert rules

You can enable or disable the default alert rules by providing the desired configuration in an overrides ConfigMap. For example, if you want to disable the default node alert rules, follow these steps to define an overrides ConfigMap:

  1. Create a file named kube-prometheus-stack-overrides.yaml and paste the following YAML code into it to create the overrides ConfigMap:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kube-prometheus-stack-overrides
      namespace: ${WORKSPACE_NAMESPACE}
    data:
      values.yaml: |
        ---
        defaultRules:
          rules:
            node: false
  2. Use the following command to apply the YAML file:

    kubectl apply -f kube-prometheus-stack-overrides.yaml
  3. Edit the kube-prometheus-stack AppDeployment to replace the spec.configOverrides.name value with kube-prometheus-stack-overrides. (You can use the steps in the procedure, Deploy an application with a custom configuration as a guide.)

    dkp edit appdeployment -n ${WORKSPACE_NAMESPACE} kube-prometheus-stack

    After your editing is complete, the AppDeployment resembles this example:

    apiVersion: apps.kommander.d2iq.io/v1alpha2
    kind: AppDeployment
    metadata:
      name: kube-prometheus-stack
      namespace: ${WORKSPACE_NAMESPACE}
    spec:
      appRef:
        name: kube-prometheus-stack-34.9.3
        kind: ClusterApp
      configOverrides:
        name: kube-prometheus-stack-overrides
  4. To disable all rules, create an overrides ConfigMap with this YAML code:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kube-prometheus-stack-overrides
      namespace: ${WORKSPACE_NAMESPACE}
    data:
      values.yaml: |
        ---
        defaultRules:
          create: false
  5. Alert rules for the Velero platform service are disabled by default. You can enable them with the following overrides ConfigMap, but only when the Velero platform service is enabled; otherwise, leave them disabled to avoid misfired alerts.

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kube-prometheus-stack-overrides
      namespace: ${WORKSPACE_NAMESPACE}
    data:
      values.yaml: |
        ---
        mesosphereResources:
          rules:
            velero: true
  6. To create a custom rule file named my-rule-name containing a recording rule, create the overrides ConfigMap with this YAML code:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kube-prometheus-stack-overrides
      namespace: ${WORKSPACE_NAMESPACE}
    data:
      values.yaml: |
        ---
        additionalPrometheusRulesMap:
          my-rule-name:
            groups:
            - name: my_group
              rules:
              - record: my_record
                expr: 100 * my_record
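The rule above is a recording rule. The same additionalPrometheusRulesMap mechanism also accepts alerting rules. The following sketch is illustrative only: the alert name, expression, threshold, and duration are assumptions for the example, not DKP defaults.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-prometheus-stack-overrides
  namespace: ${WORKSPACE_NAMESPACE}
data:
  values.yaml: |
    ---
    additionalPrometheusRulesMap:
      my-alert-rules:
        groups:
        - name: my_alert_group
          rules:
          - alert: MyAppHighErrorRate   # illustrative alert name
            # illustrative expression and threshold; my_app_errors_total is a hypothetical metric
            expr: rate(my_app_errors_total[5m]) > 0.1
            for: 10m
            labels:
              severity: warning
            annotations:
              summary: "my-app error rate has exceeded 0.1 errors/sec for 10 minutes."
```

Apply the ConfigMap and reference it from the AppDeployment in the same way as the recording-rule example above.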

After you set up your alerts, you can manage each alert using the Alertmanager web console to silence or unsilence firing alerts, and perform other operations. For more information about configuring Alertmanager, see the Prometheus website.

To access the Prometheus Alertmanager UI, browse to the landing page and then search for the Prometheus Alertmanager dashboard, for example https://<CLUSTER_URL>/dkp/alertmanager.

Notify Prometheus Alerts in Slack

To connect the Prometheus Alertmanager notification system to Slack, you need to overwrite the existing Alertmanager configuration.

  1. The following file, named alertmanager.yaml, configures Alertmanager to use the Slack Incoming Webhooks feature (slack_api_url: https://hooks.slack.com/services/<HOOK_ID>) to send all alerts to a specific channel, #MY-SLACK-CHANNEL-NAME.

    global:
      resolve_timeout: 5m
      slack_api_url: https://hooks.slack.com/services/<HOOK_ID>
    
    route:
      group_by: ['alertname']
      group_wait: 2m
      group_interval: 5m
      repeat_interval: 1h
    
      # If an alert isn't caught by a route, send it to slack.
      receiver: slack_general
      routes:
        - match:
            alertname: Watchdog
          receiver: "null"
    
    receivers:
      - name: "null"
      - name: slack_general
        slack_configs:
          - channel: '#MY-SLACK-CHANNEL-NAME'
            icon_url: https://avatars3.githubusercontent.com/u/3380462
            send_resolved: true
            color: '{{ if eq .Status "firing" }}danger{{ else }}good{{ end }}'
            title: '{{ template "slack.default.title" . }}'
            title_link: '{{ template "slack.default.titlelink" . }}'
            pretext: '{{ template "slack.default.pretext" . }}'
            text: '{{ template "slack.default.text" . }}'
            fallback: '{{ template "slack.default.fallback" . }}'
            icon_emoji: '{{ template "slack.default.iconemoji" . }}'
    
    templates:
      - '*.tmpl'
  2. The following file, named notification.tmpl, is a template that defines a readable format for the fired notifications:

    {{ define "__titlelink" }}
    {{ .ExternalURL }}/#/alerts?receiver={{ .Receiver }}
    {{ end }}
    
    {{ define "__title" }}
    [{{ .Status | toUpper }}{{ if eq .Status "firing" }}:{{ .Alerts.Firing | len }}{{ end }}] {{ .GroupLabels.SortedPairs.Values | join " " }}
    {{ end }}
    
    {{ define "__text" }}
    {{ range .Alerts }}
    {{ range .Labels.SortedPairs }}*{{ .Name }}*: `{{ .Value }}`
    {{ end }} {{ range .Annotations.SortedPairs }}*{{ .Name }}*: {{ .Value }}
    {{ end }} *source*: {{ .GeneratorURL }}
    {{ end }}
    {{ end }}
    
    {{ define "slack.default.title" }}{{ template "__title" . }}{{ end }}
    {{ define "slack.default.username" }}{{ template "__alertmanager" . }}{{ end }}
    {{ define "slack.default.fallback" }}{{ template "slack.default.title" . }} | {{ template "slack.default.titlelink" . }}{{ end }}
    {{ define "slack.default.pretext" }}{{ end }}
    {{ define "slack.default.titlelink" }}{{ template "__titlelink" . }}{{ end }}
    {{ define "slack.default.iconemoji" }}{{ end }}
    {{ define "slack.default.iconurl" }}{{ end }}
    {{ define "slack.default.text" }}{{ template "__text" . }}{{ end }}
  3. Finally, apply these changes to alertmanager as follows. Set ${WORKSPACE_NAMESPACE} to the workspace namespace that kube-prometheus-stack is deployed in:

    kubectl create secret generic -n ${WORKSPACE_NAMESPACE} \
      alertmanager-kube-prometheus-stack-alertmanager \
      --from-file=alertmanager.yaml \
      --from-file=notification.tmpl \
      --dry-run=client --save-config -o yaml | kubectl apply -f -

Monitor applications

Before attempting to monitor your own applications, you should be familiar with the Prometheus conventions for exposing metrics. In general, there are two key recommendations:

  • You should expose metrics using an HTTP endpoint named /metrics.

  • The metrics you expose must be in a format that Prometheus can consume.

By following these conventions, you ensure that your application metrics can be consumed by Prometheus itself or by any Prometheus-compatible tool that can retrieve metrics, using the Prometheus client endpoint.
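For reference, a scrape of such a /metrics endpoint returns plain text in the Prometheus exposition format. The following is a minimal sketch; the metric name and value are illustrative:

```
# HELP my_app_requests_total Total number of HTTP requests served by my-app.
# TYPE my_app_requests_total counter
my_app_requests_total 1027
```

Each metric carries optional HELP and TYPE comment lines followed by one sample line per time series.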

The kube-prometheus-stack provides easy-to-use monitoring definitions for Kubernetes services, and deploys and manages Prometheus instances. It also provides a Kubernetes custom resource called ServiceMonitor.

By default, the kube-prometheus-stack provides the following service monitors to collect metrics from internal Kubernetes components:

  • kube-apiserver

  • kube-scheduler

  • kube-controller-manager

  • etcd

  • kube-dns/coredns

  • kube-proxy

The Prometheus Operator iterates over all of these ServiceMonitor objects and collects metrics from the components they define.

The following example illustrates how to retrieve application metrics. In this example:

  • Three instances of a simple app named my-app are running.

  • Each instance listens for traffic and exposes metrics on port 8080.

  • The app is assumed to already be running.

To prepare for monitoring of the sample app, create a service that selects the pods whose app label has the value my-app.

The service object also specifies the port on which the metrics are exposed. The ServiceMonitor has a label selector to select services and their underlying endpoint objects. For example:

kind: Service
apiVersion: v1
metadata:
  name: my-app
  namespace: my-namespace
  labels:
    app: my-app
spec:
  selector:
    app: my-app
  ports:
  - name: metrics
    port: 8080
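For this Service to select any pods, the my-app pods must carry the matching app: my-app label and expose port 8080. A minimal sketch of a corresponding Deployment follows; the image name is illustrative and not part of this example's actual workload:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: my-namespace
spec:
  replicas: 3                        # the three instances from this example
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app                  # matched by the Service selector above
    spec:
      containers:
      - name: my-app
        image: my-registry/my-app:latest   # illustrative image name
        ports:
        - name: metrics
          containerPort: 8080        # the port on which metrics are exposed
```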

This service object is discovered by a ServiceMonitor, which defines the selector to match the labels with those defined in the service. The app label must have the value my-app.

In this example, for kube-prometheus-stack to discover this ServiceMonitor, add the label prometheus.kommander.d2iq.io/select: "true" to the ServiceMonitor, as shown in the following YAML:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app-service-monitor
  namespace: my-namespace
  labels:
    prometheus.kommander.d2iq.io/select: "true"
spec:
  selector:
    matchLabels:
      app: my-app
  endpoints:
  - port: metrics

In this example, you would modify the Prometheus settings to have the operator collect metrics from the service monitor by appending the following configuration to the overrides ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-prometheus-stack-overrides
  namespace: ${WORKSPACE_NAMESPACE}
data:
  values.yaml: |
    ---
    prometheus:
      additionalServiceMonitors:
        - name: my-app-service-monitor
          selector:
            matchLabels:
              app: my-app
          namespaceSelector:
            matchNames:
              - my-namespace
          endpoints:
            - port: metrics
              interval: 30s

Official documentation about using a ServiceMonitor to monitor an app with the Prometheus Operator on Kubernetes can be found in the prometheus-operator GitHub repository.

Set a specific storage capacity for Prometheus

When defining the requirements of a cluster, you can specify the capacity and resource requirements of Prometheus by modifying the settings in the overrides ConfigMap definition as shown below:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-prometheus-stack-overrides
  namespace: ${WORKSPACE_NAMESPACE}
data:
  values.yaml: |
    ---
    prometheus:
      prometheusSpec:
        resources:
          limits:
            cpu: "4"
            memory: "8Gi"
          requests:
            cpu: "2"
            memory: "6Gi"
        storageSpec:
          volumeClaimTemplate:
            spec:
              resources:
                requests:
                  storage: "100Gi"