Alerts

Kubecost alerts allow teams to receive updates on real-time Kubernetes spend. They are configurable via the Kubecost UI or Helm values. This resource gives an overview of how to configure alerts sent through email, Slack, and Microsoft Teams using Kubecost Helm chart values. Most alert types are created to monitor specific data sets and trends; the two health alert types can only be toggled on or off. The following alert types are supported:

  1. Allocation Budget: Sends an alert when spending crosses a defined threshold.

  2. Allocation Efficiency: Detects when a Kubernetes tenant is operating below a target cost-efficiency threshold.

  3. Allocation Recurring Update: Sends an alert with cluster spending across all or a subset of Kubernetes resources.

  4. Allocation Spend Change: Sends an alert reporting unexpected spend increases relative to moving averages.

  5. Asset Budget: Sends an alert when spend for a particular set of assets crosses a defined threshold.

  6. Asset Recurring Update: Sends an alert with asset spend across all or a subset of cloud resources.

  7. Cloud Cost Budget: Sends an alert when the total cost of cloud spend goes over a set budget limit.

  8. Monitor Cluster Health: Sends an alert when the cluster's health score changes by a specified threshold. Can only be toggled on/off.

  9. Monitor Kubecost Health: Monitors the health of the Kubecost deployment itself, for production use. Can only be toggled on/off.

Configuring alerts in Helm

values.yaml is the definitive source of truth. Alerts set through values.yaml will overwrite any alerts manually configured through the Kubecost UI. Alerts can be configured both ways, but make sure there are no conflicts.

Global alert parameters

The alert settings, under global.notifications.alertConfigs in cost-analyzer/values.yaml, accept four global fields:

  • frontendUrl: optional; your cost analyzer front-end URL, used for linkbacks in alert bodies

  • globalSlackWebhookUrl: optional; a global Slack webhook used for alerts, enabled by default if provided

  • globalMsTeamsWebhookUrl: optional; a global Microsoft Teams webhook used for alerts, enabled by default if provided

  • globalAlertEmails: a global list of email recipients for alerts

Example Helm values.yaml:

notifications:
  # Kubecost alerting configuration
  alertConfigs:
    frontendUrl: http://localhost:9090
    globalSlackWebhookUrl: https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX
    globalMsTeamsWebhookUrl: https://m365x682156.webhook.office.com
    globalAlertEmails:
      - recipient@example.com
      - additionalRecipient@example.com
    # Alerts generated by Kubecost, about cluster data
    alerts:
      # list of individual alerts
      # ...

Configuring each alert type

In addition to the global fields above, every alert accepts optional individual ownerContact (a list of email addresses), slackWebhookUrl (if different from globalSlackWebhookUrl), and msTeamsWebhookUrl (if different from globalMsTeamsWebhookUrl) fields. Alerts default to the global settings when these optional fields are not supplied.
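
For example, any alert can route its notifications to a dedicated recipient list and webhooks by overriding the global settings. A minimal sketch using the budget alert described below (the webhook URLs and email addresses are placeholders):

- type: budget
  threshold: 50
  window: daily
  aggregation: namespace
  filter: kubecost
  ownerContact: # overrides globalAlertEmails for this alert only
    - team-lead@example.com
  slackWebhookUrl: https://hooks.slack.com/services/<alert-specific-webhook> # overrides globalSlackWebhookUrl
  msTeamsWebhookUrl: https://<tenant>.webhook.office.com/<alert-specific-webhook> # overrides globalMsTeamsWebhookUrl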

Allocation Budget

Defines spend budgets and alerts on budget overruns.

Example Helm values.yaml:

# Daily namespace budget alert on namespace `kubecost`
- type: budget
  threshold: 50
  window: daily # or 1d
  aggregation: namespace
  filter: kubecost
# 3d cluster budget alert on cluster `cluster-one`
- type: budget
  threshold: 600
  window: 3d
  aggregation: cluster
  filter: cluster-one

Allocation Efficiency

Alerts when Kubernetes tenants, e.g. namespaces or label sets, are running below defined cost-efficiency thresholds.

The example below sends a Slack alert when any namespace is running below 40% cost efficiency and has spent more than $100 during the last day.

- type: efficiency
  efficiencyThreshold: 0.4
  spendThreshold: 100
  window: 1d 
  aggregation: namespace
  slackWebhookUrl: 'https://hooks.slack.com/services/TE6GRBNET/BFFK0P848/jFWmsadgfjhiBJp30p'

Allocation Recurring Update

Sends a recurring alert with a summary report of cost and efficiency metrics.

Additional window values:

  • <N>d where N in [1, 7) for every N days

  • 7d or weekly for 0:00:00 UTC every Monday

  • 30d or monthly for 0:00:00 UTC on the first day of the month.

Additional aggregation values:

  • label requires the following format: label:<label_name>

  • annotation requires the following format: annotation:<annotation_name>

These examples send a recurring weekly alert for allocation data, first across all namespaces and then for the kubecost namespace only:

# Recurring weekly namespace update on all namespaces
- type: recurringUpdate
  window: weekly
  aggregation: namespace
  filter:
# Recurring weekly namespace update on kubecost namespace
- type: recurringUpdate
  window: weekly
  aggregation: namespace
  filter: kubecost
  ownerContact: # optional, overrides globalAlertEmails default
    - owner@example.com
    - owner2@example.com
  slackWebhookUrl: https://hooks.slack.com/services/<different-from-global> # optional, overrides globalSlackWebhookUrl default
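
The label and annotation aggregations use the prefixed formats listed above. A minimal sketch of a recurring update aggregated by a hypothetical team label:

# Recurring weekly update aggregated by the `team` label
- type: recurringUpdate
  window: weekly
  aggregation: label:team
  filter: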

Allocation Spend Change

Detects unexpected spend increases/decreases relative to historical moving averages.

Example Helm values.yaml:

# Daily spend change alert 
- type: spendChange
  relativeThreshold: 0.20   # change relative to baseline average cost. Must be greater than -1 (can be negative).
  window: 1d                # accepts 'd', 'h'
  baselineWindow: 30d       # previous window, offset by window
  aggregation: namespace
  filter: kubecost, default # accepts csv
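
For instance, with relativeThreshold: 0.20 and a 30-day baseline averaging $100 of daily spend across the filtered namespaces, this alert would fire once spend over the 1d window exceeds roughly $120, a 20% increase relative to the baseline average.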

Asset Budget

Defines asset spend budgets and alerts when Kubernetes asset spend overruns the set threshold.

Example Helm values.yaml:

# Daily asset budget alert on asset types `Node,LoadBalancer`
- type: assetBudget
  threshold: 50
  window: daily # or 1d
  aggregation: type
  filter: Node,LoadBalancer
# 3d asset budget alert aggregated by cluster, with no filter set
- type: assetBudget
  threshold: 100
  window: 3d
  aggregation: cluster
  filter: ''

Asset Recurring Update

Sends a recurring alert with a Kubernetes assets summary report.

Additional window values:

  • <N>d where N in [1, 7) for every N days

  • 7d or weekly for 0:00:00 UTC every Monday

  • 30d or monthly for 0:00:00 UTC on the first day of the month.

Additional aggregation values:

  • label requires the following format: label:<label_name>

  • annotation requires the following format: annotation:<annotation_name>

Below are two example alerts: the first provides weekly summaries of Kubernetes asset spend aggregated by cluster, and the second provides weekly summaries of asset spend for one specific cluster:

# Recurring weekly cloud update on all clusters
- type: cloudReport
  window: weekly  # or 7d
  aggregation: cluster
  filter: ''
# Recurring weekly cloud update on only cluster-one
- type: cloudReport
  window: weekly  # or 7d
  aggregation: cluster
  filter: cluster-one
  ownerContact: # optional, overrides globalAlertEmails default
    - owner@example.com
    - owner2@example.com
  slackWebhookUrl: https://hooks.slack.com/services/<different-from-global> # optional, overrides globalSlackWebhookUrl default
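
As with the Allocation Recurring Update, the label and annotation aggregations use the prefixed formats listed above. A minimal sketch of an asset report aggregated by a hypothetical owner annotation:

# Recurring weekly cloud update aggregated by the `owner` annotation
- type: cloudReport
  window: weekly # or 7d
  aggregation: annotation:owner
  filter: ''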

Cloud Cost Budget

Defines cloud cost budgets and alerts when cloud spend overruns the set threshold.
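
A minimal sketch of a cloud cost budget alert is shown below. The cloudCostBudget type name and the service aggregation are assumptions modeled on the budget and assetBudget alerts above; check your chart's values reference for the exact field values.

# Daily cloud cost budget alert
- type: cloudCostBudget # assumed type name, following the budget/assetBudget pattern
  threshold: 200
  window: daily # or 1d
  aggregation: service # assumed aggregation; cloud cost data is commonly grouped by service or provider
  filter: ''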

Monitor Cluster Health

Cluster health alerts occur when the cluster health score changes by a specific threshold. The health score is calculated based on the following criteria:

  • Low Cluster Memory

  • Low Cluster CPU

  • Too Many Pods

  • Crash Looping Pods

  • Out of Memory Pods

  • Failed Jobs

Example Helm values.yaml:

# Health Score Alert 
- type: health              # Alerts when health score changes by a threshold
  window: 10m
  threshold: 5              # Send alert if the health score changes by 5 or more
  ownerContact: # optional, overrides globalAlertEmails default
    - owner@example.com
    - owner2@example.com
  slackWebhookUrl: https://hooks.slack.com/services/<different-from-global> # optional, overrides globalSlackWebhookUrl default

Monitor Kubecost Health

Diagnostic alerts notify you when an event impacts Kubecost's own uptime. This feature can be enabled in seconds from a values file. The following events are grouped into distinct categories, each of which results in a separate alert notification:

  • Prometheus is unreachable

  • Kubecost Metrics Availability:

    • Kubecost exported metrics missing over last 5 minutes

    • cAdvisor exported metrics missing over last 5 minutes

    • cAdvisor exported metrics missing expected labels in the last 5 minutes

    • kube-state-metrics (KSM) exported metrics missing over last 5 minutes

    • kube-state-metrics (KSM) unexpected version

    • Node Exporter exported metrics missing over last 5 minutes

    • Scrape interval: Prometheus self-scraped metrics missing over last 5 minutes

    • CPU Throttling detected on cost-model in the last 10 minutes

  • Clusters Added/Removed (Enterprise Multicluster Support Only)

Required parameters:

  • type: diagnostic

  • window: <N>m -- configurable, N > 0

Optional parameters:

  • diagnostics -- object containing specific diagnostic checks to run (default is true for all). See the configuration example below for options.

Example Helm values.yaml:

# Kubecost Health Diagnostic
- type: diagnostic
  window: 10m
  ownerContact: # optional, overrides globalAlertEmails default
    - owner@example.com
    - owner2@example.com
  slackWebhookUrl: https://hooks.slack.com/services/<different-from-global> # optional, overrides globalSlackWebhookUrl default
  diagnostics: 
    prometheusUp: true
    cAdvisor: true
    ksm: true
    kubecost: true 
    nodeExporter: true
    scrapeInterval: true
    cpuThrottling: true
    clusterJoinLeave: true

Configuring alerts in the Kubecost UI

Cluster and Kubecost Health Alerts

Cluster Health Alerts and Kubecost Health Alerts work differently from other alert types. While other alerts monitor cost data for cost or efficiency anomalies, these two monitor the health of Kubecost itself, as well as the health of the cluster running Kubecost. For this reason, you cannot create multiple alerts of these types. In the UI, each is managed as a single instance with an on/off toggle, and the settings of that single instance can be adjusted.

There is no validation around Cluster Health Alerts. If a Health Alert configuration is invalid, it will appear to save, but will not actually take effect. Please check carefully that the alert has a Window and Threshold properly specified.

Global recipients

Global recipients specify a default fallback recipient for each type of message. If an alert does not define any email recipients, its messages will be sent to any emails specified in the Global Recipients email list. Likewise, if an alert does not define a webhook, its messages will be sent to the global webhook, if one is present. Alerts that do define recipients will ignore the global setting for recipients of that type.

Budget, efficiency, spend change, and recurring update alerts

The remaining alert types all target a set of allocation data with window, aggregation and filter parameters, and trigger based on the target data. The table results can be filtered using the Filter alerts search bar next to + Create Alert. This input can be used to filter based on alert name, type, aggregation, window, and/or filter.

Select + Create Alert to open the Create Alert window where you configure the details of your alert.

The fields for each alert type should resemble their corresponding Helm values in the above tables.

Alerts can also be edited, removed, and tested from the table. Editing opens a dialog similar to the alert creation dialog, for editing the chosen alert.

When creating an alert, you can have these alerts sent through email, Slack, or Microsoft Teams. You can customize the subject field for an email, and attach multiple recipients. Alerts sent via email will contain a PDF of your report which shows the Kubecost UI for your Allocation/Asset page(s). This can be helpful for distributing visual information to those without immediate access to Kubecost. If a custom SMTP configuration has been created, all outbound alert emails will be sent through it.

Testing alerts

The Test arrow icons, as well as a separate Test Alert button in the Edit Alert window, can be used to issue a "test" alert. This can be useful to ensure that alerting infrastructure is working correctly and that an alert is properly configured. Issuing a test from the alert edit modal tests the alert with any modifications that have not yet been saved.

Alerts scheduler

All times are in UTC. Alert send times are determined by parsing the supplied window parameter. Alert diagnostics, including the next and last scheduled run times, are available via <your-kubecost-url>/model/alerts/status.

Supported: the weekly and daily special cases, <N>d, and <M>h (1 ≤ N ≤ 7, 1 ≤ M ≤ 24).

Currently unsupported: time zone adjustments, windows greater than 7d, and windows less than 1h.

Scheduler behavior

An <N>d alert sends at 00:00 UTC N day(s) from now, i.e., N days from now rounded down to midnight.

For example, a 5d alert scheduled on Monday will send on Saturday at 00:00, and subsequently the next Thursday at 00:00.

An <N>h alert sends at the earliest time of day after now that is a multiple of N.

For example, a 6h alert scheduled at any time between 12 pm and 6 pm will send next at 6 pm and subsequently at 12 am the next day.

If 24 is not divisible by the hourly window, the alert is scheduled at the next multiple of <N>h after the current time, counting from 00:00 of the current day.

For example, a 7h alert scheduled at 22:00 checks 00:00, 7:00, 14:00, and 21:00, before arriving at the next send time of 4:00 tomorrow.
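
Putting these rules together, a few representative window values and the send times they produce (all times UTC, per the behavior described above):

window: daily   # every day at 00:00
window: weekly  # every Monday at 00:00
window: 3d      # every 3 days at 00:00
window: 12h     # at the next multiple of 12 hours, i.e. 00:00 or 12:00
window: 7h      # 24 is not divisible by 7, so multiples of 7h counted from 00:00 each day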

Troubleshooting

Review these steps to verify alerts are being passed to the Kubecost application correctly.

  • Check /model/alerts/configs to ensure the alerts system has been configured properly.

  • Check /model/alerts/status to ensure alerts have been scheduled correctly.

    • The status endpoint returns all of the running alerts including schedule metadata:

      • scheduledOn: The date and time (UTC) that the alert was scheduled.

      • lastRun: The date and time (UTC) that the alert last ran checks (will be set to 0001-01-01T00:00:00Z if the alert has never run).

      • nextRun: The date and time (UTC) that the alert will next run checks.

      • lastError: If running the alert checks fails for unexpected reasons, this field will contain the error message.

If using Helm:

  • Run kubectl get configmap alert-configs -n kubecost -o json to view the alerts ConfigMap.

  • Ensure that the Helm values have been successfully read into the ConfigMap as alerts.json under the data field. See the example below:

{
    "apiVersion": "v1",
    "data": {
        "alerts.json": "{\"globalAlertEmails\":[\"foo@fooexample1.com\",\"recipient@example.com\"],\"frontendUrl\":\"http://localhost:3000\",\"globalSlackWebhookUrl\":\"\",\"alerts\":[{\"type\":\"budget\",\"threshold\":50,\"window\":\"1d\",\"aggregation\":\"namespace\",\"filter\":\"kubecost\"},{\"type\":\"spendChange\",\"relativeThreshold\":0.2,\"window\":\"1d\",\"baselineWindow\":\"30d\",\"aggregation\":\"namespace\",\"filter\":\"kubecost, default\"},{\"type\":\"budget\",\"threshold\":202.1,\"window\":\"1d\",\"aggregation\":\"cluster\",\"filter\":\"cluster-one\"},{\"type\":\"recurringUpdate\",\"window\":\"7d\",\"aggregation\":\"namespace\",\"filter\":\"*\"}]}\n"
    },
    "kind": "ConfigMap",
    "metadata": {
        "creationTimestamp": "2022-01-05T23:43:27Z",
		"labels": {
            "app": "cost-analyzer",
            "app.kubernetes.io/instance": "kubecost-stage",
            "app.kubernetes.io/managed-by": "Helm",
	    ...
        },
        "name": "alert-configs",
        "namespace": "kubecost",
    ...
    }
}
  • Ensure that the JSON string is successfully mapped to the appropriate configs.

Confirm that Kubecost has received configuration data:

  • Visit the Alerts page in the Kubecost UI to view configured alert settings as well as any of the alerts configured from Helm.

  • Alerts set up in the UI will be overwritten by Helm values.yaml if the pod restarts.

Additionally, confirm that the alerts scheduler has properly parsed and scheduled the next run for each alert by visiting <your-kubecost-url>/model/alerts/status to view individual alert parameters as well as the next and last scheduled run times for individual alerts.

Confirm that nextRun has been updated from "0001-01-01T00:00:00Z".

If nextRun fails to update, or alerts are not sent at the nextRun time, check pod logs by running kubectl logs $(kubectl get pods -n kubecost | awk '{print $1}' | grep "^kubecost-cost-analyzer.\{16\}") -n kubecost -c cost-model > kubecost-logs.txt

Common causes of misconfiguration include the following:

  • Unsupported CSV filters: spendChange alerts accept multiple filter values when comma-separated; other alert types do not.

  • Unsupported alert type: all alert type names are in camelCase. Check spelling and capitalization for all alert parameters.

  • Unsupported aggregation parameters: see the Allocation API doc for details.
