Integrating Kubecost with an existing Prometheus installation can be nuanced. We recommend first installing Kubecost with a bundled Prometheus (instructions) as a dry run before integrating with an external Prometheus deployment. We also recommend getting in touch (email@example.com) for assistance.
Note: integrating with an existing Prometheus is only supported under Kubecost paid plans.
Kubecost requires the following dependency versions:
- node-exporter - v0.16 (May 18)
- kube-state-metrics - v1.6.0 (May 19)
- cAdvisor - kubelet v1.11.0 (May 18)
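One way to spot-check which versions are running in your cluster is to list the container images and filter for these components (image names vary by distribution, so adjust the patterns as needed):

```shell
# List unique container images and filter for the dependencies above
# (image names vary by distribution; adjust the grep patterns as needed)
kubectl get pods --all-namespaces \
  -o jsonpath="{range .items[*]}{.spec.containers[*].image}{'\n'}{end}" \
  | grep -E 'node-exporter|kube-state-metrics' | sort -u
```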
Copy `values.yaml` and update the following parameters:

- `prometheus.fqdn` to match your local Prometheus, with this format: `http://`

Pass this updated file to the Kubecost Helm install command.
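A typical invocation might look like the following (the release name and namespace are assumptions; adjust them to your setup):

```shell
# Release name and namespace are illustrative; adjust to your setup
helm upgrade --install kubecost kubecost/cost-analyzer \
  --namespace kubecost --create-namespace \
  -f values.yaml
```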
Add the following scrape config under `extraScrapeConfigs` in your Prometheus configuration:

```yaml
- job_name: kubecost
  honor_labels: true
  scrape_interval: 1m
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: http
  dns_sd_configs:
    - names:
        - kubecost-cost-analyzer.<namespace-of-your-kubecost>
      type: 'A'
      port: 9003
```
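For example, in a Prometheus Helm chart values file this typically looks like the following (the `extraScrapeConfigs` key name is assumed from the community chart; verify it against your chart version):

```yaml
# Prometheus chart values file (key name assumed from the community chart)
extraScrapeConfigs: |
  - job_name: kubecost
    # ... remaining kubecost job settings as shown above ...
```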
You can confirm that this job is successfully running with the Targets view in Prometheus.
Kubecost uses Prometheus recording rules to enable certain product features and to help improve product performance. These are recommended additions, especially for medium and large-sized clusters using their own Prometheus installation. You can find our recording rules under rules in this values.yaml file.
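As an illustration of the recording rule format (this is a generic example, not one of Kubecost's actual rules; see the `rules` section of the referenced values.yaml for those):

```yaml
# Illustrative Prometheus recording rule, not one of Kubecost's actual rules
groups:
  - name: example-recording-rules
    rules:
      - record: namespace:container_cpu_usage_seconds:rate5m
        expr: sum(rate(container_cpu_usage_seconds_total{container!=""}[5m])) by (namespace)
```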
Common issues include the following:
- Wrong Prometheus FQDN: evidenced by the pod error message `No valid prometheus config file at ...`. We recommend running `curl <your_prometheus_url>/api/v1/status/config` from a pod in the cluster to confirm that your Prometheus config is returned. If it is not, an incorrect Prometheus URL has likely been provided. If a config file is returned, then the Kubecost pod likely has its access restricted by a cluster policy, service mesh, etc.
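The curl check can be run from a short-lived pod inside the cluster; a sketch (the pod name and curl image are illustrative):

```shell
# Run the config check from a temporary pod inside the cluster
# (pod name and image are illustrative; substitute your Prometheus URL)
kubectl run prom-check --rm -it --restart=Never \
  --image=curlimages/curl:latest -- \
  curl -s http://<your_prometheus_url>/api/v1/status/config
```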
- Prometheus throttling: ensure Prometheus isn't being CPU throttled due to a low resource request.
- Wrong dependency version: see the Requirements section above.
- Missing scrape configs: visit the Prometheus Targets page to confirm the kubecost job is listed.
- Mismatched `instance` labels: on recent versions of the Prometheus Operator, cadvisor `instance` labels do not match internal Kubernetes node names. The solution is to add the following block to your kubelet/cadvisor scrape config:

```yaml
metric_relabel_configs:
  - source_labels: [node]
    separator: ;
    regex: (.*)
    target_label: instance
    replacement: $1
    action: replace
```
- Data incorrectly shows in a single namespace: make sure `honor_labels` is enabled in the Kubecost scrape config.
You can visit Settings in Kubecost to see basic diagnostic information on these Prometheus metrics.
Using an existing Grafana deployment can be accomplished with either of the following two options:
Option 1: Configure in the Kubecost product. After the default Kubecost installation, visit Settings and update the Grafana Address to a URL (e.g. http://demo.kubecost.com/grafana) that is visible to users accessing Grafana dashboards. Next, import the Kubecost Grafana dashboards as JSON from this folder.
Option 2: Deploy with the Grafana sidecar enabled. Passing the Grafana parameters below in your values.yaml will install ConfigMaps for the Kubecost Grafana dashboards, which will be picked up by the Grafana sidecar if you already have Grafana with the dashboard sidecar installed.
```yaml
global:
  grafana:
    enabled: false
    domainName: cost-analyzer-grafana.default # example where format is <service-name>.<namespace>
grafana:
  sidecar:
    dashboards:
      enabled: true
    datasources:
      enabled: false
```
For Option 2, ensure that the following flags are set in your Operator deployment:
1. `sidecar.dashboards.enabled = true`
2. `sidecar.dashboards.searchNamespace` isn't restrictive; use `ALL` if Kubecost runs in another namespace.
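In the community Grafana Helm chart, those flags correspond to values like the following (key names assumed from that chart; verify against your chart version):

```yaml
# Grafana Helm chart values (key names assumed from the community chart)
sidecar:
  dashboards:
    enabled: true
    # ALL lets the sidecar discover dashboard ConfigMaps even when
    # Kubecost runs in a different namespace than Grafana
    searchNamespace: ALL
```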