Prometheus Configuration Guide
Bring your own Prometheus
There are several considerations when disabling the Kubecost included Prometheus deployment. Kubecost strongly recommends installing Kubecost with the bundled Prometheus in most environments.
The Kubecost Prometheus deployment is optimized to not interfere with other observability instrumentation and by default only contains metrics that are useful to the Kubecost product. This results in 70-90% fewer metrics than a Prometheus deployment using default settings.
Additionally, if multi-cluster metric aggregation is required, Kubecost provides a turnkey solution that is highly tuned and simple to support using the included Prometheus deployment.
This feature is accessible to all users. However, please note that comprehensive support is provided with a paid support plan.
Dependency requirements
Kubecost requires the following minimum versions:
Prometheus: v2.18 (v2.13-2.17 supported with limited functionality)
kube-state-metrics: v1.6.0+
cAdvisor: kubelet v1.11.0+
node-exporter: v0.16+ (Optional)
Instructions
Disable node-exporter and kube-state-metrics (recommended)
If you have node-exporter and/or KSM running on your cluster, follow this step to disable the Kubecost included versions. Additional detail on KSM requirements.
Unlike the bundled Prometheus, we do recommend disabling Kubecost's node-exporter and kube-state-metrics if you already have them running in your cluster.
Disabling Kubecost's Prometheus deployment
This process is not recommended. Before continuing, review the Bring your own Prometheus section if you haven't already.
Pass the following parameters in your Helm install:
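For example, in your values.yaml (a sketch: the FQDN shown, http://prometheus-server.monitoring.svc, is an assumption and should be replaced with your own Prometheus address):

```yaml
global:
  prometheus:
    enabled: false   # disable the bundled Prometheus deployment
    fqdn: http://prometheus-server.monitoring.svc   # your existing Prometheus address (assumption)
```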
The FQDN can be a full path such as https://prometheus-prod-us-central-x.grafana.net/api/prom/ if you use Grafana Cloud-managed Prometheus. Learn more in the Grafana Cloud Integration for Kubecost doc.
Have your Prometheus scrape the cost-model /metrics endpoint. These metrics are needed for reporting accurate pricing data. Here is an example scrape config:
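A minimal example, assuming Kubecost is installed as kubecost-cost-analyzer in the kubecost namespace (adjust the service name, namespace, and port for your install):

```yaml
scrape_configs:
  - job_name: kubecost
    honor_labels: true
    scrape_interval: 1m
    scrape_timeout: 60s
    metrics_path: /metrics
    scheme: http
    dns_sd_configs:
      - names:
          - kubecost-cost-analyzer.kubecost
        type: 'A'
        port: 9003
```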
This config needs to be added under extraScrapeConfigs in the Prometheus configuration. See the example extraScrapeConfigs.yaml.
By default, the Prometheus chart included with Kubecost (bundled-Prometheus) contains scrape configs optimized for Kubecost-required metrics. Add those scrape config jobs to your existing Prometheus setup so that Kubecost can provide more accurate cost data and so that your existing Prometheus only collects the required resources.
You can find the full scrape configs of our bundled-Prometheus here. You can check Prometheus documentation for more information about the scrape config, or read this documentation if you are using Prometheus Operator.
Recording rules
This step is optional. If you do not set up Kubecost's CPU usage recording rule, Kubecost will fall back to a PromQL subquery which may put unnecessary load on your Prometheus.
The Kubecost-bundled Prometheus includes a recording rule used to calculate max CPU usage, a critical component of the request right-sizing recommendation functionality. Add the recording rules here to reduce query load.
Alternatively, if your environment supports serviceMonitors and prometheusRules, pass these values to your Helm install:
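For example (a sketch of the relevant Helm values):

```yaml
serviceMonitor:
  enabled: true
prometheusRule:
  enabled: true
```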
To confirm this job is successfully scraped by Prometheus, view the Targets page in Prometheus and look for a job named kubecost.
Node exporter metric labels
This step is optional, and only impacts certain efficiency metrics. View issue/556 for a description of what will be missing if this step is skipped.
You'll need to add the following relabel config to the job that scrapes the node exporter DaemonSet.
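A sketch of such a relabel config, consistent with the behavior described in this section (the source label pod assumes a Kubernetes service-discovery scrape job):

```yaml
relabel_configs:
  - action: replace
    source_labels: [pod]          # copied, not overwritten
    target_label: kubernetes_node # new label created for Kubecost
```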
This does not override the source label; it creates a new label called kubernetes_node and copies the value of pod into it.
Distinguishing clusters
To distinguish between multiple clusters, Kubecost needs to know which Prometheus label identifies the cluster name. Set this with .Values.kubecostModel.promClusterIDLabel. The default cluster label is cluster_id, though many environments use the key cluster instead.
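For example, to use cluster as the label (a values.yaml sketch):

```yaml
kubecostModel:
  promClusterIDLabel: cluster
```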
Data retention
By default, metric retention is 91 days; however, data retention can be increased with the configurable property etlDailyStoreDurationDays. You can find this value here.
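For example, to retain 120 days of daily ETL data (120 is an illustrative value):

```yaml
kubecostModel:
  etlDailyStoreDurationDays: "120"
```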
Increasing the default etlDailyStoreDurationDays value will naturally result in greater memory usage. At higher values, this can cause errors when trying to display this information in the Kubecost UI. You can remedy this by increasing the Step size when using the Allocations dashboard.
Troubleshooting
The Diagnostics page (Settings > View Full Diagnostics) provides diagnostic info on your integration. Scroll down to Prometheus Status to verify that your configuration is successful.
Below you can find solutions to common Prometheus configuration problems. View the Kubecost Diagnostics doc for more information.
Misconfigured Prometheus FQDN
Evidenced by the pod error message No valid prometheus config file at ... and the init pods hanging. We recommend running curl <your_prometheus_url>/api/v1/status/config from a pod in the cluster to confirm that your Prometheus config is returned. Here is an example, which needs to be updated based on your pod name and Prometheus address:
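A sketch, assuming the default Kubecost deployment and container names and an example Prometheus address (all are assumptions; substitute your own):

```shell
kubectl exec -it -n kubecost deploy/kubecost-cost-analyzer -c cost-model -- \
  curl http://prometheus-server.monitoring.svc/api/v1/status/config
```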
In the above example, <your_prometheus_url> may include a port number and/or namespace, for example: http://prometheus-operator-kube-p-prometheus.monitoring:9090/api/v1/status/config
If the config file is not returned, this is an indication that an incorrect Prometheus address has been provided. If a config file is returned from one pod in the cluster but not the Kubecost pod, then the Kubecost pod likely has its access restricted by a network policy, service mesh, etc.
Context deadline exceeded
Network policies, mesh networks, or other security-related tooling can block network traffic between Prometheus and Kubecost, which results in the Kubecost scrape target showing as down in the Prometheus targets UI. To troubleshoot this type of error, use the curl command from within the cost-analyzer container to try to reach the Prometheus target. Note that the namespace and deployment name in this command may need to be updated to match your environment; this example uses the default Kubecost Prometheus deployment.
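For example (a sketch; the service name kubecost-prometheus-server and namespace kubecost reflect the default bundled deployment and may differ in your environment):

```shell
# Lists every metric name known to Prometheus
kubectl exec -it -n kubecost deploy/kubecost-cost-analyzer -c cost-model -- \
  curl -s http://kubecost-prometheus-server.kubecost/api/v1/label/__name__/values
```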
When successful, this command should return all of the metrics that Kubecost uses. Failures may be indicative of the network traffic being blocked.
Prometheus throttling
Ensure Prometheus isn't being CPU throttled due to a low resource request.
Wrong dependency version
Review the Dependency requirements section above.
Missing scrape configs
Visit the Prometheus Targets page (screenshot above).
Data incorrectly attributed to a single namespace
Make sure that honor_labels is enabled in the scrape config.
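In the kubecost scrape job, honor_labels should be set so that labels from the scraped targets are not overwritten:

```yaml
scrape_configs:
  - job_name: kubecost
    honor_labels: true
```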
Negative idle reported
Single cluster tests
Ensure results are not null for both queries below.
Make sure Prometheus is scraping Kubecost search metrics for:
node_total_hourly_cost
Ensure kube-state-metrics are available:
kube_node_status_capacity
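Both checks can be run as PromQL instant queries against your Prometheus API (the address is an example; substitute your own):

```shell
# Metric scraped from Kubecost; should return one series per node
curl -G 'http://kubecost-prometheus-server.kubecost/api/v1/query' \
  --data-urlencode 'query=node_total_hourly_cost'

# Metric from kube-state-metrics; should also return one series per node
curl -G 'http://kubecost-prometheus-server.kubecost/api/v1/query' \
  --data-urlencode 'query=kube_node_status_capacity'
```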
For both queries, verify nodes are returned. A successful response should look like:
An error will look like:
Enterprise multi-cluster test
Ensure that all clusters and nodes have values; the output should be similar to the Single cluster tests above.
Make sure Prometheus is scraping Kubecost search metrics for:
node_total_hourly_cost
On macOS, change date -d '1 day ago' to date -v '-1d'.
Ensure kube-state-metrics are available:
kube_node_status_capacity
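A sketch of these checks against the global Prometheus/Thanos query endpoint (the <global-prometheus-url> placeholder is hypothetical; the one-day offset gives long-term storage time to catch up):

```shell
# Query yesterday's data so the federated/long-term store has it
curl -G 'http://<global-prometheus-url>/api/v1/query' \
  --data-urlencode 'query=node_total_hourly_cost' \
  --data-urlencode "time=$(date -d '1 day ago' '+%s')"

curl -G 'http://<global-prometheus-url>/api/v1/query' \
  --data-urlencode 'query=kube_node_status_capacity' \
  --data-urlencode "time=$(date -d '1 day ago' '+%s')"
```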
For both queries, verify nodes are returned. A successful response should look like:
An error will look like: