Once an installation is complete, access the Kubecost UI to view the status of the product (select Settings > Diagnostics > View Full Diagnostics). If the Kubecost UI is unavailable, review these troubleshooting resources to determine the problem.
These Kubernetes commands can be helpful when finding issues with deployments.
This command finds all events that aren't of type Normal, with the most recent listed last. Use it if pods are not starting at all:
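A command along these lines surfaces abnormal events cluster-wide (assuming kubectl access to the cluster):

```shell
# List non-Normal events across all namespaces, most recent last
kubectl get events -A --field-selector type!=Normal --sort-by=.lastTimestamp
```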
Another option is to run a describe command against the specific pod in question. This will list the Events specific to that pod.
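For example (substitute your own pod name and namespace):

```shell
# Show pod details, including the Events section at the bottom
kubectl describe pod <pod-name> -n kubecost
```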
If a pod is in CrashLoopBackOff, check its logs. Commonly the cause is a misconfiguration in Helm. If the cost-analyzer pod is the issue, check the logs with:
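A sketch of that logs command, assuming the default deployment name kubecost-cost-analyzer and the cost-model container:

```shell
# Tail recent logs from the cost-model container in the cost-analyzer deployment
kubectl logs deployment/kubecost-cost-analyzer -c cost-model -n kubecost --tail=100
```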
Alternatively, Lens is a great tool for diagnosing many issues in a single view. See our blog post on using Lens with Kubecost to learn more.
The log output can be adjusted during a Helm deployment by using the LOG_LEVEL and/or LOG_FORMAT environment variables. Valid values for LOG_LEVEL are:
trace
debug
info
warn
error
fatal
For example, to set the log level to debug, add the following flag to the Helm command:
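One way to pass the variable, assuming the chart exposes extra environment variables for the cost-model container via a kubecostModel.extraEnv value (verify against your chart version):

```shell
# Set LOG_LEVEL=debug on the cost-model container via Helm
helm upgrade --install kubecost kubecost/cost-analyzer -n kubecost \
  --set 'kubecostModel.extraEnv[0].name=LOG_LEVEL' \
  --set 'kubecostModel.extraEnv[0].value=debug'
```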
You can set LOG_FORMAT to generate two different outputs.
Setting it to json will generate a structured logging output: {"level":"info","time":"2006-01-02T15:04:05.999999999Z07:00","message":"Starting cost-model (git commit \"1.91.0-rc.0\")"}
Setting LOG_FORMAT to pretty will generate a human-readable output: 2006-01-02T15:04:05.999999999Z07:00 INF Starting cost-model (git commit "1.91.0-rc.0")
To temporarily set the log level without restarting the pod, you can send a POST request to /logs/level with one of the valid log levels. This does not persist between pod restarts, Helm deployments, etc. Here's an example:
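A hedged example using curl, assuming a port-forward to the cost-model container (the port shown is an assumption; adjust to your setup):

```shell
# Temporarily switch the running pod's log level to debug
curl -X POST "http://localhost:9003/logs/level" -d '{"level": "debug"}'
```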
A GET request can be sent to the same endpoint to retrieve the current log level.
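For example, again assuming a port-forward to the cost-model container:

```shell
# Retrieve the current log level
curl "http://localhost:9003/logs/level"
```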
If your Kubecost installation fails and you are unable to download the cost-analyzer Helm chart from the GitHub chart repository, run helm repo update, then run your install command again. The install should then succeed.
Some AKS users have reported that the cost-model container in the kubecost-cost-analyzer pod will panic with the following message when using the Azure Files Container Storage Interface (CSI) driver:
To resolve this issue, use an alternate storage class for the Kubecost cost-analyzer PV, such as the Azure Disk Container Storage Interface (CSI) driver.
Your clusters need a default storage class for the Kubecost and Prometheus persistent volumes to be successfully attached. To check if a storage class exists, run:
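For example:

```shell
# List storage classes; the default class is annotated with "(default)" after its name
kubectl get storageclass
```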
You should see a StorageClass name with (default) next to it.
If you see a name but no (default) next to it, run:
kubectl patch storageclass <name> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
If you don't see a name, you need to add a StorageClass. See this Kubernetes article on Storage Classes for help.
Alternatively, you can deploy Kubecost without persistent storage by following these steps:
This setup is for experimental purposes only. The metric data is reset whenever Kubecost's pod is rescheduled.
In your terminal, run this command to add the Kubecost Helm repository:
helm repo add kubecost https://kubecost.github.io/cost-analyzer/
Next, run this command to deploy Kubecost without persistent storage:
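A sketch of such an install, assuming the chart's persistentVolume.enabled and prometheus.server.persistentVolume.enabled values control persistence (verify against your chart version):

```shell
# Install Kubecost with persistence disabled for both cost-analyzer and Prometheus
helm install kubecost kubecost/cost-analyzer -n kubecost --create-namespace \
  --set persistentVolume.enabled=false \
  --set prometheus.server.persistentVolume.enabled=false
```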
If the PVC is in a Pending state for more than 5 minutes, and the cluster is Amazon EKS 1.23+, you will see an error message like the following example:
Example result:
To fix this, install the AWS EBS CSI driver. Amazon EKS clusters on version 1.23+ use the "ebs.csi.aws.com" provisioner, so this error typically means the AWS EBS CSI driver has not been installed yet.
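One way to install the driver as an EKS managed add-on via eksctl (cluster name, account ID, and IAM role are placeholders; the role needs the AmazonEBSCSIDriverPolicy attached):

```shell
# Install the AWS EBS CSI driver as an EKS managed add-on
eksctl create addon --name aws-ebs-csi-driver --cluster <cluster-name> \
  --service-account-role-arn arn:aws:iam::<account-id>:role/<ebs-csi-role> --force
```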
Review the output of the port-forward command:
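The port-forward itself can be started with something like the following (the service name assumes a default install):

```shell
# Forward local port 9090 to the Kubecost frontend service
kubectl port-forward -n kubecost svc/kubecost-cost-analyzer 9090:9090
```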
Forwarding from 127.0.0.1 indicates Kubecost should be reachable via a browser at http://127.0.0.1:9090 or http://localhost:9090. In some cases it may be necessary for kubectl to bind to all interfaces. This can be done by adding the flag --address 0.0.0.0.
Navigating to Kubecost while port-forwarding should result in "Handling connection" output in the terminal:
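Typical output looks like:

```
Forwarding from 127.0.0.1:9090 -> 9090
Handling connection for 9090
```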
To troubleshoot further, check the status of pods in the Kubecost namespace:
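For a default install:

```shell
# List all pods in the kubecost namespace with their current status
kubectl get pods -n kubecost
```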
All kubecost-* pods should have Running or Completed status.
If the cost-analyzer or prometheus-server pods are missing, we recommend reinstalling with Helm using --debug, which enables verbose output.
If any pod other than cost-analyzer-checks is not Running, you can use the following command to find errors in the recent event log:
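For example:

```shell
# Non-Normal events in the kubecost namespace, most recent last
kubectl get events -n kubecost --field-selector type!=Normal --sort-by=.lastTimestamp
```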
If there is an existing node-exporter DaemonSet, the Kubecost Helm chart may time out due to a conflict. You can disable the installation of node-exporter by passing the following parameters to the Helm install:
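The relevant flags are likely the following (verify the value names against your chart version):

```shell
# Disable the bundled node-exporter DaemonSet and its service account
--set prometheus.nodeExporter.enabled=false \
--set prometheus.serviceAccounts.nodeExporter.create=false
```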
You may encounter the following screen if the Kubecost UI is unable to connect with a live Kubecost server.
Recommended troubleshooting steps are as follows:
If you are using a port other than 9090 for your port-forward, try adding the URL including that port to the "Add new cluster" dialog.
Next, you can review messages in your browser's developer console. Any meaningful errors or warnings may indicate an unexpected response from the Kubecost server.
Next, point your browser to the /model endpoint on your target URL. For example, visit http://localhost:9090/model/ in the scenario shown above. You should expect to see a Prometheus config file at this endpoint. If your cluster address has changed, you can visit Settings in the Kubecost product to update it, or you can add a new cluster.
If you are unable to successfully retrieve your config file from this /model endpoint, we recommend the following:
Check your network connection to this host
View the status of all Prometheus and Kubecost pods in this cluster's deployment to determine if any containers are not in a Ready or Completed state. When performing the default Kubecost install, this can be done with kubectl get pods -n kubecost. All pods should be either Running or Completed. You can run kubectl describe on any pods not currently in this state.
Finally, view pod logs for any pod that is not in the Running or Completed state to find a specific error message.
If all Kubecost pods are running and you can connect/port-forward to the kubecost-cost-analyzer pod, but none of the app's UI will load, we recommend testing the following:
Connect directly to a backend service with the following command: kubectl port-forward --namespace kubecost service/kubecost-cost-analyzer 9001
Ensure that http://localhost:9001 returns the Prometheus YAML file
If this is true, you are likely to be hitting a CoreDNS routing issue. We recommend using local routing as a solution:
Go to this cost-analyzer-frontend-config-map-template.yaml.
Replace {{ $serviceName }}.{{ .Release.Namespace }} with localhost
PodSecurityPolicy (PSP) has been removed from Kubernetes v1.25. This will result in the following error involving the kubecost-grafana and kubecost-cost-analyzer-psp PSPs during install.
To disable PSP in your deployment:
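A hedged sketch of the Helm values involved (the value names assume the chart's PSP toggle and the bundled Grafana subchart; verify against your chart version):

```shell
# Disable PSP resources in the Kubecost chart and the Grafana subchart
helm upgrade kubecost kubecost/cost-analyzer -n kubecost \
  --set podSecurityPolicy.enabled=false \
  --set grafana.rbac.pspEnabled=false \
  --set grafana.testFramework.enabled=false
```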
kubecost-grafana and kubecost-cost-analyzer-psp in existing Kubecost installs
Since PodSecurityPolicy (PSP) has been removed from Kubernetes v1.25, it's possible to encounter a state where all Kubecost-related Helm commands fail after Kubernetes has been upgraded to v1.25.
To prevent this Helm error state, upgrade Kubecost to at least v1.99 prior to upgrading Kubernetes to v1.25, and follow the above instructions for disabling PSP.
If Kubecost PSP is not disabled prior to a Kubernetes v1.25 upgrade, you may need to manually delete the Kubecost install. Before doing this, ensure you have ETL backups enabled and that your Helm values and Prometheus/Thanos data are backed up. Manual removal can be done by deleting the Kubecost namespace.
kube-state-metrics pod fails to start with Failed to list *v1beta1.Ingress and/or Failed to list *v1beta1.CertificateSigningRequest
This error, found in the kube-state-metrics logs, occurs when those APIs are not present in Kubernetes. It will cause the KSM pod startup to fail. The full error is as follows.
To resolve this error, you can disable the corresponding KSM metrics collectors by setting the following Helm values to false.
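The collector value names below are assumptions based on the kube-state-metrics subchart layout; verify the exact paths against your chart version:

```shell
# Disable the KSM collectors for the missing v1beta1 APIs
--set prometheus.kube-state-metrics.collectors.certificatesigningrequests=false \
--set prometheus.kube-state-metrics.collectors.ingresses=false
```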
You can verify the changes are in place by describing the KSM deployment; the collectors should no longer be present in the Container Arguments list.
This error appears when you install Kubecost using the AWS-optimized version on your Amazon EKS cluster. There are a few reasons that generate this error message:
Check our ECR public gallery for the latest available version at https://gallery.ecr.aws/kubecost/cost-analyzer
Try logging in to the Amazon ECR Public Gallery again to refresh the authentication token with the following commands:
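The standard commands for refreshing an ECR Public token (the region must be us-east-1 for ECR Public):

```shell
# For docker-based pulls
aws ecr-public get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin public.ecr.aws
# For Helm OCI registry access
aws ecr-public get-login-password --region us-east-1 | \
  helm registry login --username AWS --password-stdin public.ecr.aws
```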
Edit the nginx ConfigMap: kubectl edit cm nginx-conf -n kubecost
Search for 9001 and 9003 (you should find kubecost-cost-analyzer.kubecost:9001 and kubecost-cost-analyzer.kubecost:9003)
Change both entries to localhost:9001 and localhost:9003
Restart the kubecost-cost-analyzer pod in the kubecost namespace
What is the difference between .Values.kubecostToken and .Values.kubecostProductConfigs.productKey?
.Values.kubecostToken is primarily used to manage trial access and is provided to you when visiting http://kubecost.com/install.
.Values.kubecostProductConfigs.productKey is used to apply an Enterprise license. More info in this doc.
Kubecost makes use of cloud provider metadata servers to access instance and cluster metadata. If a restrictive network policy is in place, it may need to be modified to allow connections from the Kubecost pod or namespace. Example:
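A hedged example of a NetworkPolicy egress rule allowing pods in the kubecost namespace to reach a cloud metadata server at the conventional link-local address 169.254.169.254 (the policy name and selectors are placeholders; adjust to your environment):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-kubecost-metadata-egress
  namespace: kubecost
spec:
  podSelector: {}          # applies to all pods in the kubecost namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 169.254.169.254/32  # cloud provider metadata server
```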
Do you have a question not answered on this page? Email us at support@kubecost.com or join the Kubecost Slack community!