This feature is only officially supported on Kubecost Enterprise plans.
Kubecost leverages Thanos and durable storage for three different purposes:
- Centralize metric data for a global multi-cluster view into Kubernetes costs via a Prometheus sidecar
- Allow for unlimited data retention
- Back up Kubecost ETL data
To enable Thanos, follow these steps:
This step creates the `object-store.yaml` file that contains the configuration and access credentials for your durable storage target (e.g. GCS, S3). The details of this file are documented thoroughly in the Thanos documentation.
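For reference, a minimal `object-store.yaml` for an S3 target looks roughly like the sketch below. The bucket name, endpoint, region, and credentials are placeholders; the exact fields for your provider are covered in the guides that follow.

```yaml
# Sketch of a Thanos object-store.yaml for S3; all values are placeholders.
type: S3
config:
  bucket: "my-kubecost-thanos-bucket"
  endpoint: "s3.us-east-2.amazonaws.com"
  region: "us-east-2"
  access_key: "<ACCESS_KEY_ID>"
  secret_key: "<SECRET_ACCESS_KEY>"
```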
We have guides for using cloud-native storage for the largest cloud providers. Other providers can be similarly configured.
Use the appropriate guide for your cloud provider:
Create a secret with the .yaml file generated in the previous step:
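For example (the secret name `kubecost-thanos` is an assumption; use whatever name your Helm values reference):

```sh
kubectl create secret generic kubecost-thanos -n kubecost \
  --from-file=./object-store.yaml
```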
Each cluster needs to be labelled with a unique Cluster ID, which is done in two places.
values-clusterName.yaml
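As a rough sketch, a per-cluster values file might set the ID in both places as shown below; the exact key paths are assumptions and depend on your chart version.

```yaml
# values-clusterName.yaml (sketch; key paths are assumptions for your chart version)
kubecostProductConfigs:
  clusterName: cluster-one        # unique name shown in the Kubecost UI
prometheus:
  server:
    global:
      external_labels:
        cluster_id: cluster-one   # unique ID attached to metrics shipped to Thanos
```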
The Thanos subchart includes `thanos-bucket`, `thanos-query`, `thanos-store`, `thanos-compact`, and service discovery for `thanos-sidecar`. These components are recommended when deploying Thanos on the primary cluster.
These values can be adjusted under the `thanos` block in `values-thanos.yaml`. Available options are here: thanos/values.yaml
The `thanos-store` container is configured to request 2.5GB of memory; this may be reduced for smaller deployments. `thanos-store` is only used on the primary Kubecost cluster.
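For example, to lower the `thanos-store` memory request on a smaller deployment, you might set something like the following under the `thanos` block. The key names are assumptions based on the subchart; confirm them against thanos/values.yaml.

```yaml
thanos:
  store:
    resources:
      requests:
        memory: "1Gi"   # assumed key path; default request is 2.5GB
```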
To verify the installation, check that all Pods are in a READY state. View Pod logs for more detail, and see the common troubleshooting steps below.
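For example (assuming a default install in the `kubecost` namespace):

```sh
# List Pods and confirm they are READY
kubectl get pods -n kubecost
# Inspect a specific Pod's logs for more detail
kubectl logs -n kubecost <pod-name> --all-containers
```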
Thanos sends data to the bucket every 2 hours. Once 2 hours have passed, the logs should indicate whether data has been sent successfully.
You can monitor the logs with:
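One way to do this is to follow the sidecar's logs; the deployment name below assumes a default install, so adjust it to your release name.

```sh
kubectl logs -n kubecost deploy/kubecost-prometheus-server -c thanos-sidecar --follow
```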
Monitoring logs this way should return results like this:
As an aside, you can validate the Prometheus metrics are all configured with correct cluster names with:
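One rough way to check this is to confirm that each cluster's Prometheus has the expected `cluster_id` in its external labels, which is what gets attached to metrics shipped to Thanos. The service name and port below are assumptions for a default install.

```sh
# Port-forward the in-cluster Prometheus server (names/ports are assumptions)
kubectl port-forward -n kubecost svc/kubecost-prometheus-server 9090:80 &
# The running config should show the cluster_id external label you set earlier
curl -s http://localhost:9090/api/v1/status/config | grep cluster_id
```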
To troubleshoot the IAM role attached to the service account, you can create a Pod using the same service account used by the `thanos-sidecar` (default is `kubecost-prometheus-server`):
s3-pod.yaml
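A hypothetical `s3-pod.yaml` along these lines is sketched below; the bucket name is a placeholder, and the service account matches the default mentioned above.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: s3-access-test
  namespace: kubecost
spec:
  serviceAccountName: kubecost-prometheus-server
  restartPolicy: Never
  containers:
    - name: aws-cli
      image: amazon/aws-cli
      # Lists the Thanos bucket using the Pod's IAM role; replace the bucket name
      args: ["s3", "ls", "s3://<your-thanos-bucket>"]
```

Apply it with `kubectl apply -f s3-pod.yaml`, then check the output with `kubectl logs s3-access-test -n kubecost`.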
This should return a list of objects (or at least not give a permission error).
If a cluster is not successfully writing data to the bucket, review `thanos-sidecar` logs with the following command:
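For example, run this against the cluster that is failing to write (the deployment name assumes a default install):

```sh
kubectl logs -n kubecost deploy/kubecost-prometheus-server -c thanos-sidecar --tail=200
```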
Logs in the following format are evidence of a successful bucket write:
If `thanos-query` can't connect to both the sidecar and the store (check its `/stores` endpoint), you may want to directly specify the store gRPC service address instead of using DNS discovery (the default). You can quickly test if this is the issue by running:
`kubectl edit deployment kubecost-thanos-query -n kubecost`
and adding
`--store=kubecost-thanos-store-grpc.kubecost:10901`
to the container args. This will cause a query restart, and you can visit `/stores` again to see if the store has been added.
If it has, you'll want to use these addresses instead of DNS more permanently by setting `.Values.thanos.query.stores` in `values-thanos.yaml`.
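For example (the store address comes from the test above; the exact YAML structure is an assumption, so confirm it against the subchart's values):

```yaml
thanos:
  query:
    stores:
      - kubecost-thanos-store-grpc.kubecost:10901
```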
A common error is as follows, which means you do not have the correct access to the supplied bucket:
Assuming Pods are running, use port forwarding to connect to the `thanos-query-http` endpoint:
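For example (the service name and target port are assumptions for a default install; the local port 8080 matches the URL below):

```sh
kubectl port-forward -n kubecost svc/kubecost-thanos-query-http 8080:10902
```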
Then navigate to http://localhost:8080 in your browser. This page should look very similar to the Prometheus console.
If you navigate to Stores using the top navigation bar, you should be able to see the status of both the `thanos-store` and the `thanos-sidecar` that accompanies the Prometheus server:
Also note that the sidecar should identify itself with the unique `cluster_id` provided in your values.yaml in the previous step. The default value is `cluster-one`.
The default retention period before data is moved into object storage is currently 2h, based on Thanos' suggested values. Expect up to 2 hours before data is written to the provided bucket.
Instead of waiting 2h to confirm that Thanos was configured correctly, you can check the logs: the default log level for the Thanos workloads is `debug` (the logging is very light, even on debug). You can get logs for the `thanos-sidecar`, which is part of the `prometheus-server` Pod, and for `thanos-store`. The logs should give you a clear indication of whether or not there was a problem consuming the secret and what the issue is. For more on Thanos architecture, view this resource.