Kubecost provides the ability to allocate out-of-cluster (OOC) costs, e.g. Cloud SQL instances and Cloud Storage buckets, back to Kubernetes concepts like namespaces and deployments.
Read the Cloud Billing Integrations doc for more information on how Kubecost connects with cloud service providers.
The following guide provides the steps required for allocating OOC costs in GCP.
A GitHub repository with sample files used in the below instructions can be found here.
Begin by reviewing Google's documentation on exporting cloud billing data to BigQuery.
GCP users must create a detailed billing export to gain access to all Kubecost CloudCost features including reconciliation. Exports of type "Standard usage cost data" and "Pricing Data" do not have the correct information to support CloudCosts.
If you are using the alternative multi-cloud integration method, Step 2 is not required.
If your BigQuery dataset is in a different project than the one where Kubecost is installed, please see the section on Cross-Project Service Accounts.
Add a service account key to allocate OOC resources (e.g. storage buckets and managed databases) back to their Kubernetes owners. The service account needs the following:
If you don't already have a GCP service account with the appropriate rights, you can run the following commands in your CLI to generate and export one. Make sure your active GCP project is the one where your external costs are incurred.
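A minimal sketch of that setup is shown below, assuming the BigQuery and Compute read roles referenced elsewhere in this guide; the service account name compute-viewer-kubecost is a convention, not a requirement.

```shell
# Sketch: create a service account for Kubecost and grant it read access to
# billing data in BigQuery and to Compute resources. PROJECT_ID is taken from
# your current gcloud configuration.
PROJECT_ID=$(gcloud config get-value project)

gcloud iam service-accounts create compute-viewer-kubecost \
    --display-name "Kubecost cost data viewer"

# Grant each role the account needs:
for role in roles/compute.viewer roles/bigquery.user \
            roles/bigquery.dataViewer roles/bigquery.jobUser; do
  gcloud projects add-iam-policy-binding "$PROJECT_ID" \
      --member "serviceAccount:compute-viewer-kubecost@${PROJECT_ID}.iam.gserviceaccount.com" \
      --role "$role"
done
```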
After creating the GCP service account, you can connect it to Kubecost in one of two ways before configuring:
You can set up an IAM policy binding to bind a Kubernetes service account to your GCP service account as seen below, where:
NAMESPACE is the namespace Kubecost is installed into
KSA_NAME is the name of the service account attributed to the Kubecost deployment
You will also need to enable the IAM Service Account Credentials API in the GCP project.
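The binding and annotation steps above can be sketched as follows; NAMESPACE, KSA_NAME, GSA_NAME, and PROJECT_ID are placeholders for your environment.

```shell
# Sketch: allow the Kubecost Kubernetes service account (KSA) to impersonate
# the GCP service account (GSA) via Workload Identity.
gcloud iam service-accounts add-iam-policy-binding \
    "GSA_NAME@PROJECT_ID.iam.gserviceaccount.com" \
    --role roles/iam.workloadIdentityUser \
    --member "serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/KSA_NAME]"

# Annotate the KSA so pods using it authenticate as the GSA:
kubectl annotate serviceaccount KSA_NAME --namespace NAMESPACE \
    iam.gke.io/gcp-service-account=GSA_NAME@PROJECT_ID.iam.gserviceaccount.com

# Enable the IAM Service Account Credentials API in the project:
gcloud services enable iamcredentials.googleapis.com --project PROJECT_ID
```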
Create a service account key:
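A sketch of the key-creation command; the file name matches the compute-viewer-kubecost-key.json file referenced later in this guide.

```shell
# Sketch: create and download a JSON key for the service account.
# PROJECT_ID and the account name are placeholders.
gcloud iam service-accounts keys create ./compute-viewer-kubecost-key.json \
    --iam-account "compute-viewer-kubecost@PROJECT_ID.iam.gserviceaccount.com"
```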
Once the GCP service account has been connected, set up the remaining configuration parameters.
You're almost done. Now it's time to configure Kubecost to finalize your connectivity.
It is recommended to provide the GCP details in your values.yaml to ensure they are retained during an upgrade or redeploy. First, set the following configs:
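A sketch of those values.yaml entries is below; the key names (projectID, bigQueryBillingDataDataset) are assumed from the Kubecost Helm chart and should be checked against your chart version, and the project ID and table name are placeholders.

```shell
# Sketch: append the base GCP configs to your Helm values file.
cat >> values.yaml <<'EOF'
kubecostProductConfigs:
  projectID: "YOUR_PROJECT_ID"
  bigQueryBillingDataDataset: "billing_data.gcp_billing_export_resource_v1_XXXXXX_XXXXXX_XXXXX"
EOF
```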
If you've connected using Workload Identity Federation, add these configs:
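For the Workload Identity path, a sketch of the service account annotation in values.yaml; GSA_NAME and PROJECT_ID are placeholders, and the annotation key is the standard GKE Workload Identity key.

```shell
# Sketch: annotate the chart-managed service account with the GSA email.
cat >> values.yaml <<'EOF'
serviceAccount:
  create: true
  annotations:
    iam.gke.io/gcp-service-account: GSA_NAME@PROJECT_ID.iam.gserviceaccount.com
EOF
```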
Otherwise, if you've connected using a service account key, create a secret for the GCP service account key you've created and add the following configs:
When managing the service account key as a Kubernetes secret, the secret must reference the service account key JSON file, and that file must be named compute-viewer-kubecost-key.json.
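A sketch of the secret-based approach; the secret name gcp-service-key and the values.yaml key serviceKeySecretName are assumptions to verify against your chart version's documentation.

```shell
# Sketch: store the key file as a Kubernetes secret in the Kubecost namespace.
# The file inside the secret must be named compute-viewer-kubecost-key.json.
kubectl create secret generic gcp-service-key \
    --namespace kubecost \
    --from-file=compute-viewer-kubecost-key.json

# Point Kubecost at the secret:
cat >> values.yaml <<'EOF'
kubecostProductConfigs:
  serviceKeySecretName: gcp-service-key
EOF
```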
In Kubecost, select Settings from the left navigation, and under Cloud Integrations, select Add Cloud Integration > GCP, then provide the relevant information in the GCP Billing Data Export Configuration window:
GCP Service Key: Optional field. If you've created a service account key, copy the contents of the compute-viewer-kubecost-key.json file and paste them here. If you've connected using Workload Identity federation in Step 3, you should leave this box empty.
GCP Project Id: The ID of your GCP project.
GCP Billing Database: Requires a BigQuery dataset prefix (e.g. billing_data) in addition to the BigQuery table name. A full example is billing_data.gcp_billing_export_resource_v1_XXXXXX_XXXXXX_XXXXX.
Be careful when handling your service key! Ensure you have entered it correctly into Kubecost. Don't lose it or let it become publicly available.
You can now label assets with the following schema to allocate costs back to their appropriate Kubernetes owner. Learn more here on updating GCP asset labels.
To use an alternative or existing label schema for GCP cloud assets, you may supply these in your values.yaml under kubecostProductConfigs.labelMappingConfigs.<aggregation>_external_label.
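A sketch of such a mapping is below; the GCP label names shown (kubernetes_namespace, kubernetes_cluster) are illustrative placeholders for whatever schema your organization already applies.

```shell
# Sketch: map existing GCP asset labels to Kubecost aggregations.
cat >> values.yaml <<'EOF'
kubecostProductConfigs:
  labelMappingConfigs:
    enabled: true
    namespace_external_label: kubernetes_namespace
    cluster_external_label: kubernetes_cluster
EOF
```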
Google generates special labels for GKE resources (e.g. "goog-gke-node", "goog-gke-volume"). Values with these labels are excluded from OOC costs because Kubecost already includes them as in-cluster assets. Thus, to make sure all cloud assets are included, we recommend installing Kubecost on each cluster where insights into costs are required.
Project-level labels are applied to all the Assets built from resources defined under a given GCP project. You can filter GCP resources in the Kubecost Cloud Costs Explorer (or API).
If a resource has a label with the same name as a project-level label, the resource label value will take precedence.
Modifications incurred on project-level labels may take several hours to update on Kubecost.
Due to organizational constraints, it is common that Kubecost must run in a separate project from the one containing the billing data BigQuery dataset needed for the cloud integration. Configuring Kubecost in this scenario is still possible, but some of the values in the above script will need to change. First, you will need the project IDs of both the project where Kubecost is installed and the project where the BigQuery dataset is located. Additionally, you will need a GCP user with the iam.serviceAccounts.setIamPolicy permission on the Kubecost project and the ability to manage the roles listed above in the BigQuery project. With these, fill in the following script to set the relevant variables:
Once these values have been set, this script can be run and will create the service account needed for this configuration.
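A sketch of that cross-project setup, assuming the roles listed earlier in this guide; every variable value is a placeholder for your environment.

```shell
# Sketch: create the service account in the Kubecost project, then grant it
# the BigQuery read roles on the separate billing project.
KUBECOST_PROJECT_ID="kubecost-project"   # project where Kubecost is installed
BILLING_PROJECT_ID="billing-project"     # project holding the BigQuery dataset
SA_NAME="compute-viewer-kubecost"
SA_EMAIL="${SA_NAME}@${KUBECOST_PROJECT_ID}.iam.gserviceaccount.com"

gcloud iam service-accounts create "$SA_NAME" \
    --project "$KUBECOST_PROJECT_ID"

for role in roles/bigquery.user roles/bigquery.dataViewer roles/bigquery.jobUser; do
  gcloud projects add-iam-policy-binding "$BILLING_PROJECT_ID" \
      --member "serviceAccount:${SA_EMAIL}" \
      --role "$role"
done
```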
Now that your service account is created, follow the normal configuration instructions.
There are cases where labels applied at the account level do not show up in the date-partitioned data. If account-level labels are not showing up, you can switch to querying them unpartitioned by setting an extraEnv in Kubecost: GCP_ACCOUNT_LABELS_NOT_PARTITIONED: true. See here.
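A sketch of that setting in values.yaml; placing extraEnv under kubecostModel is an assumption about the chart layout, so confirm the location for your chart version.

```shell
# Sketch: query account-level labels unpartitioned.
cat >> values.yaml <<'EOF'
kubecostModel:
  extraEnv:
    - name: GCP_ACCOUNT_LABELS_NOT_PARTITIONED
      value: "true"
EOF
```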
InvalidQuery 400 error for GCP integration
In cases where Kubecost does not detect a connection following GCP integration, revisit Step 1 and ensure you have enabled detailed usage cost export, not standard usage cost. Kubecost uses the detailed billing export to display your OOC spend, and if it was not configured correctly during installation, you may receive errors about your integration.
In order to create a Google service account for use with Thanos, navigate to the Google Cloud Platform home page and select IAM & Admin > Service Accounts.
From here, select the option Create Service Account.
Provide a service account name, ID, and description, then select Create and Continue.
You should now be at the Service account permissions (optional) page. Select the first Role dropdown and select Storage Object Creator. Select Add Another Role, then select Storage Object Viewer from the second dropdown. Select Continue.
You should now be prompted to allow specific accounts access to this service account. This should be based on specific internal needs and is not a requirement. You can leave this empty and select Done.
Once back to the Service accounts page, select the Actions icon > Manage keys. Then, select the Add Key dropdown and select Create new key. A Create private key window opens.
Select JSON as the Key type and select Create. This will download a JSON service account key for use with the Thanos object-store.yaml mentioned in the initial setup step.
Certain features of Kubecost, including Savings Insights like Orphaned Resources and Reserved Instances, require access to the cluster's GCP account. This is usually indicated by a 403 error from Google APIs due to 'insufficient authentication scopes'. Viewing this error in the Kubecost UI will display the cause of the error as "ACCESS_TOKEN_SCOPE_INSUFFICIENT".
To obtain access to these features, follow this tutorial which will show you how to configure your Google IAM Service Account and Workload Identity for your application.
Go to your GCP Console and select APIs & Services > Credentials from the left navigation. Select + Create Credentials > API Key.
On the Credentials page, select the icon in the Actions column for your newly-created API key, then select Edit API key. The Edit API key page opens.
Under ‘API restrictions’, select Restrict key, then from the dropdown, select only Cloud Billing API. Select OK to confirm. Then select Save at the bottom of the page.
From here, consult Google Cloud's guide to perform the following steps:
Enable Workload Identity on an existing GCP cluster, or spin up a new cluster which will have Workload Identity enabled by default
Migrate any existing workloads to Workload Identity
Configure your applications to use Workload Identity
Create both a Kubernetes service account (KSA) and an IAM service account (GSA).
Annotate the KSA with the email of the GSA.
Update your pod spec to use the annotated KSA, and ensure all nodes on that workload use Workload Identity.
You can stop once you have modified your pod spec (before 'Verify the Workload Identity Setup'). You should now have a GCP cluster with Workload Identity enabled, and both a KSA and a GSA, which are connected via the role roles/iam.workloadIdentityUser.
In the GCP Console, select IAM & Admin > IAM. Find your newly-created GSA and select the Edit Principal pencil icon. You will need to provide the following roles to this service account:
BigQuery Data Viewer
BigQuery Job User
BigQuery User
Compute Viewer
Service Account Token Creator
Select Save.
The following roles need to be added to your IAM service account:
roles/bigquery.user
roles/compute.viewer
roles/bigquery.dataViewer
roles/bigquery.jobUser
roles/iam.serviceAccountTokenCreator
Use this command to add each role individually to the GSA:
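A sketch of that command, looping over the roles listed above; GSA_NAME and PROJECT_ID are placeholders for your environment.

```shell
# Sketch: grant each required role to the GSA, one binding at a time.
for role in roles/bigquery.user roles/compute.viewer roles/bigquery.dataViewer \
            roles/bigquery.jobUser roles/iam.serviceAccountTokenCreator; do
  gcloud projects add-iam-policy-binding PROJECT_ID \
      --member "serviceAccount:GSA_NAME@PROJECT_ID.iam.gserviceaccount.com" \
      --role "$role"
done
```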
From here, restart the pod(s) to confirm your changes. You should now have access to all expected Kubecost functionality through your service account with Workload Identity.