Cluster Controller
The Cluster Controller is currently in beta. Please read the documentation carefully.
Kubecost's Cluster Controller contains Kubecost's automation features, and thus has write permission to certain resources on your cluster. For this reason, the Cluster Controller is disabled by default.
The Cluster Controller enables features like container request right-sizing (RRS), cluster right-sizing, cluster turndown, and Kubecost Actions.
The Cluster Controller can be enabled on any cluster, but certain functionality will only be enabled based on your cloud service provider (CSP) and setup:
- The Controller itself and container RRS are available for all clusters and configurations.
- Cluster turndown, cluster right-sizing, and Kubecost Actions are only available for GKE, EKS, and Kops-on-AWS clusters, after setting up a provider service key.
Therefore, the Provider service key setup section below is optional, but skipping it will limit the functionality available to you.
If you are enabling the Cluster Controller for a GKE, EKS, or Kops-on-AWS cluster, follow the specialized instructions for your CSP(s) below. If you aren't using a GKE, EKS, or Kops-on-AWS cluster, skip ahead to the Deploying section below.
For GKE clusters, run the following script to create a service account key and store it as the Kubernetes secret expected by the Kubecost Helm chart:
/bin/bash -c "$(curl -fsSL https://github.com/kubecost/cluster-turndown/releases/latest/download/gke-create-service-key.sh)" -- <Project ID> <Service Account Name> <Namespace> cluster-controller-service-key
- Project ID: The GCP project identifier. Can be found via:
gcloud config get-value project
- Service Account Name: The name of the service account to be created. Should be between 6 and 20 characters, e.g. kubecost-controller
- Namespace: The namespace in which Kubecost will be installed, e.g. kubecost
- Secret Name: This should always be set to cluster-controller-service-key, which is the secret name mounted by the Kubecost Helm chart.
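For example, a hypothetical invocation with a placeholder project ID of my-gcp-project, a service account named kubecost-controller, and Kubecost installed in the kubecost namespace would look like this (adjust the first two values for your environment):
/bin/bash -c "$(curl -fsSL https://github.com/kubecost/cluster-turndown/releases/latest/download/gke-create-service-key.sh)" -- my-gcp-project kubecost-controller kubecost cluster-controller-service-key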
For EKS cluster provisioning, if using eksctl, make sure that you use the --managed option when creating the cluster. Unmanaged node groups should be upgraded to managed. More info.
Create a new user with AutoScalingFullAccess permissions, plus the following EKS-specific permissions:
{
  "Effect": "Allow",
  "Action": [
    "eks:ListClusters",
    "eks:DescribeCluster",
    "eks:DescribeNodegroup",
    "eks:ListNodegroups",
    "eks:CreateNodegroup",
    "eks:UpdateClusterConfig",
    "eks:UpdateNodegroupConfig",
    "eks:DeleteNodegroup",
    "eks:ListTagsForResource",
    "eks:TagResource",
    "eks:UntagResource"
  ],
  "Resource": "*"
},
{
  "Effect": "Allow",
  "Action": [
    "iam:GetRole",
    "iam:ListAttachedRolePolicies",
    "iam:PassRole"
  ],
  "Resource": "*"
}
Create a new file, service-key.json, and use the access key ID and secret access key to fill out the following template:
{
"aws_access_key_id": "<ACCESS_KEY_ID>",
"aws_secret_access_key": "<SECRET_ACCESS_KEY>"
}
Then, run the following to create the secret:
$ kubectl create secret generic cluster-controller-service-key -n <NAMESPACE> --from-file=service-key.json
Here is a full example of this process using the AWS CLI and a simple IAM user (requires jq):
aws iam create-user \
--user-name "<your user>"
aws iam attach-user-policy \
--user-name "<your user>" \
--policy-arn "arn:aws:iam::aws:policy/AutoScalingFullAccess"
read -r -d '' EKSPOLICY << EOM
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "eks:ListClusters",
        "eks:DescribeCluster",
        "eks:DescribeNodegroup",
        "eks:ListNodegroups",
        "eks:CreateNodegroup",
        "eks:UpdateClusterConfig",
        "eks:UpdateNodegroupConfig",
        "eks:DeleteNodegroup",
        "eks:ListTagsForResource",
        "eks:TagResource",
        "eks:UntagResource"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "iam:GetRole",
        "iam:ListAttachedRolePolicies",
        "iam:PassRole"
      ],
      "Resource": "*"
    }
  ]
}
EOM
aws iam put-user-policy \
--user-name "<your user>" \
--policy-name "eks-permissions" \
--policy-document "${EKSPOLICY}"
aws iam create-access-key \
--user-name "<your user>" \
> /tmp/aws-key.json
AAKI="$(jq -r '.AccessKey.AccessKeyId' /tmp/aws-key.json)"
ASAK="$(jq -r '.AccessKey.SecretAccessKey' /tmp/aws-key.json)"
kubectl create secret generic \
cluster-controller-service-key \
-n kubecost \
--from-literal="service-key.json={\"aws_access_key_id\": \"${AAKI}\", \"aws_secret_access_key\": \"${ASAK}\"}"
For Kops-on-AWS clusters, create a new user or IAM role with AutoScalingFullAccess permissions. JSON definition of those permissions:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "autoscaling:*",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "cloudwatch:PutMetricAlarm",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeAccountAttributes",
        "ec2:DescribeAvailabilityZones",
        "ec2:DescribeImages",
        "ec2:DescribeInstanceAttribute",
        "ec2:DescribeInstances",
        "ec2:DescribeKeyPairs",
        "ec2:DescribeLaunchTemplateVersions",
        "ec2:DescribePlacementGroups",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeSpotInstanceRequests",
        "ec2:DescribeSubnets",
        "ec2:DescribeVpcClassicLink"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "elasticloadbalancing:DescribeLoadBalancers",
        "elasticloadbalancing:DescribeTargetGroups"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "iam:CreateServiceLinkedRole",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "iam:AWSServiceName": "autoscaling.amazonaws.com"
        }
      }
    }
  ]
}
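If you prefer to script this step, the same AWS CLI pattern shown in the EKS example above applies. The following is a minimal sketch, assuming the policy above has been saved locally as autoscaling-policy.json and using a hypothetical user name kubecost-kops-controller:
# create the user and attach the autoscaling permissions as an inline policy
aws iam create-user --user-name "kubecost-kops-controller"
aws iam put-user-policy \
--user-name "kubecost-kops-controller" \
--policy-name "autoscaling-access" \
--policy-document file://autoscaling-policy.json
# generate the access key ID and secret access key used to fill out service-key.json below
aws iam create-access-key --user-name "kubecost-kops-controller"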
Create a new file, service-key.json, and use the access key ID and secret access key to fill out the following template:
{
"aws_access_key_id": "<ACCESS_KEY_ID>",
"aws_secret_access_key": "<SECRET_ACCESS_KEY>"
}
Then run the following to create the secret:
$ kubectl create secret generic cluster-controller-service-key -n <NAMESPACE> --from-file=service-key.json
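Whichever provider path you followed, you can confirm the secret exists with a standard kubectl command; for the AWS-based setups you can also decode the service-key.json entry to double-check its contents (this assumes Kubecost is installed in the kubecost namespace):
# confirm the secret exists in the Kubecost namespace
kubectl get secret cluster-controller-service-key -n kubecost
# optionally decode the stored key file (AWS-based setups)
kubectl get secret cluster-controller-service-key -n kubecost -o jsonpath='{.data.service-key\.json}' | base64 -d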
You can now enable the Cluster Controller in the Helm chart by finding the clusterController config block and setting enabled to true:
clusterController:
  enabled: true
You may also enable it via --set when running Helm install:
--set clusterController.enabled=true
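For example, with the standard Kubecost chart and release names (adjust these if your install differs), the flag can be passed during an upgrade:
# enable the Cluster Controller on an existing Kubecost release
helm upgrade --install kubecost kubecost/cost-analyzer \
--namespace kubecost \
--set clusterController.enabled=true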
You can verify that the Cluster Controller is running by issuing the following:
kubectl get pods -n kubecost -l app=kubecost-cluster-controller
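If the pod is present but the features above don't appear, the controller's logs are a reasonable first place to look; a standard kubectl invocation using the same label selector:
# tail recent logs from the Cluster Controller pod
kubectl logs -n kubecost -l app=kubecost-cluster-controller --tail=100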