Kubescaler
This feature is currently in alpha. Please read the documentation carefully.
Kubecost's Kubescaler implements continuous request right-sizing: the automatic application of Kubecost's high-fidelity recommendations to your containers' resource requests. This provides an easy way to automatically improve the efficiency of your cluster's resource allocation.
Kubescaler can be enabled and configured on a per-workload basis so that only the workloads you want edited will be edited.
Setup
Kubescaler is part of the Cluster Controller and should be configured after the Cluster Controller is enabled.
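For reference, a minimal sketch of enabling the Cluster Controller through the Kubecost Helm chart is shown below. The release name `kubecost`, the `kubecost/cost-analyzer` chart, and the `kubecost` namespace are assumptions; see the Cluster Controller documentation for full setup, including any provider-specific configuration.

```sh
# Sketch: enable the Cluster Controller (which ships Kubescaler) via Helm.
# Release name, chart, and namespace are assumptions; adjust to your install.
helm upgrade --install kubecost kubecost/cost-analyzer \
  --namespace kubecost --create-namespace \
  --set clusterController.enabled=true
```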
Usage
Kubescaler is configured on a workload-by-workload basis via annotations. Currently, only deployment workloads are supported.
| Annotation | Description | Example(s) |
|---|---|---|
| `request.autoscaling.kubecost.com/enabled` | Whether to autoscale the workload. See note on `defaultResizeAll` below. | `true`, `false` |
| `request.autoscaling.kubecost.com/frequencyMinutes` | How often to autoscale the workload, in minutes. If unset, a conservative default is used. | `73` |
| `request.autoscaling.kubecost.com/scheduleStart` | Optional augmentation to the frequency parameter. If both are set, the workload will be resized on the scheduled frequency, aligned to the start. If frequency is 24h and the start is midnight, the workload will be resized at (about) midnight every day. Formatted as RFC3339. | `2022-11-28T00:00:00Z` |
| `request.autoscaling.kubecost.com/targetCPUUtilization` | Target utilization (CPU) for the recommendation algorithm. If unset, the backing recommendation service's default is used. | `0.8` |
| `request.autoscaling.kubecost.com/targetRAMUtilization` | Target utilization (Memory/RAM) for the recommendation algorithm. If unset, the backing recommendation service's default is used. | `0.8` |
| `request.autoscaling.kubecost.com/recommendationQueryWindow` | Value of the `window` parameter used when querying for the workload's recommendation. If unset, the backing recommendation service's default is used. | `2d` |
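For instance, a workload can be opted in with custom utilization targets by annotating its Deployment. The sketch below assumes a hypothetical Deployment named `web-api` and uses the annotation keys from the table above.

```sh
# Sketch: opt a Deployment into Kubescaler with explicit utilization targets.
# "web-api" is a hypothetical name; replace it with your workload.
kubectl annotate deployment web-api \
  request.autoscaling.kubecost.com/enabled=true \
  request.autoscaling.kubecost.com/targetCPUUtilization=0.8 \
  request.autoscaling.kubecost.com/targetRAMUtilization=0.8
```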
Notable Helm values:
| Helm value | Description | Example(s) |
|---|---|---|
| `clusterController.kubescaler.defaultResizeAll` | If true, Kubescaler will switch to default-enabled for all workloads unless they are annotated with `request.autoscaling.kubecost.com/enabled=false`. | `true`, `false` |
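As a sketch, the resize-all default can be set at install or upgrade time with a Helm flag; the value path follows the table above and should be verified against your chart version.

```sh
# Sketch: resize all workloads by default.
# Workloads annotated request.autoscaling.kubecost.com/enabled=false are left alone.
helm upgrade kubecost kubecost/cost-analyzer \
  --namespace kubecost --reuse-values \
  --set clusterController.kubescaler.defaultResizeAll=true
```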
Supported workload types
Kubescaler supports apps/v1 Deployments.
Kubescaler does not support "bare" pods. Learn more in this GitHub issue.
Example
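Annotate the target Deployment to enable autoscaling and set the resize frequency. The sketch below uses a hypothetical Deployment named `my-deployment` and a frequency of 660 minutes (11 hours).

```sh
# Sketch: enable Kubescaler on a Deployment and resize it roughly every 11 hours.
# "my-deployment" is a hypothetical name; replace it with your workload.
kubectl annotate deployment my-deployment \
  request.autoscaling.kubecost.com/enabled=true \
  request.autoscaling.kubecost.com/frequencyMinutes=660
```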
Kubescaler will take care of the rest. It will apply the best-available recommended requests to the annotated controller every 11 hours. If the recommended requests exceed the current limits, the update is currently configured to set the request to the current limit.
To check the current requests for your Deployments, you can use a command like the following:
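This sketch uses JSONPath output to print each Deployment's name alongside its containers' current requests; adjust the query and formatting to your needs.

```sh
# Sketch: list each Deployment and the resource requests of its containers.
kubectl get deployments -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.template.spec.containers[*].resources.requests}{"\n"}{end}'
```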