Actions is currently in beta. Please read the documentation carefully.
Actions is only available with a Kubecost Enterprise plan.
The Actions page is where you can create scheduled savings actions that Kubecost will execute for you. The Actions page supports creating actions for multiple turndown and right-sizing features.
Actions can only be applied to your primary cluster. To use Actions on a secondary cluster, you must manually switch to that cluster's Kubecost frontend.
The Actions page will exist inside the Savings folder in the left navigation, but must first be enabled before it appears. The two steps below which enable Kubecost Actions do not need to be performed sequentially as written.
Because the Actions page is currently a beta feature, it does not appear as part of Kubecost's base functionality. To enable experimental features, select Settings from the left navigation, then toggle on the Enable experimental features switch. Select Save at the bottom of the Settings page to confirm your changes. The Actions page will now appear in your left navigation, but you will not be able to perform any actions until you've enabled the Cluster Controller (see below).
Some features included in Kubecost Actions are only available in GKE/EKS environments. See the Cluster Controller doc for more clarity on which features you will have access to after enabling the Cluster Controller.
On the Actions page, select Create Action in the top right. The Create New Action window opens.
You will have the option to perform one of several available Actions:
Cluster Turndown: Schedule clusters to spin down when unused and back up when needed
Request Sizing: Ensure your containers aren't over-provisioned
Cluster Sizing: Configure your cluster in the most cost-effective way
Namespace Turndown: Schedule unused workloads to spin down
Guided Sizing: Continuous container and node right-sizing
Selecting one of these Actions will take you off the Actions page to an Action-specific page which allows you to perform the action in moments.
If the Cluster Controller was not properly enabled, the Create New Action window will inform you and limit functionality until the Cluster Controller has been successfully enabled.
Cluster Turndown is a scheduling feature that allows you to reduce costs for clusters when they are not actively being used, without spinning them down completely. This is done by temporarily removing all existing nodes except for master nodes. The Cluster Turndown page allows you to create a schedule for when to turn your cluster down and up again.
Selecting Cluster Turndown from the Create new action window will take you to the Cluster Turndown page. The page should display available clusters for turndown. Begin by selecting Create Schedule next to the cluster you wish to turn down. Select what date and time you wish to turn down the cluster, and what date and time you wish to turn it back up. Select Apply to finalize.
You can delete an existing turndown schedule by selecting the trash can icon.
Learn more about cluster turndown's advanced functionality here.
See the existing documentation on Automatic Request Right-Sizing to learn more about this feature. If you have successfully enabled the Cluster Controller, you can skip the Setup section of that article.
Cluster Sizing will provide right-sizing recommendations for your cluster by determining the cluster's needs based on the type of work running, and the resource requirements. You will receive a simple (uses one node type) and a complex (uses two or more node types) recommendation.
Kubecost may hide the complex recommendation when it is more expensive than the simple recommendation, and present a single recommendation instead.
Visiting the Cluster Sizing Recommendations page from the Create New Action window will immediately prompt you with a suggested recommendation that will replace your current node pools with the displayed node pools. You can select Adopt to immediately resize, or select Cancel if you want to continue exploring.
Learn more about cluster right-sizing functionality here.
Namespace turndown allows you to take action to delete your abandoned workloads. Instead of requiring the user to manually size down or delete their unused workloads, Kubecost can delete namespaces full of idle pods immediately or on a continual basis. This can be helpful for routine cleanup of neglected resources. Namespace turndown is supported on all cluster types.
Selecting Namespace Turndown from the Create new action window will open the Namespace Turndown page.
Begin by providing a name for your Action in the Job Name field. For the schedule, provide a cron string that determines when the turndown occurs (leave this field as 0 0 * * * by default to perform turndown every night at midnight).
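Any valid cron string should be accepted here; for example, 0 */6 * * * would instead run the turndown at the top of every sixth hour.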
For schedule type, select Scheduled or Smart from the dropdown.
Scheduled turndown will delete all non-ignored namespaces.
Smart turndown will confirm that all workloads in the namespace are idle before deleting.
Then you can provide optional values for the following fields:
Ignore Targets: Filter out namespaces you don't want turned down. Supports "wildcard" filtering: by ending your filter with *, you can match multiple namespaces with a single entry. For example, entering kube* will prevent any namespace whose name starts with kube from being turned down. Namespace turndown will also ignore namespaces named kube-*, the default namespace, and the namespace the Cluster Controller is enabled on.
Ignore labels: Filter out key-value labels that you don't want turned down.
Select Create Schedule to finalize.
Guided Kubernetes Sizing provides a one-click or continuous right-sizing solution in two steps, request sizing and then cluster sizing. These implementations function exactly like Kubecost's existing container and cluster right-sizing features.
In the first collapsible tab, you can configure your container request sizing.
The Auto resizing toggle switch will determine whether you want to perform a one-time resize, or a continuous auto-resize. Default is one-time (off).
Frequency: Only available when Auto resizing is toggled on. Determines how frequently right-sizing will occur. Options are Daily, Weekly, Monthly, or Quarterly.
Start Time: Only available when Auto resizing is toggled on. Determines the day, and time of day, that auto-resizing will begin occurring. Will default to the current date and time if left blank.
Select Start One-Time Resize/Start Auto-Resizing Now to finalize.
In the second collapsible tab, you can configure continuous cluster sizing.
Architecture: Supports x86 or ARM.
Target Utilization: How much of each node's resources should be in use, leaving the remainder as headroom for variable or increasing resource consumption. Default is 0.8.
Frequency: Determines how frequently right-sizing will occur. Options are Daily, Weekly, Monthly, or Quarterly.
Start Time: Determines the day, and time of day, that auto-resizing will begin occurring. Will default to the current date and time if left blank.
Select Enable Auto-Resizing Now to finalize.
Once you have successfully created an Action, you will see it on the Actions page under Scheduled Actions. Here you will be able to view a Schedule, the Next Run, Affected Workloads, and the Status. You can select Details to view more information about a specific Action, or delete the scheduled Action by selecting the trash can icon.
Kubecost can provide and implement recommendations for right-sizing your supported clusters to ensure they are configured in the most cost-effective way. Recommendations are available for any and all clusters. In certain configurations, Kubecost is also capable of taking a recommendation and applying it directly to your cluster immediately. These two processes should be distinguished respectively as viewing cluster recommendations vs. adopting cluster recommendations.
Kubecost is also able to implement cluster sizing recommendations on a user-scheduled interval, known as continuous cluster right-sizing.
You can access cluster right-sizing by selecting Savings in the left navigation, then select the Right-size your cluster nodes panel.
Kubecost will offer two recommendations: simple (uses one node type) and complex (uses two or more node types). Kubecost may hide the complex recommendation when it is more expensive than the simple recommendation, and present a single recommendation instead. These recommendations and their metrics will be displayed in a chart next to your existing configuration in order to compare values like total cost, node count, and usage.
Kubecost provides its right-sizing recommendations based on the characteristics of your cluster. You have the option to edit certain properties to generate relevant recommendations.
There are multiple dropdown menus to consider:
In the Cluster dropdown, you can select the individual cluster you wish to apply right-sizing recommendations to.
In the Window dropdown, select the number of days to query for your cluster's most recent activity. Options range from 1 day to 7 days. If your cluster has varying performance on different days of the week, it's better to select a longer interval for the most consistent recommendations.
You can toggle on Show optimization inputs to view resources which will determine the minimum size of your nodes. These resources are:
DaemonSet VCPUs/RAM: Resources allocated by DaemonSets on each node.
Max pod VCPUs/RAM: Largest resource allocation by any single Pod in the cluster.
Non-DaemonSet/static VCPUs/RAM: Sum of resources allocated to Pods not controlled by DaemonSets.
Finally, you can select Edit to provide information about the function of your cluster.
In the Profile dropdown, select the most relevant category of your cluster. You can select Production, Development, or High Availability.
Production: Stable cluster activity, will provide some extra space for potential spikes in activity.
Development: Cluster can tolerate a small amount of instability, will run cluster somewhat close to capacity.
High availability: Cluster should avoid instability at all costs, will size cluster with lots of extra space to account for unexpected spikes in activity.
In the Architecture dropdown, select either x86 or ARM. You may only see x86 as an option. This is normal. At the moment, ARM architecture recommendations are only supported on AWS clusters.
With this information provided, Kubecost can provide the most accurate recommendations for running your clusters efficiently. By following some additional steps, you will be able to adopt Kubecost's recommendation, applying it directly to your cluster.
To receive cluster right-sizing recommendations, you must first:
Have a GKE/EKS/AWS Kops cluster
To adopt cluster right-sizing recommendations, you must:
Have a GKE/EKS/AWS Kops cluster
Enable the Cluster Controller on that cluster and perform the provider service key setup
In order for Kubecost to apply a recommendation, it needs write access to your cluster. Write access to your cluster is enabled with the Cluster Controller.
To adopt a recommendation, select Adopt recommendation > Adopt. Implementation of right-sizing for your cluster should take roughly 10-30 minutes.
If you have Kubecost Actions enabled, you can also perform immediate right-sizing by selecting Savings, then selecting Actions. On the Actions page, select Create Action > Cluster Sizing to receive immediate recommendations and the option to adopt them.
Recommendations via Kubecost Actions can only be adopted on your primary cluster. To adopt recommendations on a secondary cluster via Kubecost Actions, you must first manually switch to that cluster's Kubecost frontend.
Continuous cluster right-sizing has the same requirements as adopting any cluster right-sizing recommendation. See above for a complete description of prerequisites.
Continuous Cluster Right-Sizing is accessible via Actions. On the Actions page, select Create Action > Guided Sizing. This feature implements both cluster right-sizing and container right-sizing.
For a tutorial on using Guided Sizing, see here.
If you are using Persistent Volumes (PVs) with AWS's Elastic Block Store (EBS) Container Storage Interface (CSI), you may run into a problem post-resize where pods are in a Pending state because of a "volume node affinity conflict". This may be because the pod needs to mount an already-created PV which is in an Availability Zone (AZ) without node capacity for the pod. This is a limitation of the EBS CSI.
Kubecost mitigates this problem by ensuring continuous cluster right-sizing creates at least one node per AZ by forcing NodeGroups to have a node count greater than or equal to the number of AZs of the EKS cluster. This will also prevent you from setting a minimum node count for your recommendation below the number of AZs for your cluster. If the EBS CSI continues to be problematic, you can consider switching your CSI to services like Elastic File System (EFS) or FSx for Lustre.
Using Cluster Autoscaler on AWS may result in a similar error. See more here.
The Spot Readiness Checklist investigates your Kubernetes workloads to attempt to identify those that are candidates to be schedulable on Spot (preemptible) nodes. Spot nodes are deeply-discounted nodes (up to 90% cheaper) from your cloud provider that do not come with an availability guarantee. They can disappear at any time, though most cloud providers guarantee some sort of alert and a small shutdown window, on the order of tens of seconds to minutes, before the node disappears.
Spot-ready workloads, therefore, are workloads that can tolerate some level of instability in the nodes they run on. Examples of Spot-ready workloads are usually state-free: many microservices, Spark/Hadoop nodes, etc.
The Spot Checklist performs a series of checks that use your own workload configuration to determine readiness:
Controller Type (Deployment, StatefulSet, etc.)
Replica count
Local storage
Controller Pod Disruption Budget
Rolling update strategy (Deployment-only)
Manual annotation overrides
You can access the Spot Checklist in the Kubecost UI by selecting Settings > Spot Instances > Spot Checklist.
The checklist is configured to investigate a fixed set of controllers, currently only Deployments and StatefulSets.
Deployments are considered Spot-ready because they are relatively stateless, intended to only ensure a certain number of pods are running at a given time.
StatefulSets should generally be considered not Spot-ready; as their name implies, they usually represent stateful workloads that require the guarantees that StatefulSets provide. Scheduling StatefulSet pods on Spot nodes can lead to data loss.
Workloads with a configured replica count of 1 are not considered Spot-ready because if the single replica is removed from the cluster due to a Spot node outage, the workload goes down. Replica counts greater than 1 signify a level of Spot-readiness because workloads that can be replicated tend to also support a variable number of replicas that can occur as a result of replicas disappearing due to Spot node outages.
Currently, workloads are only checked for the presence of an emptyDir volume. If one is present, the workload is assumed to be not Spot-ready.
More generally, the presence of a writable volume implies a lack of Spot readiness. If a pod is shut down non-gracefully while it is in the middle of a write, data integrity could be compromised. More robust volume checks are currently under consideration.
If you are considering this check while evaluating your workloads for Spot-readiness, do not immediately discount them because of this check failing. Workloads should always be evaluated on a case-by-case basis and it is possible that an unnecessarily strict PDB was configured.
Deployments have multiple options for update strategies and by default they are configured with a Rolling Update Strategy (RUS) with 25% max unavailable. If a deployment has an RUS configured, we do a similar min available (calculated from max unavailable in rounded-down integer form and replica count) calculation as with PDBs, but threshold it at 0.9 instead of 0.5. Doing so ensures that default-configured deployments with replica counts greater than 3 will pass the check.
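As an illustrative sketch of that calculation (not Kubecost's actual implementation), assuming maxUnavailable is expressed as a fraction of the replica count:

```python
import math

def rus_passes_spot_check(replicas: int, max_unavailable_fraction: float = 0.25,
                          threshold: float = 0.9) -> bool:
    """Approximate the rolling update strategy check described above.

    max_unavailable_fraction mirrors a Deployment's maxUnavailable percentage
    (Kubernetes defaults to 25%); a min-available ratio above the threshold
    implies an availability requirement too strict for Spot nodes.
    """
    max_unavailable = math.floor(replicas * max_unavailable_fraction)  # rounded down
    min_available = replicas - max_unavailable
    return (min_available / replicas) <= threshold

# Default-configured Deployments with more than 3 replicas pass the check:
print(rus_passes_spot_check(3))  # False: floor(0.75) = 0, ratio 3/3 = 1.0
print(rus_passes_spot_check(4))  # True:  floor(1.00) = 1, ratio 3/4 = 0.75
```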
We also support manually overriding the Spot readiness of a controller by annotating the controller itself, or the namespace it is running in, with spot.kubecost.com/spot-ready=true.
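For example, assuming a standard kubectl workflow, the override could be applied with kubectl annotate deployment <name> spot.kubecost.com/spot-ready=true, or with kubectl annotate namespace <name> spot.kubecost.com/spot-ready=true to cover everything running in that namespace.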
Kubecost marking a workload as Spot ready is not a guarantee. A domain expert should always carefully consider the workload before approving it to run on Spot nodes.
Most cloud providers support a mix of Spot and non-Spot nodes in a cluster, and most publish guides for setting this up.
Different cloud providers have different guarantees on shutdown windows and automatic draining of Spot nodes that are about to be removed. Consult your provider’s documentation before introducing Spot nodes to your cluster.
Additionally, it is generally wise to use smaller size Spot nodes. This minimizes the scheduling impact of individual Spot nodes being reclaimed by your cloud provider. Consider one Spot node of 20 CPU cores and 120 GB RAM against 5 Spot nodes of 4 CPU and 24 GB. In the first case, that single node being reclaimed could force tens of pods to be rescheduled, potentially causing scheduling problems, especially if capacity is low and spinning up a new node takes too long. In the second case, fewer pods are forced to be rescheduled if a reclaim event occurs, thus lowering the likelihood of scheduling problems.
The Savings page provides miscellaneous functionality to help you use resources more effectively and assess wasteful spending. In the center of the page, you will see your estimated monthly savings available. The savings value is calculated from all enabled Savings features, across the clusters and cluster profile designated via the dropdowns in the top right of the page.
The Savings page provides an array of panels containing different insights capable of lowering your Kubernetes and cloud spend.
The monthly savings values on this page are precomputed every hour for performance reasons, while per-cluster views of these numbers, and the numbers on each individual Savings insight page, are computed live. This may result in some discrepancies between estimated savings values of the Savings page and the pages of individual Savings insights.
You can archive individual Savings insights if you feel they are not helpful, or you cannot perform those functions within your organization or team. Archived Savings insights will not add to your estimated monthly savings available.
To temporarily archive a Savings insight, select the three horizontal dots icon inside its panel, then select Archive. You can unarchive an insight by selecting Unarchive.
You can also adjust your insight panels display by selecting View. From the View dropdown, you have the option to filter your insight panels by archived or unarchived insights. This allows you to effectively hide specific Savings insights after archiving them. Archived panels will appear grayed out, or disappear depending on your current filter.
By default, the Savings page and any displayed metrics (for example, estimated monthly savings available) will apply to all connected clusters. You can view metrics and insights for a single cluster by selecting it from the dropdown in the top right of the Savings page.
Functionality for most cloud insight features only exists when All Clusters is selected in the cluster dropdown. Individual clusters will usually only have access to Kubernetes insight features.
On the Savings page, as well as on certain individual Savings insights, you have the ability to designate a cluster profile. Savings recommendations such as right-sizing are calculated in part based on your current cluster profile:
Production: Expects stable cluster activity, will provide some extra space for potential spikes in activity.
Development: Cluster can tolerate a small amount of instability, will run cluster somewhat close to capacity.
High availability: Cluster should avoid instability at all costs, will size cluster with lots of extra space to account for unexpected spikes in activity.
Kubecost displays all local disks it detects with low usage, with recommendations for resizing and predicted cost savings.
You can access the Local Disks page by selecting Settings in the left navigation, then selecting Manage local disks.
You will see a table of all disks in your environment which fall under 20% current usage. For each disk, the table will display its connected cluster, its current utilization, resizing recommendation, and potential savings. Selecting an individual line item will take you offsite to a Grafana dashboard for more metrics relating to that disk.
In the Cluster dropdown, you can filter your table of disks to an individual cluster in your environment.
In the Profile dropdown, you can configure your desired overhead percentage, which refers to the percentage of extra usage you would like applied to each disk in relation to its current usage. The available overhead percentages are:
Development (25%)
Production (50%)
High Availability (100%)
The value of your overhead percentage will affect your resizing recommendation and estimated savings: a higher overhead percentage results in higher average resize recommendations and lower average estimated savings. The overhead percentage is applied to your current usage (in GiB) and added to that usage to obtain the value Kubecost rounds up to for its resizing recommendation. For example, for a disk with a usage of 12 GiB and Production (50%) selected from the Profile dropdown, 6 GiB (50% of 12) will be added to the usage, resulting in a resizing recommendation of 18 GiB.
Kubecost can only provide detection of underused disks with recommendations for resizing. It does not directly assist with resizing or turning down the disks.
The Abandoned Workloads page can detect workloads which have not sent or received a meaningful rate of traffic over a configurable duration.
You can access the Abandoned Workloads page by selecting Savings in the left navigation, then selecting Manage abandoned workloads.
The Abandoned Workloads page will display front and center an estimated savings amount per month based on a number of detected workloads considered abandoned, defined by two values:
Traffic threshold (bytes/sec): This slider determines what counts as a meaningful rate of traffic (bytes in and out per second) when detecting workload activity. Only workloads below the threshold are taken into account; therefore, as you increase the threshold, you should observe the total number of detected workloads increase.
Window (days): From the main dropdown, you will be able to select the duration of time to check for activity. Presets include 2 days, 7 days, and 30 days. As you increase the duration, you should observe the total detected workloads increase.
Beneath your total savings value and slider scale, you will see a dashboard containing all abandoned workloads. The number of total line items should be equal to the number of workloads displayed underneath your total savings value.
You can filter your workloads through four dropdowns; across clusters, namespaces, owners, and owner kinds.
Selecting an individual line item will expand the item, providing you with additional traffic data for that item.
Kubecost displays all nodes with low CPU and RAM utilization, indicating they may be candidates for turndown or resizing, and provides checks to ensure they can be drained safely.
You can access the Underutilized Nodes page by selecting Savings in the left navigation, then selecting Manage underutilized nodes.
To receive accurate recommendations, you should set the maximum utilization percentage for CPU/RAM for your cluster. This is so Kubecost can determine if your environment can perform successfully below the selected utilization once a node has been drained. This is visualized by the Maximum CPU/RAM Request Utilization slider bar. In the Profile dropdown, you can select three preset values, or a custom option:
Development: Sets the utilization to 80%.
Production: Sets the utilization to 65%.
High Availability: Sets the utilization to 50%.
Custom: Allows you to manually move the slider.
Kubecost provides recommendations by performing a Node Check and a Pod Check to determine if a node can be drained without creating problems for your environment. For example, if draining the node would put the cluster above the utilization request threshold, the Node Check will fail. Only a node that passes both Checks will be recommended for safe drainage. For nodes that fail at least one Check, selecting the node will provide a window of potential pod issues.
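As a hedged sketch of the Node Check as described here (the inputs and their exact semantics are assumptions, not Kubecost's documented algorithm), the drain test amounts to:

```python
def node_check_passes(cluster_requests: float, cluster_capacity: float,
                      node_capacity: float, max_utilization: float) -> bool:
    """Would draining this node keep request utilization at or below the threshold?

    cluster_requests: summed resource requests across the cluster (CPU or RAM)
    cluster_capacity: summed allocatable resources across all nodes
    node_capacity:    allocatable resources of the node being considered for drain
    max_utilization:  the Maximum CPU/RAM Request Utilization slider value, e.g. 0.65
    """
    remaining_capacity = cluster_capacity - node_capacity
    if remaining_capacity <= 0:
        return False
    return cluster_requests / remaining_capacity <= max_utilization

# Example: 40 cores requested on 64 allocatable cores; draining an 8-core node
# leaves 40 / 56 (about 0.71), which fails a 65% Production threshold.
print(node_check_passes(40, 64, 8, 0.65))  # False
```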
Kubecost does not directly assist in turning nodes down.
Spot Commander is a Savings feature which identifies workloads for which it is viable and cost-effective to switch to Spot nodes, resizing the cluster in the process. Spot-readiness is determined through a checklist which analyzes the workload and assesses the minimal cost required. It also generates CLI commands to help you implement the recommendation.
The recommended Spot cluster configuration uses all of the data available to Kubecost to compute a "resizing" of your cluster's nodes into a set of on-demand (standard) nodes O and a set of spot (preemptible) nodes S. This configuration is produced by applying a scheduling heuristic to the usage data for all of your workloads. This recommendation offers a more accurate picture of the savings possible from implementing spot nodes because node costs make up the cost of a cluster; once O and S have been determined, the savings are the current cost of your nodes minus the estimated cost of O and S.
The recommended configuration assumes that all workloads considered spot-ready by the checklist will be schedulable on spot nodes, and that workloads considered not spot-ready will only be schedulable on on-demand nodes. Kubernetes has taints and tolerations for achieving this behavior. Cloud providers usually have guides for using spot nodes with taints and tolerations in your managed cluster.
Different cloud providers have different guarantees on shutdown windows and automatic draining of spot nodes that are about to be removed. Consult your provider’s documentation before introducing spot nodes to your cluster.
Kubecost marking a workload as spot ready is not a guarantee. A domain expert should always carefully consider the workload before approving it to run on spot nodes.
Determining O and S is achieved by first partitioning all workloads on the cluster (based on the results of the Checklist) into two sets: spot-ready workloads R and non-spot-ready workloads N. Kubecost consults its maximum resource usage data (in each Allocation, Kubecost records the maximum CPU and RAM used in the window) and determines the following for each of R and N:
The maximum CPU used by any workload
The maximum RAM used by any workload
The total CPU (sum of all individual maximums) required by non-DaemonSet workloads
The total RAM (sum of all individual maximums) required by non-DaemonSet workloads
The total CPU (sum of all individual maximums) required by DaemonSet workloads
The total RAM (sum of all individual maximums) required by DaemonSet workloads
Kubecost uses this data with a configurable target utilization (e.g., 90%) for R and N to create O and S:
Every node in O and S must reserve 100% - target utilization (e.g., 100% - 90% = 10%) of its CPU and RAM
Every node in O must be able to schedule the DaemonSet requirements in R and N
Every node in S must be able to schedule the DaemonSet requirements in R
With the remaining resources:
The largest CPU requirement in N must be schedulable on a node in O
The largest RAM requirement in N must be schedulable on a node in O
The largest CPU requirement in R must be schedulable on a node in S
The largest RAM requirement in R must be schedulable on a node in S
The total CPU requirements of N must be satisfiable by the total CPU available in O
The total RAM requirements of N must be satisfiable by the total RAM available in O
The total CPU requirements of R must be satisfiable by the total CPU available in S
The total RAM requirements of R must be satisfiable by the total RAM available in S
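A rough sketch of these constraints (illustrative only; the names, units, and structure are assumptions rather than Kubecost's implementation) might check a candidate node pool like this:

```python
from dataclasses import dataclass

@dataclass
class NodeType:
    cpu: float  # allocatable CPU cores per node
    ram: float  # allocatable RAM (GiB) per node

@dataclass
class WorkloadSet:
    max_pod_cpu: float    # largest per-workload maximum CPU
    max_pod_ram: float    # largest per-workload maximum RAM
    total_cpu: float      # sum of per-workload maximum CPU (non-DaemonSet)
    total_ram: float      # sum of per-workload maximum RAM (non-DaemonSet)
    daemonset_cpu: float  # DaemonSet CPU required on every node
    daemonset_ram: float  # DaemonSet RAM required on every node

def pool_is_feasible(node: NodeType, count: int, workloads: WorkloadSet,
                     extra_daemonset_cpu: float = 0.0,
                     extra_daemonset_ram: float = 0.0,
                     target_utilization: float = 0.9) -> bool:
    """Check the constraints listed above for one side of the split.

    For the on-demand pool O, pass N's workloads plus R's DaemonSet
    requirements as extra_daemonset_*; for the spot pool S, pass R's
    workloads with no extras.
    """
    # Each node reserves (100% - target utilization) of its resources,
    # then must carry the DaemonSet requirements.
    cpu_per_node = node.cpu * target_utilization - workloads.daemonset_cpu - extra_daemonset_cpu
    ram_per_node = node.ram * target_utilization - workloads.daemonset_ram - extra_daemonset_ram
    if cpu_per_node <= 0 or ram_per_node <= 0:
        return False
    # The largest single workload must fit on one node.
    if workloads.max_pod_cpu > cpu_per_node or workloads.max_pod_ram > ram_per_node:
        return False
    # The pool as a whole must cover the summed requirements.
    return (workloads.total_cpu <= cpu_per_node * count and
            workloads.total_ram <= ram_per_node * count)
```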
It is recommended to set the target utilization at or below 95% to allow resources for the operating system and the kubelet.
The configuration currently only recommends one node type for O and one node type for S, but we are considering adding multiple node type support. If your cluster requires specific node types for certain workloads, consider using Kubecost's recommendation as a launching point for a cluster configuration that supports your specific needs.
Kubecost is able to provide recommendations for resizing your PVs by comparing their average usage to their maximum capacity, and can recommend sizing down to smaller storage sizes.
To access the Persistent Volume Right-Sizing Recommendations page, select Savings from the left navigation, then select Right-size persistent volumes.
Kubecost will display a table containing all PVs in your environment. Table columns include the PV name and its corresponding cluster, and metrics pertaining to usage and savings. The estimated savings per month per table item is calculated by subtracting your recommended cost from the current cost.
You can filter your table of PVs using the Cluster dropdown to view PVs in an individual cluster, or across all connected clusters.
You can also adjust Kubecost's average recommended capacity size using the Profile dropdown, which establishes the minimum excess capacity required for every PV, based on its usage data from the past six hours. The percentage value associated with each Profile is the minimum unused capacity required per PV, which is then added to the max usage to obtain Kubecost's recommendation. Recommended capacity is calculated as (max usage + (max usage * overhead percentage)) in GiB. This is then converted to GB and rounded to the nearest tenth when displayed in the UI (a capacity of 1 GiB will be converted to 1.1 GB). Max usage is also converted in this way from GiB to GB. The smallest capacity Kubecost will recommend per PV is 1.1 GB. From there, the recommended capacity increases in intervals of 1 GiB. The higher the minimum excess capacity required, the higher the average recommended capacity, and therefore the lower the average savings.
For example, for a PV with a max usage of 2 GiB, and a selected Production Profile (which requires 50% excess capacity), the overhead will be calculated as 2 * .5, then added to the max usage, resulting in a minimum recommended capacity of 3 GiB. This will then be converted to approximately 3.2 GB for the final recommendation.
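A minimal sketch of that calculation, assuming the round-up to whole GiB and the GiB-to-GB conversion described above (not Kubecost's actual code):

```python
import math

GIB_IN_GB = 1.073741824  # 1 GiB = 1.073741824 GB

def recommended_pv_capacity_gb(max_usage_gib: float, overhead: float) -> float:
    """Recommended capacity in GB for a PV, per the description above.

    overhead is the profile's minimum unused capacity, e.g. 0.5 for Production.
    """
    recommended_gib = max(math.ceil(max_usage_gib * (1 + overhead)), 1)  # whole GiB, at least 1
    return round(recommended_gib * GIB_IN_GB, 1)  # displayed in GB, nearest tenth

# Example from the text: 2 GiB max usage with the Production profile (50%)
# yields 3 GiB, shown as roughly 3.2 GB in the UI.
print(recommended_pv_capacity_gb(2, 0.5))  # 3.2
```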
Kubecost does not directly assist with resizing your PVs.
It is possible to configure a Pod Disruption Budget (PDB) for controllers that causes the scheduler to (where possible) adhere to certain availability requirements for the controller. If a controller has a PDB set up, we read it, compute its minimum available replicas, and use a simple threshold on the ratio min available / replicas to determine if the PDB indicates readiness. We chose to interpret a ratio of > 0.5 to indicate a lack of readiness because it implies a reasonably high availability requirement.
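A minimal sketch of that threshold, assuming min_available has already been resolved from the PDB spec to a replica count:

```python
def pdb_indicates_spot_ready(min_available: int, replicas: int) -> bool:
    """Ratio check described above: a ratio > 0.5 implies a lack of Spot readiness."""
    return (min_available / replicas) <= 0.5

print(pdb_indicates_spot_ready(1, 3))  # True:  1/3 is a modest availability requirement
print(pdb_indicates_spot_ready(2, 3))  # False: 2/3 implies high availability
```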
The Checklist is now deployed alongside a recommended Spot cluster configuration (described above) which automatically suggests a set of Spot and on-demand nodes to use in your cluster based on the Checklist. If you do not want to use that, read the following for some important information:
It is a good idea to use taints and tolerations to schedule only Spot-ready workloads on Spot nodes.
This feature is in beta. Please read the documentation carefully.
Kubecost can automatically implement its recommendations for container resource requests if you have the Cluster Controller component enabled. Using container request right-sizing (RRS) allows you to instantly optimize resource allocation across your entire cluster. You can easily eliminate resource over-allocation in your cluster, which paves the way for vast savings via cluster right-sizing and other optimizations.
There are no restrictions to receive container RRS recommendations.
To adopt these recommendations, you must enable the Cluster Controller on that cluster. In order for Kubecost to apply a recommendation, it needs write access to your cluster, which is enabled with the Cluster Controller.
Select Savings in the left navigation, then select Right-size your container requests. The Request right-sizing recommendations page opens.
Select Customize to modify the right-sizing settings. Your customization settings will tell Kubecost how to calculate its recommendations, so make sure it properly represents your environment and activity:
Window: Duration of deployment activity Kubecost should observe
Profile: Select from Development, Production, or High Availability, which come with preconfigured values for the CPU/RAM target utilization fields. Selecting Custom will allow you to manually configure these fields.
CPU/RAM recommendation algorithm: Always configured to Max.
CPU/RAM target utilization: Refers to the percentage of used resources over total resources available.
Add Filters: Optional configuration to limit the deployments which will have right-sizing recommendations applied. This will provide greater flexibility in optimizing your environment. Ensure you select the plus icon next to the filter value text box to add the filter. Multiple filters can be added.
When finished, select Save.
Your configured recommendations can also be downloaded as a CSV file by selecting the three dots button > Download CSV.
There are several ways to adopt Kubecost's container RRS recommendations, depending on how frequently you wish to utilize this feature for your container requests.
To apply RRS as you configured in one instance, select Resize Requests Now > Yes, apply the recommendation.
Also referred to as continuous container RRS, autoscaling allows you to configure a schedule to routinely apply RRS to your deployments. You can configure this by selecting Enable Autoscaling, selecting your Start Date and schedule, then confirming with Apply.
Both one-click and continuous container RRS can be configured via Savings Actions. On the Actions page, select Create Action, then select either:
Request Sizing: Will open the Container RRS page with the schedule window open to configure and apply.
Guided Sizing: Will open the Guided Sizing page and allow you to apply both one-click RRS and continuous cluster sizing.
Kubecost will display volumes unused by any pod. You can consider these volumes for deletion, or move them to a cheaper storage tier.
You can access the Unclaimed Volumes page by selecting Savings in the left navigation, then selecting Manage unclaimed volumes.
Volumes will be displayed in a table, and can be sorted By Owner or By Namespace. You can view owner, storage class, and size for your volumes.
Using the Cluster dropdown, you can filter volumes connected to an individual cluster in your environment.
Kubecost displays all disks and IP addresses that are not utilized by any cluster. These may still incur charges, and so you should consider these orphaned resources for deletion.
You can access the Orphaned Resources page by selecting Savings in the left navigation, then selecting Manage orphaned resources.
Disks and IP addresses (collectively referred to as resources) will be displayed in a single table. Selecting an individual line item will expand its tab and provide more metrics about the resource, including cost per month, size (disks only), region, and a description of the resource.
You can filter your table of resources using two dropdowns:
The Resource dropdown will allow you to filter by resource type (Disk or IP Address).
The Region dropdown will filter by the region associated with the resource. Resources with the region “Global” cannot be filtered, and will only display when All has been selected.
Above your table will be an estimated monthly savings value. This value is the sum of all displayed resources’ savings. As you filter your table of resources, this value will naturally adjust.
For cross-functional convenience, you can copy the name of any resource by selecting the copy icon next to it.