NVIDIA GPU Monitoring Configurations
Monitoring GPU utilization
Kubecost supports monitoring of NVIDIA GPU utilization starting with the Volta architecture (2017). To understand GPU utilization, Kubecost depends on metrics made available by NVIDIA DCGM Exporter. Kubecost searches for GPU metrics by default, but because DCGM Exporter is the provider of those metrics, it is a required component for GPU monitoring and must be installed if it is not already present. In many cases, DCGM Exporter may already be installed in your cluster, for example if you currently monitor NVIDIA GPUs with other software; see the Pre-Installed DCGM Exporter section for important considerations. If not, follow the instructions in the Install DCGM Exporter section to install and configure DCGM Exporter on each of your GPU-enabled clusters.
Managed DCGM offerings such as Google Cloud's managed DCGM are currently not supported. DCGM Exporter must be self-installed and managed in target clusters.
Pre-Installed DCGM Exporter
In cases where you have already installed DCGM Exporter in your cluster, Kubecost can leverage this installation as the source of its GPU metrics. However, in order for Kubecost's bundled Prometheus to find this installation, the following requirements must be met.
DCGM Exporter has been installed with a matching Kubernetes Service. Installations consisting of only the DaemonSet are not supported. Helm-based installations of either the DCGM Exporter or the GPU operator charts include a Service by default.
The Service must have active Endpoints which list the available DCGM Exporter pods.
The Service and Endpoints must have at least one of a few possible labels assigned in order for Prometheus to locate them. The currently supported label keys are shown below. Assigning a compatible label solely to pods or the parent DaemonSet is not supported.
app
app.kubernetes.io/component
app.kubernetes.io/name
The value of one of these labels must contain the string dcgm-exporter.
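For example, a Service labeled to satisfy these requirements might look like the following sketch (the name, namespace, and port are illustrative; Kubernetes propagates a Service's labels to its Endpoints automatically):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dcgm-exporter
  labels:
    # One of the supported keys; its value contains "dcgm-exporter"
    app.kubernetes.io/name: dcgm-exporter
spec:
  selector:
    app.kubernetes.io/name: dcgm-exporter
  ports:
    - name: metrics
      port: 9400
```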
Install DCGM Exporter
DCGM Exporter is an implementation of NVIDIA Data Center GPU Manager (DCGM) for Kubernetes which exports metrics in Prometheus format. It runs the DCGM software under Kubernetes on nodes containing NVIDIA devices and handles making DCGM metrics available to external tools such as Kubecost.
DCGM Exporter runs as a DaemonSet, and its pods are intended to run only on nodes with one or more NVIDIA GPUs. Because Kubernetes clusters commonly contain a mixture of nodes with and without GPUs, you use one or more labels to attract the DCGM Exporter pods only to nodes containing NVIDIA GPUs. If DCGM Exporter pods run on nodes without NVIDIA GPUs, they enter a CrashLoopBackOff state. The labels available may vary by Kubernetes cloud provider, platform, or more. There are multiple approaches to selecting the appropriate label(s) used to attract the DCGM Exporter pods to applicable nodes.
Use a pre-provided label by your cloud provider (if applicable, varies by cloud provider).
Use a custom label you define on your GPU nodes. For example, by defining a custom label at the node pool level in your cloud provider.
Use a label assigned automatically by Kubernetes Node Feature Discovery (NFD).
The first two options require no additional cluster components to be installed, while the third requires the Kubernetes Node Feature Discovery (NFD) component. Kubecost recommends using an existing label assigned to your GPU nodes (provided by the cloud provider or yourself) if possible, as this is the simpler installation path.
In addition to the label requirement, there may be additional values required for a successful installation of DCGM Exporter which may vary by cloud provider and worker node operating system. This guide includes the following installation instructions.
General Quickstart: Start here if not on GKE.
GKE: For GKE users only.
Node Feature Discovery: For any Kubernetes environment where preexisting GPU node labels are not an option.
DCGM Exporter may also be deployed via the NVIDIA GPU operator, however the operator is a more complex component with specialized requirements and, as such, is outside the current scope of this documentation.
These instructions have been verified on DCGM Exporter version 3.3.8-3.6.0, but prior v3 releases should work as well.
General Quickstart
DCGM Exporter can be installed on most Kubernetes clusters with only a few values, provided that a preexisting label can be used to identify nodes with GPUs. This label may be provided by a cloud vendor or by yourself. Follow these steps to get started with DCGM Exporter.
In the below values, you provide your own label key and value in place of mylabel and myvalue. This label combination should be unique to NVIDIA GPU nodes.
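A minimal sketch of such values, assuming the chart's standard affinity value (confirm key names against your chart version's values.yaml):

```yaml
# values.yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            # Replace mylabel/myvalue with a label unique to your GPU nodes.
            - key: mylabel
              operator: In
              values:
                - myvalue
```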
Install DCGM Exporter using the values defined.
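For example, using the chart from NVIDIA's Helm repository (the release name and namespace here are illustrative):

```sh
helm repo add gpu-helm-charts https://nvidia.github.io/dcgm-exporter/helm-charts
helm repo update
helm install dcgm-exporter gpu-helm-charts/dcgm-exporter \
  --namespace dcgm-exporter --create-namespace \
  -f values.yaml
```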
Ensure the DCGM Exporter pods are in a running state and only on the nodes with NVIDIA GPUs.
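For example, assuming the namespace used above (the NODE column of the wide output shows where each pod landed):

```sh
kubectl get pods -n dcgm-exporter -o wide
```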
Finally, perform a validation step to ensure that metrics are working as expected. See the Validation section for details.
GKE
To install DCGM Exporter on a GKE cluster where the worker nodes use the default Container-Optimized OS (COS), use the following values. The GKE-provided label cloud.google.com/gke-accelerator is used to attract DCGM Exporter pods to nodes with NVIDIA GPUs.
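A sketch of such values follows. The affinity block attracts pods to GPU nodes; the host volume exposes the NVIDIA driver installation directory on COS nodes to the exporter container. Key names such as extraHostVolumes and extraVolMounts are assumptions based on the chart's values.yaml at the verified versions and should be confirmed against your chart version:

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: cloud.google.com/gke-accelerator
              operator: Exists
tolerations:
  # GKE GPU nodes are tainted with nvidia.com/gpu by default.
  - key: nvidia.com/gpu
    operator: Exists
    effect: NoSchedule
extraHostVolumes:
  - name: nvidia-install-dir-host
    hostPath: /home/kubernetes/bin/nvidia
extraVolMounts:
  - name: nvidia-install-dir-host
    mountPath: /usr/local/nvidia
    readOnly: true
```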
These values have been verified on GKE 1.27 with DCGM Exporter 3.3.8-3.6.0. If you are installing a different version, check the values structure of that version's chart and adjust accordingly.
Install DCGM Exporter from the available Helm chart while supplying the values defined above.
Ensure the DCGM Exporter pods are in a running state and only on the nodes with NVIDIA GPUs.
If necessary, create a ResourceQuota allowing DCGM Exporter pods to be scheduled.
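This applies when the DCGM Exporter pods request a system-critical PriorityClass, which GKE only permits in namespaces carrying a matching quota. A sketch, with an illustrative name and namespace:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dcgm-exporter-quota
  namespace: dcgm-exporter   # namespace where DCGM Exporter runs
spec:
  hard:
    pods: 100
  scopeSelector:
    matchExpressions:
      - operator: In
        scopeName: PriorityClass
        values:
          - system-node-critical
          - system-cluster-critical
```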
For additional information on installing DCGM Exporter in Google Cloud, see here.
Finally, perform a validation step to ensure that metrics are working as expected. See the Validation section for details.
Node Feature Discovery
These instructions are useful for installing DCGM Exporter on any Kubernetes cluster, whether run by a cloud provider or self-managed on-premises. They leverage Kubernetes Node Feature Discovery (NFD), which requires the installation of an additional infrastructure component. Following these steps is recommended when you are not on GKE or do not have a preexisting label which identifies NVIDIA GPU nodes.
When following these instructions on a cloud provider, there may be additional values or steps required depending on the component installed.
Node Feature Discovery (NFD) is a Kubernetes utility which automatically discovers information and capabilities about your worker nodes and saves this information in the form of labels applied to the node. For example, NFD will discover the CPU details, OS, and PCI cards installed in a worker node on which the NFD pod runs. These labels can be useful in a number of scenarios beyond installation of DCGM Exporter. An abridged example of these labels appears later in this section.
When run on a node with an NVIDIA GPU, NFD will apply the label feature.node.kubernetes.io/pci-10de.present="true". This label can then be used to attract DCGM Exporter pods to NVIDIA GPU nodes automatically.
10DE is the PCI vendor ID assigned to NVIDIA Corporation.
NFD may be installed either standalone or as a component of the NVIDIA device plugin for Kubernetes. When installing NFD via the device plugin, you can enable the GPU Feature Discovery (GFD) component at the same time. GFD uses the labels written by NFD to locate NVIDIA GPU nodes and writes NVIDIA-specific information about the discovered GPUs to the node.
Cloud providers often install the device plugin on GPU nodes automatically. Therefore, in order to deploy GFD and NFD you may be required to upgrade or uninstall/reinstall the device plugin, which is a more advanced procedure. See instructions from your cloud provider first and refer to the NVIDIA device plugin for Kubernetes repository for further details.
To install NFD as a standalone component, follow the deployment guide here. A quick start command is also shown below. In some cases, you may have taints applied to GPU nodes which must be tolerated by the NFD DaemonSet. It is recommended to use the Helm installation guide to define tolerations if so.
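For example, the kustomize-based quick start from the NFD documentation (the release tag here is illustrative; substitute a current NFD version):

```sh
kubectl apply -k "https://github.com/kubernetes-sigs/node-feature-discovery/deployment/overlays/default?ref=v0.16.4"
```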
Once NFD is installed, ensure one pod is running on your node(s) with NVIDIA GPUs.
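Assuming the default quick-start deployment, which creates the node-feature-discovery namespace:

```sh
kubectl get pods -n node-feature-discovery -o wide
```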
After a few moments, check the labels of one such node to ensure the feature.node.kubernetes.io/pci-10de.present="true" label has been applied.
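One way to check is to list only the nodes carrying the label:

```sh
kubectl get nodes -l feature.node.kubernetes.io/pci-10de.present=true
```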
An abridged output of the labels written to an EKS node is shown below.
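The following listing is illustrative; the exact labels and values vary by instance type and OS:

```
feature.node.kubernetes.io/cpu-model.vendor_id=GenuineIntel
feature.node.kubernetes.io/kernel-version.major=5
feature.node.kubernetes.io/pci-10de.present=true
feature.node.kubernetes.io/system-os_release.ID=amzn
```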
With NFD having successfully discovered NVIDIA PCI devices and assigned the feature.node.kubernetes.io/pci-10de.present="true" label, install DCGM Exporter using this label to attract pods to GPU nodes. When following this process on GKE, additional values may be required to successfully run DCGM Exporter. See the GKE section for more details.
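A minimal sketch of such values, again assuming the chart's standard affinity value:

```yaml
# values.yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: feature.node.kubernetes.io/pci-10de.present
              operator: In
              values:
                - "true"
```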
Install DCGM Exporter using the values defined.
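For example, assuming the chart repository and naming from the General Quickstart:

```sh
helm install dcgm-exporter gpu-helm-charts/dcgm-exporter \
  --namespace dcgm-exporter --create-namespace \
  -f values.yaml
```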
Ensure the DCGM Exporter pods are in a running state and only on the nodes with NVIDIA GPUs.
Finally, perform a validation step to ensure that metrics are working as expected. See the Validation section for details.
Customizing Metrics
DCGM Exporter presents a number of useful metrics by default. However, many more metrics are available from DCGM which are not enabled by default. Kubecost may collect additional metrics about NVIDIA GPUs if they are emitted by DCGM Exporter, and configuring DCGM Exporter to emit additional metrics requires modifying its installation. Follow the procedure below to do so. Be aware that although additional DCGM Exporter metrics will be collected automatically by Kubecost's bundled Prometheus instance, their emission does not imply that Kubecost will make use of them. This procedure should only be followed at the explicit advice of Kubecost support.
This procedure assumes you have installed DCGM Exporter according to one of the processes outlined in the Install DCGM Exporter section. It also assumes that DCGM Exporter version 3.3.8-3.6.0 or later has been installed via Helm, which has direct support for specifying custom metrics in the Helm values.
Supply Custom Metrics in Helm Values
Find the Helm values file used to deploy DCGM Exporter and add the customMetrics key along with the full set of metrics you wish DCGM Exporter to emit. The values you supply must be the complete and final list of metrics to emit; the list is not additive. An example is shown below in which DCGM Exporter will be requested to emit only two metrics in total.
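A sketch of such values. Each line follows the field-name, Prometheus-type, help-text format of DCGM Exporter's counters file; the two metrics chosen here are illustrative:

```yaml
customMetrics: |
  DCGM_FI_DEV_POWER_USAGE, gauge, Power draw (in W).
  DCGM_FI_PROF_GR_ENGINE_ACTIVE, gauge, Ratio of time the graphics engine is active.
```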
Perform an upgrade of the Helm release using your modified values so the custom metrics are applied in the form of a ConfigMap mounted by the DCGM Exporter DaemonSet.
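For example, assuming the release name, namespace, and values file from the earlier installation steps:

```sh
helm upgrade dcgm-exporter gpu-helm-charts/dcgm-exporter \
  --namespace dcgm-exporter \
  -f values.yaml
```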
After upgrading, when the DCGM Exporter pods return to service, they should emit the list of custom metrics provided in the new values.
For more information on DCGM Exporter and its available Helm values and settings, see the official GitHub repository here.
Validation
To validate your DCGM Exporter configuration, port-forward into the DCGM Exporter service and first ensure that metrics are being exposed.
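A sketch, assuming the Service is named dcgm-exporter in the dcgm-exporter namespace and listens on the default port 9400:

```sh
kubectl port-forward -n dcgm-exporter service/dcgm-exporter 9400:9400
```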
Use cURL to perform a GET request against the service and verify that multiple metrics and their values are shown.
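For example:

```sh
curl -s http://localhost:9400/metrics
```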
Output similar to the following should be shown.
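This sample is invented for illustration; actual metric names, label values, and readings vary by GPU model and environment:

```
# HELP DCGM_FI_DEV_SM_CLOCK SM clock frequency (in MHz).
# TYPE DCGM_FI_DEV_SM_CLOCK gauge
DCGM_FI_DEV_SM_CLOCK{gpu="0",UUID="GPU-604ac76c",device="nvidia0",modelName="Tesla T4"} 585
DCGM_FI_DEV_POWER_USAGE{gpu="0",UUID="GPU-604ac76c",device="nvidia0",modelName="Tesla T4"} 27.87
```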
If Kubecost has already been installed, next check the bundled Prometheus instance to ensure that the metrics from DCGM Exporter have been collected and are visible. This command exposes the Prometheus web interface on local port 8080.
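A sketch, assuming Kubecost was installed with the default release name kubecost into the kubecost namespace:

```sh
kubectl port-forward -n kubecost service/kubecost-prometheus-server 8080:80
```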
Open the Prometheus web interface in your browser by navigating to http://localhost:8080. In the search box, begin typing the prefix of a metric, for example DCGM_FI_DEV_POWER_USAGE. Click Execute to run the query and verify that data is present.
Additionally, check the DCGM_FI_PROF_GR_ENGINE_ACTIVE metric. This is the metric Kubecost currently uses to determine GPU utilization. GPU efficiency features in the UI are only enabled when there are non-zero values for this metric.
Shared GPU Support
Kubecost supports NVIDIA GPU sharing using either the CUDA time-slicing or Multi-Process Service (MPS) methods. MIG is currently unsupported but is being evaluated for a future release. When employing either time-slicing or MPS, you must use the renameByDefault=true option in the NVIDIA device plugin's configuration stanza. This parameter instructs the device plugin to advertise the resource nvidia.com/gpu.shared on nodes where GPU sharing is enabled. Without this option, the device plugin instead advertises nvidia.com/gpu, which means Kubecost is unable to disambiguate an "exclusive" GPU access request from a shared GPU access request. As a result, Kubecost's cost information will be inaccurate.
Prior to enabling GPU sharing in your cluster, view the Limitations section to determine if this is right for you.
The following is an example of a time-slicing configuration which sets the renameByDefault parameter.
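This sketch is delivered to the device plugin as a ConfigMap per the NVIDIA device plugin documentation; the replica count of 4 is illustrative and matches the example below:

```yaml
version: v1
sharing:
  timeSlicing:
    renameByDefault: true
    resources:
      - name: nvidia.com/gpu
        # Each physical GPU is advertised as 4 nvidia.com/gpu.shared devices.
        replicas: 4
```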
With this configuration saved and applied to nodes, they will begin to advertise the nvidia.com/gpu.shared device with a quantity equal to the replica count, defined in the configuration, multiplied by the number of physical GPUs inside the node. For example, a node with four (4) physical NVIDIA GPUs which uses this configuration will advertise sixteen (16) shared GPU devices.
Limitations
There are limitations to be aware of when using NVIDIA GPU sharing with either time-slicing or MPS. Because NVIDIA does not support providing utilization metrics via DCGM Exporter for containers using shared GPUs, Kubecost will display a GPU cost of zero for these workloads. However, the GPU Savings Optimization card (Kubecost Enterprise) can indicate in the utilization table which containers are configured for GPU sharing, providing some visibility.