Nvidia Run:ai
OpenMeter can integrate with Nvidia's Run:ai to collect allocated and utilized resources for your AI/ML workloads, including GPUs, CPUs, and memory. This is useful for companies that run GPU workloads on Run:ai and want to bill and invoice their customers based on the resources those workloads allocate and consume.
How it works
You can install the OpenMeter Collector as a Kubernetes pod in your Run:ai cluster to collect metrics from your Run:ai platform automatically. The collector periodically scrapes metrics from the Run:ai API and emits them as CloudEvents to your OpenMeter instance. This allows you to track usage and billing for your Run:ai workloads.
Once the usage data is ingested into OpenMeter, you can use it to set up prices and billing for your customers based on their usage.
Example
Let's say you want to charge your customers $0.2 per GPU minute and $0.05 per CPU minute. The OpenMeter Collector will emit the following events every 30 seconds from your Run:ai workloads to OpenMeter Cloud:
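The event below is a sketch: the CloudEvents envelope follows the standard OpenMeter ingest format, but the event type, subject, and data field names (such as `gpu_minutes` and `cpu_minutes`) are assumptions that depend on how you configure the collector mapping.

```json
{
  "specversion": "1.0",
  "type": "runai.workload.usage",
  "id": "8f7e2c7a-1b9f-4c34-9f59-3a1d2e6c1a42",
  "time": "2024-01-01T00:00:30Z",
  "source": "openmeter-collector",
  "subject": "customer-1",
  "data": {
    "workload_name": "llm-inference",
    "gpu_minutes": 0.5,
    "cpu_minutes": 2
  }
}
```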
Note how the collector normalizes the collected metrics to a minute (configurable), making it easy to set per-second, per-minute, or per-hour pricing, similar to how AWS EC2 pricing works.
See the OpenMeter Billing docs to set up prices and billing for your customers.
Run:ai Metrics
The OpenMeter Collector supports the following Run:ai metrics:
Pod Metrics
Metric Name | Description |
---|---|
GPU_UTILIZATION_PER_GPU | GPU utilization percentage per individual GPU |
GPU_UTILIZATION | Overall GPU utilization percentage for the pod |
GPU_MEMORY_USAGE_BYTES_PER_GPU | GPU memory usage in bytes per individual GPU |
GPU_MEMORY_USAGE_BYTES | Total GPU memory usage in bytes for the pod |
CPU_USAGE_CORES | Number of CPU cores currently being used |
CPU_MEMORY_USAGE_BYTES | Amount of CPU memory currently being used in bytes |
GPU_GRAPHICS_ENGINE_ACTIVITY_PER_GPU | Graphics engine utilization percentage per GPU |
GPU_SM_ACTIVITY_PER_GPU | Streaming Multiprocessor (SM) activity percentage per GPU |
GPU_SM_OCCUPANCY_PER_GPU | SM occupancy percentage per GPU |
GPU_TENSOR_ACTIVITY_PER_GPU | Tensor core utilization percentage per GPU |
GPU_FP64_ENGINE_ACTIVITY_PER_GPU | FP64 (double precision) engine activity percentage per GPU |
GPU_FP32_ENGINE_ACTIVITY_PER_GPU | FP32 (single precision) engine activity percentage per GPU |
GPU_FP16_ENGINE_ACTIVITY_PER_GPU | FP16 (half precision) engine activity percentage per GPU |
GPU_MEMORY_BANDWIDTH_UTILIZATION_PER_GPU | Memory bandwidth utilization percentage per GPU |
GPU_NVLINK_TRANSMITTED_BANDWIDTH_PER_GPU | NVLink transmitted bandwidth per GPU |
GPU_NVLINK_RECEIVED_BANDWIDTH_PER_GPU | NVLink received bandwidth per GPU |
GPU_PCIE_TRANSMITTED_BANDWIDTH_PER_GPU | PCIe transmitted bandwidth per GPU |
GPU_PCIE_RECEIVED_BANDWIDTH_PER_GPU | PCIe received bandwidth per GPU |
GPU_SWAP_MEMORY_BYTES_PER_GPU | Amount of GPU memory swapped to system memory per GPU |
Workload Metrics
Metric Name | Description |
---|---|
GPU_UTILIZATION | Overall GPU utilization percentage across all GPUs in the workload |
GPU_MEMORY_USAGE_BYTES | Total GPU memory usage in bytes across all GPUs |
GPU_MEMORY_REQUEST_BYTES | Requested GPU memory in bytes for the workload |
CPU_USAGE_CORES | Number of CPU cores currently being used |
CPU_REQUEST_CORES | Number of CPU cores requested for the workload |
CPU_LIMIT_CORES | Maximum number of CPU cores allowed for the workload |
CPU_MEMORY_USAGE_BYTES | Amount of CPU memory currently being used in bytes |
CPU_MEMORY_REQUEST_BYTES | Requested CPU memory in bytes for the workload |
CPU_MEMORY_LIMIT_BYTES | Maximum CPU memory allowed in bytes for the workload |
POD_COUNT | Total number of pods in the workload |
RUNNING_POD_COUNT | Number of currently running pods in the workload |
GPU_ALLOCATION | Number of GPUs allocated to the workload |
Getting Started
First, create a new YAML file for the collector configuration. You will have to use the `run_ai` Redpanda Connect input:
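A minimal input section might look like the following; the URL, credentials, and metric list are placeholders, and the full set of options is described in the table below:

```yaml
input:
  run_ai:
    # Base URL of your Run:ai instance (placeholder)
    url: "https://myorg.run.ai"
    # Run:ai application credentials, injected via environment variables
    app_id: "${RUNAI_APP_ID}"
    app_secret: "${RUNAI_APP_SECRET}"
    # Collect workload-level metrics (the default); "pod" is also supported
    resource_type: "workload"
    # Example subset of metrics; omit to collect all available metrics
    metrics:
      - GPU_ALLOCATION
      - CPU_USAGE_CORES
      - CPU_MEMORY_USAGE_BYTES
    # Scrape every 30 seconds (the default)
    schedule: "*/30 * * * * *"
```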
The above section will tell Redpanda Connect how to collect metrics from your Run:ai platform.
Configuration Options
Option | Description | Default | Required |
---|---|---|---|
`url` | Run:ai base URL | - | Yes |
`app_id` | Run:ai app ID | - | Yes |
`app_secret` | Run:ai app secret | - | Yes |
`resource_type` | Run:ai resource to collect metrics from (`workload` or `pod`) | `workload` | No |
`metrics` | List of Run:ai metrics to collect | All available | No |
`schedule` | Cron expression for the scrape interval | `*/30 * * * * *` | No |
`metrics_offset` | Time offset for queries to account for delays in metric availability | `0s` | No |
`http` | HTTP client configuration | - | No |
The collector supports all available metrics for both workloads and pods; visit the Run:ai API docs for more information.
Next, you need to configure the mapping from the Run:ai metrics to CloudEvents using bloblang:
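A sketch of such a mapping is shown below as a Redpanda Connect pipeline processor. The payload paths under `this` depend on the structure emitted by the `run_ai` input, so treat them as assumptions and adjust them to your data:

```yaml
pipeline:
  processors:
    - mapping: |
        # Build a CloudEvent from the scraped Run:ai metrics.
        root.id = uuid_v4()
        root.specversion = "1.0"
        root.type = "runai.workload.usage"
        root.source = "openmeter-collector"
        root.time = now()
        # Assumed payload paths: map the subject to whatever identifies
        # your customer, and pass the metric values through as event data.
        root.subject = this.workload.name
        root.data = this.metrics
```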
Finally, you need to configure the OpenMeter output:
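A minimal output section might look like this; the endpoint and token are placeholders, and the exact option names are assumptions, so check the OpenMeter Collector guide linked below for the full configuration:

```yaml
output:
  openmeter:
    # OpenMeter Cloud endpoint and API token (placeholders)
    url: "https://openmeter.cloud"
    token: "${OPENMETER_TOKEN}"
```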
Read more about configuring Redpanda Connect in the OpenMeter Collector guide.
Scheduling
The collector runs on a schedule defined by the `schedule` parameter using cron syntax. It supports:

- Standard cron expressions (e.g., `*/30 * * * * *` for every 30 seconds)
- Duration syntax with the `@every` prefix (e.g., `@every 30s`)
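For example, both of the following express a 30-second scrape interval:

```yaml
# Standard six-field cron expression: scrape every 30 seconds
schedule: "*/30 * * * * *"

# Duration syntax with the @every prefix: also every 30 seconds
# schedule: "@every 30s"
```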
Resource Types
The collector can collect metrics from two different resource types:
- `workload` - Collects metrics at the workload level, which represents a group of pods
- `pod` - Collects metrics at the individual pod level
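For instance, to collect pod-level metrics instead of the default workload-level ones, set the `resource_type` option on the input:

```yaml
input:
  run_ai:
    # Collect metrics per individual pod instead of per workload
    resource_type: "pod"
```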
Installation
Check out the OpenMeter Collector guide for installation instructions.