Supported Features

Different workload types in NVIDIA Run:ai, including both NVIDIA Run:ai native workloads and workloads enabled via the Resource Interface (RI), offer varying levels of feature support. When selecting a workload type, consider which platform capabilities your use case requires.

The availability of specific features and capabilities can change across NVIDIA Run:ai versions. Refer to the feature breakdown below and to the documentation for your version for the most current support details.

Note

Other Kubernetes workloads can also be submitted via kubectl, but they receive only minimal scheduling and platform capabilities; see the sketch below.
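The following is a minimal sketch of such a submission: a plain Kubernetes pod handed to the NVIDIA Run:ai scheduler with kubectl. The namespace runai-demo and the container image are placeholder assumptions; the schedulerName value runai-scheduler follows the common NVIDIA Run:ai convention, but verify it against your own cluster configuration.

```yaml
# pod.yaml - a plain Kubernetes pod, not an NVIDIA Run:ai native workload.
# runai-demo (namespace) and the image are placeholder assumptions;
# schedulerName follows the usual NVIDIA Run:ai convention - verify on your cluster.
apiVersion: v1
kind: Pod
metadata:
  name: minimal-gpu-pod
  namespace: runai-demo            # assumed project namespace
spec:
  schedulerName: runai-scheduler   # hand scheduling to the NVIDIA Run:ai scheduler
  restartPolicy: Never
  containers:
    - name: main
      image: nvcr.io/nvidia/pytorch:24.08-py3   # placeholder image
      command: ["python", "-c", "print('plain pod scheduled')"]
      resources:
        limits:
          nvidia.com/gpu: 1        # request one GPU
```

Submitting this with kubectl apply -f pod.yaml gets the pod scheduled, but it is not represented as a full workload, so capabilities such as workload awareness may not apply to it.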

Feature support is grouped into three categories: Workload Submission, Scheduling and Resource Management, and Operational and Platform Features. Within each category, functionality is compared across the workload types: Workspace, Standard Training, Distributed Training, Inference, Distributed Inference, and workloads submitted via the Resource Interface. One such functionality is described below.

Workload awareness

Workload-aware visibility: the different pods of a workload are identified and treated as a single unit (for example, in GPU utilization metrics, the workload view, and dashboards).
