Supported Features
Different types of workloads in NVIDIA Run:ai, including both NVIDIA Run:ai native workloads and workloads enabled via the Resource Interface (RI), offer varying levels of feature support. When selecting a workload type, consider which platform capabilities your use case requires.
The availability of specific features and capabilities can evolve across NVIDIA Run:ai versions. Refer to the tables in this section and the product documentation for the most current support details.
Workload Submission
[Table: submission support via the UI, API, and CLI for each workload type: Workspace, Standard Training, Distributed Training, Inference, Distributed Inference, and Workloads via Resource Interface.]
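Where the CLI is supported, a workload can be submitted with a single command. The sketch below uses the runai CLI's submit syntax; command and flag names vary between CLI versions, and the project name, workload name, and image are placeholders.

    # Point the CLI at the target project ("team-a" is a placeholder).
    runai config project team-a

    # Submit an interactive workspace requesting one GPU.
    runai submit my-workspace --image jupyter/scipy-notebook --gpu 1 --interactive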
Scheduling and Resource Management
[Table: support for each scheduling and resource management functionality across Workspace, Standard Training, Distributed Training, Inference, Distributed Inference, and Workloads via Resource Interface.]
Operational and Platform Features
[Table: support for each operational and platform feature across Workspace, Standard Training, Distributed Training, Inference, Distributed Inference, and Workloads via Resource Interface.]
Other Kubernetes Workloads
Other Kubernetes workloads can also be submitted via kubectl; they receive full monitoring capabilities but only minimal scheduling capabilities. All monitoring capabilities are supported, including event history, metrics, and logs.
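As a minimal sketch, a plain pod can be handed to the NVIDIA Run:ai scheduler by setting the scheduler name and a project queue label in its manifest; the runai/queue label and runai-scheduler name follow the NVIDIA Run:ai documentation, while the pod name, project, and image below are placeholders.

    # pod.yaml: a plain Kubernetes pod placed by the Run:ai scheduler.
    apiVersion: v1
    kind: Pod
    metadata:
      name: gpu-pod                    # placeholder name
      labels:
        runai/queue: team-a            # target project/queue (placeholder)
    spec:
      schedulerName: runai-scheduler   # hand scheduling to the Run:ai scheduler
      containers:
        - name: main
          image: nvidia/cuda:12.2.0-base-ubuntu22.04
          command: ["sleep", "infinity"]
          resources:
            limits:
              nvidia.com/gpu: 1        # request one whole GPU

Applying the manifest with kubectl apply -f pod.yaml submits the pod; it then appears in NVIDIA Run:ai with the monitoring capabilities described above.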