Supported Features
This page compares feature support across different workload types in NVIDIA Run:ai. Use it to understand which scheduling, resource management, and platform capabilities are available for each workload type before selecting a workload model or submission method.
Native workloads - Workspace, Training, and Inference workloads that are fully integrated into the platform.
Supported workload types - A broad range of workloads from the ML and Kubernetes ecosystems, enabled through the Resource Interface (RI).
Externally submitted Kubernetes workloads - Workloads submitted outside of NVIDIA Run:ai. These workloads are scheduled by NVIDIA Run:ai but receive only a subset of scheduling and platform capabilities.
Feature availability may vary across NVIDIA Run:ai versions and cluster deployments. Refer to this page and the linked documentation for the most up-to-date support details.
Workload Submission
Workloads can be submitted through the following methods; availability differs by workload type (see the example after this list):
UI
UI (via YAML)
API (Workloads v1)
API (Workloads v2)
CLI
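For programmatic submission, the Workloads v2 API is called over REST with an authenticated application token. The Python sketch below illustrates the general shape of such a call; the control plane URL, token exchange, endpoint path, and payload fields shown here are assumptions for illustration and should be verified against the NVIDIA Run:ai API reference.

```python
"""Minimal sketch of submitting a training workload via the Workloads v2 REST API.

All endpoint paths, field names, and credentials below are illustrative
assumptions; consult the NVIDIA Run:ai API reference for the exact contract.
"""
import requests

BASE_URL = "https://runai.example.com"  # hypothetical control-plane URL

# Assumed application-token exchange (client-credentials style).
token_resp = requests.post(
    f"{BASE_URL}/api/v1/token",
    json={
        "grantType": "app_token",
        "AppId": "my-app",         # hypothetical application ID
        "AppSecret": "my-secret",  # hypothetical application secret
    },
)
token_resp.raise_for_status()
headers = {"Authorization": f"Bearer {token_resp.json()['accessToken']}"}

# Assumed request body: workload name, target project/cluster, and a spec
# holding the container image and GPU request.
payload = {
    "name": "demo-training",
    "projectId": "1",                                     # hypothetical project ID
    "clusterId": "00000000-0000-0000-0000-000000000000",  # hypothetical cluster ID
    "spec": {
        "image": "nvcr.io/nvidia/pytorch:24.08-py3",
        "compute": {"gpuDevicesRequest": 1},
    },
}

resp = requests.post(
    f"{BASE_URL}/api/v1/workloads/trainings", headers=headers, json=payload
)
resp.raise_for_status()
print("Submitted workload:", resp.json().get("name"))
```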
Scheduling and Resource Management
Operational and Platform Features
Externally Submitted Kubernetes Workloads
Kubernetes workloads can be submitted outside of NVIDIA Run:ai, for example with kubectl directly or through Helm charts as part of an AI application. These workloads are scheduled by NVIDIA Run:ai and receive full monitoring support, along with a subset of scheduling capabilities.
All monitoring capabilities are supported, including event history, metrics, and logs.
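As a concrete illustration, an externally created Kubernetes Job is directed to the NVIDIA Run:ai scheduler by setting the pod's scheduler name and associating it with a project. The Python sketch below uses the official Kubernetes client; the scheduler name, project label, and namespace are assumptions for illustration and should be verified against your cluster's configuration.

```python
"""Sketch: a Kubernetes Job created outside of NVIDIA Run:ai (equivalent to a
kubectl or Helm submission) so that it is picked up by the Run:ai scheduler.

The scheduler name, project label, and namespace are illustrative assumptions.
"""
from kubernetes import client, config

config.load_kube_config()  # use the current kubectl context

container = client.V1Container(
    name="trainer",
    image="nvcr.io/nvidia/pytorch:24.08-py3",
    command=["python", "train.py"],
    resources=client.V1ResourceRequirements(limits={"nvidia.com/gpu": "1"}),
)

pod_spec = client.V1PodSpec(
    scheduler_name="runai-scheduler",  # assumed Run:ai scheduler name
    restart_policy="Never",
    containers=[container],
)

job = client.V1Job(
    api_version="batch/v1",
    kind="Job",
    metadata=client.V1ObjectMeta(
        name="external-training-job",
        labels={"project": "team-a"},  # hypothetical project label and value
    ),
    spec=client.V1JobSpec(
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"project": "team-a"}),
            spec=pod_spec,
        ),
        backoff_limit=0,
    ),
)

# Create the Job in the project's namespace (namespace name is hypothetical).
client.BatchV1Api().create_namespaced_job(namespace="runai-team-a", body=job)
```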