# Supported Features

This page compares feature support across different workload types in NVIDIA Run:ai. Use it to understand which scheduling, resource management, and platform capabilities are available for each workload type before selecting a workload model or submission method.

* [Native workloads](https://run-ai-docs.nvidia.com/saas/workloads-in-nvidia-run-ai/workload-types/native-workloads) - Workloads fully integrated into the platform: Workspaces, Training, and Inference.
* [Supported workload types](https://run-ai-docs.nvidia.com/saas/workloads-in-nvidia-run-ai/workload-types/supported-workload-types) - A broad range of workloads from the ML and Kubernetes ecosystems, enabled through the Resource Interface (RI).
* [Externally submitted Kubernetes workloads](#externally-submitted-kubernetes-workloads) - Workloads submitted outside of NVIDIA Run:ai. These workloads receive only a limited subset of scheduling and platform capabilities.

Feature availability may vary across NVIDIA Run:ai versions and cluster deployments. Refer to this page and the linked documentation for the most up-to-date support details.

## Workload Submission <a href="#workload-submission-methods" id="workload-submission-methods"></a>

<table><thead><tr><th></th><th data-type="checkbox">Workspace</th><th data-type="checkbox">Standard Training</th><th data-type="checkbox">Distributed Training</th><th data-type="checkbox">Inference</th><th data-type="checkbox">Distributed Inference</th><th data-type="checkbox">Supported workload types</th></tr></thead><tbody><tr><td>UI</td><td>true</td><td>true</td><td>true</td><td>true</td><td>false</td><td>false</td></tr><tr><td>UI (via YAML)</td><td>false</td><td>false</td><td>false</td><td>false</td><td>false</td><td>true</td></tr><tr><td>API (Workloads v1)</td><td>true</td><td>true</td><td>true</td><td>true</td><td>true</td><td>false</td></tr><tr><td>API (Workloads v2)</td><td>false</td><td>false</td><td>false</td><td>false</td><td>false</td><td>true</td></tr><tr><td>CLI</td><td>true</td><td>true</td><td>true</td><td>true</td><td>true</td><td>true</td></tr></tbody></table>
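For orientation, the sketch below shows what a supported workload type might look like when submitted as YAML (for example through the UI's YAML editor or the Workloads v2 API). It assumes a Kubeflow PyTorchJob is among the supported workload types available in your cluster; the name, namespace, image, and scheduler setting are illustrative assumptions, not a reference configuration.

```yaml
# A minimal sketch of a supported workload type (Kubeflow PyTorchJob), submitted as YAML.
apiVersion: kubeflow.org/v1
kind: PyTorchJob
metadata:
  name: demo-pytorch              # illustrative name
  namespace: runai-team-a         # assumption: the project's namespace (conventionally runai-<project>)
spec:
  pytorchReplicaSpecs:
    Master:
      replicas: 1
      restartPolicy: OnFailure
      template:
        spec:
          schedulerName: runai-scheduler   # hand placement to the NVIDIA Run:ai scheduler
          containers:
            - name: pytorch                # the training operator expects this container name
              image: docker.io/kubeflow/pytorch-dist-mnist:latest   # illustrative image
              resources:
                limits:
                  nvidia.com/gpu: 1
    Worker:
      replicas: 2
      restartPolicy: OnFailure
      template:
        spec:
          schedulerName: runai-scheduler
          containers:
            - name: pytorch
              image: docker.io/kubeflow/pytorch-dist-mnist:latest
              resources:
                limits:
                  nvidia.com/gpu: 1
```

How the workload is associated with a project (namespace, labels, or defaults applied by the platform) depends on your cluster configuration; check the supported workload types documentation linked above for the authoritative details.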

## Scheduling and Resource Management

<table><thead><tr><th>Functionality</th><th data-type="checkbox">Workspace</th><th data-type="checkbox">Standard Training</th><th data-type="checkbox">Distributed Training</th><th data-type="checkbox">Inference</th><th data-type="checkbox">Distributed Inference</th><th data-type="checkbox">Supported workload types</th></tr></thead><tbody><tr><td><a href="../../../platform-management/runai-scheduler/scheduling/concepts-and-principles#fairness-fair-resource-distribution">Fairness</a></td><td>true</td><td>true</td><td>true</td><td>true</td><td>true</td><td>true</td></tr><tr><td><a href="../../../platform-management/runai-scheduler/scheduling/concepts-and-principles#priority-and-preemption">Priority and preemption</a></td><td>true</td><td>true</td><td>true</td><td>true</td><td>true</td><td>true</td></tr><tr><td><a href="../../../platform-management/runai-scheduler/scheduling/concepts-and-principles#over-quota">Over quota</a></td><td>true</td><td>true</td><td>true</td><td>true</td><td>true</td><td>true</td></tr><tr><td><a href="../../platform-management/aiinitiatives/resources/node-pools">Node pools</a></td><td>true</td><td>true</td><td>true</td><td>true</td><td>true</td><td>true</td></tr><tr><td><a href="../../../platform-management/runai-scheduler/scheduling/concepts-and-principles#placement-strategy-bin-pack-and-spread">Bin packing / Spread</a></td><td>true</td><td>true</td><td>true</td><td>true</td><td>true</td><td>true</td></tr><tr><td><a href="../../platform-management/runai-scheduler/resource-optimization/fractions">Multi-GPU fractions</a></td><td>true</td><td>true</td><td>true</td><td>true</td><td>true</td><td>true</td></tr><tr><td><a href="../../platform-management/runai-scheduler/resource-optimization/dynamic-fractions">Multi-GPU dynamic fractions</a></td><td>true</td><td>true</td><td>true</td><td>true</td><td>true</td><td>true</td></tr><tr><td><a href="../../platform-management/runai-scheduler/resource-optimization/node-level-scheduler">Node level scheduler</a></td><td>true</td><td>true</td><td>true</td><td>true</td><td>true</td><td>true</td></tr><tr><td><a href="../../platform-management/runai-scheduler/resource-optimization/memory-swap">Multi-GPU memory swap</a></td><td>true</td><td>true</td><td>true</td><td>true</td><td>true</td><td>true</td></tr><tr><td>Elastic scaling</td><td>false</td><td>false</td><td>true</td><td>true</td><td>true</td><td>true</td></tr><tr><td><a href="../../../platform-management/runai-scheduler/scheduling/concepts-and-principles#gang-scheduling">Gang scheduling</a></td><td>true</td><td>true</td><td>true</td><td>true</td><td>true</td><td>true</td></tr><tr><td><a href="../../platform-management/aiinitiatives/resources/topology-aware-scheduling">Network topology-aware scheduling</a></td><td>false</td><td>false</td><td>true</td><td>false</td><td>true</td><td>true</td></tr><tr><td><a href="../../platform-management/aiinitiatives/resources/using-gb200">GB200 NVL72 and Multi-Node NVLink domains (MNNVL)</a></td><td>false</td><td>false</td><td>true</td><td>false</td><td>true</td><td>true</td></tr><tr><td><a href="../../platform-management/policies/scheduling-rules">Scheduling rules</a></td><td>true</td><td>true</td><td>true</td><td>true</td><td>false</td><td>false</td></tr></tbody></table>
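To illustrate one row of the table above, the sketch below requests a GPU fraction on a plain pod. It assumes the `gpu-fraction` pod annotation and the `runai-scheduler` scheduler name used elsewhere in the NVIDIA Run:ai documentation; annotation keys, defaults, and behavior can vary across versions, so treat this as a sketch under those assumptions rather than a definitive reference.

```yaml
# A minimal sketch: request half a GPU via a fraction annotation.
apiVersion: v1
kind: Pod
metadata:
  name: fraction-demo              # illustrative name
  namespace: runai-team-a          # assumption: the project's namespace (conventionally runai-<project>)
  annotations:
    gpu-fraction: "0.5"            # assumed annotation key: ask for 0.5 of a single GPU
spec:
  schedulerName: runai-scheduler   # place the pod with the NVIDIA Run:ai scheduler
  containers:
    - name: main
      image: nvcr.io/nvidia/pytorch:24.05-py3   # illustrative image
      command: ["python", "-c", "import time; time.sleep(3600)"]
```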

## Operational and Platform Features

<table><thead><tr><th>Functionality</th><th data-type="checkbox">Workspace</th><th data-type="checkbox">Standard Training</th><th data-type="checkbox">Distributed Training</th><th data-type="checkbox">Inference</th><th data-type="checkbox">Distributed Inference</th><th data-type="checkbox">Supported workload types</th></tr></thead><tbody><tr><td><a href="../../infrastructure-setup/procedures/system-monitoring">Monitoring</a></td><td>true</td><td>true</td><td>true</td><td>true</td><td>true</td><td>true</td></tr><tr><td>Workload awareness</td><td>true</td><td>true</td><td>true</td><td>true</td><td>true</td><td>true</td></tr><tr><td><a href="../../../infrastructure-setup/authentication/overview#role-based-access-control-rbac-in-run-ai">RBAC</a></td><td>true</td><td>true</td><td>true</td><td>true</td><td>true</td><td>true</td></tr><tr><td><a href="../workloads">Workload actions (stop/run)</a></td><td>true</td><td>true</td><td>true</td><td>false</td><td>false</td><td>false</td></tr><tr><td><a href="../using-inference/custom-inference">Rolling updates</a></td><td>false</td><td>false</td><td>false</td><td>true</td><td>false</td><td>false</td></tr><tr><td><a href="../workload-templates">Workload templates</a></td><td>true</td><td>true</td><td>true</td><td>true</td><td>false</td><td>false</td></tr><tr><td><a href="../assets">Workload assets</a></td><td>true</td><td>true</td><td>true</td><td>true</td><td>false</td><td>false</td></tr><tr><td><a href="../../platform-management/policies/workload-policies">Workload Policies</a></td><td>true</td><td>true</td><td>true</td><td>true</td><td>true</td><td>false</td></tr></tbody></table>

{% hint style="info" %}
**Workload awareness**

Workload-aware visibility: the platform identifies the different pods that belong to a workload and treats them as a single workload, for example in GPU utilization metrics, the workload view, and dashboards.
{% endhint %}

## Externally Submitted Kubernetes Workloads

Kubernetes workloads can be submitted outside of NVIDIA Run:ai, for example by using `kubectl` directly or by deploying Helm charts as part of an [AI application](https://run-ai-docs.nvidia.com/saas/ai-applications/introduction-to-ai-applications). These workloads are scheduled by NVIDIA Run:ai and receive full monitoring support, along with a subset of scheduling capabilities; a submission sketch follows the list below.

* Supported scheduling capabilities include:
  * [Fairness](https://run-ai-docs.nvidia.com/saas/platform-management/runai-scheduler/scheduling/concepts-and-principles#fairness-fair-resource-distribution)
  * [Priority and preemption](https://run-ai-docs.nvidia.com/saas/platform-management/runai-scheduler/scheduling/concepts-and-principles#priority-and-preemption)
  * [Over quota](https://run-ai-docs.nvidia.com/saas/platform-management/runai-scheduler/scheduling/concepts-and-principles#over-quota)
  * [Node pools](https://run-ai-docs.nvidia.com/saas/platform-management/aiinitiatives/resources/node-pools)
  * [Bin packing / Spread](https://run-ai-docs.nvidia.com/saas/platform-management/runai-scheduler/scheduling/concepts-and-principles#placement-strategy-bin-pack-and-spread)
  * [Multi-GPU fractions](https://run-ai-docs.nvidia.com/saas/platform-management/runai-scheduler/resource-optimization/fractions)
  * [Multi-GPU dynamic fractions](https://run-ai-docs.nvidia.com/saas/platform-management/runai-scheduler/resource-optimization/dynamic-fractions)
  * [Node level scheduler](https://run-ai-docs.nvidia.com/saas/platform-management/runai-scheduler/resource-optimization/node-level-scheduler)
  * [Multi-GPU memory swap](https://run-ai-docs.nvidia.com/saas/platform-management/runai-scheduler/resource-optimization/memory-swap)
  * [Gang scheduling](https://run-ai-docs.nvidia.com/saas/platform-management/runai-scheduler/scheduling/concepts-and-principles#gang-scheduling)
* All [monitoring](https://run-ai-docs.nvidia.com/saas/workloads#show-hide-details) capabilities are supported, including event history, metrics, and logs.
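The sketch below shows one way such a workload might be submitted externally, assuming a standard Kubernetes batch Job applied with `kubectl` into a project's namespace and directed to the NVIDIA Run:ai scheduler; the name, namespace, image, and command are illustrative assumptions.

```yaml
# Submitted outside NVIDIA Run:ai, e.g. with: kubectl apply -f external-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: external-train             # illustrative name
  namespace: runai-team-a          # assumption: the project's namespace (conventionally runai-<project>)
spec:
  template:
    spec:
      schedulerName: runai-scheduler   # hand placement to the NVIDIA Run:ai scheduler
      restartPolicy: Never
      containers:
        - name: train
          image: nvcr.io/nvidia/pytorch:24.05-py3   # illustrative image
          command: ["python", "train.py"]           # illustrative command
          resources:
            limits:
              nvidia.com/gpu: 1    # one whole GPU
```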
