# Workload Priority Control

The workload priority management feature lets you change the priority of a workload within a project. The priority determines the workload's position in the project scheduling queue managed by the NVIDIA Run:ai [Scheduler](https://run-ai-docs.nvidia.com/self-hosted/2.22/platform-management/runai-scheduler/scheduling/how-the-scheduler-works). By raising a workload's priority, you increase the likelihood that it is scheduled ahead of other workloads in the same project, so critical tasks run first and resources are allocated efficiently. The workload's priority also determines whether it can consume over-quota resources and whether it is subject to preemption by higher-priority workloads.

You can change the priority of a workload by selecting one of the predefined values from the NVIDIA Run:ai priority dictionary. This can be done using the NVIDIA Run:ai UI, API, or CLI, depending on the workload type.

{% hint style="info" %}
**Note**

This applies only within a single project. It does not impact the scheduling queues or workloads of other projects.
{% endhint %}

## Priority Dictionary

Workload priority is defined by selecting a priority from a predefined list in the NVIDIA Run:ai priority dictionary. Each string corresponds to a specific Kubernetes [PriorityClass](https://run-ai-docs.nvidia.com/self-hosted/2.22/platform-management/runai-scheduler/concepts-and-principles#priority-and-preemption), which in turn determines scheduling behavior, such as whether the workload is preemptible or allowed to run over quota.

<table><thead><tr><th width="160.5390625">Priority</th><th>Kubernetes Value</th><th>Preemption</th><th>Over Quota</th></tr></thead><tbody><tr><td><code>very-low</code></td><td>25</td><td>Preemptible</td><td>Available</td></tr><tr><td><code>low</code></td><td>40</td><td>Preemptible</td><td>Available</td></tr><tr><td><code>medium-low</code></td><td>65</td><td>Preemptible</td><td>Available</td></tr><tr><td><code>medium</code></td><td>80</td><td>Preemptible</td><td>Available</td></tr><tr><td><code>medium-high</code></td><td>90</td><td>Preemptible</td><td>Available</td></tr><tr><td><code>high</code></td><td>125</td><td>Non-preemptible</td><td>Not available</td></tr><tr><td><code>very-high</code></td><td>150</td><td>Non-preemptible</td><td>Not available</td></tr></tbody></table>

### Preemptible vs Non-Preemptible Workloads

* **Non-preemptible workloads** must run within the project's deserved quota, cannot use over-quota resources, and will not be interrupted once scheduled.
* **Preemptible workloads** can use opportunistic compute resources beyond the project's quota but may be interrupted at any time.
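The dictionary and the preemption rules above can be expressed as a small lookup table. This is an illustrative sketch only (the names and values are taken from the table above; the helper itself is not part of any Run:ai API):

```python
# Priority dictionary from the table above: name -> Kubernetes priority value.
PRIORITY_VALUES = {
    "very-low": 25,
    "low": 40,
    "medium-low": 65,
    "medium": 80,
    "medium-high": 90,
    "high": 125,
    "very-high": 150,
}

# Per the table, high and very-high are non-preemptible and cannot use
# over-quota resources; all lower priorities are preemptible.
NON_PREEMPTIBLE = {"high", "very-high"}

def is_preemptible(priority: str) -> bool:
    """Return True if a workload at this priority can be preempted
    (and may therefore also consume over-quota resources)."""
    if priority not in PRIORITY_VALUES:
        raise ValueError(f"unknown priority: {priority}")
    return priority not in NON_PREEMPTIBLE
```

Note that the two properties move together: every preemptible priority in the table is also eligible for over-quota resources, and vice versa.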

## Default Priority per Workload

Both NVIDIA Run:ai and third-party workloads are assigned a default priority per workload type.

{% hint style="info" %}
**Note**

* [Legacy priority values](https://app.gitbook.com/s/Uc7kDeOTlZaDiMM2pR07/platform-management/runai-scheduler/scheduling/workload-priority-control) are still supported for backward compatibility.
* Changing the priority is not supported for NVCF workloads.
{% endhint %}

### NVIDIA Run:ai Workloads

<table><thead><tr><th>Workload Type</th><th data-type="checkbox">Low</th><th data-type="checkbox">High</th><th data-type="checkbox">Very High</th></tr></thead><tbody><tr><td><a href="../../../workloads-in-nvidia-run-ai/using-workspaces/running-workspace">Workspaces</a></td><td>false</td><td>true</td><td>false</td></tr><tr><td><a href="../../../workloads-in-nvidia-run-ai/using-training/standard-training/train-models">Standard training</a></td><td>true</td><td>false</td><td>false</td></tr><tr><td><a href="../../../workloads-in-nvidia-run-ai/using-training/distributed-training/distributed-training-models">Distributed training</a></td><td>true</td><td>false</td><td>false</td></tr><tr><td><a href="../../../workloads-in-nvidia-run-ai/using-inference/custom-inference">Custom inference</a></td><td>false</td><td>false</td><td>true</td></tr><tr><td><a href="../../../workloads-in-nvidia-run-ai/using-inference/hugging-face-inference">Hugging Face inference</a></td><td>false</td><td>false</td><td>true</td></tr><tr><td><a href="../../../workloads-in-nvidia-run-ai/using-inference/nim-inference">NVIDIA NIM inference</a></td><td>false</td><td>false</td><td>true</td></tr></tbody></table>

### Third-Party Workloads

<table><thead><tr><th>Workload Type</th><th data-type="checkbox">Low</th><th data-type="checkbox">High</th><th data-type="checkbox">Very High</th></tr></thead><tbody><tr><td>NVIDIA Cloud Functions (NVCF)</td><td>false</td><td>false</td><td>true</td></tr><tr><td>Deployment</td><td>false</td><td>false</td><td>true</td></tr><tr><td>Seldon Deployment</td><td>false</td><td>false</td><td>true</td></tr><tr><td>StatefulSet</td><td>false</td><td>false</td><td>true</td></tr><tr><td>ReplicaSet</td><td>false</td><td>false</td><td>true</td></tr><tr><td>Pod</td><td>false</td><td>false</td><td>true</td></tr><tr><td>Service</td><td>false</td><td>false</td><td>true</td></tr><tr><td>CronJob</td><td>false</td><td>false</td><td>true</td></tr><tr><td>RayService</td><td>false</td><td>false</td><td>true</td></tr><tr><td>PipelineRun</td><td>false</td><td>false</td><td>true</td></tr><tr><td>Workflow</td><td>false</td><td>false</td><td>true</td></tr><tr><td>ScheduledWorkflow</td><td>false</td><td>false</td><td>true</td></tr><tr><td>DevWorkspace</td><td>false</td><td>true</td><td>false</td></tr><tr><td>Notebook</td><td>false</td><td>true</td><td>false</td></tr><tr><td>Job</td><td>false</td><td>true</td><td>false</td></tr><tr><td>TaskRun</td><td>false</td><td>true</td><td>false</td></tr><tr><td>VirtualMachineInstance</td><td>true</td><td>false</td><td>false</td></tr><tr><td>TFJob</td><td>true</td><td>false</td><td>false</td></tr><tr><td>PyTorchJob</td><td>true</td><td>false</td><td>false</td></tr><tr><td>XGBoostJob</td><td>true</td><td>false</td><td>false</td></tr><tr><td>MPIJob</td><td>true</td><td>false</td><td>false</td></tr><tr><td>AmlJob</td><td>true</td><td>false</td><td>false</td></tr><tr><td>RayCluster</td><td>true</td><td>false</td><td>false</td></tr><tr><td>RayJob</td><td>true</td><td>false</td><td>false</td></tr></tbody></table>

## Setting Priority During Workload Submission

{% hint style="info" %}
**Note**

Changing a workload's priority may impact its ability to be scheduled. For example, switching a workload from a `low` priority (which allows over-quota usage) to `high` priority (which requires in-quota resources) may reduce its chances of being scheduled in cases where the required quota is unavailable.
{% endhint %}

* **NVIDIA Run:ai workloads** - You can set the priority when submitting workloads via the UI, CLI, or API:
  * **UI** - Set workload priority under **General** settings (flexible submission only)
  * **API** - Set using the `PriorityClass` field
  * **CLI** - Set using the `--priority` flag
* **Third-party workloads** - Set the workload's priority by adding the following label under the `metadata.labels` section of your workload definition, using one of the values `very-low`, `low`, `medium-low`, `medium`, `medium-high`, `high`, or `very-high`:

  ```yaml
  metadata:
    labels:
      priorityClassName: <priority>
  ```
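For example, a minimal Pod manifest that submits at `high` priority (the pod name and container image here are placeholders, not values from the Run:ai documentation):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-training-pod        # placeholder name
  labels:
    priorityClassName: high    # value from the priority dictionary
spec:
  containers:
    - name: main
      image: my-registry/my-image:latest   # placeholder image
```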

## Updating the Default Priority Mapping

Administrators can change the default priority assigned to a workload type by updating the mapping using the [NVIDIA Run:ai API](https://run-ai-docs.nvidia.com/api/2.22/). To update the priority mapping:

1. Retrieve the list of workload types and their IDs using `GET /api/v1/workload-types`.
2. Identify the `workloadTypeId` of the workload type you want to modify.
3. Retrieve the list of available priorities and their IDs using `GET /api/v1/workload-priorities`.
4. Send a request to update the workload type with the new priority using\
   `PUT /api/v1/workload-types/{workloadTypeId}` and include the `priorityId` in the request body.
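The steps above can be sketched as a helper that takes the two GET responses, resolves the relevant IDs, and builds the path and body for the PUT request. This is a hedged illustration: the endpoints are those listed above, but the response shape (lists of objects with `id` and `name` fields) is an assumption to verify against the API reference:

```python
def build_priority_update(workload_types, priorities, type_name, priority_name):
    """Resolve IDs from the GET responses and return (path, body) for the
    PUT request. Assumes both responses are lists of {"id": ..., "name": ...}
    dicts; check the actual shapes in the NVIDIA Run:ai API reference."""
    # Step 2: find the workloadTypeId for the type to modify.
    type_id = next(t["id"] for t in workload_types if t["name"] == type_name)
    # Step 3: find the priorityId for the desired priority.
    priority_id = next(p["id"] for p in priorities if p["name"] == priority_name)
    # Step 4: PUT /api/v1/workload-types/{workloadTypeId} with priorityId.
    path = f"/api/v1/workload-types/{type_id}"
    body = {"priorityId": priority_id}
    return path, body

# Usage with mocked responses (IDs here are invented for illustration):
types = [{"id": "wt-1", "name": "Workspace"}]
prios = [{"id": "pr-7", "name": "high"}]
path, body = build_priority_update(types, prios, "Workspace", "high")
```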

## Using API

Go to the [Workload priorities](https://run-ai-docs.nvidia.com/api/2.22/workloads/workload-properties#get-api-v1-workload-priorities) API reference to view the available actions.
