Over quota, fairness and preemption
This quick start provides a step-by-step walkthrough of the core scheduling concepts - over quota, fairness, and preemption. It demonstrates the simplicity of resource provisioning and how the system eliminates bottlenecks by allowing users or teams to exceed their resource quota when free GPUs are available.
Over quota - In this scenario, team-a runs two training workloads and team-b runs one. Team-a is allocated 3 GPUs (a1 with 1 GPU and a2 with 2 GPUs), exceeding its 2-GPU quota by 1 GPU, while team-b is allocated 1 GPU of its 2-GPU quota. The system allows this over quota usage as long as there are free GPUs in the cluster.
Fairness and preemption - Since the cluster is already at full capacity, when team-b launches a new workload b2 requiring 1 GPU, team-a can no longer remain over quota. To maintain fairness, the NVIDIA Run:ai Scheduler preempts workload a1 (1 GPU), freeing up resources for team-b.
Note
If enabled by your Administrator, the NVIDIA Run:ai UI allows you to create a new workload using either the Flexible or Original submission form. The steps in this quick start guide reflect the Original form only.
Prerequisites
You have created two projects - team-a and team-b - or have them created for you.
Each project has an assigned quota of 2 GPUs.
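If the NVIDIA Run:ai CLI is installed and configured for your cluster, you can optionally verify the project quotas from a terminal. The command below follows the legacy runai CLI syntax as an illustration; the exact command name and output may differ in your CLI version.
# List the projects you have access to, including their assigned GPU quotas (legacy runai CLI syntax)
runai list projects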
Step 1: Logging in
Browse to the provided NVIDIA Run:ai user interface and log in with your credentials.
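If you also plan to try the CLI examples in this guide, authenticate the NVIDIA Run:ai CLI against the same environment. This is optional and assumes the CLI is installed; the command follows the legacy runai CLI syntax and may differ in your version.
# Authenticate the CLI with your NVIDIA Run:ai credentials
runai login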
Step 2: Submitting the first training workload (team-a)
Go to the Workload Manager → Workloads
Click +NEW WORKLOAD and select Training
Select the cluster in which to create the workload
Select the project named team-a
Under Workload architecture, select Standard
Select a preconfigured template, or select Start from scratch to quickly configure a new training workload
Enter a1 as the workload name
Click CONTINUE. In the next step:
Create a new environment:
Click +NEW ENVIRONMENT
Enter a name for the environment. The name must be unique.
Enter the training Image URL - runai.jfrog.io/demo/quickstart
Click CREATE ENVIRONMENT
The newly created environment will be selected automatically
Select the ‘one-gpu’ compute resource for your workload (GPU devices: 1)
If ‘one-gpu’ is not displayed in the gallery, follow the steps below:
Click +NEW COMPUTE RESOURCE
Enter a name for the compute resource. The name must be unique.
Set GPU devices per pod: 1
Set GPU memory per device:
Select % (of device) - Fraction of a GPU device's memory
Set the memory Request - 100 (the workload will allocate 100% of the GPU memory)
Optional: set the CPU compute per pod - 0.1 cores (default)
Optional: set the CPU memory per pod - 100 MB (default)
Click CREATE COMPUTE RESOURCE
The newly created compute resource will be selected automatically
Click CREATE TRAINING
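For reference, a roughly equivalent submission can be made with the NVIDIA Run:ai CLI. The command below is an illustrative sketch using the legacy runai submit syntax; flags and command names may differ in your installed CLI version.
# Submit the a1 training workload (1 GPU) in project team-a (legacy runai CLI syntax)
runai submit a1 -i runai.jfrog.io/demo/quickstart -g 1 -p team-a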
Step 3: Submitting the second training workload (team-a)
Go to the Workload Manager → Workloads
Click +NEW WORKLOAD and select Training
Select the cluster where the previous training workload was created
Select the project named team-a
Under Workload architecture, select Standard
Select a preconfigured template, or select Start from scratch to quickly configure a new training workload
Enter a2 as the workload name
Click CONTINUE. In the next step:
Select the environment created in Step 2
Select the ‘two-gpus’ compute resource for your workload (GPU devices: 2)
If ‘two-gpus’ is not displayed in the gallery, follow the steps below:
Click +NEW COMPUTE RESOURCE
Enter a name for the compute resource. The name must be unique.
Set GPU devices per pod: 2
Set GPU memory per device:
Select % (of device) - Fraction of a GPU device's memory
Set the memory Request - 100 (the workload will allocate 100% of the GPU memory)
Optional: set the CPU compute per pod - 0.1 cores (default)
Optional: set the CPU memory per pod - 100 MB (default)
Click CREATE COMPUTE RESOURCE
The newly created compute resource will be selected automatically
Click CREATE TRAINING
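As with the first workload, an approximate CLI equivalent is shown below (legacy runai submit syntax; adjust to your installed CLI version).
# Submit the a2 training workload (2 GPUs) in project team-a (legacy runai CLI syntax)
runai submit a2 -i runai.jfrog.io/demo/quickstart -g 2 -p team-a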
Step 4: Submitting the first training workload (team-b)
Go to the Workload Manager → Workloads
Click +NEW WORKLOAD and select Training
Select the cluster where the previous training was created
Select the project named team-b
Under Workload architecture, select Standard
Select a preconfigured template, or select Start from scratch to quickly configure a new training workload
Enter b1 as the workload name
Click CONTINUE. In the next step:
Select the environment created in Step 2
Select the compute resource created in Step 2
Click CREATE TRAINING
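An approximate CLI equivalent for this submission, using the legacy runai submit syntax as an illustration:
# Submit the b1 training workload (1 GPU) in project team-b (legacy runai CLI syntax)
runai submit b1 -i runai.jfrog.io/demo/quickstart -g 1 -p team-b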
Over quota status
System status after this run: team-a is allocated 3 GPUs (a1 and a2), exceeding its 2-GPU quota by 1 GPU, while team-b is allocated 1 GPU (b1) within its quota. The cluster is now at full capacity.
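To inspect the allocations from a terminal, you can list each project's workloads. The commands below use the legacy runai CLI syntax as an illustration and may differ in your CLI version; a1, a2, and b1 should all be running at this point.
# List workloads and their GPU allocations per project (legacy runai CLI syntax)
runai list jobs -p team-a
runai list jobs -p team-b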
Step 5: Submitting the second training workload (team-b)
Go to the Workload Manager → Workloads
Click +NEW WORKLOAD and select Training
Select the cluster where the previous training was created
Select the project named team-b
Under Workload architecture, select Standard
Select a preconfigured template, or select Start from scratch to quickly configure a new training workload
Enter b2 as the workload name
Click CONTINUE. In the next step:
Select the environment created in Step 2
Select the compute resource created in Step 2
Click CREATE TRAINING
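An approximate CLI equivalent for this final submission (legacy runai submit syntax; adjust to your installed CLI version):
# Submit the b2 training workload (1 GPU) in project team-b (legacy runai CLI syntax)
runai submit b2 -i runai.jfrog.io/demo/quickstart -g 1 -p team-b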
Basic fairness and preemption status
Workloads status after this run: to make room for b2 (1 GPU), the NVIDIA Run:ai Scheduler preempts workload a1, returning team-a to its 2-GPU quota (a2 only), while team-b runs b1 and b2 within its 2-GPU quota.
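You can observe the preemption from a terminal as well, using the legacy runai CLI syntax shown as an illustration (commands may differ in your CLI version). Once b2 is scheduled, a1 should no longer be listed as running.
# Check the state of team-a's workloads after b2 is submitted (legacy runai CLI syntax)
runai list jobs -p team-a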
Next steps
Manage and monitor your newly created workload using the Workloads table.