Over quota, fairness and preemption

This quick start provides a step-by-step walkthrough of the core scheduling concepts - over quota, fairness, and preemption. It demonstrates the simplicity of resource provisioning and how the system eliminates bottlenecks by allowing users or teams to exceed their resource quota when free GPUs are available.

  • Over quota - In this scenario, team-a runs two training workloads and team-b runs one. Team-a consumes 3 GPUs, exceeding its 2-GPU quota by 1 GPU, while team-b consumes 1 GPU of its 2-GPU quota. The system allows this over-quota usage as long as there are free GPUs in the cluster.

  • Fairness and preemption - Since the cluster is already at full capacity, when team-b launches a new workload, b2, requiring 1 GPU, team-a can no longer remain over quota. To maintain fairness, the NVIDIA Run:ai Scheduler preempts workload a1 (1 GPU), freeing up resources for team-b. A simplified sketch of this behavior follows the list.
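
The sketch below is a deliberately simplified Python model of this behavior, not the actual NVIDIA Run:ai Scheduler: two teams with a 2-GPU quota each share a 4-GPU cluster, a team may run over quota while free GPUs remain, and a workload submitted within quota triggers preemption of an over-quota workload. Running it reproduces the a1/a2/b1/b2 scenario described above.

```python
"""Toy model of the over-quota, fairness, and preemption behavior above.

Illustrative sketch only - the real NVIDIA Run:ai Scheduler implements far
richer policies (priorities, node pools, GPU fractions, reclaim rules, etc.).
"""
from dataclasses import dataclass, field


@dataclass
class Workload:
    name: str
    team: str
    gpus: int
    status: str = "Pending"


@dataclass
class ToyScheduler:
    cluster_gpus: int
    quotas: dict                      # team -> deserved GPU quota
    workloads: list = field(default_factory=list)

    def used(self, team=None):
        """GPUs held by running workloads, optionally filtered by team."""
        return sum(w.gpus for w in self.workloads
                   if w.status == "Running" and (team is None or w.team == team))

    def submit(self, name, team, gpus):
        w = Workload(name, team, gpus)
        self.workloads.append(w)
        free = self.cluster_gpus - self.used()
        if gpus <= free:
            # Free GPUs exist: run the workload even if the team goes over quota.
            w.status = "Running"
        elif self.used(team) + gpus <= self.quotas[team]:
            # Requester stays within its quota, so reclaim GPUs from a team
            # that is over quota (fairness enforced via preemption).
            self._preempt(gpus - free)
            w.status = "Running"
        usage = {t: self.used(t) for t in self.quotas}
        print(f"{name}: {w.status} (usage per team: {usage})")

    def _preempt(self, needed):
        """Preempt running workloads of over-quota teams until enough GPUs are freed."""
        for victim in self.workloads:
            if needed <= 0:
                return
            if victim.status == "Running" and self.used(victim.team) > self.quotas[victim.team]:
                victim.status = "Pending"      # preempted, back to the queue
                needed -= victim.gpus


# Reproduce the quick start scenario: a 4-GPU cluster and a 2-GPU quota per team.
scheduler = ToyScheduler(cluster_gpus=4, quotas={"team-a": 2, "team-b": 2})
scheduler.submit("a1", "team-a", 1)   # within quota
scheduler.submit("a2", "team-a", 2)   # team-a is now over quota (3 of 2) - allowed
scheduler.submit("b1", "team-b", 1)   # cluster is now at full capacity
scheduler.submit("b2", "team-b", 1)   # within quota -> a1 is preempted to make room
```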

Prerequisites

  • You have created two projects - team-a and team-b - or have them created for you.

  • Each project has an assigned quota of 2 GPUs.

Step 1: Logging in

Browse to the provided NVIDIA Run:ai user interface and log in with your credentials.

Step 2: Submitting the first training workload (team-a)

  1. Go to the Workload Manager → Workloads

  2. Click +NEW WORKLOAD and select Training

  3. Select under which cluster to create the workload

  4. Select the project named team-a

  5. Under Workload architecture, select Standard

  6. Select a preconfigured template or select Start from scratch to quickly launch a new training workload

  7. Enter a1 as the workload name

  8. Click CONTINUE. In the next step:

  9. Create a new environment:

    • Click +NEW ENVIRONMENT

    • Enter a name for the environment. The name must be unique.

    • Enter the training Image URL - runai.jfrog.io/demo/quickstart

    • Click CREATE ENVIRONMENT

    The newly created environment will be selected automatically

  10. Select the ‘one-gpu’ compute resource for your workload (GPU devices: 1)

    • If ‘one-gpu’ is not displayed in the gallery, follow the steps below:

      • Click +NEW COMPUTE RESOURCE

      • Enter a name for the compute resource. The name must be unique.

      • Set GPU devices per pod: 1

      • Set GPU memory per device:

        • Select % (of device) - Fraction of a GPU device's memory

        • Set the memory Request - 100 (the workload will allocate 100% of the GPU memory)

      • Optional: set the CPU compute per pod - 0.1 cores (default)

      • Optional: set the CPU memory per pod - 100 MB (default)

      • Click CREATE COMPUTE RESOURCE

    The newly created compute resource will be selected automatically

  11. Click CREATE TRAINING
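
The UI flow above can also be scripted. The snippet below is only a sketch: it assumes the legacy `runai submit` CLI syntax (`-p` project, `-i` image, `-g` GPU devices) and that you are already logged in; flag names differ between CLI versions, so verify them against the CLI reference for your installation. Workloads a2, b1, and b2 in the following steps use the same pattern with different names, projects, and GPU counts.

```python
import subprocess

# Sketch only: assumes the legacy "runai submit" syntax; verify the flags
# against the CLI reference shipped with your NVIDIA Run:ai version.
subprocess.run(
    [
        "runai", "submit", "a1",
        "-p", "team-a",                          # project
        "-i", "runai.jfrog.io/demo/quickstart",  # training image from Step 2
        "-g", "1",                               # GPU devices per pod
    ],
    check=True,
)
```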

Step 3: Submitting the second training workload (team-a)

  1. Go to the Workload Manager → Workloads

  2. Click +NEW WORKLOAD and select Training

  3. Select the cluster where the previous training workload was created

  4. Select the project named team-a

  5. Under Workload architecture, select Standard

  6. Select a preconfigured template or select Start from scratch to quickly launch a new training workload

  7. Enter a2 as the workload name

  8. Click CONTINUE. In the next step:

  9. Select the environment created in Step 2

  10. Select the ‘two-gpus’ compute resource for your workload (GPU devices: 2)

    • If ‘two-gpus’ is not displayed in the gallery, follow the steps below:

      • Click +NEW COMPUTE RESOURCE

      • Enter a name for the compute resource. The name must be unique.

      • Set GPU devices per pod: 2

      • Set GPU memory per device:

        • Select % (of device) - Fraction of a GPU device's memory

        • Set the memory Request - 100 (the workload will allocate 100% of the GPU memory)

      • Optional: set the CPU compute per pod - 0.1 cores (default)

      • Optional: set the CPU memory per pod - 100 MB (default)

      • Click CREATE COMPUTE RESOURCE

    The newly created compute resource will be selected automatically

  11. Click CREATE TRAINING

Step 4: Submitting the first training workload (team-b)

  1. Go to the Workload Manager → Workloads

  2. Click +NEW WORKLOAD and select Training

  3. Select the cluster where the previous training was created

  4. Select the project named team-b

  5. Under Workload architecture, select Standard

  6. Select a preconfigured template or select Start from scratch to quickly launch a new training workload

  7. Enter b1 as the workload name

  8. Click CONTINUE. In the next step:

  9. Select the environment created in Step 2

  10. Select the ‘one-gpu’ compute resource used in Step 2 (GPU devices: 1)

  11. Click CREATE TRAINING

Over quota status

System status after these submissions: the cluster is at full capacity - team-a consumes 3 GPUs (1 GPU over its 2-GPU quota) and team-b consumes 1 GPU of its 2-GPU quota.

Step 5: Submitting the second training workload (team-b)

  1. Go to the Workload Manager → Workloads

  2. Click +NEW WORKLOAD and select Training

  3. Select the cluster where the previous training was created

  4. Select the project named team-b

  5. Under Workload architecture, select Standard

  6. Select a preconfigured template or select Start from scratch to quickly launch a new training workload

  7. Enter b2 as the workload name

  8. Click CONTINUE. In the next step:

  9. Select the environment created in Step 2

  10. Select the ‘one-gpu’ compute resource used in Step 2 (GPU devices: 1)

  11. Click CREATE TRAINING
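
After submitting b2, you can watch the preemption happen from a script as well as from the UI. The snippet below is a sketch that assumes the legacy `runai list jobs` command (again, newer CLI versions may differ): a1 should drop out of the Running state while b2 starts running.

```python
import subprocess

# Sketch only: assumes the legacy "runai list jobs" command. After the
# scheduler preempts a1, it should no longer be listed as Running,
# while b2 should be.
for project in ("team-a", "team-b"):
    print(f"--- workloads in {project} ---")
    subprocess.run(["runai", "list", "jobs", "-p", project], check=True)
```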

Basic fairness and preemption status

Workload status after these submissions: a1 is preempted and stops running, b2 starts running, and each team now consumes its full 2-GPU quota.

Next steps

Manage and monitor your newly created workload using the Workloads table.
