Run your first standard training
This quick start provides a step-by-step walkthrough for running a standard training workload.
A training workload bundles the setup and configuration needed to build your model in a single place: the container image, data sets, resource requests, and the tools required for the research.
Note
If enabled by your Administrator, the NVIDIA Run:ai UI allows you to create a new workload using either the Flexible or Original submission form. The steps in this quick start reflect the Original form only.
Prerequisites
Before you start, make sure:
You have created a project or have one created for you.
The project has an assigned quota of at least 1 GPU.
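If the NVIDIA Run:ai CLI is installed, you can optionally verify your projects and their assigned quotas before starting. This is a minimal sketch; command names and output may differ between CLI versions.

    # List the projects you can access, including their assigned GPU quota
    runai list projects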
Step 1: Logging in
Browse to the provided NVIDIA Run:ai user interface and log in with your credentials.
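If you also want to follow along from the command line, you can authenticate with the CLI as well. A minimal sketch, assuming the CLI is installed and configured against your cluster; command names may vary by CLI version, and <project-name> is a placeholder for your own project:

    # Authenticate the CLI against the NVIDIA Run:ai control plane
    runai login

    # Set the default project for subsequent commands
    runai config project <project-name>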
Step 2: Submitting a standard training workload
Go to the Workload manager → Workloads
Click +NEW WORKLOAD and select Training
Select the cluster in which to create the workload
Select the project in which your workload will run
Under Workload architecture, select Standard
Select a preconfigured template or select Start from scratch to launch a new workload quickly
Enter a name for the standard training workload (if the name already exists in the project, you will be asked to enter a different one)
Click CONTINUE
In the next step:
Create an environment for your workload
Click +NEW ENVIRONMENT
Enter quickstart as the name
Enter runai.jfrog.io/demo/quickstart as the Image URL
Click CREATE ENVIRONMENT
The newly created environment will be selected automatically
Select the ‘one-gpu’ compute resource for your workload (GPU devices: 1)
If ‘one-gpu’ is not displayed in the gallery, follow the steps below:
Click +NEW COMPUTE RESOURCE
Enter a name for the compute resource. The name must be unique.
Set GPU devices per pod - 1
Set GPU memory per device
Select % (of device) - Fraction of a GPU device’s memory
Set the memory Request - 100 (the workload will allocate 100% of the GPU memory)
Optional: set the CPU compute per pod - 0.1 cores (default)
Optional: set the CPU memory per pod - 100 MB (default)
Click CREATE COMPUTE RESOURCE
The newly created compute resource will be selected automatically
Click CREATE TRAINING
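The same workload can also be submitted from the command line. The following is a minimal sketch assuming the legacy runai CLI; the workload name my-first-training is an example, and flag names may differ in newer CLI versions (which submit trainings via runai training submit):

    # Submit a standard training workload with 1 GPU using the quickstart image
    runai submit my-first-training \
        -i runai.jfrog.io/demo/quickstart \
        -g 1

The -g value maps to the GPU devices setting in the compute resource; requesting a whole device is equivalent to the 100% GPU memory request configured above.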
Next steps
Manage and monitor your newly created workload using the Workloads table.
After validating your training performance and results, deploy your model using an inference workload.
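The workload can also be monitored from the command line. A sketch assuming the legacy runai CLI (my-first-training is the example name used above); newer CLI versions expose equivalent commands under runai training:

    # List workloads in the current project and check their status
    runai list jobs

    # Inspect details and events for the workload
    runai describe job my-first-training

    # Stream the training logs
    runai logs my-first-training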