This quick start provides a step-by-step walkthrough for running a Jupyter Notebook using workspaces.
A workspace contains the setup and configuration needed for building your model, including the container, images, data sets, and resource requests, as well as the required tools for the research, all in one place. See Running workspaces for more information.
If enabled by your Administrator, the NVIDIA Run:ai UI allows you to create a new workload using either the Flexible or Original submission form. The steps in this quick start guide reflect the Original form only.
Before you start, make sure:
You have created a project or have one created for you.
The project has an assigned quota of at least 1 GPU.
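If you want to confirm the project and its assigned quota before submitting, you can list the projects available to you from the CLI after logging in (a quick check; the exact subcommand depends on your CLI version):
runai list projects
or, with the newer CLI:
runai project list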
Browse to the provided NVIDIA Run:ai user interface and log in with your credentials.
Run the --help command below to see the available login options, then log in according to your setup:
runai login --help
Log in using the following command. You will be prompted to enter your username and password:
runai login
To use the API, you will need to obtain a token as shown in API authentication.
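For example, when authenticating with an application client, the token request looks roughly like the following (a sketch only; <APP-ID> and <APP-SECRET> are placeholders for your application credentials, and the exact request body is described in API authentication):
curl -L 'https://<COMPANY-URL>/api/v1/token' \
-H 'Content-Type: application/json' \
-d '{"grantType": "app_token", "AppId": "<APP-ID>", "AppSecret": "<APP-SECRET>"}'
The token returned in the response body is the value used as <TOKEN> in the API example below.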
Go to the Workload manager → Workloads
Click +NEW WORKLOAD and select Workspace
Select the cluster in which to create the workload
Select the project in which your workspace will run
Select a preconfigured template, or select Start from scratch to quickly launch a new workspace
Enter a name for the workspace (if the name already exists in the project, you will be requested to submit a different name)
Click CONTINUE
In the next step:
Select the ‘jupyter-lab’ environment for your workspace (Image URL: jupyter/scipy-notebook)
If ‘jupyter-lab’ is not displayed in the gallery, follow the steps below:
Click +NEW ENVIRONMENT
Enter a name for the environment. The name must be unique.
Enter the jupyter-lab Image URL - jupyter/scipy-notebook
Tools - Set the connection for your tool
Click +TOOL
Select Jupyter tool from the list
Set the runtime settings for the environment
Click +COMMAND
Enter command - start-notebook.sh
Enter arguments - --NotebookApp.base_url=/${RUNAI_PROJECT}/${RUNAI_JOB_NAME} --NotebookApp.token='' (see the rendered example after these steps)
Note: If host-based routing is enabled on the cluster, enter only the argument --NotebookApp.token=''.
Click CREATE ENVIRONMENT
The newly created environment will be selected automatically
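For illustration only: with a hypothetical project named team-a and a workspace named jupyter-quickstart, the variables in the arguments resolve so that the container effectively runs:
start-notebook.sh --NotebookApp.base_url=/team-a/jupyter-quickstart --NotebookApp.token=''
The empty token disables Jupyter’s token prompt, and the base URL matches the <PROJECT-NAME>/<WORKLOAD-NAME> path used to reach the workspace in the connection step at the end of this guide.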
Select the ‘one-gpu’ compute resource for your workspace (GPU devices: 1)
If ‘one-gpu’ is not displayed in the gallery, follow the steps below:
Click +NEW COMPUTE RESOURCE
Enter a name for the compute resource. The name must be unique.
Set GPU devices per pod - 1
Set GPU memory per device
Select % (of device) - Fraction of a GPU device’s memory
Set the memory Request - 100 (the workload will allocate 100% of the GPU memory)
Optional: set the CPU compute per pod - 0.1 cores (default)
Optional: set the CPU memory per pod - 100 MB (default)
Click CREATE COMPUTE RESOURCE
The newly created compute resource will be selected automatically
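For example, on a GPU with 40 GB of memory (a hypothetical device), the 100% request above makes the full 40 GB available to the workspace, while a 50% request would make roughly 20 GB available.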
Click CREATE WORKSPACE
Copy the following commands to your terminal. Make sure to update them with the name of your project and workload. For more details, see CLI reference:
runai project set "project-name"
runai workspace submit "workload-name" \
--image jupyter/scipy-notebook --gpu-devices-request 1 \
--command --external-url container=8888 -- start-notebook.sh \
--NotebookApp.base_url=/${RUNAI_PROJECT}/${RUNAI_JOB_NAME} --NotebookApp.token=''
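After submitting, you can track the workspace from the terminal as well (a sketch, assuming the CLI v2 list command; the Workloads table in the UI shows the same information):
runai workspace list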
Copy the following commands to your terminal. Make sure to update them with the name of your project and workload. For more details, see CLI reference:
runai config project "project-name"
runai submit "workload-name" --jupyter -g 1
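To follow the workload's progress from the terminal (assuming the v1 CLI's describe command):
runai describe job "workload-name" -p "project-name"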
Copy the following command to your terminal. Make sure to update the parameters below. For more details, see Workspaces API:
curl -L 'https://<COMPANY-URL>/api/v1/workloads/workspaces' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer <TOKEN>' \
-d '{
  "name": "workload-name",
  "projectId": "<PROJECT-ID>",
  "clusterId": "<CLUSTER-UUID>",
  "spec": {
    "command": "start-notebook.sh",
    "args": "--NotebookApp.base_url=/${RUNAI_PROJECT}/${RUNAI_JOB_NAME} --NotebookApp.token=''",
    "image": "jupyter/scipy-notebook",
    "compute": {
      "gpuDevicesRequest": 1
    },
    "exposedUrls": [
      {
        "container": 8888,
        "toolType": "jupyter-notebook",
        "toolName": "Jupyter"
      }
    ]
  }
}'
<COMPANY-URL> - The link to the NVIDIA Run:ai user interface
<TOKEN> - The API access token obtained in Step 1
<PROJECT-ID> - The ID of the Project the workload is running on. You can get the Project ID via the Get Projects API.
<CLUSTER-UUID> - The unique identifier of the Cluster. You can get the Cluster UUID via the Get Clusters API.
toolType - will show the Jupyter icon when connecting to the Jupyter tool via the user interface
toolName - will show when connecting to the Jupyter tool via the user interface
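To verify the submission programmatically, you can list workloads using the same token (a sketch, assuming the Workloads API list endpoint; see the API reference for filtering and pagination options):
curl -L 'https://<COMPANY-URL>/api/v1/workloads' \
-H 'Authorization: Bearer <TOKEN>'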
Select the newly created workspace with the Jupyter application that you want to connect to
Click CONNECT
Select the Jupyter tool. The selected tool is opened in a new tab on your browser.
To connect to the Jupyter Notebook, browse directly to https://<COMPANY-URL>/<PROJECT-NAME>/<WORKLOAD-NAME>
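If the page does not load right away, the workspace may still be initializing; retry once its status is Running in the Workloads table. An optional reachability check from a terminal (not part of the official flow):
curl -I 'https://<COMPANY-URL>/<PROJECT-NAME>/<WORKLOAD-NAME>'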
Manage and monitor your newly created workload using the Workloads table.