# Launching Workloads with GPU Fractions

This quick start provides a step-by-step walkthrough for running a Jupyter Notebook workspace using [GPU fractions](https://run-ai-docs.nvidia.com/self-hosted/2.20/platform-management/runai-scheduler/resource-optimization/fractions).

NVIDIA Run:ai's GPU fractions provide an agile and easy-to-use method to share one or more GPUs across workloads. With GPU fractions, you can divide a GPU's memory into smaller chunks and share its compute resources between different workloads and users, resulting in higher GPU utilization and more efficient resource allocation.
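To make the arithmetic concrete, here is a minimal, hypothetical sketch (not part of the NVIDIA Run:ai API) of how a fractional memory request maps to an actual memory slice:

```python
# Hypothetical helper illustrating the arithmetic behind GPU fractions:
# a workload that requests 10% of an 80 GB device is limited to an
# 8 GB memory slice, while GPU compute is shared with other workloads.

def fraction_memory_gb(device_memory_gb: float, fraction: float) -> float:
    """Return the memory slice a fractional GPU request receives."""
    if not 0 < fraction <= 1:
        raise ValueError("fraction must be in (0, 1]")
    return device_memory_gb * fraction

print(fraction_memory_gb(80, 0.10))  # a 10% slice of an 80 GB device
print(fraction_memory_gb(40, 0.50))  # a 50% slice of a 40 GB device
```

This quick start requests 10% of a device, which is why the project quota only needs a fraction of a full GPU.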

## Prerequisites

Before you start, make sure:

* You have created a [project](https://run-ai-docs.nvidia.com/self-hosted/2.20/platform-management/aiinitiatives/organization/projects) or have one created for you.
* The project has an assigned quota of at least 0.5 GPU.

## Step 1: Logging In

{% tabs %}
{% tab title="UI" %}
Browse to the provided NVIDIA Run:ai user interface and log in with your credentials.
{% endtab %}

{% tab title="CLI v2" %}
Log in using the following command. You will be prompted to enter your username and password:

```sh
runai login
```

{% endtab %}

{% tab title="CLI v1 (Deprecated)" %}
Log in using the following command. You will be prompted to enter your username and password:

```sh
runai login
```

{% endtab %}

{% tab title="API" %}
To use the API, you will need to obtain a token as shown in [API authentication](https://app.gitbook.com/s/b5QLzc5pV7wpXz3CDYyp/getting-started/how-to-authenticate-to-the-api).
{% endtab %}
{% endtabs %}

## Step 2: Submitting a Workspace

{% tabs %}
{% tab title="UI" %}

1. Go to **Workload manager** → **Workloads**
2. Click **+NEW WORKLOAD** and select **Workspace**
3. Select under which **cluster** to create the workload
4. Select the **project** in which your workspace will run
5. Select **Start from scratch** to launch a new workspace quickly
6. Enter a **name** for the workspace (if the name already exists in the project, you will be requested to submit a different name)
7. Click **CONTINUE**

   In the next step:
8. Select the **'jupyter-lab'** environment for your workspace (Image URL: `jupyter/scipy-notebook`)

   * If 'jupyter-lab' is not displayed in the gallery, follow these steps:
     * Click **+NEW ENVIRONMENT**
     * Enter a **name** for the environment. The name must be unique.
     * Enter the jupyter-lab **Image URL** - `jupyter/scipy-notebook`
     * Tools - Set the connection for your tool
       * Click **+TOOL**
       * Select **Jupyter** tool from the list
     * Set the runtime settings for the environment

       * Click **+COMMAND**
       * Enter **command** - `start-notebook.sh`
       * Enter **arguments** - `--NotebookApp.base_url=/${RUNAI_PROJECT}/${RUNAI_JOB_NAME} --NotebookApp.token=''`

       **Note:** If [host-based routing](https://run-ai-docs.nvidia.com/self-hosted/2.20/infrastructure-setup/advanced-setup/container-access/external-access-to-containers#host-based-routing) is enabled on the cluster, enter the `--NotebookApp.token=''` only.
     * Click **CREATE ENVIRONMENT**

   The newly created environment will be selected automatically
9. Select the **'small-fraction'** compute resource for your workspace (GPU % of devices: 10)

   * If 'small-fraction' is not displayed in the gallery, follow these steps:
     * Click **+NEW COMPUTE RESOURCE**
       * Enter a **name** for the compute resource. The name must be unique.
       * Set **GPU devices per pod** - 1
       * Set **GPU memory per device**
         * Select **% (of device)** - Fraction of a GPU device's memory
         * Set the memory **Request** - 10 (the workload will allocate 10% of the GPU memory)
       * Optional: set the **CPU compute per pod** - 0.1 cores (default)
       * Optional: set the **CPU memory per pod** - 100 MB (default)
     * Click **CREATE COMPUTE RESOURCE**

   The newly created compute resource will be selected automatically
10. Click **CREATE WORKSPACE**
    {% endtab %}

{% tab title="CLI v2" %}
Copy the following command to your terminal. Make sure to replace the project name and workload name with your own. For more details, see the [CLI reference](https://run-ai-docs.nvidia.com/self-hosted/2.20/reference/cli/runai):

```sh
runai project set "project-name"
runai workspace submit "workload-name" --image jupyter/scipy-notebook \
--gpu-portion-request 0.1 --external-url container=8888 \
--name-prefix jupyter --command -- start-notebook.sh \
--NotebookApp.base_url=/${RUNAI_PROJECT}/${RUNAI_JOB_NAME} --NotebookApp.token=
```

{% endtab %}

{% tab title="CLI v1 (Deprecated)" %}
Copy the following command to your terminal. Make sure to replace the project name and workload name with your own. For more details, see the [CLI reference](https://docs.run.ai/latest/Researcher/cli-reference/Introduction/):

```sh
runai config project "project-name"
runai submit "workload-name" --jupyter -g 0.1
```

{% endtab %}

{% tab title="API" %}
Copy the following command to your terminal. Make sure to replace the parameters below with your own values. For more details, see the [Workspaces](https://run-ai-docs.nvidia.com/api/2.20/workloads/workspaces) API:

```bash
curl -L 'https://<COMPANY-URL>/api/v1/workloads/workspaces' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer <TOKEN>' \
-d '{
    "name": "workload-name",
    "projectId": "<PROJECT-ID>",
    "clusterId": "<CLUSTER-UUID>",
    "spec": {
        "command" : "start-notebook.sh",
        "args" : "--NotebookApp.base_url=/${RUNAI_PROJECT}/${RUNAI_JOB_NAME} --NotebookApp.token=''",
        "image": "jupyter/scipy-notebook",
        "compute": {
            "gpuDevicesRequest": 1,
            "gpuRequestType": "portion",
            "gpuPortionRequest": 0.1
        },
        "exposedUrls" : [
            {
                "container" : 8888,
                "toolType": "jupyter-notebook",
                "toolName": "Jupyter"
            }
        ]
    }
}'
```

* `<COMPANY-URL>` - The link to the NVIDIA Run:ai user interface
* `<TOKEN>` - The API access token obtained in [Step 1](#step-1-logging-in)
* `<PROJECT-ID>` - The ID of the Project the workload is running on. You can get the Project ID via the [Get Projects](https://run-ai-docs.nvidia.com/api/2.20/organizations/projects#get-api-v1-org-unit-projects) API.
* `<CLUSTER-UUID>` - The unique identifier of the Cluster. You can get the Cluster UUID via the [Get Clusters](https://run-ai-docs.nvidia.com/api/2.20/organizations/clusters#get-api-v1-clusters) API.
* `toolType` - Determines the Jupyter icon shown when connecting to the Jupyter tool via the user interface.
* `toolName` - The tool name displayed when connecting to the Jupyter tool via the user interface.

{% hint style="info" %}
**Note**

The above API snippet runs only on NVIDIA Run:ai clusters version 2.18 and above.
{% endhint %}
{% endtab %}
{% endtabs %}
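For scripted submissions, the request body shown in the API tab can also be built programmatically. The following is a minimal Python sketch, not an official client: the placeholders (`<COMPANY-URL>`, `<PROJECT-ID>`, `<CLUSTER-UUID>`, `<TOKEN>`) must be replaced with your own values, and the HTTP call itself is shown only as a comment so the sketch stays self-contained:

```python
import json

# Placeholders - substitute your own values before sending the request.
COMPANY_URL = "https://<COMPANY-URL>"
PROJECT_ID = "<PROJECT-ID>"
CLUSTER_UUID = "<CLUSTER-UUID>"

# Mirror of the request body from the API tab: a Jupyter workspace
# requesting a 10% fraction of a single GPU device.
payload = {
    "name": "workload-name",
    "projectId": PROJECT_ID,
    "clusterId": CLUSTER_UUID,
    "spec": {
        "command": "start-notebook.sh",
        "args": "--NotebookApp.base_url=/${RUNAI_PROJECT}/${RUNAI_JOB_NAME} "
                "--NotebookApp.token=''",
        "image": "jupyter/scipy-notebook",
        "compute": {
            "gpuDevicesRequest": 1,
            "gpuRequestType": "portion",
            "gpuPortionRequest": 0.1,
        },
        "exposedUrls": [
            {"container": 8888,
             "toolType": "jupyter-notebook",
             "toolName": "Jupyter"}
        ],
    },
}

body = json.dumps(payload)

# With, e.g., the requests library (assumed to be installed):
# requests.post(f"{COMPANY_URL}/api/v1/workloads/workspaces",
#               headers={"Authorization": "Bearer <TOKEN>"},
#               data=body,
#               headers={"Content-Type": "application/json"})
```

The `gpuRequestType: "portion"` / `gpuPortionRequest: 0.1` pair is what makes this a fractional request rather than a whole-device one.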

## Step 3: Connecting to the Jupyter Notebook

{% tabs %}
{% tab title="UI" %}

1. Select the newly created workspace with the Jupyter application that you want to connect to
2. Click **CONNECT**
3. Select the Jupyter tool. The selected tool is opened in a new tab on your browser.
   {% endtab %}

{% tab title="CLI v2" %}
To connect to the Jupyter Notebook, browse directly to <mark style="color:blue;">https\://\<COMPANY-URL>/\<PROJECT-NAME>/\<WORKLOAD-NAME></mark>
{% endtab %}

{% tab title="CLI v1 (Deprecated)" %}
To connect to the Jupyter Notebook, browse directly to <mark style="color:blue;">https\://\<COMPANY-URL>/\<PROJECT-NAME>/\<WORKLOAD-NAME></mark>
{% endtab %}

{% tab title="API" %}
To connect to the Jupyter Notebook, browse directly to <mark style="color:blue;">https\://\<COMPANY-URL>/\<PROJECT-NAME>/\<WORKLOAD-NAME></mark>
{% endtab %}
{% endtabs %}
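The connection URL above follows a fixed pattern that matches the `--NotebookApp.base_url=/${RUNAI_PROJECT}/${RUNAI_JOB_NAME}` argument set at submission. As a sketch, assuming default path-based routing:

```python
def notebook_url(company_url: str, project: str, workload: str) -> str:
    """Build the Jupyter connection URL used by path-based routing."""
    return f"https://{company_url}/{project}/{workload}"

# Placeholders - substitute your own company URL, project, and workload names.
print(notebook_url("<COMPANY-URL>", "project-name", "workload-name"))
```

If host-based routing is enabled on the cluster, the URL scheme differs; see the note on host-based routing in Step 2.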

## Next Steps

Manage and monitor your newly created workload using the [Workloads](https://run-ai-docs.nvidia.com/self-hosted/2.20/workloads-in-nvidia-run-ai/workloads) table.
