NVIDIA NIM Distributed Inference Deployment
This tutorial demonstrates how to run a distributed inference workload using the DeepSeek-R1 model on the NVIDIA Run:ai platform. You can use this workflow as a reference and adapt it for your own models, container images, and hardware configurations.
In this tutorial, you will learn how to:
Set up environment prerequisites for NIM-based distributed inference
Create a user application for API integrations with NVIDIA Run:ai
Create a user credential to store your NGC API key
Create a PVC-based data source for model caching
Deploy a distributed inference workload using the NVIDIA Run:ai REST API
Access the inference endpoint to send requests
Prerequisites
Before you start, make sure the following requirements are met:
Your administrator has:
Created a project for you.
Installed and configured LWS (Leader-Worker Set) on the cluster.
Configured external access if needed. Endpoints ending with .svc.cluster.local are accessible only inside the cluster; external access must be enabled by your administrator as described in the inference requirements section.
Configured a Docker registry credential in your project (where the workload will run). NVIDIA NIM images are pulled from the NGC catalog and require the following values for authentication:
Username - $oauthtoken
Password - <YOUR_NGC_API_KEY>
Docker registry URL - nvcr.io
You have:
An NGC account with an active NGC API key. To obtain a key, go to NGC → Setup → API Keys, then generate or copy an existing key.
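Optionally, you can confirm that the key is valid by logging in to nvcr.io from any machine with Docker installed. This is only a local sanity check and does not replace the registry credential configured by your administrator:

```bash
# Optional check: confirm the NGC API key authenticates against the NGC registry.
# The username is the literal string $oauthtoken (single quotes prevent shell expansion).
echo "<YOUR_NGC_API_KEY>" | docker login nvcr.io --username '$oauthtoken' --password-stdin
```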
Step 1: Logging In
Browse to the provided NVIDIA Run:ai user interface and log in with your credentials.
To use the API, you will need to obtain a token as shown in Creating a user application.
Step 2: Creating a User Application
Applications are used for API integrations with NVIDIA Run:ai. An application contains a client ID and a client secret. With the client credentials, you can obtain a token and use it within subsequent API calls.
In the NVIDIA Run:ai user interface:
Click the user avatar at the top right corner, then select Settings
Click +APPLICATION
Enter the application’s name and click CREATE
Copy the Client ID and Client secret and store securely
Click DONE
To request an API access token, use the client credentials with the Tokens API to obtain a temporary token for accessing NVIDIA Run:ai. For example:
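The sketch below assumes the Tokens API is exposed at /api/v1/token with an app_token grant; verify the exact request fields against the API reference for your NVIDIA Run:ai version.

```bash
# Exchange the application's client credentials for a temporary API access token.
curl -X POST "https://<COMPANY-URL>/api/v1/token" \
  -H "Content-Type: application/json" \
  -d '{
    "grantType": "app_token",
    "AppId": "<CLIENT-ID>",
    "AppSecret": "<CLIENT-SECRET>"
  }'
# The response body contains the access token; use it as <TOKEN> in later steps.
```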
Step 3: Creating a User Credential
User credentials allow users to securely store private authentication secrets, which are accessible only to the user who created them. See User credentials for more details.
In the NVIDIA Run:ai user interface:
Click the user avatar at the top right corner, then select Settings
Click +CREDENTIAL and select Generic secret from the dropdown
Enter a name for the credential. The name must be unique.
Optional: Provide a description of the credential
Enter the following:
Key - NGC_API_KEY
Value - <YOUR_NGC_API_KEY>
Click CREATE CREDENTIAL
Step 4: Creating a PVC Data Source
To make it easier to reuse code and checkpoints in future workloads, create a data source in the form of a Persistent Volume Claim (PVC). The PVC can be mounted to workloads and will persist after the workload completes, allowing any data it contains to be reused.
To create a PVC, go to Workload manager → Data sources.
Click +NEW DATA SOURCE and select PVC from the dropdown menu.
Within the new form, set the scope.
Enter a name for the data source. The name must be unique.
For the data options, select New PVC and the storage class that suits your needs:
To allow all nodes to read and write from/to the PVC, select Read-write by many nodes for the access mode.
Enter 2 TB for the claim size to ensure we have plenty of capacity for future workloads.
Select Filesystem (default) as the volume mode. The volume will be mounted as a filesystem, enabling the use of directories and files.
Set the Container path to /opt/nim/.cache, which is where the PVC will be mounted inside containers.
Click CREATE DATA SOURCE
After creating the data source, wait for the PVC to be provisioned. The PVC claim name (which is displayed in the UI as the Kubernetes name) will appear in the Data sources grid once it’s ready. This claim name is the exact value that will be used for the <pvc-claim-name> when submitting the workload.
Alternatively, you can create the PVC data source using the REST API. Copy the following command to your terminal and make sure to update the following parameters:
<COMPANY-URL> - The link to the NVIDIA Run:ai user interface.
<TOKEN> - The API access token obtained in Step 2.
For all other parameters within the JSON body, refer to the PVC API.
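The request below is a sketch that assumes the PVC data source asset endpoint is /api/v1/asset/datasource/pvc; the asset name, the <PROJECT-ID> and <STORAGE-CLASS> placeholders, and the JSON field names are illustrative, so confirm them against the PVC API reference:

```bash
# Create a PVC data source matching the options described above (2 TB,
# read-write by many nodes, Filesystem volume mode, mounted at /opt/nim/.cache).
curl -X POST "https://<COMPANY-URL>/api/v1/asset/datasource/pvc" \
  -H "Authorization: Bearer <TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{
    "meta": {
      "name": "nim-cache",
      "scope": "project",
      "projectId": <PROJECT-ID>
    },
    "spec": {
      "claimInfo": {
        "accessModes": { "readWriteMany": true },
        "size": "2T",
        "volumeMode": "Filesystem",
        "storageClass": "<STORAGE-CLASS>"
      },
      "path": "/opt/nim/.cache"
    }
  }'
```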
After creating the data source, wait for the PVC to be provisioned, then use the List PVC assets API to retrieve the claim name. This claim name is the exact value that will be used for the <pvc-claim-name> when submitting the workload.
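For example, assuming the same asset path as in the creation sketch above:

```bash
# List PVC data source assets and locate the claim name of the PVC created above.
curl -X GET "https://<COMPANY-URL>/api/v1/asset/datasource/pvc" \
  -H "Authorization: Bearer <TOKEN>"
```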
Step 5: Creating the Workload
The configuration below defines how the workload is distributed across nodes, how authentication is applied, and how model assets are cached for reuse.
How the Configuration Works
workers - Defines the number of worker nodes that participate in the distributed inference. In this tutorial, one worker and one leader run across two nodes, each contributing 8 GPUs for a total of 16 GPUs.
servingPort.port - The port exposed by the leader node for receiving inference requests. Port 8000 is the default for NIM's OpenAI-compatible API server.
servingPort.authorizationType - Controls who can access the inference endpoint. In this tutorial, it's set to authenticatedUsers, meaning only authenticated NVIDIA Run:ai users with valid tokens can send requests. By default, inference endpoints are public; adding authentication ensures that only authorized users can access the deployed model.
Leader and worker configuration - The leader container runs the NIM API server and orchestrates distributed execution. Workers connect to the leader using the address injected via $(LWS_LEADER_ADDRESS), forming a multi-node distributed inference runtime that spans multiple GPUs and nodes.
NIM_LEADER_ROLE - Indicates whether the container is the leader (1) or a worker (0). Required for multi-node execution.
NIM_NODE_RANK - Automatically assigned rank used by NIM to coordinate model-parallel execution across nodes (leader = 0, workers = 1…N).
NIM_MULTI_NODE - Enables NIM's multi-node mode. Must be set to 1 for both leader and worker containers for distributed inference to initialize correctly.
NIM_TENSOR_PARALLEL_SIZE - Sets the tensor parallel degree. A value of 8 horizontally partitions each model layer across all 8 GPUs on a node.
NIM_PIPELINE_PARALLEL_SIZE - Sets the pipeline parallel degree. A value of 2 splits the model into two sequential execution stages across nodes.
NIM_NUM_COMPUTE_NODES - Specifies the total number of nodes participating in the distributed inference workload. Must match the LWS group size.
NIM_MODEL_PROFILE - Defines the optimized NIM configuration for model architecture, GPU type, and parallelism. For DeepSeek-R1 on H100, sglang-h100-bf16-tp8-pp2 aligns with 8-way tensor parallel and 2-way pipeline parallel execution.
NIM_USE_SGLANG - Enables SGLang, the accelerated inference runtime required for DeepSeek-R1.
NIM_TRUST_CUSTOM_CODE - Allows the container to load custom Python modules or kernels from the NIM image.
NGC_API_KEY - Stored securely as a user credential and injected into both leader and worker containers. Required to authenticate with NGC and download DeepSeek-R1 model assets.
PVC mount at /opt/nim/.cache - The shared Persistent Volume Claim is mounted at /opt/nim/.cache. It caches downloaded model weights, tokenizer files, and compiled artifacts so future runs start significantly faster.
gpuDevicesRequest: 8 - Specifies the number of GPUs requested by each leader and worker pod, matching the required tensor parallel size.
runAsUid, runAsGid, and runAsNonRoot - Defines the security context under which the container runs. In OpenShift environments, both the leader and worker pods must run with a non-root user and group (UID/GID 1000). Without this configuration, pods may encounter permission errors during the model download phase because the mounted cache path is owned by root.
Submitting the Workload
Copy the following command to your terminal. Make sure to update the following parameters. For more details, see Distributed Inferences API:
<COMPANY-URL> - The link to the NVIDIA Run:ai user interface.
<TOKEN> - The API access token obtained in Step 2.
<PROJECT-ID> - The ID of the Project the workload is running on. You can get the Project ID via the Get Projects API.
<CLUSTER-UUID> - The unique identifier of the Cluster. You can get the Cluster UUID via the Get Clusters API.
<genericsecret-name> - The name of the user credential created in Step 3. Replace this with the name of the credential preceded by the system prefix genericsecret-. For example, if you named your credential ngc, the value here is genericsecret-ngc.
<pvc-claim-name> - The claim name associated with the PVC created in Step 4.
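The abbreviated sketch below shows how the pieces described above fit together. The endpoint path, the workload name deepseek-r1-nim, the image tag, and the exact nesting of the JSON fields are assumptions for illustration; the Distributed Inferences API reference is authoritative for the full schema, including the separate leader and worker specifications.

```bash
# Submit a distributed inference workload (abbreviated; see the Distributed
# Inferences API for the full leader/worker schema).
curl -X POST "https://<COMPANY-URL>/api/v1/workloads/distributed-inferences" \
  -H "Authorization: Bearer <TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "deepseek-r1-nim",
    "projectId": "<PROJECT-ID>",
    "clusterId": "<CLUSTER-UUID>",
    "spec": {
      "image": "nvcr.io/nim/deepseek-ai/deepseek-r1:latest",
      "workers": 1,
      "servingPort": { "port": 8000, "authorizationType": "authenticatedUsers" },
      "compute": { "gpuDevicesRequest": 8 },
      "security": { "runAsUid": 1000, "runAsGid": 1000, "runAsNonRoot": true },
      "storage": {
        "pvc": [ { "claimName": "<pvc-claim-name>", "path": "/opt/nim/.cache" } ]
      },
      "environmentVariables": [
        { "name": "NIM_MULTI_NODE", "value": "1" },
        { "name": "NIM_TENSOR_PARALLEL_SIZE", "value": "8" },
        { "name": "NIM_PIPELINE_PARALLEL_SIZE", "value": "2" },
        { "name": "NIM_NUM_COMPUTE_NODES", "value": "2" },
        { "name": "NIM_MODEL_PROFILE", "value": "sglang-h100-bf16-tp8-pp2" },
        { "name": "NIM_USE_SGLANG", "value": "1" },
        { "name": "NIM_TRUST_CUSTOM_CODE", "value": "1" },
        { "name": "NGC_API_KEY", "secret": { "name": "<genericsecret-name>", "key": "NGC_API_KEY" } }
      ]
    }
  }'
```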
Step 6: Verifying the Workload Status
After submitting the workload, wait for it to reach the Running status in the Workloads table. A workload becomes Ready to accept inference requests only after all its pods have fully initialized, including model loading.
Large models may require several minutes to load their weights, especially when the model is stored on a PVC. During this time, the workload may remain in Initializing even though pods are already running.
To monitor progress:
Select the workload and click the SHOW DETAILS button at the upper-right side of the action bar. The details pane appears, presenting the Logs tab to track model-download and model-loading progress. Select the relevant pod from the dropdown and review the pod logs.
The workload transitions to Running only when the leader pod finishes loading the model and all readiness checks pass.
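You can also poll the status from the terminal with the Get Workloads API; the filter syntax and the workload name below (taken from the submission sketch above) are illustrative:

```bash
# Check the workload's phase via the Workloads API.
curl -X GET "https://<COMPANY-URL>/api/v1/workloads?filterBy=name==deepseek-r1-nim" \
  -H "Authorization: Bearer <TOKEN>"
```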
Once the workload reaches Running and shows an available Connection, you can proceed to access the inference endpoint.
Step 7: Accessing the Inference Workload
You can programmatically consume an inference workload via API by making direct calls to the serving endpoint, typically from other workloads or external integrations. Once an inference workload is deployed, the serving endpoint URL appears in the Connections column of the Workloads table. To retrieve the service endpoint programmatically, use the Get Workloads API. The endpoint URL will be available in the response body under urls.
By default, inference endpoints in NVIDIA Run:ai are configured with public access, meaning no authentication is required to send requests. In this tutorial, the servingPort is configured with authenticatedUsers as the authorizationType, so only authenticated users with a valid access token can call the inference endpoint.
Use the token you obtained in Step 2 to authenticate your requests, and include it in the request header as shown below:
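In the example below, <ENDPOINT-URL> stands for the serving endpoint from the Connections column (or the Get Workloads API response), and the model name is illustrative; you can list the models actually served via the endpoint's /v1/models route.

```bash
# Send a chat completion request to the NIM OpenAI-compatible API, authenticating
# with the NVIDIA Run:ai access token from Step 2.
curl -X POST "<ENDPOINT-URL>/v1/chat/completions" \
  -H "Authorization: Bearer <TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-ai/deepseek-r1",
    "messages": [{ "role": "user", "content": "Explain pipeline parallelism in one paragraph." }],
    "max_tokens": 256
  }'
```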
Step 8: Cleaning up the Environment
After the workload finishes, it can be deleted to free up resources for other workloads. If you also want to reclaim the disk space used by the PVC, you can delete the PVC once it’s no longer needed.
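If you prefer to clean up from the terminal, a deletion call typically looks like the sketch below; the path segment for distributed inference workloads is an assumption, so confirm it against the Workloads API reference.

```bash
# Delete the workload by its ID (retrievable via the Get Workloads API).
curl -X DELETE "https://<COMPANY-URL>/api/v1/workloads/distributed-inferences/<WORKLOAD-ID>" \
  -H "Authorization: Bearer <TOKEN>"
```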